...

It may be that the most important difference in our approach is not the particular formalisms that we arrive at, but more the kinds of methods we would prefer to reach them by. We've surveyed a few other solutions to the problem of managing state and behaviour, and before we reject some more of them, it might be useful to try to characterise some more aspects of the mentality they spring from. In Computer Science, and the wider philosophical tradition it stems from, there is a high value placed on constructing consistent formalisms that it is easy to prove theorems about. This consistency and formal tractability often ends up being privileged to the detriment of ensuring that the formalisms are relevant to any particular purpose. Thomas Kuhn, in The Structure of Scientific Revolutions [2], divides scientific progress into two kinds of phases. The first of these he labels "Normal Science", a period of puzzle-solving in which known intellectual rules are used to push around problems much in the manner of solving a crossword puzzle. The second he labels "Changes in World View", which he explains are "not fully reducible to a reinterpretation of individual and stable data". Our manifest failure as a profession to arrive at consistent engineering standards which allow us to rapidly and successfully solve new problems for our users should convince us that we are not currently in a position of "normal science" - we should therefore treat with suspicion techniques which are selected for their efficacy in solving problems in that regime. Theorem proving may well come later, but we should first accept as a profession that we still simply have no idea what we are doing.

In order to understand what we are doing, and find out which abstractions are appropriate for the tasks we are interested in, we have no alternative but to try to study ourselves as we go about the work of solving real problems. This can't be done as an "armchair exercise" - we must have real work to do, and we must also have the spare capacity to observe ourselves over an extended period, and to evaluate the success or failure of different kinds of primitives and abstractions in successfully and economically explaining the kinds of work we find ourselves doing. For different reasons, this process is very difficult to conduct in either industry or academia as they are constituted today, but with a lot of patience, it can be slowly conducted at least somewhere.

...

It's time to take on one of the most important formalisms underlying type theory as it is most broadly practiced by software engineers today. The Liskov Substitution Principle [3] is summarised at Wikipedia as follows - "If S is a subtype of T, then objects of type T may be replaced with objects of type S, without altering any of the desirable properties of that program (correctness, task performed, etc.)" (italics ours). On the face of it this definition appears so absurdly broad and self-defeating that one has to imagine that it must have been misquoted. However, going to the original paper, one discovers this wording: "the objects of the subtype ought to behave the same as those of the supertype as far as anyone or any program using supertype objects can tell" (again italics ours). This original formulation is even more incoherent and self-defeating - if the objects behaved the same "as far as anyone can tell", what would it mean to say they were different at all, and what purpose would there be in even having a subtype? The paper then quickly dives into formalisms such as "An assignment x: T := E is legal provided the type of the expression E is a subtype of the declared type T of variable x", or else "Let q(x) be a property provable about objects x of type T. Then q(y) should be provable for objects y of type S where S is a subtype of T". All this occurs without a good attempt to motivate what kinds of purposes this definition is meant to be good for - the "statement of intent" is so grandiose as to make the mere existence of subtypes completely impossible.
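To make the quoted assignment rule concrete, here is a minimal TypeScript sketch; the Animal/Cat names are ours, invented purely for illustration and not taken from the paper.

```typescript
// The supertype T of the quoted rule.
class Animal {
  describe(): string {
    return "an animal";
  }
}

// The subtype S, which overrides the supertype's behaviour.
class Cat extends Animal {
  describe(): string {
    return "a cat";
  }
}

// "An assignment x: T := E is legal provided the type of the expression E
// is a subtype of the declared type T of variable x":
const x: Animal = new Cat(); // legal - Cat is a subtype of Animal

// But the observable behaviour has already changed - which is exactly the
// tension noted above: if the subtype really behaved the same "as far as
// anyone can tell", the override would serve no purpose at all.
console.log(x.describe()); // prints "a cat", not "an animal"
```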

...

It's clear that the LSP is only capable of checking the most harmless and unilluminating properties of implementations, unless the reader is sent to the source code looking for inspiration about the kinds of "properties" he is interested in proving about them. Our aim, though, as we explained above, is to promote the implementor's power of "advertisement" about their intentions, especially where these relate to the handling of updates to state. This is precisely at odds with the intent of the LSP - and indeed with those of both functional and object-oriented programming as a whole - which have as a core motivation promoting the developer's ability to hide the nature and location of updates to state, rather than to advertise them. This is, therefore, one of many reasons that we abandon the affordances of the LSP in our system, and this becomes one of the crucial answers to our headline question of "Why are Grades not Types?" - because in fact we explicitly abandon the definitional crux of what a type is to most practicing communities. Given that grade S is a parent grade of grade T, we make no guarantees whatever about the behaviour of T instances based on this relationship. And in fact, another of our core principles:
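As a hedged sketch of how little a type checker can "advertise" about updates to state, consider the following TypeScript fragment; the Counter names are invented for this example and do not refer to any real framework API.

```typescript
// A supertype whose signature says nothing about state.
interface Counter {
  next(): number;
}

// A pure, stateless implementation - substitutable as far as the compiler
// is concerned.
class ConstantCounter implements Counter {
  next(): number {
    return 1;
  }
}

// A stateful implementation with a hidden update. The signature is identical,
// so it is accepted everywhere a Counter is expected, yet nothing in the type
// advertises the mutation or where it occurs.
class MutatingCounter implements Counter {
  private count = 0;
  next(): number {
    return ++this.count; // hidden state update, invisible in the type
  }
}

function useTwice(c: Counter): [number, number] {
  return [c.next(), c.next()];
}

console.log(useTwice(new ConstantCounter())); // [1, 1]
console.log(useTwice(new MutatingCounter())); // [1, 2] - observably different
```

Both classes type-check identically, which is the sense in which the type relation hides, rather than advertises, the nature and location of state updates.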

...

It should be a significant warning sign that "substitutability" has run into such fundamental problems when dealing with what should be classically simple situations such as the Circle-Ellipse Problem, which ought to be a trivial application of the theory to a ready-made distinction taken directly from basic mathematics. As discussion of this problem quickly reveals, distinctions based on shared properties (or shared descriptive state) can very easily give completely different answers to those based on shared function - and as Lakoff points out, there is no good reason why they shouldn't. It's interesting that many of the ways out of this conundrum point to the requirement for allowing an object to change "type" over its lifetime. Our system does not permit this, but our solution is related. We avoid such problems in two ways - firstly, by not making any type-based guarantees of function, and secondly - although we hold object instances to be immutable (minus models) from the very point of their construction - because all referential power is mediated by the framework, we can very easily allow one object to substitute for another in its architectural function by having the original object destroyed and a fresh one with a different "type" instantiated in its place. No observer is disturbed by the change, since every observer is properly a pure function.
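A brief TypeScript sketch of the Circle-Ellipse difficulty may make the point concrete; the class and method names are the conventional ones from the literature, not definitions drawn from our system.

```typescript
// A mutable Ellipse, as the classical formulation of the problem has it.
class Ellipse {
  constructor(public width: number, public height: number) {}
  setWidth(w: number): void {
    this.width = w;
  }
  setHeight(h: number): void {
    this.height = h;
  }
}

// If Circle is a subtype of Ellipse, it must accept independent width and
// height updates - which silently destroys its defining invariant
// width === height.
class Circle extends Ellipse {
  constructor(radius: number) {
    super(radius, radius);
  }
}

const c: Ellipse = new Circle(5);
c.setWidth(10);                    // perfectly legal for an Ellipse...
console.log(c.width === c.height); // false - the "circle" is no longer circular

// The immutable alternative gestured at above: rather than mutating in place,
// construct a fresh value (possibly of a different "type") and let whatever
// mediates references rebind observers to it.
interface Shape {
  readonly width: number;
  readonly height: number;
}
const widen = (s: Shape, w: number): Shape => ({ width: w, height: s.height });
```

The sketch shows why substitutability under mutation fails for this textbook case, and why destroying the original instance and constructing a fresh one sidesteps the problem entirely.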

...