## Archive for January, 2012

### Formalism versus Object Thinking

Sunday, January 22nd, 2012

The book Object Thinking gives a very good explanation of the divide between two different, antagonistic modes of thinking. The formalist tradition (in software) values logic and mathematics as design tools. A formalist holds that documents have objective, intrinsic meaning. Think “design documents”; think “specifications”.

The empirical tradition (the book calls it hermeneutics, but I will stick to this less formal name :-) values experimentation. Empiricists hold that the meaning of a document is a shared, temporary convention between the author and the readers. Think “user story”; think “CRC cards”; think “quick design session on the whiteboard.”

The empiricists brought us Lisp; the formalists brought us Haskell. The formalists brought us Algol, Pascal, and Ada; the empiricists brought us C, Perl, and Smalltalk.

Empiricists like to explain things with anthropomorphism: “this object knows this and wants to talk to that other object…” The formalists detest anthropomorphism; see these quotes from Dijkstra.

As a former minor student of the best formalist tradition there is, and a current student of the Object Thinking tradition, I think I’m qualified to comment. Please don’t take my notes as meaning that the formalist tradition sucks; I certainly don’t think this. I’m interested in highlighting differences. I think a good developer should learn from both schools.

Formalists aim to bring clarity of thought by leveraging mathematical thinking.

Object thinking aims to bring clarity of thought by leveraging spatial reasoning, metaphor, intuition, and other modes of thinking.

It is well known that mathematical thinking is powerful. It’s also more difficult to learn and use. One example that was a favourite of Dijkstra is the problem of covering a chessboard with dominoes when two opposite corners of the chessboard are removed. If we try to prove that it’s impossible by “trying” to do it, or by simulating it, we quickly get bogged down. On the other hand, there’s a very simple and nice proof that it’s impossible. Once you get that idea, you have power :-)
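The simple proof in question is the standard colouring argument, and the counting behind it can even be checked mechanically. Here is a little Ruby sketch (my illustration, not Dijkstra’s): the two removed opposite corners share a colour, so the mutilated board has 30 squares of one colour and 32 of the other, while every domino must cover one square of each.

``` ruby
# Count the squares of each colour on a chessboard with two opposite
# corners removed. A square (row, col) is "dark" when row + col is even.
board = (0..7).to_a.product((0..7).to_a)
board -= [[0, 0], [7, 7]] # the removed opposite corners; both are dark
dark, light = board.partition { |row, col| (row + col).even? }
puts "#{dark.size} dark vs #{light.size} light"
# Every domino covers one dark and one light square, so a full tiling
# would need equal counts; 30 vs 32 shows no tiling can exist.
```

The simulation can count, but only the colouring idea turns the count into a proof.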

An even more striking example is in this note from Dijkstra on the proof method called the “pigeonhole principle”. Dijkstra finds the name unfortunate, as is the idea of imagining “holes” and a process of filling them with “pigeons” until you find that some pigeon has no hole. The image is vivid and easy to understand, yet it is limiting. In this note Dijkstra shows how to state the principle in a simpler and more powerful way:

For a non-empty, finite bag of numbers, the maximum value is at least the average value.

This formulation is simple (but not easy!). Armed with it, Dijkstra explains how he used the principle to solve on the spot a combinatorial problem about Totocalcio that a colleague of his could not solve with pen and paper. He also explains how he used it to solve a generalization of the problem, which would not be easy to prove with the “object-oriented” version of the principle.
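Amusingly, the formalist’s formulation can be probed with the empiricist’s favourite tool, the experiment. A throwaway Ruby sketch (mine, purely for illustration):

``` ruby
# "For a non-empty, finite bag of numbers, the maximum value is at least
# the average value." Probe the claim on a batch of random bags.
def max_at_least_average?(bag)
  bag.max >= bag.sum.to_f / bag.size
end

100.times do
  bag = Array.new(rand(1..20)) { rand(-50..50) }
  raise "counterexample: #{bag.inspect}" unless max_at_least_average?(bag)
end
puts "no counterexample found"
```

Of course, a hundred random trials prove nothing; that is exactly the formalist’s point.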

I think this note presents the contrast between formalism and empiricism vividly. If you put in the effort to internalize the formal tool, that which was difficult becomes easy, and you can solve a whole new level of problems.

On the other hand, the formalists do not always win :-) Formalists reject the idea of making tests the cornerstone of software development. In my opinion they are squarely wrong here; examples are the primary tools of software development, and you can’t even understand whether a specification is correct until you *test* it with examples.

The one thing that both camps have in common is that they are both minority arts. Real OOP is almost as rare as Dijkstra-style program derivation. The common industrial practice is whateverism :-)

### Greed and Simple Design

Saturday, January 21st, 2012

Some people, like Carlo, say that the famous Four Elements of Simple Design by Kent Beck are an oversimplification. Perhaps that’s true, but I still find them a very useful compass. Consider again:

A design is simple when

1. Runs all the tests.
2. Contains no duplication.
3. Expresses all the ideas you want to express.
4. Minimizes classes and methods.

in this order.

Rule 2 is important, as it pushes us to invent abstractions that capture recurring patterns. But rule 3 is also important, as it pushes us to invent abstractions that correspond to the ideas we want to express.

The other day I saw this post by Luca about a fun kata: implementing the scoring rules for a dice game called “Greed”. This exercise is part of the Ruby Koans, but its use as a programming exercise dates at least from the OOPSLA ’89 conference, when Tom Love proposed a contest to show how a program could be written in different ways and in different languages.

A little research turns up many solutions to this problem. As the problem is presented in the context of a Ruby programming exercise, people usually try clever tricks that exploit peculiar Ruby idioms. For instance:

```
def score(dice)
  (1..6).collect do |roll|
    roll_count = dice.count(roll)
    case roll
    when 1 then 1000 * (roll_count / 3) + 100 * (roll_count % 3)
    when 5 then 500 * (roll_count / 3) + 50 * (roll_count % 3)
    else 100 * roll * (roll_count / 3)
    end
  end.reduce(0) { |sum, n| sum + n }
end
```

There’s a place for this sort of exercise, but it’s not the sort of programming that I would like my colleagues to practice! If we apply rule 3, I expect to see in the source code some mention of the *rules* of the game. I expect that there is a programming element that corresponds to the rule that “three ones are worth 1000 points”, and so on. Really, it does not take all that much more effort, and I assert that it’s more fun to code expressively!

This is my solution:

```
class Array
  def occurrences_of(match)
    self.select { |number| match == number }.size
  end

  def delete_one(match)
    for i in (0..size)
      if match == self[i]
        self.delete_at(i)
        return
      end
    end
  end
end

def single_die_rule(match, score, dice)
  dice.occurrences_of(match) * score
end

def triple_rule(match, score, dice)
  return 0 if dice.occurrences_of(match) < 3
  3.times { dice.delete_one match }
  score
end

def score(dice)
  triple_rule(1, 1000, dice) +
    triple_rule(2, 200, dice) +
    triple_rule(3, 300, dice) +
    triple_rule(4, 400, dice) +
    triple_rule(5, 500, dice) +
    triple_rule(6, 600, dice) +
    single_die_rule(1, 100, dice) +
    single_die_rule(5, 50, dice)
end
```

There's some more duplication that could be removed (the five similar rules could be expressed as a single rule) and the names could be improved, but I think this is the way to go. Make your code look like a model of the problem!
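For instance, one possible way to remove that duplication (a sketch of mine, not a claim about how the final version should look) is to drive both kinds of rule from tables of face/score pairs, so that the tables themselves read as the rules of the game:

``` ruby
# Hypothetical table-driven variant of the scoring rules.
# Uses Array#tally, available since Ruby 2.7.
TRIPLE_SCORES = { 1 => 1000, 2 => 200, 3 => 300, 4 => 400, 5 => 500, 6 => 600 }
SINGLE_SCORES = { 1 => 100, 5 => 50 }

def triple_rule(face, points, counts)
  points * (counts.fetch(face, 0) / 3)  # one score per complete triple
end

def single_die_rule(face, points, counts)
  points * (counts.fetch(face, 0) % 3)  # dice left over after the triples
end

def score(dice)
  counts = dice.tally
  TRIPLE_SCORES.sum { |face, points| triple_rule(face, points, counts) } +
    SINGLE_SCORES.sum { |face, points| single_die_rule(face, points, counts) }
end

score([1, 1, 1, 5, 5]) # 1000 for the triple of ones, 100 for the two fives
```

Whether this is clearer than several named rules is debatable; the tables trade a little expressiveness for less repetition.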

### Classes without a face

Thursday, January 5th, 2012

I have a feeling for classes that should not be there. When I saw this cartoon about “how an object-oriented programmer sees the world”, I was struck by the fact that all the names of the objects were wrong! These are the names that would be chosen by a poor OO programmer. A good programmer would choose “Door” instead of “IndoorSessionInitializer”. But then the cartoon would not be funny :-)

A similar thing happens in the code base of our current project. Sometimes I see a class that strikes me as odd. Perhaps it has a name that does not communicate; more often it is simply a class that should not exist.

### On the folly of representing a first name with a String

Wednesday, January 4th, 2012

### Object-Oriented decomposition is supposed to be different

When I read Object Thinking, I was intrigued by this quote by Grady Booch:

Let there be no doubt that object-oriented design is fundamentally different from traditional structured design approaches: it requires a different way of thinking about decomposition, and it produces software architectures that are largely outside the realm of the structured design culture.

So it seems that OOD decomposes a problem in a way that is essentially different from what you would arrive at with other design methods. I was intrigued: I wondered what these different, elusive ways of decomposing things might be.