Archive for the 'Essay' Category

TDD is not finished until the code speaks

Saturday, April 25th, 2009

A problem, and solutions that don’t seem right

I recently asked a few people to solve a little programming problem. In this problem, a number of towns are connected by one-way roads that have different distances. The programmer must write code that answers questions such as:

  • What is the distance of the path A-B-C-D?
  • How many paths are there from A to C that are exactly 4 steps long?
  • What is the distance of the shortest path from A to D?

(Figure: the graph of towns and one-way roads)

I reviewed three different solutions; they were all valid and reasonably well written. Two had proper unit tests, while the third was checked with prints in a "main". The authors tried hard to write a "good" solution: there were no long methods; all the logic was broken down into small methods. And yet, I was not pleased with the results.


Another look at the anti-IF campaign

Friday, September 19th, 2008

Last year, the Italian XP pioneer Francesco Cirillo launched his famous “anti-IF” campaign. The idea is to bring down the complexity of programs by using object-oriented design; the most important trick in that bag is to replace case analysis with polymorphism. For instance, to replace

  if (shape.type == SQUARE)
    area = shape.getSide() * shape.getSide();
  else if (shape.type == TRIANGLE)
    area = (shape.getBase() * shape.getHeight())/2.0;
  else if (shape.type == CIRCLE)
    ...

by the much nicer

  area = shape.getArea();
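As a sketch, the polymorphic design behind shape.getArea() might look like the following; the Shape hierarchy, class names, and constructors are assumed here for illustration, not taken from the campaign itself.

```java
// Hypothetical sketch: each shape knows how to compute its own area,
// so the caller never needs a case analysis on the shape's type.
interface Shape {
    double getArea();
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double getArea() { return side * side; }
}

class Triangle implements Shape {
    private final double base, height;
    Triangle(double base, double height) { this.base = base; this.height = height; }
    public double getArea() { return (base * height) / 2.0; }
}
```

Adding a new shape now means adding a class, not editing every IF chain that inspects shape types.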

There are many other useful tricks from OO design. But I’d like to talk about another set of tricks for eliminating IFs, namely the bag of tricks of algorithm design.

Boolean values are values too.

One simple trick is to replace

  if (foo and bar or baz)
    return true;
  else
    return false;

by

  return foo and bar or baz;

Now, some people prefer the first form, because they prefer operational reasoning over reasoning about boolean values (see the comments to a previous post on this subject). I much prefer the second, for one thing because it’s shorter. Concision is power. And I try to avoid operational reasoning as much as I can. If we push operational reasoning to the extreme we get:

  if (foo) {
    if (bar) {
      return true;
    }
  }
  if (baz) {
    return true;
  } else {
    return false;
  }

This should be equivalent to the previous two examples. But it’s not immediately obvious that it is. It requires much more thinking, and thinking is something I must save for important things.

Encapsulate IFs in well-known functions

Suppose you must find the maximum value of two numbers. Please don’t write

  if (x > y)
    return x;
  return y;

It’s much better to write

  return max(x, y);

Try to extend these two examples to the maximum of three numbers and see which one turns out clearer. The fact is that while we stay in the realm of expressions, we can exploit many nice properties, such as the fact that max is associative; in the realm of IFs it’s much more difficult to reason. You must simulate execution in your head, and that takes energy away from you; energy you might spend on other, more important problems.
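To make the comparison concrete, here is a sketch of both three-number versions side by side (the method names are mine, chosen for illustration):

```java
// Sketch: maximum of three numbers, written with IFs and as an expression.
class MaxOfThree {
    // IF-based version: every pair must be compared explicitly,
    // and you must trace the branches to convince yourself it is right.
    static int maxIf(int x, int y, int z) {
        if (x > y) {
            if (x > z) return x;
            return z;
        }
        if (y > z) return y;
        return z;
    }

    // Expression-based version: associativity of max does the work.
    static int maxExpr(int x, int y, int z) {
        return Math.max(x, Math.max(y, z));
    }
}
```

The expression version also generalizes to four or more numbers without growing a new branch per comparison.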

Another example: replace

  if (x < 0) return -x;
  return x;

by

  return abs(x);

If your language does not provide an abs function, you can define it nicely as max(x, -x).
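In Java terms, such a definition might be sketched like this (the wrapper class is assumed for illustration):

```java
class MyMath {
    // abs defined purely as an expression, with no IF in sight.
    static int abs(int x) {
        return Math.max(x, -x);
    }
}
```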

Zero is a perfectly good number

This code returns the sum of the elements of an array.

// ugly and wrong
int arraySum(int[] array) {
  int sum = array[0];
  for (int i=1; i < array.length; i++) {
    sum += array[i];
  }
  return sum;
}

Can you spot the error?

*
*     *

Yes, it breaks for empty arrays. If we keep it this way we'll be forced to put IFs in the calling code, in all the places where we call this method:

  // very ugly 
  if (x.length > 0) {
    s = arraySum(x);
  } else {
    ???
  }

This is ugly, and verbose. And it's not entirely clear what should happen when the length is 0. Should we give up and throw an exception? It's much better to stop treating 0 elements as an error. An empty array is a perfectly good array, and the sum of 0 elements is 0.

  // still ugly 
  int arraySum(int[] array) {
    if (array.length == 0) {
      return 0;      
    }
    int sum = array[0];
    for (int i=1; i < array.length; i++) {
      sum += array[i];
    }
    return sum;
  }  

Now it's correct, but there's no need to treat an empty array as a special case:

  // good
  int arraySum(int[] array) {
    int sum = 0;
    for (int i=0; i < array.length; i++) {
      sum += array[i];
    }
    return sum;
  }  

Now if the array is empty, the loop will be executed 0 times, since 0 < 0 is false. The result will be correct for all N ≥ 0.

Whenever you have to apply a binary operator such as "+" to a set of values, you should ask yourself what should be the answer for 0 elements. If the binary operator has a unit element, that is an element that leaves the other operand unchanged:

  0 + n = n + 0 = n, for all n
  1 * n = n * 1 = n, for all n
  max(−∞, n) = max(n, −∞) = n, for all n

the result of applying the operator to a set of 0 operands should be the unit element. The sum of 0 numbers is 0, the product of 0 numbers is 1, the maximum of 0 numbers is −∞. Why is that? Because it's the only logical answer. Think about this: if you split an array A with 10 elements into two parts, A0 and A1, with 5 elements each, you expect arraySum(A) to be equal to arraySum(A0) + arraySum(A1). You expect exactly the same result if you split A so that A0 has 3 elements and A1 has 7. It's only natural to expect the same result if you split A so that A0 has 0 elements.
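The same rule can be sketched in Java for all three operators (the class and method names are mine): each reduction starts from the unit element of its operator, so an empty array needs no special case.

```java
class Folds {
    static int sum(int[] a) {
        int acc = 0;                       // unit of +
        for (int x : a) acc += x;
        return acc;
    }

    static int product(int[] a) {
        int acc = 1;                       // unit of *
        for (int x : a) acc *= x;
        return acc;
    }

    static int max(int[] a) {
        int acc = Integer.MIN_VALUE;       // stand-in for −∞, the unit of max
        for (int x : a) acc = Math.max(acc, x);
        return acc;
    }
}
```

Note that int has no true −∞, so Integer.MIN_VALUE stands in for it; with doubles one could use Double.NEGATIVE_INFINITY instead.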

Calculating assignments

Thursday, September 18th, 2008

Last night at the Milano XP User Group meeting we talked about how to “calculate” assignments, that is, how to make a statement preserve an invariant while at the same time making progress towards the end of the loop. For example, suppose we want to compute a table of squares, with the restriction that we cannot use multiplication. (Why not? Perhaps because on our processor multiplication is slow.) We have two variables s and n that satisfy the invariant s = n². We want to increment n and preserve the invariant. In other words, we want to find an expression E that satisfies

{ s = n² } s, n := E, n+1 { s = n² }

In effect this is an equation in which the unknown E is the expression needed to complete the program

  
  while n != N do
     s, n := E, n+1
     println "the square of " n " is " s
  end

The goal is to be able to calculate the expression E I need, without having to “think” about it. What, then, is the procedure? To prove

{ P } x := E { Q }

I must prove that P implies Q[x:=E], where Q[x:=E] means substituting E for x throughout Q. In our case:

   (s = n²)[s, n := E, n+1]
iff // apply the substitution
   E = (n+1)²
iff // expand the square
   E = n² + 2n + 1
iff // use the invariant
   E = s + 2n + 1
iff // multiplication is forbidden
   E = s + n + n + 1

Voilà: I have proved that s = n² implies (s = n²)[s, n := E, n+1] if I choose E = s+n+n+1. The desired program is therefore

  
  while n != N do
     s, n := s+n+n+1, n+1
     println "the square of " n " is " s
  end
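The derived program can be sketched in runnable Java as follows; the class name is mine, and the sketch collects the squares into an array (rather than printing them) so the invariant can be checked, with the bound N passed as a parameter.

```java
class SquaresTable {
    // Computes the squares 0..N without using multiplication,
    // maintaining the invariant s == n² via the calculated
    // assignment s := s + n + n + 1.
    static int[] squares(int N) {
        int[] result = new int[N + 1];
        int s = 0, n = 0;          // invariant holds: 0 == 0²
        result[0] = s;
        while (n != N) {
            s = s + n + n + 1;     // E = s + n + n + 1
            n = n + 1;             // invariant restored: s == n²
            result[n] = s;
        }
        return result;
    }
}
```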

To learn more: