Mathematics cannot prove the absence of bugs

December 18th, 2014

Everyone is familiar with Edsger W. Dijkstra’s famous observation that “Program testing can be used to show the presence of bugs, but never to show their absence!” It seems to me that mathematics cannot prove the absence of bugs either.

Consider this simple line of code, which appears in some form in *every* business application:

database.server.address=10.1.2.3

This is a line of software. It’s an assignment of a value to a variable. And there is no mathematical way to prove that this line is correct. This line is correct only IF the given IP address is really the address of the database server that we require.

Not even TDD and unit testing can help to prove that it’s correct. What would a unit test look like?

assertEquals("10.1.2.3", 
    config.getProperty("database.server.address"));

This test is just repeating the contents of the configuration file. It will pass even if the address 10.1.2.3 is wrong.

So what is the *only* way to prove that this line of code is correct? You guessed it. You must run the application and see if it works. This test can be manual or automated, but still we need a *test* of the live system to make sure that that line of code is correct.
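As a sketch of what such a live test might look like (the class and the `isReachable` helper are my own hypothetical names, not part of any real application), a smoke test could simply try to open a TCP connection to the configured address:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Hypothetical smoke test: the only way to validate the configured address
// is to actually try to reach the live server.
class DatabaseSmokeTest {

    // Returns true if a TCP connection to host:port succeeds within timeoutMillis.
    static boolean isReachable(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // In a real smoke test, the host would come from the configuration file:
        // config.getProperty("database.server.address")
        boolean up = isReachable("10.1.2.3", 5432, 2000);
        System.out.println(up ? "database reachable" : "database NOT reachable");
    }
}
```

Unlike the unit test above, this check can actually fail when the configuration is wrong.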

Another example of Object Theater

October 30th, 2014

Mastermind

This year I gave my Web Applications students the task of writing a Mastermind game. You can see some of their work here, here and here. The game works like this: the computer starts a new game by inventing a random secret code, composed of 4 digits from 1 to 6. For instance: 5414.

The player must deduce the secret code by trying guesses. To continue the example: if my first guess is 1234, the computer will answer “-+”, which means that I got one number right (+) and that another number is present in the secret code but in a different position (-). Of course, I don’t know which! I can then try more guesses, until I have enough clues to guess right.
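A minimal sketch of the comparison logic (my own illustrative code, not the students’); I assume the answer lists the “-” marks before the “+” marks, as in the example above:

```java
// Illustrative sketch of the Mastermind comparison logic.
class Mastermind {

    // Compares the secret code with a guess: "+" for a right digit in the
    // right position, "-" for a right digit in the wrong position.
    static String compareCodes(String secret, String guess) {
        int exact = 0;
        int[] secretLeft = new int[10]; // digit counts outside exact matches
        int[] guessLeft = new int[10];
        for (int i = 0; i < secret.length(); i++) {
            if (secret.charAt(i) == guess.charAt(i)) {
                exact++;
            } else {
                secretLeft[secret.charAt(i) - '0']++;
                guessLeft[guess.charAt(i) - '0']++;
            }
        }
        int misplaced = 0;
        for (int digit = 0; digit < 10; digit++) {
            misplaced += Math.min(secretLeft[digit], guessLeft[digit]);
        }
        return "-".repeat(misplaced) + "+".repeat(exact);
    }

    public static void main(String[] args) {
        System.out.println(compareCodes("5414", "1234")); // prints "-+"
        System.out.println(compareCodes("5414", "5414")); // prints "++++"
    }
}
```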

You get a score for each completed game, equal to the number of guesses. Of course, the lower the score, the better.

A player’s overall score is the average of the scores of all the games he or she completed.

Procedural!

Teaching Object-Oriented programming was not one of the goals of the course. Therefore, most programs I got were very procedural. (Please note that the example that follows is from a good student, and his work earned a very high score in my course. It was a good web application, even though it was not object-oriented.)

I’d like to show an example of procedural code. This is a controller object that handles a “guess”.

public void guess() throws IOException {
  String gameId = request.getParameter("game_id");
  String code = gamesRepository.findSecretCode(gameId);
  String guess = request.getParameter("guess");
  String answer = compareCodes(code, guess);
  String player = gamesRepository.find_game_player(gameId);
  guessesRepository.createGuess(gameId, player, guess, answer);
  gamesRepository.incrementGameScore(gameId);

  // if game is won
  if(answer.equals("++++")){
    gamesRepository.setFinished(request.getParameter("game_id"));
    int oldScore = gamesRepository.getPlayerScore(player);
    playersRepository.addFinishedGame(player, oldScore);
  }

  // return json data
  response.getWriter().write(toJson("answer", answer));
}

This is classic procedural code; the game logic is found in the controller (method compareCodes()) and the repositories (three repositories!). There are no domain objects.

The database structure is something like

       1      *      1      *
player -------- game -------- guesses

The gamesRepository adds a row to the games table when a new game is created. The guessesRepository adds a row to the guesses table when a new guess is made. The controller must take care to call the proper repositories at the right time. The controller “knows” the database structure; if the database structure changes, the controller code will probably also change.

Object-Oriented

What I’d like to have instead is

  • Domain objects that handle all the game logic
  • No logic in the repositories or the controller
  • Just one repository is enough, thank you. The repository should take care of adding rows to the proper tables.

The domain object should probably be the MasterMind game.

game.guess(request.getParameter("guess"));

The “guess” message is what sets the domain logic in motion. The controller should not need to know anything else.

Q. What if the game is won? Who updates the player’s score?

A. The game object should do that.

Q. Where do we see that the game updates the player’s score?

A. Not in the controller. The controller does not know or care. Handling victories is something that is done in the game object.

Q. Really! How do we update the player’s score?

A. The game probably knows its player, and tells it to update its score if the game is won.

Q. How does the game get a reference to its player?

A. The controller does not know. But see next question.

Q. Where do we get the game object from?

A. From a repository, of course. We suppose that the repository will return it with all the dependencies that it needs to have. If the game needs a reference to its player, the repository must take care to set it up.

String gameId = request.getParameter("game_id");
Game game = gamesRepository.findGame(gameId);

game.guess(request.getParameter("guess"));
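To make the answers above concrete, here is a hypothetical sketch of what the Game and Player domain objects could look like (all names are my own invention, and the full comparison logic is elided; only the winning case matters here):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the domain objects; names are illustrative.
class Player {
    private final List<Integer> finishedGameScores = new ArrayList<>();

    void recordFinishedGame(int guesses) {
        finishedGameScores.add(guesses);
    }

    // The player's overall score is the average over all completed games.
    double score() {
        return finishedGameScores.stream().mapToInt(Integer::intValue).average().orElse(0);
    }
}

class Game {
    private final Player player;
    private final String secretCode;
    private int guessCount = 0;
    private boolean finished = false;

    Game(Player player, String secretCode) {
        this.player = player;
        this.secretCode = secretCode;
    }

    // All the game logic lives here; the controller only sends "guess".
    String guess(String attempt) {
        guessCount++;
        // Full comparison logic elided: only an exact match is detected.
        String answer = secretCode.equals(attempt) ? "++++" : "?";
        if (answer.equals("++++")) {
            finished = true;
            player.recordFinishedGame(guessCount); // the game tells its player
        }
        return answer;
    }

    boolean isFinished() { return finished; }
}

class GameDemo {
    public static void main(String[] args) {
        Player player = new Player();
        Game game = new Game(player, "5414");
        game.guess("1234");
        game.guess("5414");
        System.out.println(game.isFinished() + " " + player.score()); // prints "true 2.0"
    }
}
```

Note how the victory handling never leaks out of the Game object.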

Q. How is the state of the game persisted?

A. By asking the repository to save the game.

// Here we are in the infrastructure world
String gameId = request.getParameter("game_id");
Game game = gamesRepository.findGame(gameId);

// Now we pass to the realm of pure domain logic
game.guess(request.getParameter("guess"));

// And now we return to the infrastructure world
gamesRepository.save(game);
response.getWriter().write(toJson(game));

So we have seen another example of object theater. The infrastructure details are dealt with before and after the main action. The main action is sending “guess” to the game object. That is where the functional requirement is dealt with. Before that, and after that, is infrastructure code that deals with non-functional requirements.

Object-Oriented Theater

October 30th, 2014

Update 30/10/14: I first read about the “Object Theatre” in a message by Anthony Green on the GOOS mailing list; I didn’t remember it consciously when I wrote this post, but it certainly has been working in my brain ever since. The “Object Theatre” metaphor is his invention, not mine. Thanks Kevin and Anthony for pointing it out.

Yesterday evening I attended a good introduction to functional programming by Mario Fusco. One of his slides illustrated a well-known principle of functional programming. There are pure functions, and “functions” with side effects. Good FP style suggests keeping non-pure functions at the edges of the program, so that the inner core of your program contains only pure, mathematical functions.

He showed this picture

http://www.slideshare.net/mariofusco/if-you-think-you-can-stay-away-from-functional-programming-you-are-wrong/41

In fact, a very similar picture can be drawn for good object-oriented style. In good OO style, you want to separate your domain objects from infrastructure objects. The domain objects contain just domain logic; they execute in memory and have no references to the file system, databases or network services. They can be mutable, of course! But they are “pure logic” in the sense that they live in a platonic world where we are only concerned with the functional behaviour of our programs.

Infrastructure objects, on the other hand, deal with everything else: user interface, databases, file systems, web services… All the things that are needed to connect our platonic world of objects to the outside world.

So what’s good OO style in this context? In my opinion, it’s good to keep the infrastructure objects in an outside shell, while the inner core of the program contains only pure domain objects. Let me give you an example.

Suppose you have a Counter object that needs (for non-functional reasons!) to be made persistent. The functional logic is about incrementing and decrementing the value of the counter. The infrastructure logic is about making sure that the counter retains its value across reboots (which is definitely a non-functional requirement.)

The wrong way to do this is

// bad style! don't do this
class Counter {
  public Counter(int id, CounterDao dao) {
    this.id = id;
    this.dao = dao;
  }

  public void increment() {
    value++;
    dao.incrementCounter(id);
  }

  private int value = 0;
  private int id;
  private CounterDao dao;
}

The usage of this counter would be

CounterDao dao = ...;
Counter counter = new Counter(123, dao);

// here we perform logic and also persist the state
counter.increment();

The above example is bad style, because it mixes persistence logic with functional logic. Bah! A better way to do it is:

class Counter {
  public void increment() {
    value++;
  }
  private int value = 0;
}

See? Only pure logic. We don’t care about persistence there. We could use the counter this way:

// we start this use case in the world of infrastructure
CounterDao dao = ...;
Counter counter = dao.findCounter(id);

// here we enter the world of pure logic
counter.increment();

// here we return to the world of infrastructure
dao.save(counter);
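To make the example self-contained, here is a minimal in-memory sketch of what the dao might look like (my own illustrative code: a real dao would talk to a database, and I assume a `value()` accessor and an explicit id so the state can be persisted):

```java
import java.util.HashMap;
import java.util.Map;

class Counter {
    private int value;
    Counter(int value) { this.value = value; }
    void increment() { value++; }
    int value() { return value; } // needed so the dao can persist the state
}

// In-memory stand-in for the persistence layer; a real dao would use a database.
class CounterDao {
    private final Map<Integer, Integer> storage = new HashMap<>();

    Counter findCounter(int id) {
        return new Counter(storage.getOrDefault(id, 0));
    }

    void save(int id, Counter counter) {
        storage.put(id, counter.value());
    }
}

class CounterDemo {
    public static void main(String[] args) {
        CounterDao dao = new CounterDao();
        Counter counter = dao.findCounter(123); // infrastructure
        counter.increment();                    // pure logic
        dao.save(123, counter);                 // infrastructure
        System.out.println(dao.findCounter(123).value()); // prints 1
    }
}
```

The Counter class itself stays completely ignorant of the dao.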

I like to call this structure “object theatre”. Imagine your domain objects as actors in a play. You want to set up a scene where your actors are arranged in a certain way: Arlecchino talks to Colombina, Colombina has a fan in her hand, etc. When the scene starts, the actors perform, each according to their character. When the scene ends, we lower the curtain.

I imagine that an object-oriented system works the same way. When a request arrives, we set up the scene by retrieving all the proper objects from various repositories and we connect them appropriately. This is setting the scene. Then we send a message to one object that sets some logic in motion. The objects send messages to each other, carrying out a computation. This is playing out the scene. When the domain objects are done, we conclude by returning the objects to their respective repositories. This is lowering the curtain.

A summary of my XP2014

June 2nd, 2014

TL;DR: The conference was pleasant, energetic and informative. The organizers did an excellent job of coordinating a very large program. The city, of course, is awesome. I met old and new friends, and I learned a lot.

I had the chance to present my two sessions to some great people. Just 5-7 people participating in each session, but they were the right people.

I attended an introduction to Continuous Deployment by Luca Minudel. Luca presented in a very clear and concise way what CD is and what its main components are. From this session, and another one by Seb Rose, I see that the idea of “CD pipelines” has now become common.

I attended the keynote by Pekka Abrahamsson. Pekka joked that, even while working in his academic ivory tower, his group managed to create innovative products with undergraduate students. I was impressed by an experimental system for getting rid of a dangerous parasite in beehives.

I attended the keynote by Robert Martin. I actually had a chance to shake hands with the man. Martin’s thesis is that we need to do something to improve the standards of quality in our profession, before some ugly software-related incident forces the regulators to do it for us. For Martin, the best chance we have of doing that is to apply the practices of Extreme Programming. XP was invented by Kent Beck “to heal the divide between business people and programmers.” In recent times, Martin sees that the Agile Movement is increasingly concerned with project management, and less and less interested in the programming practices. This drove many programmers away from Agile; they went on to form the Software Craftsmanship movement. And this is a massive #fail: the divide is back. XP still has the potential to heal this divide. By focusing on the practices, not just the principles or the values, XP helps programmers achieve better results more cheaply. But you must stress the practices. As Uncle Bob puts it, “do we do the practices? Or do we ‘let the team decide’?” The team, says Martin, has an agenda. If the team refuses to do a practice, he says, it’s because the team has something to hide.

I attended the keynote by Joshua Kerievsky. Joshua presented his view of “safety at work”, which is both metaphoric (as in “I caused a small injury to my customer by exposing him or her to a bug”) and literal. I can see how looking at things from the point of view of “safety” leads, in part, to the same kind of process improvements that you would make with the goal of improving quality: find the root cause of defects and remove it, and so on. But safety is a more general concern; it’s not just about protecting money, as you can see on Industrial Logic’s website. I think I like Joshua’s idea. It resonates with the teachings of Tom Gilb, who is always stressing good engineering. Excellent engineering is always keen on safety at work. Joshua quoted the engineer who built the Golden Gate Bridge with far fewer fatalities than the statistics of the time predicted. I add that the same is true of Filippo Brunelleschi, the engineer who built the dome of the Florence Cathedral.

On a side note, I was finally convinced to try Industrial Logic’s elearning resources. I can’t wait to have time to get into them.

Another note: someone asked Joshua what he made of the current debate on “TDD is Dead”. Joshua joked that his company celebrated the funeral of TDD by issuing a 50% discount on their TDD elearning modules. It was so successful that some of his customers asked if BDD was also going to die soon :-) He said that by and large DHH is grossly misinformed on TDD, even though he has a few good points.

I attended a session by Steve Holyer and Nancy Van Schooenderwoert, on how a coach can maximize his or her chances of success before accepting a coaching engagement. The main thing I got from the session is a better understanding of Jerry Weinberg’s principle of Organizational Addiction. I had read Quality Software Management Vol. 3, but I never got round to thinking about how to use the addiction model as a tool for making sense of what happens at work, and what to do about it.

I attended a session by Rebecca Wirfs-Brock on how to deal with complex requirements. This session was going a bit too fast for me :) I can’t say I was able to follow the exercise, but it gave me something to chew on later.

I attended an interesting presentation by Seb Rose on a tool for assessing the quality of a suite of tests by introducing bugs and seeing if the tests detect them.

I participated in a very interesting game by Michele Finelli on Alerting, Logging and Monitoring. I was later given a private lesson by Michele on this very subject. I think Michele should write something on the subject; most developers, including me before Michele’s lesson, have very foggy notions of it.

I noticed that some people were wearing tee shirts advertising XP2015 in Finland! And at dinner on Tuesday I learned that they had already decided the location for XP2016! My fellow organizers of the Italian Agile Day and I should learn something from this :)

An introduction to contemporary web application techniques

February 8th, 2014

I just attended CodeJam: Cutting edge web application development, a two-day course taught by Gabriele Lana and Sandro Paganotti and organized by Avanscoperta. (Disclaimer: I also offer courses through Avanscoperta and I had previously worked together with Gabriele.)

What was it like? In short, it’s like watching the reasoning, techniques and tools of two master software craftsmen at work. This was not an easy tutorial; the process was not simplified to make it easier to follow. Gabriele and Sandro applied all of their usual tools and process. This is the main value of the course; you can find as many Node or Angular tutorials, books and videos as you want. What you will not easily find is how accomplished Node or Angular professionals actually go about their business.

Gabriele in particular is fond of the “play by play” video series by Peepcode, where you can watch over the shoulder of an accomplished professional as they solve a problem. There is a lot of practical, untold information in that; information that we as a community don’t yet have a way of sharing. It’s about technique; how you use the shell, the editor, the keyboard. How you use screen real estate. What tradeoffs you make when you have to choose finessing versus delivering.

My particular interest in this course was to get up to speed on contemporary techniques for building web applications. The old school of building applications by serving a new page of procedurally-generated HTML at every request is, well, old school. The new school is a single HTML page that contains a full application written in Javascript, while the server provides only REST persistence services.

The main gist of the course is that Sandro and Gabriele build an order-taking application for a restaurant. There is a mobile site that will be used by the waiter taking orders on a mobile phone, and a kitchen site that will be used by the cooks to see what dishes have been requested, and signal that the dishes are ready. The application is built with Angular in the frontend, and Node in the backend.

Participants follow the development on a Git repository, where a sequence of stages is saved in different branches. You could follow what the trainer was doing on your own machine and make experiments; or you could just watch what the trainer was doing, and listen to what he was saying. It didn’t matter if you didn’t complete the section (I rarely could) because the next stage would start from a fresh checkout of the next branch from Git.

The pace of the course was intense. The two trainers packed in a *lot* of useful information. They showed different ways of doing things; for instance, the kitchen web site was built on just two files (one HTML and one JS) and received events from the backend via websockets. The mobile site instead was built on a Rails-like directory layout, with separate folders for models, controllers, etc.; and it received events from the backend using Server-Sent Events. Gabriele showed many different Node and Mongo techniques and libraries; Sandro showed many different Angular techniques.

Of particular interest was project automation with Grunt, which integrated many different tools together. For instance, whenever you saved a file, it would automatically run JSHint (to check code style) and Karma (to run the automated tests) and then reload the application in the browser. The initial directory layout was built with Yeoman. Javascript and CSS libraries were managed with the Bower package manager.

Gabriele explained his screen-management philosophy and how he uses a tiling window manager; how he completely remapped his modifier keys to type faster and with less strain on his body; how he uses Vim; and how he uses the Unix shell.

I was impressed with Node. It’s an engineering marvel.

Last but not least, I should mention the course organizer Alberto Brandolini, who provided us with delicious food throughout the day, at lunch and in the evening. Thanks Alberto!

I invite you to check out the repository on Github. I’m very happy I attended this course!

My TDD course

December 27th, 2013

TL;DR: what my public TDD course is about.

Learning TDD

In January I will be teaching my first public TDD course. Thanks go to Alberto Brandolini, who organized it.

The promise of TDD

Writing programs that work correctly from the very first time they go into production. Hearing the customer say that the few defects found occur so rarely that they are not even worth fixing.

The customer arrives with a new request. They have changed their mind again. The new requirement involves a series of new special cases. Not a problem: we agree on a few concrete examples with the customer and we implement it. Our code is malleable: the customer’s new requirement often only requires creating new implementations of existing interfaces; in those cases the design easily supports the introduction of the new requirement. In other cases we have to change the structure of the code so that the new code fits in well. We do that in a controlled way, in small steps. Our code is simple because it was developed starting from the tests: most of the time, the code we produced turned out much simpler than what we expected to have to write.

Remember? We used to write code for, like, a week. Then we would spend one or two weeks making it work, debugging with printouts on the console. Now we hardly ever debug any more. Code is thought out, tested, written and integrated in a matter of minutes, not days.

The difficulties

It was not easy. It took me years to begin to understand how TDD works. They were productive years: every new step of understanding allowed me to write better code. And every new step became a plateau from which it was not easy to make out the next step.

I had to change my mind about many things. I learned to keep frameworks, the operating system and libraries in their place, to be sure I stayed in control of the situation. I don’t want to be at the mercy of what a framework can or cannot do, of its bugs and its idiosyncrasies. My application code is now separate from the code that talks to the framework.

I learned that design matters. Carlo Bottiglieri said, “If you don’t know how to do design, TDD won’t teach it to you.” That is why, after my first successes with TDD, I was unable to improve. I looked for people who could teach it to me; I bought second-hand books, because the subject of “design” is so out of fashion that those books are no longer reprinted. Some books I had read before, like the famous patterns book, had not told me much at the time. Now I reread them and understand better what they mean.

I understood that design is a way of thinking; Francesco Cirillo used to repeat, “It’s simple, but it’s not easy.” Now I understand that the idea I had of “simple” was all wrong. I also understand why many programmers confuse “simple” with “easy”, because I went through that confusion myself. Cohesion and coupling now make sense; I see them in the code.

And it’s not over. I keep learning new things, from books, blogs, colleagues and situations at work. It is a beautiful thing when we tackle a problem in pair programming, doing TDD as it should be done, writing only the minimum needed to make the test pass and removing duplication at every step, and we end up with code that is much simpler than expected. It turns out that TDD is a design technique… just like the books said!

The TDD course

From 2007 to 2012 I worked at Sourcesense/XPeppers as the coach of the Orione team. We tackled many problems with TDD. We taught TDD (with all the limits of our understanding of the subject) at our clients. We ran a one-day public TDD course, which was reasonably successful.

Since April 2012 I have been an independent consultant again. I work as a freelance coach for organizations that want to improve the way they work with software.

The two-day TDD course has been tried out several times at our clients, together with my fellow coach Antonio Carpentieri. In May 2013 I was invited by the friends at DotNetToscana to run a two-day TDD course. After that experience I felt the course was ready to be presented to the public.

The perspective

What is the perspective through which I present TDD?

  • TDD is fun. It is simple. I always look for the sense of simplicity and fun in programming. That is why I am very suspicious when I hear about a “framework for testing X”. It smells of complication. To start doing TDD you don’t even need xUnit.
  • TDD is a design technique. For me this is not a slogan; it is something I really practice. That is why when I talk about TDD I talk also, and above all, about how to do design.
  • We always start from a user story. We develop in order to fulfil a customer request; that is why the first test is often an acceptance test.
  • Acceptance test does not mean end-to-end test. They are two orthogonal concepts; an AT may also be an end-to-end test, but often it is not.
  • Separate the easy-to-test part from the hard-to-test part. Concentrate all the interesting logic in the easy-to-test part.

My teachers? I learned from Francesco Cirillo and Carlo Bottiglieri. I consider Kent Beck’s Test-Driven Development: By Example and Steve Freeman and Nat Pryce’s Growing Object-Oriented Software, Guided by Tests fundamental. I agree with much of what Arlo Belshee and J.B. Rainsberger write. I find Robert Martin’s videos useful, and Piergiuliano Bossi’s too.

In conclusion

For me TDD is a tool for improving. Without TDD, every programmer reaches their natural level of competence and stops there. With TDD, and the design you have to learn in order to do TDD well, you have a path of improvement that never ends. My course may be useful to you if:

  • you have tried TDD a few times, but it has not stuck in your daily practice.
  • You have done some projects with TDD, but the code that came out was a mess.
  • You would like to start doing TDD but don’t know where to begin.
  • You do TDD routinely and have a few doubts to clear up in order to reach the next level.
  • You keep hearing about “design” but have never found a book that explains it well.

If you recognize yourself in one of these cases, I can probably help you, because these are stages I went through myself.

Notes on exception handling

November 20th, 2013

If a function be advertised to return an error code in the event of difficulties, thou shalt check for that code, yea, even though the checks triple the size of thy code and produce aches in thy typing fingers, for if thou thinkest “it cannot happen to me”, the gods shall surely punish thee for thy arrogance.

Henry Spencer’s 10 Commandments for C Programmers

In the olden days, before Exceptions were invented, we wanted to write code like this:

// Called when the user presses the "triple" button
void onTripleButtonPressed() {
  String valueEntered = readValueFromField();
  int valueToDisplay = triple(stringToInteger(valueEntered));
  displayResult(integerToString(valueToDisplay));
}

int triple(int value) {
  return value * 3;
}

Unfortunately, we were forced to write code like this instead:

int onTripleButtonPressed() {
  StringHolder valueEntered = new StringHolder();
  int resultCode = readValueFromField(valueEntered);
  if (isError(resultCode))
    return resultCode;

  IntHolder valueAsInteger = new IntHolder();
  resultCode = stringToInteger(valueAsInteger, valueEntered.value);
  if (isError(resultCode))
    return resultCode;

  resultCode = triple(valueAsInteger);
  if (isError(resultCode))
    return resultCode;

  StringHolder valueToDisplay = new StringHolder();
  resultCode = integerToString(valueToDisplay, valueAsInteger.value);
  if (isError(resultCode))
    return resultCode;

  resultCode = displayResult(valueToDisplay.value);
  if (isError(resultCode))
    return resultCode;

  return OK; // :-)
}

private int triple(IntHolder valueAsInteger) {
  int result = valueAsInteger.value * 3;
  if (isOverflow(result)) 
    return ERROR_OVERFLOW;
  valueAsInteger.value = result;
  return OK;
}

Yes, we were FORCED to check the result of each and every operation. There was no other way to write reliable software. As you can see:

  • Code size more than triples
  • Logic becomes obscure
  • Functions are forced to return two values; the intended result AND an error code.
  • You always, always, always had to return an error code from all functions.

There was no alternative, until exceptions came along. Then we were finally able to write simple code in a reliable way:

// Called when the user presses the "triple" button
// Exceptions are handled here
void onTripleButtonPressed() {
  try {
    tryOnTripleButtonPressed();
  } catch (Exception e) {
    alertUser(e.getMessage());
  }
}

// Business logic is handled here
private void tryOnTripleButtonPressed() {
  String valueEntered = readValueFromField();
  int valueToDisplay = triple(stringToInteger(valueEntered));
  displayResult(integerToString(valueToDisplay));
}

In this example, all exception handling is centralized at the point where the GUI gives control to our code. The code that performs what the user really wanted is in another function, which contains exactly the same clean code as the first example.

Thus, the invention of exception handling allows us to cleanly separate the code that performs the happy path from the code that handles the many possible exceptional conditions: integer overflow, I/O exceptions, GUI widgets being improperly configured, etc.

What you saw in the previous example is the

Fundamental Pattern of Exception Handling: centralize exception handling at the point where the GUI gives control to our code. No other exception handling should appear anywhere else.

It turns out that the Fundamental Pattern of Exception Handling is the only pattern that we need to know. There are other cases where we are tempted to write a try-catch, but it turns out that we nearly always have better ways to do it.

Antipattern: nostalgia (for the bad old days)

There is an ineffective style of coding that we sometimes see in legacy code:

void onTripleButtonPressed() {
  String valueEntered = null;
  try {
    valueEntered = readValueFromField();
  } catch (Exception e) {
    logger.log(ERROR, "can't read from field", e);
    return;
  }

  int valueEnteredAsInteger = 0;
  try {
    valueEnteredAsInteger = Integer.parseInt(valueEntered);
  } catch (Exception e) {
    logger.log(ERROR, "parse exception", e);
    return;
  }

  int valueToDisplay = 0;
  try {
    valueToDisplay = triple(valueEnteredAsInteger);
  } catch (Exception e) {
    logger.log(ERROR, "overflow", e);
    return;
  }

  try {
    displayResult(integerToString(valueToDisplay));
  } catch (Exception e) {
    logger.log(ERROR, "something went wrong", e);
    return;
  }
}

As you can see, this is just as bad as the olden days code! But while in the old times we had no alternative, now we do. Just use exceptions the way they were intended.

Stay up no matter what

Sometimes there is a feeling that we should write code that “stays up no matter what happens”. For instance:

private void tryOnTripleButtonPressed() {
  String valueEntered = readValueFromField();
  int valueToDisplay = triple(stringToInteger(valueEntered));

  try {
    sendEmail(userEmail, "You tripled " + valueEntered);
  } catch (Exception e) {
    logger.log(WARNING, "could not send email", e);
    // continue
  }

  displayResult(integerToString(valueToDisplay));
}

Here we have added an email confirmation: whenever the user presses the button, they will also receive an email. Now we should ask ourselves which of the two it is:

  1. Is the sending of the email an integral and fundamental part of what should happen when the user presses the button?
  2. Or is it an accessory part that should never stop the completion of the rest of the action?

This is a business decision. If the business decides on 1., then we should remove the try-catch. By applying the Fundamental Pattern, proper logging of the exception will be done elsewhere. We end up with:

private void tryOnTripleButtonPressed() {
  String valueEntered = readValueFromField();
  int valueToDisplay = triple(stringToInteger(valueEntered));
  sendEmail(userEmail, "You tripled " + valueEntered);
  displayResult(integerToString(valueToDisplay));
}

Clean code again. Yay!

If the business decides on 2., then we cannot allow sendEmail to throw exceptions that might stop the processing of the user action. What do we do now? Do we have to keep the try-catch?

The answer is yes, but out of the way. There are two ways to do this: the easy way and the simple way. If you do it the easy way, you will move the try-catch inside the sendMail function. You will get code like this:

private void tryOnTripleButtonPressed() {
  String valueEntered = readValueFromField();
  int valueToDisplay = triple(stringToInteger(valueEntered));
  sendMail(userEmail, "You tripled " + valueEntered);
  displayResult(integerToString(valueToDisplay));
}

private void sendMail(EmailAddress email, String message) {
  try {
    trySendEmail(email, message);
  } catch (Exception e) {
    logger.log(WARNING, "could not send email", e);
    // continue
  }
}

You have moved exception handling for sending mail to a dedicated function (good). However, you still have complicated code tightly coupled to the user action. The sending of email, which is an accessory operation, makes understanding the fundamental operation more difficult (bad!).

What is the simple way, then? In the simple way we eliminate the tight coupling between tripling the number and sending the email. There are several patterns that we might use; a typical choice would be “publish-subscribe”. We set up a subscriber that waits for the “User Pressed the Triple Button” event. In the main onTripleButtonPressed function we don’t know nor care what the subscriber does. It might send email, write logs, compute statistics, or maybe distribute the event to a list of other subscribers. We don’t know nor care! The code looks like this:

private void tryOnTripleButtonPressed() {
  String valueEntered = readValueFromField();
  int valueToDisplay = triple(stringToInteger(valueEntered));
  subscriber.notifyTripled(valueEntered);
  displayResult(integerToString(valueToDisplay));
}

In the simple way, the Fundamental Pattern has been respected: for the subscriber object, the start of the processing is the notifyTripled method. We have clean and loosely coupled code.
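A hedged sketch of how the subscriber side might look (interface and class names are mine; the post only shows the notifyTripled call): each subscriber handles, and swallows, its own accessory failures, so the main action is never stopped.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the publish-subscribe idea.
public class PubSubSketch {
  interface TripleSubscriber {
    void notifyTripled(String valueEntered);
  }

  // A subscriber that "sends email"; a list stands in for the mail
  // server so the sketch is self-contained.
  static class EmailSubscriber implements TripleSubscriber {
    final List<String> sent = new ArrayList<>();

    public void notifyTripled(String valueEntered) {
      try {
        sent.add("You tripled " + valueEntered); // stands in for sendEmail
      } catch (Exception e) {
        // log and continue: an accessory failure must not stop the action
      }
    }
  }

  public static void main(String[] args) {
    EmailSubscriber subscriber = new EmailSubscriber();
    subscriber.notifyTripled("14");
    System.out.println(subscriber.sent); // prints [You tripled 14]
  }
}
```

The handler that calls subscriber.notifyTripled(...) stays clean; whether email is sent, and what happens if it fails, is entirely the subscriber's business.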

Avoid resource leaking

Another time when we are tempted to use a try-catch in violation of the Fundamental Pattern is when we open some resource that must be closed.

void doSomethingImportant() {
  Reader reader = null;
  try {
    reader = new FileReader("foo bar");
    doSomethingWith(reader);
    reader.close();
  } catch (Exception e) {
    if (null != reader) {
      try {
        reader.close();
      } catch (IOException ignored) {
      }
    }
    // report the exception to the callers
    throw new RuntimeException(e);
  }
}

It is correct that we close the reader in the event that something within doSomethingWith throws an exception. But we don’t want the catch; a finally clause is better:

void doSomethingImportant() throws IOException {
  Reader reader = null;
  try {
    reader = new FileReader("foo bar");
    doSomethingWith(reader);
  } finally {
    if (null != reader)
      reader.close();
  } 
}

This way, we don’t even need to worry about rethrowing the exception to our callers. The code is both correct and clear.
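Since Java 7, try-with-resources gives the same guarantee with even less ceremony: any AutoCloseable opened in the try header is closed automatically, whether or not an exception is thrown. A minimal sketch, using a StringReader to stand in for the FileReader of the example:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

// try-with-resources: the reader is closed automatically when the
// try block exits, normally or via an exception. No finally needed.
public class WithResources {
  static String firstChar(String text) throws IOException {
    try (Reader reader = new StringReader(text)) {
      return String.valueOf((char) reader.read());
    } // reader.close() is called here automatically
  }

  public static void main(String[] args) throws IOException {
    System.out.println(firstChar("hello")); // prints h
  }
}
```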

Irritating APIs

Some APIs force us to use more try-catches than we’d like. An example in Java is how to read data from a database using JDBC:

PreparedStatement statement = connection.prepareStatement("...");
try {
    ResultSet results = statement.executeQuery();
    try {
        while (results.next()) {
            // ...
        }
    } finally {
        results.close();
    }
} finally {
    statement.close();
} 

Here the problem lies in how the JDBC API is defined. We can’t change it, of course, but we should encapsulate all uses of the JDBC API in a single class, so that we don’t have to look at this code anymore. Treat that class as your adapter to JDBC. For instance:

Results results = new Database(connection).read("select * from foo");
// use results

Of course our object is a lot less flexible than the JDBC API. This is expected: we want to decide how we use JDBC and encapsulate that “how” in a single place.
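A hedged sketch of the shape of such an adapter (only the Database name and the read method come from the post; a simple QueryRunner interface stands in for java.sql.Connection so the sketch is self-contained): all the nested try/finally ceremony would live inside read(), and callers never see it.

```java
import java.util.Arrays;
import java.util.List;

public class DatabaseAdapterSketch {
  // Stand-in for java.sql.Connection; in real code, read() would
  // contain the nested try/finally blocks over PreparedStatement
  // and ResultSet.
  interface QueryRunner {
    List<String> run(String sql);
  }

  static class Database {
    private final QueryRunner connection;

    Database(QueryRunner connection) {
      this.connection = connection;
    }

    // The single place that knows *how* queries are executed.
    List<String> read(String sql) {
      return connection.run(sql);
    }
  }

  public static void main(String[] args) {
    Database db = new Database(sql -> Arrays.asList("row1", "row2"));
    System.out.println(db.read("select * from foo")); // prints [row1, row2]
  }
}
```

The adapter deliberately exposes much less than JDBC does; that narrowness is the point.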

How to host a JavaScript coding dojo

June 27th, 2013

Yesterday I organized a JavaScript coding dojo for my current customer’s programmers. Here’s what we did. Disclaimer: I’m not a JavaScript ninja, nor were most of the dojo participants. We needed to keep it simple, and we needed to make it fit within two hours.

I wanted the exercise to have a user interface. Why? Because we had done many domain-only dojos, and I think we were missing something. The UI is important: it makes the exercise more fun, and it teaches you a lot. So, where to start? My starting point is the principle of domain-view separation. We want an extremely dumb UI that calls into the domain logic whenever it needs something.

My other point is that I don’t want to test the UI. We don’t have time to set up something like Selenium, and since we don’t know much JavaScript we would waste a lot of time. So we write a small UI, without tests, checking how it looks in the browser at every step. This is important: we don’t have the tests, but we have the browser to give us high-quality feedback. Also, remember to keep the JavaScript console open while you do this.

The starting setup

We start from an empty exercise setup, that contains four files (not counting library files). For exercise “foo”, we have:

  • foo.html is the UI
  • foo.js contains the logic
  • foo_test.html is the test runner
  • foo_test.js contains the tests

The starting foo.html looks like this:

<html>
  <head>
    <title>FOO</title>    
  </head>
  <body>
    <h1>Foo!</h1>
    <script src="lib/jquery.min.js"></script>
    <script src="foo.js"></script>
    <script>
      $(document).ready(function() {
        console.log("hello");
      });
    </script>
  </body>
</html>

The starting foo.js is empty. The test runner is

<html>
  <head>
    <title>Foo Test</title>
    <link rel="stylesheet" href="lib/qunit.css">
  </head>
  <body>
    <div id="qunit"></div>
    <div id="qunit-fixture"></div>
    <script src="lib/jquery.min.js"></script>
    <script src="lib/qunit.js"></script>
    <script src="foo.js"></script>
    <script src="foo_test.js"></script>
  </body>
</html>  

and foo_test.js is simply

module("Foo");

test("it works", function() {
  equal(false, true);
});  

As you can see, I set up jQuery and QUnit. These are our tools.

The demo

The dojo agenda is:

  1. Introduction and demo. I solve a simpler exercise in front of the audience, with a video beamer.
  2. Explain the problem to be solved.
  3. Someone in the audience builds the UI and the first broken test.
  4. Someone else passes the test, and demonstrates that it works in the browser too! Watch out for when the tests are green and the UI is broken :) Then this person writes the next broken test.
  5. And repeat the last step until finished.
  6. Retrospective

My demo exercise was “write an application that allows me to enter a number, and prints that number doubled. Let’s call it Doubler”. The UI I built was

<html>
  <head>
    <title>doubler</title>    
  </head>
  <body>
    <h1>doubler!</h1>

    <p id="display"></p>
    <p><input type="text" name="number" value="" id="number"/></p>

    <script src="lib/jquery.min.js"></script>
    <script src="doubler.js"></script>
    <script>
      $(document).ready(function() {
        var $number = $("#number");
        var $display = $("#display");
        $number.change(function() {
          $display.html("The input is changed and its value is " + 
            $number.val());
        });
      });
    </script>
  </body>
</html>  

When I have this working I can start the TDD process. I know that all I need to do is to replace the hard-coded expression “The input … ” with a call to a domain object, like

  $display.html(new Doubler().process($number.val()));

Now I have the power of TDD at my disposal. I can make Doubler do what I want, and I know it will work in the UI. And Doubler is completely unaware that there is a UI at all; all it has to do is implement the pure domain logic. At this point, I also show how to implement an object (in the sense of object-oriented programming) in JavaScript. There are a few hundred different ways to do that; the style I used is like this:

function Counter() {
  var itsValue = 0;
  this.increment = function() {
    itsValue++;
  }
  this.value = function() {
    return itsValue;
  }
}  

The “itsValue” local variable is accessible to the two methods (since they are closures), but is completely encapsulated and inaccessible from the outside. A test for this object could be

  test("It increments its value", function() {
    var counter = new Counter();
    counter.increment();
    counter.increment();
    equal(counter.value(), 2);
  });

After this, I proposed that the team solve stage 1 of the Tic-Tac-Toe Application Kata by Ralph Westphal. It’s simple enough to be finished in an hour, yet not trivial. One point to watch out for: we want the Game object to contain the state of the game. We don’t want to rely on the UI to tell the domain object what the state of a given cell is.

If you want a harder challenge, you could implement the Game of Life :)

Discussion

It took me a few trials to arrive at this dojo format. I experimented with an explicit model-view-controller architecture, where you also TDD the “glue” code that I write in the UI file. This can be done by using HTML fixtures in the test code. But it didn’t feel right; it was unpleasant to work with.

One thing that I surely want to avoid is the approach shown in chapter 15, “TDD and DOM Manipulation”, of the (otherwise useful) Test-Driven JavaScript Development book. The approach in that chapter is to decide on a design upfront, such as MVC, and then unit-test that the controller connects itself in a certain way to the view and the model. Why don’t I like it? Because the design does not emerge from a test. Because it’s testing the implementation, not the specification. Because I want my first test to be about the valuable logic of the problem, not about a technical detail. Because I want every test to be about a valuable business behaviour.

The format I came up with feels right to me. It’s appropriate for beginners. I’m not sure that you would always want to work this way when you do production work, but it certainly is a good starting point.

Have fun!

A fundamental design move

May 30th, 2013

Pardon me if the content of this post looks obvious to you. It took me years to understand this!

When doing incremental design, I found that there is one fundamental move that makes design emerge. It is

  1. I recognize that one thing has more than one responsibility;
  2. then I separate the responsibilities by splitting the thing into two or more smaller things;
  3. and I connect the smaller things with the proper degree of coupling.

For instance, in the description of the domain of the game Monopoly we have this sentence:

A player takes a turn by rolling two dice and moving around the board accordingly.

A straightforward translation of this into code is

class Player {
  // ...
  public void takeTurn() {
    int result = Math.random(6) + Math.random(6);
    this.position = (this.position + result) % 40;
  }      
}

Can you see that Player#takeTurn() does two things? You can see it right away in the wording: “by rolling two dice *and* moving around the board”.

You can also see it when you try to write the test:

@Test public void playerTakesTurn() {
  Player player = new Player();
  assertEquals(0, player.position()); // initial position
  
  player.takeTurn();
  assertEquals(???, player.position()); // how should I know???
}

We can’t write the last assertion because we have no control over the dice.

The standard way to solve this is to move the responsibility of extracting a random result to a separate class.

@Test public void playerTakesTurn() {
  Dice dice = new Dice();
  Player player = new Player(dice);
  //...
}

class Player {
  public void takeTurn() {
    int result = this.dice.roll();
    this.position = (this.position + result) % 40;
  }      
}

This still does not solve our problem, since Dice#roll() will still produce a random result that we have no control over. But now we have the option of making the coupling between Player and Dice weaker, by making Dice an interface and passing a fake Dice implementation that will return the result that we want:

@Test public void playerTakesTurn() {
  Dice dice = new FakeDiceReturning(7);
  Player player = new Player(dice);
  assertEquals(0, player.position()); // initial position

  player.takeTurn();
  assertEquals(7, player.position()); // now it's easy
}
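The post uses FakeDiceReturning without defining it. A minimal sketch of the Dice interface, a real implementation, and the fake might look like this (only the Dice and FakeDiceReturning names come from the post; the rest is illustrative):

```java
import java.util.Random;

public class DiceSketch {
  // The seam between Player and randomness.
  interface Dice {
    int roll();
  }

  // A real implementation: two six-sided dice, each giving 1..6.
  static class RandomDice implements Dice {
    private final Random random = new Random();

    public int roll() {
      return (random.nextInt(6) + 1) + (random.nextInt(6) + 1);
    }
  }

  // The test double: always returns the value the test asks for.
  static class FakeDiceReturning implements Dice {
    private final int result;

    FakeDiceReturning(int result) {
      this.result = result;
    }

    public int roll() {
      return result;
    }
  }

  public static void main(String[] args) {
    Dice dice = new FakeDiceReturning(7);
    System.out.println(dice.roll()); // prints 7
  }
}
```

Player depends only on the Dice interface, so the test can inject whichever implementation it needs.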

This design move, which is almost forced by the need to write a clear test, has a number of advantages:

  • Now we have the option to pass different dice implementations. If the rules ever called for using different kinds of dice (for instance, 8-sided dice) we would not need to change the Player class at all.
  • Now we can test the Dice implementation separately. We might thus discover the bug in the random number generation code above. (Did you see it?) :-)

So, to summarize, the fundamental design move is in three steps:

  1. You realize a portion of code has two responsibilities, perhaps because the test is hard to write.
  2. You separate the portion of code into two parts; this is usually some form of “extract method”.
  3. You’re not finished yet, because you still have to decide what kind of coupling you want between the two parts. Usually, a simple extract method will produce a coupling that is too tight. More work is required to ease the coupling to the degree that you want.

It is simply a consequence of the Single Responsibility Principle; but it can also be seen as an application of the “clarity of intent” rule in Kent Beck’s 4 Rules of Simple Design.

Now the interesting thing is that this design move applies not just to methods, but also to classes, modules, services and applications! Whenever you see two things inside one, separate them. If you apply the right degree of coupling, you will end up with a simpler system.

Roles, stereotypes, kinds of objects

May 10th, 2013

I’ve been thinking lately about the ways that different authors explain OOP. I’ve drawn a little diagram that I would like to share with you.

The sources for this are, in no particular order:

Each of these authors has a perspective I find useful. I would love it if there was a comprehensive, single model, but so far I can’t see how to systematize them.

It gets weirder when you start collecting design principles… I might do a mind map of those some day. Somehow I can see how SOLID and GRASP (for instance) are compatible. They are pointing in the same general direction, but from different angles.