Archive for the 'Agile' Category

SmallTalk-Inspired, Frameworkless, TDDed Todo-MVC

Tuesday, August 16th, 2016

TL;DR

I wrote an alternative to the Vanilla-JS example of todomvc.com. I show that you can drive the design from the tests, that you don’t need external libraries to write OO code in JS, and that simple is better :)

Why yet another Todo App?

A very interesting website is todomvc.com. Just as the venerable www.csszengarden.com shows many different ways to render the same page with CSS, todomvc.com shows how to write the same application with different JavaScript frameworks, so that you can compare the strengths and weaknesses of the various frameworks and styles.

Given the variety and churn of JS frameworks, it is very useful to be able to see a small but complete example: not so big that it takes ages to understand, yet not so small as to seem trivial.

Any comparison, though, needs a frame of reference. A good one in this case would be writing the app with no frameworks at all. After all, if you can do a good job without frameworks, why incur the many costs of ownership that frameworks bring? So I looked at the vanillajs example provided, and found it lacking. My main gripe is that there is no clear “model” in this code. If this were real MVC, I would expect to find a TodoList that holds a collection of TodoItems; that sort of thing. Alas, the only “model” provided in that example has the unfortunate name of “Model” and is not a model at all: it’s a collection of procedures that read and write browser storage. It’s not really a model, because a real “model” should be a Platonic, infrastructure-free implementation of the business logic.

There are other shortcomings to that implementation, including that the “view” has a “render” method that accepts the name of an operation to perform, making it far more procedural than I would like. This is so different from what I think of as MVC that it made me want to try my hand at doing it better.

Caveats: I’m not a good JS programmer. I don’t know the language well, and I’m sure my code is clumsier than it could be. But I’m also sure that writing a frameworkless app is not a sign of clumsiness, ignorance or old age. Anybody can learn Angular, React or what have you. Learning frameworks is not difficult. What is difficult is to write good code, with or without frameworks. Learning to write good code without frameworks gives you incredible leverage: gone are the hours spent looking on StackOverflow for the magic incantations needed to make framework X do Y. Gone is the cost of maintenance inflicted on you by the framework developers, when they gingerly update the framework from version 3 to version 4. Gone is the tedium of downloading megabytes of compressed code from the server!

So what were my goals?

  • Simple design. This means: no frameworks! Really, frameworks are sad. Just write the code that your app needs, and write it well.
  • TDD: let the tests drive the design. I try to write tests that talk the language of the app specification, avoiding implementation details as much as possible.
  • SmallTalk-inspired object orientation. JS generally pushes you to expose the state of objects as public properties. In SmallTalk, the internal state of an object is totally encapsulated. I emulated that with a simple trick that does not require extra libraries.
  • I had in the back of my mind the “count” example in Jill Nicola’s and Peter Coad’s OOP book. That is what I think of when I say “MVC”. I tried to avoid specifying this design directly in the tests, though.
  • Simple, readable code. You will be the judge of that.

How did it go?

The first time around I tried to work in a “presenter-first” style. After a while, I gave up and started again from scratch. The code was ugly, and I felt I was committing the classic TDD mistake of forcing my preconceived design onto the code. So I started again, and the second time it came out much nicer.

You cannot understand a software design process just by looking at the final result. It’s only by observing how the design evolved that you can see how the designer thinks. When I started again from scratch, my first tests looked like this:

beforeEach(function() {
  fixture = document.createElement('div');
  $ = function(selector) { return fixture.querySelector(selector); }
})

describe('an empty todo list', function() {
  it('returns an empty html list', function() {
    expect(new TodoListView([]).render()).to.equal('<ul class="todo-list"></ul>');
  });
});

describe('a list of one element', function() {
  it('renders as html', function() {
    fixture.innerHTML = new TodoListView(['Pippo']).render();
    expect($('ul.todo-list li label').textContent).equal('Pippo');
    expect($('ul.todo-list input.edit').value).equal('Pippo');
  });
});

The above tests are not particularly nice, but they are very concrete: they check that the view returns the expected HTML, with very few assumptions about the design. Note that the “model” in the beginning was just an array of strings.

The final version of those tests does not change much on the surface, but the logic is different:

beforeEach(function() {
  fixture = createFakeDocument('<ul class="todo-list"></ul>');
  todoList = new TodoList();
  view = new TodoListView(todoList, fixture);
})

it('renders an empty todo list', function() {
  view.render();
  expect($('ul.todo-list').children.length).to.equal(0);
});

it('renders a list of one element', function() {
  todoList.push(aTodoItem('Pippo'));
  view.render();
  expect($('li label').textContent).equal('Pippo');
  expect($('input.edit').value).equal('Pippo');
});

The better solution, for me, was to pass the document to the view object, call its render() method, and check how the document was changed as a result. This places almost no constraints on how the view should do its work. This, to me, was key to letting the tests drive the design: I was free to change and simplify my production code, as long as the correct markup was being produced.

Of course, not all the tests check the DOM. We have many tests that check the model logic directly, such as

it('can contain one element', function() {
  todoList.push('pippo');

  expect(todoList.length).equal(1);
  expect(todoList.at(0).text()).equal('pippo');
});

Out of a total of 585 test LOCs, we have 32% dedicated to testing the models, 7% for testing repositories, 4% testing event utilities and 57% for testing the “view” objects.

How long did it take me?

I did not keep a scrupulous count of pomodoros, but since I committed very often I can estimate the time taken from my activity on Git. Assuming that every stretch of commits starts with about 15 minutes of work before the first commit in the stretch, it took me about 18 and a half hours of work to complete the second version, distributed over 7 days (see my calculations in this spreadsheet.) The first version, the one I discarded, took me about 6 and a half hours, over two days. That makes 25 hours of total work.
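If you want to compute a similar estimate for your own repository, a rough script along these lines does the job (the two-hour gap used here to separate one stretch of commits from the next is arbitrary; the 15 minutes of lead-in per stretch is the assumption stated above):

// Rough sketch: estimate working time from git history.
// Run with: node estimate-time.js (inside the repository)
const { execSync } = require('child_process');

const timestamps = execSync('git log --pretty=%ct', { encoding: 'utf8' })
  .trim().split('\n').map(Number).sort((a, b) => a - b);

const GAP = 2 * 60 * 60;   // commits more than 2 hours apart start a new stretch
const LEAD_IN = 15 * 60;   // assume 15 minutes of work before the first commit of a stretch

let seconds = 0, previous = null;
for (const t of timestamps) {
  if (previous === null || t - previous > GAP) {
    seconds += LEAD_IN;          // start of a new stretch
  } else {
    seconds += t - previous;     // time between commits within the same stretch
  }
  previous = t;
}

console.log((seconds / 3600).toFixed(1) + ' hours');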

What does it look like?

The initialization code is in index.html:

<script src="js/app.js"></script>
<script>
  var repository = new TodoMvcRepository(localStorage);
  var todoList = repository.restore();
  new TodoListView(todoList, document).render();
  new FooterView(todoList, document).render();
  new NewTodoView(todoList, document).render();
  new FilterByStatusView(todoList, document).render();
  new ClearCompletedView(todoList, document).render();
  new ToggleAllView(todoList, document).render();
  new FragmentRepository(localStorage, document).restore();
  todoList.subscribe(repository);
</script>

I like it. It creates a bunch of objects, and starts them. The very first action is to create a repository, and ask it to retrieve a TodoList model from browser storage. The FragmentRepository would perhaps be better named FilterRepository. The todoList.subscribe(repository) call makes the repository subscribe to the changes in the todoList model. This is how the model is saved whenever there’s a change.
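To make the subscription concrete: the repository is just another observer, with a notify() method that writes the current list back to browser storage. The sketch below shows the general shape only (the storage key and the internals are placeholders, not the real code, and I ignore the completed flag for brevity):

// Sketch only: the real TodoMvcRepository differs in its details.
function TodoMvcRepository(storage) {
  var todoList;                          // remembered so that notify() knows what to save

  this.restore = function() {
    todoList = new TodoList();
    var savedTexts = JSON.parse(storage.getItem('todo-items') || '[]');
    savedTexts.forEach(function(text) { todoList.push(text); });
    return todoList;
  };

  this.notify = function() {             // called by todoList whenever it changes
    var texts = [];
    for (var i = 0; i < todoList.length; i++) texts.push(todoList.at(i).text());
    storage.setItem('todo-items', JSON.stringify(texts));
  };
}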

Each of the “view” objects takes the model and the DOM document as parameters. As you will see, these “views” also perform the function of controllers. This is how they came out of the TDD process. They probably don’t conform exactly to MVC, but who cares, as long as they are small, understandable and testable?

Each of the “views” handles a particular UI detail: for instance, the ClearCompletedView is in js/app.js:

function ClearCompletedView(todoList, document) {
  todoList.subscribe(this);

  this.notify = function() {
    this.render();
  }

  this.render = function() {
    var button = document.querySelector('.clear-completed');
    button.style.display = (todoList.containsCompletedItems()) ? 'block' : 'none';
    button.onclick = function() {
      todoList.clearCompleted();
    }
  }
}

The above view subscribes itself to the todoList model, so that it can update the visibility of the button whenever the todoList changes, as the notify method will then be called.

The test code is in the test folder. For instance, the test for the ClearCompletedView above is:

describe('the view for the clear complete button', function() {
  var todoList, fakeDocument, view;

  beforeEach(function() {
    todoList = new TodoList();
    todoList.push('x', 'y', 'z');
    fakeDocument = createFakeDocument('<button class="clear-completed">Clear completed</button>');
    view = new ClearCompletedView(todoList, fakeDocument);
  })

  it('does not appear when there are no completed', function() {
    view.render();
    expectHidden($('.clear-completed'));
  });

  it('appears when there are any completed', function() {
    todoList.at(0).complete(true);
    view.render();
    expectVisible($('.clear-completed'));
  });

  it('reconsider status whenever the list changes', function() {
    todoList.at(1).complete(true);
    expectVisible($('.clear-completed'));
  });

  it('clears completed', function() {
    todoList.at(0).complete(true);
    $('.clear-completed').onclick();
    expect(todoList.length).equal(2);
  });

  function $(selector) { return fakeDocument.querySelector(selector); }
});

Things to note:

  • I use a real model here, not a fake. This gives me confidence that the view and the model work correctly together, and allows me to drive the development of the containsCompletedItems() method in TodoList. However, it does couple the view and the model tightly.
  • I use a simplified “document” here, that only contains the fragment of index.html that this view is concerned about. However, I’m testing with the real DOM in a real browser, using Karma. This gives me confidence that the view will interact correctly with the real browser DOM. The only downside is that the view knows about the “clear-completed” class name.
  • The click on the button is simulated by invoking the onclick handler.

If you are curious, here is the implementation of createFakeDocument:

function createFakeDocument(html) {
  var fakeDocument = document.createElement('div');
  fakeDocument.innerHTML = html;
  return fakeDocument;
}

It’s that simple to test JS objects against the real DOM.
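The expectHidden and expectVisible helpers used in the test above are not from any framework either. Since the view toggles the button’s style.display, something along these lines is all that’s needed (the real helpers may differ slightly):

function expectHidden(element) {
  expect(element.style.display).to.equal('none');
}

function expectVisible(element) {
  expect(element.style.display).not.to.equal('none');
}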

All the production code is in file js/app.js. An example model is TodoItem:

function TodoItem(text, observer) {
  var complete = false;

  this.text = function() {
    return text;
  }

  this.isCompleted = function() {
    return complete;
  }

  this.complete = function(isComplete) {
    complete = isComplete;
    if (observer) observer.notify()
  }

  this.rename = function(newText) {
    if (text == newText)
      return;
    text = newText.trim();
    if (observer) observer.notify()
  }
}

As you can see, I used a very simple style of object-orientation. I do not use (or need here) prototype inheritance, but I do encapsulate object state well.

I’m not showing the TodoList model because it’s too long :(. I don’t like this, but at the moment I don’t have a good idea for making it smaller. Another class that’s too long and complex is TodoListView, at about 80 lines of code. I could probably break it down into TodoListView and TodoItemView, making it a composite view with a smaller view for each TodoItem. That would require creating and destroying the views dynamically. I don’t know if that would be a good idea; I haven’t tried it yet.

Comparison with other Todo-MVC examples

How does it compare to the other examples? There is no way I can read all of the examples, let alone understand them. However, there is a simple metric that I can use to compare my outcome: simple LOC, counting just the executable lines and omitting comments and blank lines. After all, if you use a framework, I expect you to write less code; otherwise, it seems to me that either the framework is not valuable, or that you can’t use it well, which means that it’s not valuable to you. This is the table of LOCs, computed with Cloc. (Caveat: I tried to exclude all framework and library code, but I’m not sure I did that correctly for all examples.) My version is the one labelled “vanillajs/xpmatteo” in bold. I’m excluding test code.

1204 typescript-angular/js
1185 ariatemplates/js
793 aurelia
790 socketstream
782 typescript-react/js
643 gwt/src
631 closure/js
597 dojo/js
594 puremvc/js
564 vanillajs/js
529 dijon/js
508 enyo_backbone/js
489 typescript-backbone/js
481 vanilla-es6/src
479 flight/app
475 lavaca_require/js
468 componentjs/app
432 duel/src/main
383 polymer/elements
364 cujo/app
346 sapui5/js
321 vanillajs/xpmatteo
317 scalajs-react/src/main/scala
311 backbone_marionette/js
310 ampersand/js
295 sammyjs/js
295 backbone_require/js
287 extjs_deftjs/js
284 durandal/js
280 rappidjs/app
276 thorax/js
271 troopjs_require/js
265 angular2/app
256 angularjs/js
249 mithril/js
242 thorax_lumbar/src
235 chaplin-brunch/app
233 vanilladart/web/dart
233 somajs_require/js
232 serenadejs/js
226 emberjs/todomvc/app
224 spine/js
224 exoskeleton/js
214 backbone/js
213 meteor
207 angular-dart/web
190 somajs/js
167 riotjs/js
164 react-alt/js
156 angularjs_require/js
147 ractive/js
146 olives/js
146 knockoutjs_require/js
145 canjs_require/js
139 atmajs/js
132 firebase-angular/js
130 foam/js
129 canjs/js
124 vue/js
99 knockback/js
98 react/js
96 angularjs-perf/js
34 react-backbone/js

Things I learned

It’s been fun and I learned a lot about JS and TDD. Many framework-based solutions are shorter than mine, and that’s to be expected. However, all you need to know to understand my code is JS.

TDD works best when you try to avoid pushing it to produce your preconceived design ideas. It’s much better when you follow the process: write tests that express business requirements, write the simplest code to make the tests pass, refactor to remove duplication.

Working in JS is fun; however, not all things can be tested nicely with the approach I used here. I often checked in the browser that the features I had test-driven were really working. Sometimes they weren’t, because I had forgotten to change the “main” code in index.html to use the new feature. At one point I had an unwanted interaction between two event handlers: the handler for the onchange event fired when the edit text was changed by the onkeyup handler. I wasn’t able to write a good test for this, so I resorted to simply testing that the onkeyup handler removed the onchange handler before acting on the text. (This is not very good, because it tests the implementation instead of the outcome.)
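A test of that kind might look roughly like the sketch below (the selector and the simulated event object are illustrative; the real code differs in its details):

it('removes the onchange handler before acting on the edited text', function() {
  todoList.push(aTodoItem('Pippo'));
  view.render();
  var editField = $('input.edit');

  editField.onkeyup({ keyCode: 13 });            // simulate pressing Enter

  expect(editField.onchange).to.equal(null);     // the handler must be gone
});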

You can do a lot of work without jQuery, especially now that there is the querySelector API. However, in real work I would probably still use it, to improve cross-browser compatibility. It would probably also make my code simpler.

Pattern: Testable Screens

Tuesday, March 29th, 2016

When you are developing a complex application, be it web, mobile or whatever, it’s useful to be able to launch any screen immediately and independently from the rest of the system. By “screen” I mean a web page, an Android activity, a Swing component, or whatever it is called in the UI technology that you are using. For instance, in an ecommerce application, I would like to be able to immediately show the “thank you for your purchase” page, without going through logging in, adding an item to the cart and paying.

The benefits of this simple idea are many:

  1. You can easily demo user stories that are related to that screen
  2. You can quickly test UI changes
  3. You can debug things related to that page
  4. You can spike variations
  5. The design of the screen is cleaner and less expensive to maintain.

Unfortunately, teams are often not able to do this, because screens are tightly coupled to the rest of the application. For instance, in JavaScript single-page applications, it would be good to be able to launch a view without having to start a server. Often this is not possible, because the view is tightly coupled to the Ajax code that fetches the data the view needs from the server.

The way out of this problem is to decouple the screen from its data sources. In a web application, I would launch a screen by going to a debug page that allows me to set up some test data, and then launch the page. For instance:

[Screenshot: a debug page with a pre-populated form for launching the screen under test]

Note that the form starts pre-populated with default data, so that I can launch the desired screen with a single click.
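A sketch of such a debug page, with made-up field names and URL, could be as simple as a plain HTML form:

<!-- Sketch only: field names and the action URL are illustrative -->
<form action="/debug/launch-thank-you-page" method="post">
  <label>Order number <input name="orderNumber" value="12345"></label>
  <label>Customer name <input name="customerName" value="Jane Doe"></label>
  <label>Items purchased <input name="itemCount" value="3"></label>
  <button type="submit">Show the "thank you" page</button>
</form>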

Making screens decoupled from their data sources does, in my opinion, generally improve the design of the application. Making things more testable has a general positive impact on quality.

Bureaucratic tests

Monday, March 28th, 2016

The TDD cycle should be fast! We should be able to repeat the red-green-refactor cycle every few minutes. This means that we should work in very small steps. Kent Beck in fact is always talking about “baby steps.” So we should learn how to make progress towards our goal in very small steps, each one taking us a little bit further. Great! How do we do that?

Example 1: Testing that “it’s an object”

In the quest for “small steps”, I sometimes see recommendations that we write things like these:

it("should be an object", function() {
  assertThat(typeof chat.userController === 'object')
});

which, of course, we can pass by writing

chat.userController = {}

What is the next “baby step”?

it("should be a function", function() {
  assertThat(typeof chat.userController.login === 'function')
});

And, again, it’s very easy to make this pass.

chat.userController = { login: function() {} }

I think these are not the right kind of “baby steps”. These tests give us very little value.

Where is the value in a test? In my view, a test gives you two kinds of value:

  1. Verification value, where I get assurance that the code does what I expect. This is the tester’s perspective.
  2. Design feedback, where I get information on the quality of my design. And this is the programmer’s perspective.

I think that in the previous two tests, we didn’t get any verification value, as all we were checking was the behaviour of the typeof operator. And we didn’t get any design feedback either. We checked that we have an object with a method; this does not mean much, because any problem can be solved with objects and methods. It’s a bit like judging a book by checking that it contains written words. What matters is what the words mean. In the case of software, what matters is what the objects do.

Example 2: Testing UI structure

Another example: there are tutorials that suggest that we test an Android app’s UI with tests like this one:

public void testMessageGravity() throws Exception {
  TextView myMessage = 
    (TextView) getActivity().findViewById(R.id.myMessage);
  assertEquals(Gravity.CENTER, myMessage.getGravity());
}

Which, of course, can be made to pass by adding one line to a UI XML file:

<TextView
  android:id="@+id/myMessage"
  android:gravity="center"
/>

What have we learned from this test? Not much, I’m afraid.

Example 3: Testing a listener

This last example is sometimes seen in GUI/MVC code. We are developing a screen of some sort, and we try to make progress towards the goal of “when I click this button, something interesting happens.” So we write something like this:

@Test
public void buttonShouldBeConnectedToAction() {
    assertEquals(button.getActionListeners().length, 1);
    assertTrue(button.getActionListeners()[0] 
                 instanceof ActionThatDoesSomething);
}

Once again, this test does not give us much value.

Bureaucracy

The above tests are all examples of what Keith Braithwaite calls “pseudo-TDD”:

  1. Think of a solution
  2. Imagine a bunch of classes and functions that you just know you’ll need to implement (1)
  3. Write some tests that assert the existence of (2)
  4. [… go read Keith’s article for the rest of his thoughts on the subject.]

In all of the above examples, we start by thinking of a line of production code that we want to write. Then we write a test that asserts that that line of code exists. This test does nothing but give us permission to write that line of code: it’s just bureaucracy!

Then we write the line of code, and the test passes. What have we accomplished? A false sense of progress; a false sense of “doing the right thing”. In the end, all we did was waste time.

Sometimes I hear developers claim that they took longer to finish, because they had to write the tests. To me, this is nonsense: I write tests to go faster, not slower. Writing useless tests slows me down. If I feel that testing makes me slower, I should probably reconsider how I write those tests: I’m probably writing bureaucratic tests.

Valuable tests

Bureaucratic tests are about testing a bit of the solution (that is, a bit of the implementation of a solution). Valuable tests are about solving a little bit of the problem. Bureaucratic tests usually test structure; valuable tests always test behaviour. The right way to do baby steps is to break down the problem into small bits (not the solution). If you want to take useful baby steps, start by writing a list of all the tests that you think you will need.

In Test-Driven Development: by Example, Kent Beck attacks the problem of implementing multi-currency money starting with this to-do list:

$5 + 10 CHF = $10 if rate is 2:1
$5 * 2 = $10

Note that these tests are nothing but small slices of the problem. In the course of developing the solution, many more tests are added to the list.

Now you are probably wondering what I would do instead of the bureaucratic tests presented above. In each case, I would start with a simple example of what the software should do. What are the responsibilities of the userController? Start there. For instance:

it("logs in an existing user", function() {
  var user = { nickname: "pippo", password: "s3cr3t" }
  chat.userController.addUser(user);

  expect(chat.userController.login("pippo", "s3cr3t")).toBe(user)
});

In the case of the Android UI, I would probably test it by looking at it; the look of the UI has no behaviour that I can test with logic. My test passes when the UI “looks OK”, and that I can only check by looking at it (see also Robert Martin’s opinion on when not to TDD). I suppose that some of it could be automated with snapshot testing, which is a variant of the “golden master” technique.

In the case of the GUI button listener, I would not test it directly. I would probably write an end-to-end test that proves that when I click the button, something interesting happens. I would probably also have more focused tests on the behaviour that is being invoked by the listener.
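As a sketch of what such a focused test could look like (the Model and ActionThatDoesSomething stand-ins below are invented for the example), the idea is to assert on an observable outcome, not on the listener wiring:

import static org.junit.Assert.assertTrue;

import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

import org.junit.Test;

public class ActionThatDoesSomethingTest {

    @Test
    public void performingTheActionChangesTheModel() {
        Model model = new Model();
        ActionThatDoesSomething action = new ActionThatDoesSomething(model);

        action.actionPerformed(null);       // what a click on the button would trigger

        assertTrue(model.somethingInterestingHappened());
    }

    // Minimal stand-ins so that the sketch is self-contained
    static class Model {
        private boolean happened = false;
        void doSomethingInteresting() { happened = true; }
        boolean somethingInterestingHappened() { return happened; }
    }

    static class ActionThatDoesSomething implements ActionListener {
        private final Model model;
        ActionThatDoesSomething(Model model) { this.model = model; }
        public void actionPerformed(ActionEvent e) { model.doSomethingInteresting(); }
    }
}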

Conclusions

Breaking down a problem into baby steps means breaking the problem to solve into very small pieces, not the solution. Our tests should always speak about bits of the problem; that is, about things that the customer actually asked for. Sometimes we need to start by solving an arbitrarily simplified version of the original problem, like Kent Beck and Bill Wake do in this article I found enlightening; but it’s always about testing the problem, not the solution!

OOP is underrated

Monday, March 21st, 2016

I recently came upon a thread where Object-Oriented Programming was being questioned because of excessive complexity, ceremony, layers,… and because of OOP’s insistence on treating everything as an object, which some feel runs counter to most people’s intuition. Similar threads keep appearing, where OOP is questioned and other approaches, like functional programming, are seen as a cure for the OOP “problem”.

My answer touches upon many points, and I wanted to share it with you.

Encapsulation is a key thing in OOP, but it’s only part of a larger picture. Abstract Data Types also do encapsulation. OOP is more than that; the key idea is that OOP enables the building of a model of the problem you want to solve, and that model is reasoned about with the spatial, verbal and operational reasoning modes that we all use to solve everyday problems. In this sense OOP culture is strongly different from the ADT and formal math culture.

Math is very powerful. It enables you to solve problems that you would not be able to solve easily by intuition alone. A good mathematical model can make simple what seems to be very complex. Think how Fourier transforms make it easy to reason about signals. Think how a little mathematical reasoning makes it easy to solve the Mutilated Chessboard problem. In fact, a good mathematical model can reduce the essential complexity of a problem. (You read that right — reducing the essential complexity. I think that we are never sure what the essential complexity of a problem really is. There might always be another angle or an insight to be had that would make it simpler than what we thought it was. Think of the parallel with Kolmogorov complexity: you never know what the K complexity of a string really is.)

However, mathematical reasoning is difficult and rare. If you can use it, then more power to you! My feeling is that many recent converts to FP fail to see the extent of the power of mathematical models and limit themselves to using FP as a fancy procedural language. But I digress.

My point is that if you want to reach the point of agile maturity where programming is no longer the bottleneck, and we deliver when the market is ready, not when we finally manage to finish coding (two stars on the Shore/Larsen model), building the right model of the problem is an essential ingredient. You should build a model that is captured as directly as possible in code. If you have a mathematical model, it’s probably a good fit for a functional programming language. However, we don’t always have a good mathematical model of our problems.

For many problems, we can more readily find spatial/verbal/operational intuitive models. When we describe an OO model with phrases like “This guy talks to that guy”, that is the sort of description that made Dijkstra fume with disdain! Yet this way of reasoning is simple, immediate and useful. Some kinds of problems lend themselves readily to being modelled this way. You may think of it as programming a simulation of the problem. It leverages a part of our brain that (unlike mathematical reasoning) we all use all the time. These operational models, while arguably less powerful than mathematical models, are easier to reason about and to communicate.

Coming to the perception of the excessive “ceremony” and “layers” of OOP, I have two points:

  1. Most “OOP” that we see is not OOP at all. Most programs are conceived starting with the data schema. That’s the opposite of OOP! If you want to do OOP, you start with the behaviour that is visible from the outside of your system, not with the data that lie within it. It’s too bad that most programming culture is so deeply data-oriented that we don’t even realise this. It’s too bad that so many frameworks and tools assume and push towards a data-centric style: think JPA and Rails-style ActiveRecord, where you design your “object” model as a tightly-coupled copy of a data model.
  2. Are we practicing XP? When we practice XP, we introduce stuff gradually, as we need it. An abstraction is introduced when there is a concrete need. A layer is introduced when it is shown to simplify the existing program. When we introduce layers and frameworks upfront (anybody here do a “framework selection meeting” before starting coding? :-) ) we usually end up with extra complexity. But that’s nothing new: XP has been teaching us to avoid upfront design for a long time.

For more on how OOP thinking differs from formalist thinking, see Object Thinking by David West. I also did a presentation on this subject (video in Italian).

The time that really matters

Thursday, January 14th, 2016

TL;DR: write apps that start from the command line within one second.

One of the core principles of Lean is waste reduction. There are many kinds of waste: for instance, overproduction means writing software that is not needed, or that is “gold-plated”: done to a level of completeness that exceeds the customer’s needs.

Here I would like to talk about another kind of waste: waiting.

Picture this: you are a programmer. You arrived at work this morning. You attended the stand-up meeting. You attended an ad-hoc technical meeting. You had a coffee break with your colleagues. You pick up a user story from the card wall and discuss its details with the product owner. You finally sit down in front of the keyboard, pairing with a fellow programmer. Now your core time starts: the time when you program; the time when you are really doing value-adding work. You start by checking the screen of the application where you will have to add functionality: so you start the application… and wait. And wait. And wait.

Your application may be able to run thousands of transactions per second, yet it takes minutes to boot.

The time that your application takes to boot is a tax that you keep paying, tens or hundreds of times per day. This waiting time cuts into your best time: the time when you are in front of the keyboard, well-rested, ready to do your best work. This tax is being paid by all the programmers in the team, by all the testers and by all those who need to redeploy the application.

I hear you saying: “But, but, but… I do TDD! I don’t need to boot my app that often!”. OK, it’s very good that you do TDD. If you do it well, which means that you will mostly do microtests instead of integrated tests, then you will not have to reboot your application all the time. And yet… there are times when we really need to reboot the app. We will have to write at least some integrated tests. Sometimes we will have to debug. Sometimes we will have to test the thing manually. Sometimes we are tweaking the UI, and it makes little sense to write tests for that. Sometimes, despite our best intentions, we don’t find a way to do TDD well. For all of these reasons, it really pays to have an application that can be started within one second.

My favourite way to implement a web application in Java is to use an embedded Jetty server. This is a technique that I’ve been teaching for years. You may see an example in the github repository for my Simple Design workshop. Running the program is simply a matter of executing

./gradlew compileJava && script/run.sh

which takes about a second. If you run it from Eclipse in debug mode, it reloads automatically any change you make.
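For reference, the heart of the technique is a main class along these lines (a sketch in the Jetty 9 style; the port and paths are examples, and details vary with the Jetty version):

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.webapp.WebAppContext;

public class Main {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);                  // listen on port 8080
        WebAppContext webapp = new WebAppContext();
        webapp.setResourceBase("src/main/webapp");         // static files, templates, etc.
        webapp.setDescriptor("src/main/webapp/WEB-INF/web.xml");
        server.setHandler(webapp);
        server.start();                                    // up in well under a second
        server.join();
    }
}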

Compare this to a program written with Spring Boot. Let’s consider the Getting Started example application. You compile and run it with

./gradlew build && java -jar build/libs/gs-spring-boot-0.1.0.jar

and it will run in 6.8 seconds (measured by hand with a stopwatch on my 8-core Mid-2015 MacBook Pro). This is significantly worse than 1 second, but still tolerable. The problem is that, by design, Spring looks for components in its classpath. This autoconfiguration takes an increasing amount of time as the application grows in size. Real-world services written with Spring Boot take 20-30 seconds to boot. That’s definitely too much in my view… especially when it’s so easy to stay under one second.

That’s why I prefer not to autoconfigure anything: I just configure my components explicitly in my main partition.

Protect your core time: make it so that your app can be restarted within one second. That’s a real productivity gain for the whole team.

References

Uncle Bob blogged about this very subject in The Mode B imperative

Greg Young describes how and why to configure components explicitly in his talk 8 Lines of Code

The Semaphores Kata

Tuesday, December 23rd, 2014

This is an exercise to explore how TDD relates to graphical user interfaces. And also how to work with time. And how to obtain complex behaviour by composition of simpler behaviour.

It is inspired by an exercise presented in the book ATDD By Example by Markus Gärtner.

First step

We want an app that shows a working semaphore, with the three usual lights red, green and amber. The semaphore works with the following cycle:

  • Initially only the red light is on.
  • After 60 seconds, the red light goes off, the green light is turned on.
  • After 30 seconds, the amber light is turned on.
  • After 10 seconds, both amber and green go off, and red is turned on.
  • And again and again…

This can be done in Java with a Swing user interface, or in Javascript with an HTML user interface.

Demo: you should show the GUI with the lights turning on and off. You may speed up the tempo just to make the demo less boring :-)

Second step

We must now handle a crossing with four semaphores, like this:

            o
            o (B0)
            o


o                        o
o (A0)                   o (A1)
o                        o

            o
            o (B1)
            o

We have four semaphores A0, A1, B0, B1. A0 and A1 must always show the same lights. B0 and B1 must always show the same lights. B0’s cycle is delayed by 50 seconds with respect to A0. As a consequence, the A and B semaphores should NEVER show a green light at the same time! And there should be a 10-second safety interval when all four semaphores show red. The following diagram shows what the semaphores should show.

Every letter represents 10 seconds

     time:     ----------->
A0 and A1:  RRRRRRGGGARRRRRRGGGA
B0 and B1:  RGGGARRRRRRGGGARRRRR

R = Red light
G = Green light
A = Green + Amber light

For the instructor

How to test a GUI? (Hint: you don’t; you apply model-view separation and move all of the logic to the model. You should read the “Humble Dialog Box” paper.) There should be a “Semaphore” domain object.

How to test the passing of time? (Hint: the most productive way is to assume that the app will receive a “tick” message every second. This is also an instance of model-view separation; the “tick” message is sent by a clock. This is just the same as if there was a user clicking on a button that advances the simulation by one second.)
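As a hint of the shape I have in mind (the interface is only a suggestion, not a required design), here is a sketch of a Semaphore domain object that receives a tick() once per second and knows which lights are on, with no reference to the GUI:

// Sketch only: phase durations follow the cycle described in the first step.
function Semaphore() {
  var phases = [
    { lights: ['red'],            duration: 60 },
    { lights: ['green'],          duration: 30 },
    { lights: ['green', 'amber'], duration: 10 },
  ];
  var current = 0, elapsed = 0;

  this.tick = function() {              // called once per simulated second
    elapsed++;
    if (elapsed >= phases[current].duration) {
      current = (current + 1) % phases.length;
      elapsed = 0;
    }
  };

  this.lightsOn = function() {          // what the view renders
    return phases[current].lights;
  };
}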

How do participants demo the application? Insist on seeing the application work for real. A demo that consists of showing unit tests passing is NOT satisfactory. Try to make developers use both unit tests and manual tests. Insist on concrete, demoable progress.

The goal of the second step is to check that the developers use two (or four) instances of the Semaphore object from the first step, instead of making a big, monolithic “two-way semaphore” that controls all of the lights.

Mathematics cannot prove the absence of bugs

Thursday, December 18th, 2014

Everyone is familiar with Edsger W. Dijkstra’s famous observation that “Program testing can be used to show the presence of bugs, but never to show their absence!” It seems to me that mathematics cannot prove the absence of bugs either.

Consider this simple line of code, which appears in some form in *every* business application:

database.server.address=10.1.2.3

This is a line of software. It’s an assignment of a value to a variable. And there is no mathematical way to prove that this line is correct. This line is correct only IF the given IP address is really the address of the database server that we require.

Not even TDD and unit testing can help to prove that it’s correct. What would a unit test look like?

assertEquals("10.1.2.3", 
    config.getProperty("database.server.address"));

This test is just repeating the contents of the configuration file. It will pass even if the address 10.1.2.3 is wrong.

So what is the *only* way to prove that this line of code is correct? You guessed it. You must run the application and see if it works. This test can be manual or automated, but still we need a *test* of the live system to make sure that that line of code is correct.
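To make that concrete: even a very simple automated smoke test along the following lines proves more than the unit test above, because it only passes when something actually answers at the configured address (the port, the properties file name and the rest are illustrative; a real test of the live system would go further and run an actual query):

import java.io.FileInputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.Properties;

import org.junit.Test;

public class DatabaseSmokeTest {

    @Test
    public void somethingAnswersAtTheConfiguredDatabaseAddress() throws Exception {
        Properties config = new Properties();
        config.load(new FileInputStream("config.properties"));   // illustrative path

        String address = config.getProperty("database.server.address");
        try (Socket socket = new Socket()) {
            // the test fails if nothing is listening at that address and port
            socket.connect(new InetSocketAddress(address, 5432), 2000);
        }
    }
}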

Another example of Object Theater

Thursday, October 30th, 2014

Mastermind

This year I gave my Web Applications students the task of writing a Mastermind game. You can see some of their work here, here and here. The game works like this: the computer starts a new game by inventing a random secret code, composed of 4 digits from 1 to 6. For instance: 5414.

The player must deduce the secret code by trying guesses. To continue the example: if my first guess is 1234, the computer will answer “-+”, which means that I got one number right (+) and another number is present in the secret code but in a different position (-). Of course, I don’t know which! I can then try more guesses, until I have enough clues to guess right.

You get a score for each completed game, equal to the number of guesses you needed. Of course, the lower the score, the better.

A player earns an overall score that is the average over all the games he or she completed.

Procedural!

Teaching object-oriented programming was not one of the goals of the course. Therefore, most programs I got were very procedural. (Please note that the example that follows is from a good student, and his work earned a very high score in my course. It was a good web application, even though it was not object-oriented.)

I’d like to show an example of procedural code. This is a controller object that handles a “guess”.

public void guess() throws IOException {
  String gameId = request.getParameter("game_id");
  String code = gamesRepository.findSecretCode(gameId);
  String guess = request.getParameter("guess");
  String answer = compareCodes(code, guess);
  String player = gamesRepository.find_game_player(gameId);
  guessesRepository.createGuess(gameId, player, guess, answer);
  gamesRepository.incrementGameScore(gameId);

  // if game is won
  if(answer.equals("++++")){
    gamesRepository.setFinished(request.getParameter("game_id"));
    int oldScore = gamesRepository.getPlayerScore(player);
    playersRepository.addFinishedGame(player, oldScore);
  }

  // return json data
  response.getWriter().write(toJson("answer", answer));
}

This is classic procedural code; the game logic is found in the controller (method compareCodes()) and the repositories (three repositories!). There are no domain objects.

The database structure is something like

       1      *      1      *
player -------- game -------- guesses

The gamesRepository adds a row to the games table when a new game is created. The guessesRepository adds a row to the guesses table when a new guess is made. The controller must take care to call the proper repositories at the right time. The controller “knows” the database structure; if the database structure changes, the controller code will probably also have to change.

Object-Oriented

What I’d like to do instead is

  • Domain objects that handle all the game logic
  • No logic in the repositories or the controller
  • Just one repository is enough, thank you. The repository should take care of adding rows to the proper tables.

The domain object should probably be the MasterMind game.

game.guess(request.getParameter("guess"));

The “guess” message is what sets the domain logic in motion. The controller should not need to know anything else.

Q. What if the game is won? Who updates the player’s score?

A. The game object should do that.

Q. Where do we see that the game updates the player’s score?

A. Not in the controller. The controller does not know or care. Handling victories is something that is done in the game object.

Q. Really! How do we update the player’s score?

A. The game probably knows its player, and tells it to update its score if the game is won.

Q. How does the game get a reference to its player?

A. The controller does not know. But see next question.

Q. Where do we get the game object from?

A. From a repository, of course. We suppose that the repository will return it with all the dependencies that it needs to have. If the game needs a reference to its player, the repository must take care to set it up.

String gameId = request.getParameter("game_id");
Game game = gamesRepository.findGame(gameId);

game.guess(request.getParameter("guess"));

Q. How is the state of the game persisted?

A. By asking the repository to save the game.

// Here we are in the infrastructure world
String gameId = request.getParameter("game_id");
Game game = gamesRepository.findGame(gameId);

// Now we pass to the realm of pure domain logic
game.guess(request.getParameter("guess"));

// And now we return to the infrastructure world
gamesRepository.save(game);
response.getWriter().write(toJson(game));

So we have seen another example of object theater. The infrastructure details are dealt with before and after the main action. The main action is sending “guess” to the game object: that is where the functional requirement is dealt with. Before that, and after that, is infrastructure code that deals with non-functional requirements such as persistence.
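For concreteness, here is a rough sketch of what such a Game domain object might look like (all the names are mine, not the student’s; the Player interface and the answer-building logic are just one possible shape):

// Sketch only: one possible shape for the Game domain object.
public class Game {

    public interface Player {
        void registerFinishedGame(int guessesTaken);
    }

    private final String secretCode;
    private final Player player;
    private int guessesTaken = 0;

    public Game(String secretCode, Player player) {
        this.secretCode = secretCode;
        this.player = player;
    }

    public String guess(String attempt) {
        guessesTaken++;
        String answer = compareCodes(secretCode, attempt);
        if (answer.equals("++++"))
            player.registerFinishedGame(guessesTaken);   // the game updates the player's score
        return answer;
    }

    // One "+" per digit in the right position, one "-" per digit present in the
    // secret code but in the wrong position.
    private String compareCodes(String secret, String attempt) {
        int plus = 0, minus = 0;
        StringBuilder unmatchedSecret = new StringBuilder();
        StringBuilder unmatchedAttempt = new StringBuilder();
        for (int i = 0; i < secret.length(); i++) {
            if (secret.charAt(i) == attempt.charAt(i)) {
                plus++;
            } else {
                unmatchedSecret.append(secret.charAt(i));
                unmatchedAttempt.append(attempt.charAt(i));
            }
        }
        for (int i = 0; i < unmatchedAttempt.length(); i++) {
            int pos = unmatchedSecret.indexOf(String.valueOf(unmatchedAttempt.charAt(i)));
            if (pos >= 0) {
                minus++;
                unmatchedSecret.deleteCharAt(pos);
            }
        }
        StringBuilder answer = new StringBuilder();
        for (int i = 0; i < minus; i++) answer.append('-');
        for (int i = 0; i < plus; i++) answer.append('+');
        return answer.toString();
    }
}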

Object-Oriented Theater

Thursday, October 30th, 2014

Update 30/10/14: I first read about the “Object Theatre” in a message by Anthony Green on the GOOS mailing list; I didn’t remember it consciously when I wrote this post, but it certainly has been working in my brain ever since. The “Object Theatre” metaphor is his invention, not mine. Thanks Kevin and Anthony for pointing it out.

Yesterday evening I attended a good introduction to functional programming by Mario Fusco. One of his slides illustrated a well-known principle of functional programming: there are pure functions, and “functions” with side effects. Good FP style suggests keeping the non-pure functions at the edges of the program, so that the inner core of your program contains only pure, mathematical functions.

He showed this picture

http://www.slideshare.net/mariofusco/if-you-think-you-can-stay-away-from-functional-programming-you-are-wrong/41

In fact, a very similar picture can be drawn for good object-oriented style. In good OO style, you want to separate your domain objects from infrastructure objects. The domain objects contain just domain logic; they execute in memory and have no references to the file system, databases or network services. They can be mutable, of course! But they are “pure logic” in the sense that they live in a platonic world where we are only concerned with the functional behaviour of our programs.

Infrastructure objects, on the other hand, deal with everything else: user interface, databases, file systems, web services… All the things that are needed to connect our platonic world of objects to the outside world.

So what’s good OO style in this context? In my opinion, it’s good to keep the infrastructure objects in an outside shell, while the inner core of the program contains only pure domain objects. Let me give you an example.

Suppose you have a Counter object that needs (for non-functional reasons!) to be made persistent. The functional logic is about incrementing and decrementing the value of the counter. The infrastructure logic is about making sure that the counter retains its value across reboots (which is definitely a non-functional requirement.)

The wrong way to do this is

// bad style! don't do this
class Counter {
  public Counter(int id, CounterDao dao) {
    this.id = id;
    this.dao = dao;
  }

  public void increment() {
    value++;
    dao.incrementCounter(id);
  }

  private int value = 0;
  private int id;
  private CounterDao dao;
}

The usage of this counter would be

CounterDao dao = ...;
Counter counter = new Counter(123, dao);

// here we perform logic and also persist the state
counter.increment();

The above example is bad style, because it mixes persistence logic with functional logic. Bah! A better way to do it is:

class Counter {
  public void increment() {
    value++;
  }
  private int value = 0;
}

See? Only pure logic. We don’t care about persistence there. We could use the counter this way:

// we start this use case in the world of infrastructure
CounterDao dao = ...;
Counter counter = dao.findCounter(id);

// here we enter the world of pure logic
counter.increment();

// here we return to the world of infrastructure
dao.save(counter);

I like to call this structure “object theatre”. Imagine your domain objects as actors in a play. You want to setup a scene where your actors are set up in a certain way: Arlecchino talks to Colombina, Colombina has a fan in her hand, etc. When the scene starts, the actors perform each according to their character. When the scene ends, we lower the curtain.

I imagine that an object-oriented system works the same way. When a request arrives, we set up the scene by retrieving all the proper objects from various repositories and we connect them appropriately. This is setting the scene. Then we send a message to one object that sets some logic in motion. The objects send messages to each other, carrying out a computation. This is playing out the scene. When the domain objects are done, we conclude by returning the objects to their respective repositories. This is lowering the curtain.

A summary of my XP2014

Monday, June 2nd, 2014

TL;DR: The conference was pleasant, energetic and informative. The organizers did an excellent job of coordinating a very large program. The city, of course, is awesome. I met old and new friends, and I learned a lot.

I had the chance to present my two sessions to some great people. Just 5-7 people participating in each session, but they were the right people.

I attended an introduction to Continuous Deployment by Luca Minudel. Luca presented in a very clear and concise way what CD is and what its main components are. From this session, and another one by Seb Rose, I see that the idea of “CD pipelines” has now become common.

I attended the keynote by Pekka Abrahamsson. Pekka joked that, while working in his academic ivory tower, his group managed to create innovative products with undergraduate students. I was impressed by an experimental system for getting rid of a dangerous parasite in beehives.

I attended the keynote by Robert Martin. I actually had a chance to shake hands with the man. Martin’s thesis is that we need to do something to improve the standards of quality in our profession, before some ugly software-related incident forces the regulators to do that to us. For Martin, the best chance for us to do that is to apply the practices of Extreme Programming. XP was invented by Kent Beck “to heal the divide between business people and programmers.” In recent times, Martin sees that the Agile Movement is increasingly concerned with project management, and less and less interested in the programming practices. This drove away many programmers from Agile, who then formed the Software Craftsmanship movement. And this is a massive #fail: the divide is back. XP still has the potential to heal this divide. By focusing on the practices, not just the principles or the values, XP helps programmers achieve better results more cheaply. But you must stress the practices. As Uncle Bob puts it, “do we do the practices? Or do we ‘let the team decide’”? The team, says Martin, has an agenda. If the team refuses to do a practice, he says, it’s because the team has something to hide.

I attended the keynote by Joshua Kerievsky. Joshua presented his view of “safety at work”, which is both metaphoric (as in “I caused a small injury to my customer by exposing him or her to a bug”) and literal. I can see how seeing things from the point of view of “safety” leads, in part, to the same kind of process improvements that you would do with the goal of improving quality: find the root cause of defects and remove them, and so on. But safety is more general; it’s not just about protecting money. As you can see on Industrial Logic’s website, it’s a broader concern. I think I like Joshua’s idea. It resonates with the teachings of Tom Gilb, who is always stressing good engineering. Excellent engineering is always keen on safety at work. Joshua quoted the engineer who built the Golden Gate Bridge with far fewer fatalities than the statistics of the time predicted. I add that the same is true of Filippo Brunelleschi, the engineer who built the dome of the Florence Cathedral.

On a side note, I was finally convinced to try Industrial Logic’s elearning resources. I can’t wait to have time to get into it.

Another note: someone asked Joshua what he made of the current debate on “TDD is Dead”. Joshua joked that his company celebrated the funeral of TDD by issuing a 50% discount on their TDD elearning modules. It was so successful that some of his customers asked if BDD was also going to die soon :-) He said that by and large DHH is grossly misinformed on TDD, even though he has a few good points.

I attended a session by Steve Holyer and Nancy Van Schooenderwoert on how a coach can maximize his or her chances of success before accepting a coaching engagement. The main thing I got from the session is a better understanding of Jerry Weinberg’s principle of Organizational Addiction. I had read Quality Software Management Vol. 3, but I had never got round to thinking about how to use the addiction model as a tool for making sense of what happens at work, and what to do about it.

I attended a session by Rebecca Wirfs-Brock on how to deal with complex requirements. This session went a bit too fast for me :) I can’t say I was able to follow the exercise, but it gave me something to chew on later.

I attended an interesting presentation by Seb Rose on a tool for assessing the quality of a suite of tests by introducing bugs and seeing if the tests detect them.

I participated in a very interesting game by Michele Finelli on Alerting, Logging and Monitoring. I was later given a private lesson by Michele on this very subject. I think Michele should write something on the subject; most developers, including me before Michele’s lesson, have very foggy notions about it.

I noticed that some people were wearing tee shirts advertising XP2015 in Finland! And at dinner on Tuesday I learned that they had already decided the location for XP2016! My friends and I, who organize the Italian Agile Day, should learn something from this :)