Archive for April, 2010

Antonio Ganci on design

Sunday, April 18th, 2010

My friend and former Sourcesense colleague Antonio Ganci posted an article (Italian) that describes exactly what I meant in A Frequently Asked TDD Question and its follow-up. It's about what you can accomplish if you're really good at design.

Since it's in Italian, I will summarize the main points here in English: Antonio is the main developer of an application. He received the craziest change requests from his customers, like:

  • All user-visible text should be uppercase
  • There should be an offline mode for working at home
  • Users with Vista or newer OS should see a WPF user interface, the others should get normal Windows Forms
  • Change rounding from 2 to 4 digits everywhere
  • Log every data modification

He reports that some of these changes were done in less than half an hour. Quite a feat, and Antonio seems proud of his design. I think he has good reasons to be proud!

Antonio does not say how he did that, but we can guess that the key is the Once and Only Once rule. If there’s a single place where you print/compute/store a number, it’s very easy to change precision. If you have some sort of builder to generate the user interface, it’s not terribly expensive to generate a WPF interface instead of a Windows Forms one.
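To make the rounding example concrete, here is a minimal sketch (hypothetical names, not Antonio's actual code) of what "a single place where you compute a number" looks like. Changing precision from 2 to 4 digits becomes a one-line edit:

```python
# Hypothetical: every rounding operation in the app goes through this module.
PRECISION = 4  # was 2; changing this one constant changes rounding everywhere


def round_amount(value: float) -> float:
    """Round an amount to the application-wide precision."""
    return round(value, PRECISION)


def format_amount(value: float) -> str:
    """Format an amount for display, using the same precision."""
    return f"{round_amount(value):.{PRECISION}f}"
```

If instead `round(x, 2)` had been scattered across hundreds of call sites, the same change request would mean hunting down every occurrence.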

Once and Only Once: there should be a single authoritative place in the code where each concept is represented. How do you get to OAOO? One part of the story is to remove duplication. Never write the same concept in two places, that is, keep DRY. The other part is to write expressive code: don’t write “a+b”, write what you mean by “a+b”: what does it mean in terms of the application? That’s the Once, and the Only Once.
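The "write what you mean by a+b" advice can be sketched like this (the domain names are invented for illustration): instead of leaving the reader to guess what an addition means, name the concept once and let callers speak the domain language.

```python
# Instead of scattering "a + b" through the code -- what does it *mean*? --
# name the concept once. That single named function is the authoritative
# place where the concept lives: the Once, and the Only Once.
def total_price(net_amount: float, tax: float) -> float:
    """An order's total is its net amount plus tax."""
    return net_amount + tax


# A caller now reads as domain language rather than arithmetic:
# invoice_total = total_price(net, vat)
```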

Bravo Antonio.

On Jeff Sutherland’s “Shock therapy” paper

Tuesday, April 13th, 2010

I read the paper by Jeff Sutherland and others, Shock Therapy: A Bootstrap for a Hyper-Productive Scrum. There are some things I like in this paper, and some things that I don’t.

I like it that

  • The coach insists on a proper definition of done, holding the team accountable for the quality of the work.
  • The coach insists that only properly expressed stories are inserted in the team backlog: no vague stories without acceptance criteria.
  • The coach insists that Scrum training is done beforehand, and all parties involved attend, both business and developers.
  • The coach insists that all parts of Scrum are implemented, with no “scrumbut.” The coach provides reasonable defaults for the (few) degrees of freedom of the Scrum framework.
  • The coach works by helping the team solve problems by themselves, with the goal of getting the team to a point where they don’t need the coach anymore.

I don’t like

  • The theme of forceful rule enforcement that pervades the paper. “Resistance is futile”… Bah.
  • Measuring team maturity with velocity alone. Velocity is a highly suspect measure: it depends on the story estimates, which are decided by the team, and estimates are partly subjective. Did they estimate *all* the stories before starting work? Did they change *any* of the estimates later?
  • Velocity is also suspect because it bears no strict relation to return on investment. You might be very fast at developing software that does not give you an iota of profit. This does not seem to be a concern in this paper.
  • Thirdly, it’s very easy for a team to be faster than it was in its first iteration. In my experience, in the first iteration a lot of effort goes into building a “walking skeleton” of the system, learning about the problem domain and project technologies, and so on.
  • Fourth, it’s unclear what it means to compare “velocities” of different teams. Who did the estimates that are used to compare velocities? And how do you compare velocities with teams that are not even doing Scrum?
  • Then I have issues with how the paper nonchalantly says “ATDD was used,” as if practicing ATDD were easy. In my experience, which matches what many trainers say, it takes at least two or three months to begin to be proficient with TDD, let alone ATDD. How much experience and training did the developers have with ATDD? Does the “shock therapy” work well even when the team members are new to TDD and other XP engineering practices?
  • No mention is made of the quality of the code produced by the teams. Was the high velocity bought at the expense of introducing technical debt? Was code quality even measured in any way? The paper does not say.

In conclusion, I found a few useful ideas in this paper, but I think it leaves a lot of questions unanswered. I have no problem believing that Jeff Sutherland can achieve very good results with his teams. I find the paper does not prove at all that there is a magic formula that guarantees high productivity.