Archive for the 'Agile' Category

Design problem #2

Monday, June 14th, 2010

This is a subset of the Back to the Checkout kata by Dave Thomas, which I used many times as a TDD training exercise.

Suppose you have a PriceRules object that knows the prices of items. Its responsibility is to know the following table:

Item   Unit Price   Special Price
A      50           3 for 130
B      30           2 for 45
C      20
D      15

Then you have a Cart object that knows which items a customer is trying to buy. For instance, a given cart could contain the list [A, A, C, A, D].

The problem is to compute the total that the customer has to pay, out of a collaboration between (at least) the Cart and the PriceRules objects. It seems easy, but there is a catch: you are forbidden to use getters. All methods must return “void”, in Java terms. Design the messages that are exchanged between the objects and produce the desired result. (In the example, it would be 165.) Have fun!
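To make the constraint concrete, here is a minimal sketch in Ruby (hypothetical names such as Receipt and checkout; the special prices are deliberately ignored) of what “no getters, every method returns void” can look like: values only ever flow forward as messages. It shows the shape of one possible collaboration, not a solution.

class PriceRules
  UNIT_PRICES = { 'A' => 50, 'B' => 30, 'C' => 20, 'D' => 15 }

  # Tell, don't ask: the price is pushed to the receiver, nothing is returned.
  def price(item, receiver)
    receiver.add(UNIT_PRICES[item])
  end
end

class Cart
  def initialize(items)
    @items = items
  end

  def checkout(price_rules, receiver)
    @items.each { |item| price_rules.price(item, receiver) }
  end
end

class Receipt
  def initialize
    @total = 0
  end

  def add(amount)
    @total += amount
  end

  def print_on(io)
    io.puts @total
  end
end

receipt = Receipt.new
Cart.new(%w[A A C A D]).checkout(PriceRules.new, receipt)
receipt.print_on($stdout)   # prints 185 here; a real solution must apply "3 for 130" and print 165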

I will be at XP2010 Reloaded

Sunday, June 13th, 2010

Bruno Rossi of Bozen University will organize a follow-up to XP2010 on September 15 this year. I think this is a great opportunity for those who could not attend XP2010 in Norway. This conference will be much smaller and more affordable.

I will submit something about Extreme Programming engineering techniques; nothing terribly new, just sharing some of my current understanding of these techniques.

I hope I will see you in Bolzano!

Software Design problems, anyone?

Sunday, June 13th, 2010

You learn math by solving problems. Problems frame the way you learn, give you a tangible proof that you’re progressing, give you a sense of meaning and achievement. How do you learn physics? By studying the books of course, but then solving physics problems is very important. How do you learn to play deep games such as chess or go? By playing, mostly. And then by pondering and solving problems. Electronics? Chemistry? Building science? Genetics? The books on these subjects are full of problems.

How do you learn good software design? I don’t know. The books that I’ve read explain principles and provide examples. Rarely have I seen books that contain problems, exercises, or challenges. (Notable exceptions: William Wake’s Refactoring Workbook and Ka Iok Tong’s Essential Skills for Agile Development.)

I propose that we assemble a collection of problems meant to develop and discuss software design. A good problem for this goal would *not* have a single correct answer, for design and engineering are always a matter of compromises. A good problem should be a means to discuss the various choices and tradeoffs, and the better and worse ways to solve it. A good problem should be a small framework.

Let’s start! Here is a problem that I find interesting. The good old Fizz-Buzz problem goes like this:

Write a program that prints the numbers in order from 1 to 100, with the exception that when a number is a multiple of 3, it prints “Fizz”. When a number is a multiple of 5, it prints “Buzz”. And when a number is a multiple of both, it prints “FizzBuzz”. In all other cases, it just prints the decimal representation of the number.

There is an obvious way to solve this exercise, of course. It’s a very simple problem, from the point of view of programming. I would have the students solve it however they like. Most solutions contain a 3-way IF. I would then ask the students to remove duplication. Early XP books were strong on removing duplication, for a good reason. It takes a bit of training to see how much duplication can creep into even such a small bit of programming.
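For reference, the obvious version that most first attempts converge on looks more or less like this (a quick Ruby sketch of the 3-way IF):

(1..100).each do |n|
  if n % 15 == 0        # multiple of both 3 and 5
    puts "FizzBuzz"
  elsif n % 3 == 0
    puts "Fizz"
  elsif n % 5 == 0
    puts "Buzz"
  else
    puts n
  end
end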

The usual objection I get at this point is that it makes no sense to go this deep into removing duplication for such a small and trivial example. They will also say that the 3-IFs version is more readable than any version where duplication is removed. This is the crux of the matter.

I then continue the exercise by adding the requirement that

For multiples of 7, the program prints “Bang”.

Easy, they say. Add a fourth IF. Not so fast, I say :-)

For multiples of 7 and 3, the program prints “FizzBang”. For multiples of 5 and 7, the program prints “BuzzBang”. For multiples of 3, 5, 7, the program prints “FizzBuzzBang”!

Now we have an exploding number of IFs. If the next requirement is of the same sort as this one, we see how the IF-chain solution becomes untenable :-) Now solve this!
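For comparison, here is one well-known shape the code can take once the duplication is pulled out (a Ruby sketch, not necessarily the answer this exercise is after): each divisor/word pair is stated exactly once, so adding “Bang” for 7 is one new entry rather than a doubling of the IF chain.

RULES = { 3 => 'Fizz', 5 => 'Buzz', 7 => 'Bang' }

def say(n)
  # Concatenate the word of every rule whose divisor divides n, in order.
  words = RULES.select { |divisor, _| n % divisor == 0 }.values.join
  words.empty? ? n.to_s : words
end

(1..100).each { |n| puts say(n) }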

Update:

  • I got the idea of using FizzBuzz as a design example from Giordano Scalzo, who presented it at the Milano XPUG and posted a solution on SlideShare.
  • Other sources of problems, in no particular order: the Refactoring to Patterns book by Joshua Kerievsky, the list of katas by Dave Thomas, the Refactoring in Ruby book by William Wake and Kevin Rutherford, and the Ruby Quiz site. I’m not merely looking for programming problems; I’m looking for design problems. The difference is that I don’t just want a problem that requires a correct or efficient solution. I want a problem that requires a solution that is easy to understand and change.

How true

Tuesday, June 8th, 2010

I read this interview by Bill Wake with a Ukrainian coach, and he raises a very important point that I find difficult to get right :-)

WW – I’m interested in things you do, behaviors you have, whether you think of them as coaching or programming.

AK – I was at a position in my life where I stopped being a traditional team lead and turned myself into a ScrumMaster. It’s hard to do, because you get used to proposing your ideas and believing that you’re smarter than other people. Instead you have to start believing that the team can make up better ideas themselves. You make yourself stop talking and just ask questions. This is a hard thing to achieve.

Alexey Krivitsky, interviewed by William Wake

Emmanuel Gaillot on Software Craftsmanship

Tuesday, May 25th, 2010

Excellent post by Emmanuel. My favourite quote:

I envision a future in which programmers are the conscious repositories of a body of knowledge. A future in which they regain their craft, instead of tweaking frameworks they don’t understand. A future, eventually, in which programmers say “no” to demands at odds with their ethics.

Antonio Ganci on design

Sunday, April 18th, 2010

My friend and former Sourcesense colleague Antonio Ganci posted an article (in Italian) that describes exactly what I meant in A Frequently Asked TDD Question and its follow-up. It’s about what you can accomplish if you’re really good at design.

It’s in Italian, so I will report the main points here in English: Antonio is the main developer for an application. He got the craziest change requests from his customers, like:

  • All user-visible text should be uppercase
  • There should be an offline mode for working at home
  • Users with Vista or a newer OS should see a WPF user interface, while the others should get normal Windows Forms
  • Change rounding from 2 to 4 digits everywhere
  • Log every data modification

He reports that some of these changes were done in less than half an hour. Quite a feat, and Antonio seems proud of his design. I think he has good reasons to be proud!

Antonio does not say how he did that, but we can guess that the key is the Once and Only Once rule. If there’s a single place where you print/compute/store a number, it’s very easy to change precision. If you have some sort of builder to generate the user interface, it’s not terribly expensive to generate a WPF interface instead of a Windows Forms one.

Once and Only Once: there should be a single authoritative place in the code where each concept is represented. How do you get to OAOO? One part of the story is to remove duplication. Never write the same concept in two places, that is, keep DRY. The other part is to write expressive code: don’t write “a+b”, write what you mean by “a+b”: what does it mean in terms of the application? That’s the Once, and the Only Once.
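A tiny sketch of what this can mean in practice (a hypothetical Money class, inspired by the rounding request above): the precision is expressed once, in a single authoritative place, so changing it from 2 to 4 digits is a one-line edit.

class Money
  PRECISION = 2   # the one place where rounding precision lives

  def initialize(amount)
    @amount = amount
  end

  def rounded
    @amount.round(PRECISION)
  end
end

puts Money.new(12.34567).rounded   # => 12.35; set PRECISION to 4 and it becomes 12.3457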

Bravo Antonio.

On Jeff Sutherland’s “Shock therapy” paper

Tuesday, April 13th, 2010

I read the paper by Jeff Sutherland and others, Shock Therapy: A Bootstrap for a Hyper-Productive Scrum. There are some things I like in this paper, and some things that I don’t.

I like it that

  • The coach insists on a proper definition of done, holding the team accountable for the quality of the work.
  • The coach insists that only properly expressed stories are inserted into the team backlog: no vague stories without acceptance criteria.
  • The coach insists that Scrum training is done beforehand, and all parties involved attend, both business and developers.
  • The coach insists that all parts of Scrum are implemented, with no “scrumbut.” The coach provides reasonable defaults for the (few) degrees of freedom of the Scrum framework.
  • The coach works by helping the team solve problems by themselves, with the goal of getting the team to a point where they don’t need the coach anymore.

I don’t like

  • The theme of forcing the rules on the team that pervades the paper. “Resistance is futile”… Bah.
  • Measuring team maturity with velocity alone. Velocity is a highly suspect measure: it depends on the story estimates, which are decided by the team, so there is an element of subjectivity in it. Did they estimate *all* the stories before starting work? Did they change *any* of the estimates later?
  • Velocity is also suspect because it bears no strict relation to return on investment. You might be very fast at developing software that does not give you an iota of profit. This does not seem to be a concern in this paper.
  • Thirdly, it’s very easy to be faster than in the first iteration. In my experience, a lot of the first iteration’s effort goes into building a “walking skeleton” of the system, learning about the problem domain and the project technologies, and so on.
  • Fourth, it’s unclear what it means to compare the “velocities” of different teams. Who did the estimates that are used to compare velocities? And how do you compare velocities with teams that are not even doing Scrum?
  • Then I have issues with how nonchalantly the paper says “ATDD was used,” as if practicing ATDD were easy. In my experience, which matches what many trainers say, it takes at least two or three months to begin to be proficient with TDD, let alone ATDD. How much experience and training did the developers have with ATDD? Does the “shock therapy” work well even when the team members are new to TDD and the other XP engineering practices?
  • No mention is made of the quality of the code produced by the teams. Was the high velocity bought at the expense of introducing technical debt? Was code quality even measured in any way? The paper does not say.

In conclusion, I found a few useful ideas in this paper, but I think it leaves a lot of questions unanswered. I have no problem believing that Jeff Sutherland can achieve very good results with his teams. I find the paper does not prove at all that there is a magic formula that guarantees high productivity.

Answering Luca’s comment

Saturday, March 20th, 2010

Luca Minudel commented on my previous post. My answer got so long that it became a full post.

Hi Luca,

I’m a fan of the Growing Object-Oriented Software book, but I haven’t adopted the mockist style yet. What I do know is that I have seen lots of instances of mock misuse, leading to unreadable and brittle tests. So I’m afraid it depends a lot on who’s doing the training :-) It’s true that the GOOS book promotes good object-orientation.

My point in my previous post was not to advocate a particular style of training; it was simply to answer a common question: “If we work incrementally without design upfront, as all the books on TDD advocate, is there a risk of getting into a situation where adding some features becomes unreasonably expensive?” This fear is often related to non-functional requirements.

I think this fear has merit, and it’s too easy for new converts to TDD (including me) to confidently say “we have the tests, we can refactor the code to make it do whatever is needed.” The bug in this reasoning is that we should not code with the expectation of doing (what often amounts to) major refactoring. We should code with the objective of producing a system where implementing new stuff is so easy that it feels like composing Lego bricks, or like putting in the last piece of a puzzle, when it slides comfortably into its place.

Getting to a design that is this good is the real goal of TDD and XP. I can’t say in all honesty that I can achieve this level of goodness in major applications. But I’m beginning to see how one could do that.

So the point of my post is that you can afford to work in a totally incremental way only if you are dead serious about keeping your code squeaky clean. Which is not easy to do, given the time pressures we always have. But then again, this is nothing new; it’s what the XP books have been saying from the beginning, isn’t it? Only it is surprising to discover how clean your code really has to be.

Coming back to your comment, it seems you are concerned with ways to help a team do the right thing and produce good code. My experience is that the first step is to know your material well. So if you’re accomplished at producing good code in the mockist style, more power to you! Your team will pick it up from you.

It’s interesting that you found it easier to communicate good style using the mockist approach rather than teaching principles like OCP and Demeter. I’ll have to think about it.

Dear reader: if you haven’t read Growing Object-Oriented Software yet, I invite you to do so. It’s not simply a book about “mocks”; it’s a book about a style of software development, deeply and beautifully object-oriented. For a taste of the authors’ style, I suggest you read their paper Mock Roles, Not Objects (pdf).

A frequently asked TDD question

Thursday, March 18th, 2010

Today Tommaso and I facilitated the kickoff of a new project for a new team. One question that came up was:

We know that before the project is finished, we will have to profile all buttons and links so that users will not see options they are not authorized to use. Should we keep this in mind as we write the code today? Or should we rigidly adhere to YAGNI and defer all work on profiling until the profiling story is chosen by our customer?

This question comes up quite often. An answer often heard from agilists is

Never! Today you should only work for today’s story! The profiling story might never be chosen, after all, and even if it is chosen, you will refactor your code to accommodate the new functionality. YAGNI, my friend!

Tommaso rightly objected that this point of view is wrong. By the time link profiling comes up, the application is usually at a late stage of development. Refactoring all links and buttons to include profiling takes a lot of work. And we knew this functionality was coming, so we can’t even invoke the “customer changed their mind” excuse.

Agile development can only work if we can keep down the cost of adding functionality even late in the development cycle. This should apply to unforeseen changes, and even more so to features that we always knew were needed. If a change requires major reworking, then we clearly did something wrong.

What do I suggest, then? Should we build in infrastructure for all major features right at the beginning? Should we do design upfront to protect us from this sort of mistake?

Well, there is no clear-cut answer. A naive reading of the TDD book will lead you to believe that just applying the TDD mantra

  1. Red—Write a little test that doesn’t work, and perhaps doesn’t even compile at first.
  2. Green—Make the test work quickly, committing whatever sins necessary in the process.
  3. Refactor—Eliminate all of the duplication created in merely getting the test to work.

Kent Beck, Test Driven Development by Example

will lead you slowly but surely, almost automatically, to a well-written system that makes it easy to add changes at all times. Well, there is a huge misunderstanding here. There is nothing “almost automatic” in this process. If you read the book carefully, it also says things like “Our designs must consist of many highly cohesive, loosely coupled components”.

Just how highly cohesive and how loosely coupled our systems must be is left to the judgment of the reader, who in most cases is not skilled enough to imagine just how cohesive and decoupled people like Kent Beck mean when they say “cohesive and decoupled”. Most people don’t understand how serious people like Kent Beck are when they say “eliminate all of the duplication”.

Because if you’re not fanatical about removing duplication, about striving for code that is highly cohesive and decoupled, there is little chance of achieving code that is easy to change over time.

My answer is that if I am serious about removing duplication, then I will *not* have code like <a href='foo'>bar</a> written in more than one place, let alone in hundreds of places. I will have a single function, a single method, a single place where the HTML link is generated. Then, when the time comes to add profiling, or HTTPS, or Ajax, or anything else that affects how my links should work, I will have a *single* place to change. The change will not be too expensive.
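In Rails terms the idea could look like the sketch below (the current_user_may? check and the :required_role option are hypothetical): every view builds its links through one helper, so profiling, and later HTTPS or Ajax, has a single home.

module LinksHelper
  # The single place where application links are generated; views call
  # app_link_to instead of writing <a href=...> or raw link_to themselves.
  def app_link_to(label, path, options = {})
    required_role = options.delete(:required_role)
    # Profiling: hide the link if the user lacks the required role.
    return '' if required_role && !current_user_may?(required_role)
    link_to(label, path, options)
  end
end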

If I don’t think I can work with the level of discipline it takes to really remove duplication, then perhaps I’m better off if I stop pretending I’m really doing TDD and start doing some design upfront.

Lazy proxy in Ruby

Thursday, March 11th, 2010

I’m a total newbie when it comes to Ruby evaluation tricks, so when I learned this today I felt it was a good thing to share :-)

The problem: speeding up a Rails application. When all is said and done, you need to cache page fragments in order to speed up an application significantly. For instance: you start with

class ProductsController < ApplicationController
  def category
    @products = Product.find_by_category(params[:id])
  end
end

...

<div id="products">
  <% for product in @products do %>
    <!-- some complicated html code -->
  <% end %>
</div>

and then add fragment caching in the view with

<% cache "category-#{params[:id]}" do %>
  <div id="products">
    <% for product in @products do %>
      <!-- some complicated html code -->
    <% end %>
  </div>
<% end %>

OK, this speeds up view rendering. But we are still executing the query in the controller, to obtain a list of products we are not even using. The standard Rails solution to this is

class ProductsController < ApplicationController
  def category
    unless fragment_exist? "category-#{params[:id]}"
      @products = Product.find_by_category(params[:id])
    end
  end
end

This is nice enough. But one thing worries me: there might be a race condition between the “unless fragment_exist?” test and the call to “cache” in the view. If the cron job that cleans the cache directory executes between the two, the user will see an error.

I thought to myself, wouldn’t it be nice to give the view a lazy proxy in place of the array of results? The lazy proxy will only execute the query if it is needed. The controller becomes:

class ProductsController < ApplicationController
  def category
    @products = LazyProxy.new do
      Product.find_by_category(params[:id])
    end
  end
end

The LazyProxy magic is surprisingly simple:

require 'delegate'

# Delegates every call to an object that is built lazily by the block.
class LazyProxy < Delegator
  def initialize(&block)
    @block = block            # remember how to build the object; don't do it yet
  end

  def __getobj__
    @delegate ||= @block.call # build on first use, then reuse the memoized result
  end
end

The block given to the constructor is saved, and not used immediately. The Delegator class from the standard library delegates all calls to the object returned by the __getobj__ method. The “||=” trick makes sure that the result of @block.call will be saved in an instance variable, so that the query is executed at most once.
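Given the LazyProxy class above, a tiny standalone check (no Rails needed) shows the laziness and the memoization at work:

products = LazyProxy.new do
  puts 'running the expensive query...'   # printed only on first real use
  %w[apple banana]
end

puts 'proxy created, no query yet'
puts products.size   # first use: runs the block, then prints 2
puts products.size   # second use: memoized, prints 2 without running the block again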

So the idea is that the view will be given a lazy proxy for a query. If the fragment exists, the view code will not be evaluated and the proxy will not be used. No query. If the fragment does not exist, the lazy proxy is used and a query is executed. There is no race condition, for there is no test to see if the fragment exists.

What do you think?

Update: One additional advantage of the lazy proxy is that you no longer need to make sure that the fragment key is the same in both the view and the controller.