I read the paper by Jeff Sutherland and others, Shock Therapy: A Bootstrap for a Hyper-Productive Scrum. There are some things I like in this paper, and some things that I don’t.
I like that:
- The coach insists on a proper definition of done, holding the team accountable for the quality of the work.
- The coach insists that only properly expressed stories are inserted in the team backlog: no vague stories without acceptance criteria.
- The coach insists that Scrum training is done beforehand, and all parties involved attend, both business and developers.
- The coach insists that all parts of Scrum are implemented, with no “scrumbut.” The coach provides reasonable defaults for the (few) degrees of freedom of the Scrum framework.
- The coach works by helping the team solve problems by themselves, with the goal of getting the team to a point where they don’t need the coach anymore.
I don’t like that:
- The theme of forcefully enforcing the rules that pervades the paper. “Resistance is futile”… Bah.
- Measuring team maturity with velocity alone. Velocity is a highly suspect measure: it depends on the story estimates, which are decided by the team, and there is an element of subjectivity in those estimates. Did they estimate *all* the stories before starting work? Did they change *any* of the estimates later?
- Velocity is also suspect because it bears no strict relation to return on investment. You might be very fast at developing software that does not yield an iota of profit. This does not seem to be a concern in this paper.
- Thirdly, it’s very easy to be faster than the first iteration. In my experience, in the first iteration a lot of effort goes into building a “walking skeleton” of the system, learning about the problem domain and project technologies, and so on.
- Fourth, it’s unclear what it means to compare the “velocities” of different teams. Who did the estimates that are used to compare velocities? And how do you compare velocities with teams that are not even doing Scrum?
- I also take issue with how nonchalantly the paper says “ATDD was used,” as if practicing ATDD were easy. In my experience, which matches what many trainers say, it takes at least two or three months to begin to be proficient with TDD, let alone ATDD. How much experience and training did the developers have with ATDD? Does the “shock therapy” work well even when the team members are new to TDD and the other XP engineering practices?
- No mention is made of the quality of the code produced by the teams. Was the high velocity bought at the expense of introducing technical debt? Was code quality even measured in any way? The paper does not say.
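To make the velocity objection concrete: velocity is nothing more than the sum of the team’s own point estimates for the stories completed in an iteration, so it moves whenever the estimates move. A minimal sketch (the story names and point values here are invented purely for illustration):

```python
# Velocity is the sum of the point estimates of the stories completed
# in an iteration. Stories and point values below are made up.

def velocity(completed_stories):
    """Sum the team's own estimates for the stories finished this iteration."""
    return sum(points for _, points in completed_stories)

iteration = [("login page", 3), ("password reset", 5), ("audit log", 8)]
print(velocity(iteration))  # 16

# Re-estimating the exact same work changes "velocity" without any
# change in the software actually delivered:
reestimated = [("login page", 5), ("password reset", 8), ("audit log", 13)]
print(velocity(reestimated))  # 26
```

The same delivered work can show a 60% “improvement” purely through inflated estimates, which is why comparing velocities across teams, or against a team’s own first iteration, says little by itself.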
In conclusion, I found a few useful ideas in this paper, but I think it leaves a lot of questions unanswered. I have no problem believing that Jeff Sutherland can achieve very good results with his teams; I just find that the paper does not prove there is a magic formula that guarantees high productivity.