Friday, 16 March 2012

KPI KPI KPI

If you don't measure it, you can't manage it. How many of us have heard this?

Driving along in the car the other day, I heard a noise. I couldn't tell whether it was getting louder, so I rigged up a microphone and connected it to my laptop. Unfortunately there was a lot of background noise, so I kept my speed low. The noise seems to have disappeared.

Have I fixed the problem? Have I even measured it?

Some systems are acutely sensitive to measurement, or even to the attempt at measurement - you may be familiar with the observer effect: the act of observing something changes what you are seeing. To observe a dark room you may need to illuminate it - it's no longer a dark room. It's hard to measure the air pressure in your car tyres without letting some air out - so in measuring the pressure, you've changed it.

The same is true of human systems and processes. If you ask a team to report on the number of defects they have introduced, you may encourage a focus on defects, but you may also inadvertently encourage a failure to log trivial or quickly-fixed ones. After all, nobody wants to be the team with the highest number of introduced defects, right? So we stop logging the trivial stuff, perhaps we start forgetting about it, and we end up dropping defects into production.

So you then ask the team to report on defect fix times - surely an innocuous measure of how long a defect is outstanding? Yes, but longer is worse, so why raise the defect as soon as you know about it? Why not go and have a chat with a developer first, discuss the problem, make a note on a piece of paper, wait for the solution to be found, and only then log the defect, swiftly followed by a closure? The problem here is that your defects aren't being logged promptly, trends can't be discovered, and you may be using the team inefficiently - ironically, real fix times lengthen precisely because you've started to measure them.

How about measuring velocity? Certainly it's an 'output' measure - if a team delivers 400 story points in one sprint, and 300 the next, they've delivered 'less'. Is that a problem? Well it might be, so let's measure it. Hey presto, the next sprint they deliver 410, the following sprint 510 - excellent, we've got more output, right? Well no - all they did was relax their 'done' criteria so they didn't have to do so much performance testing. That let them get on with more work, but the system is now 20% slower than it was before. A good result? Not really.
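To make that gaming concrete, here's a minimal sketch - hypothetical data and field names, not any particular tool's API - where velocity only counts stories that satisfy the full 'done' checklist. Weaken the checklist and the number goes up without any extra value being delivered.

```python
# Minimal sketch: velocity counts only stories that pass the agreed 'done' checklist.
# All names and data are hypothetical, for illustration only.

DONE_CHECKLIST = {"code_reviewed", "unit_tested", "performance_tested"}

def velocity(stories, checklist=DONE_CHECKLIST):
    """Sum story points for stories meeting every item on the checklist."""
    return sum(s["points"] for s in stories
               if checklist <= s["checks_passed"])

sprint = [
    {"points": 8, "checks_passed": {"code_reviewed", "unit_tested", "performance_tested"}},
    {"points": 5, "checks_passed": {"code_reviewed", "unit_tested"}},  # perf testing skipped
]

print(velocity(sprint))                                           # 8  - full checklist applied
print(velocity(sprint, DONE_CHECKLIST - {"performance_tested"}))  # 13 - 'more' output, same work
```

The number only means something if the checklist it is measured against stays constant from sprint to sprint.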

So we have to be careful about what we measure, thoughtful about the behaviour we believe it will encourage, and clear about what the purpose of the measurement is. I went to visit a company a couple of years ago that had transitioned to agile and asked them: how do you show that you are efficient? My question was clearly aimed at eliciting a response that would include the metrics they gather. The development manager looked at me quizzically and replied: we don't need to prove anything - we deliver value to our business at the end of every sprint, and they are happy to pay our wages.

The KPI in this case is clear and simple: delivery of value on a regular basis. And how would we ensure that this continues? I believe it may boil down to three key scrum metrics:

1. commitment
2. delivered
3. done criteria

#1 simply allows us to check that the team is delivering predictably (it should correlate with #2); #2 measures the value the team actually delivers - it must be greater than zero, and acceptable to the customer given our cost; and #3 is our 'control' measure, there to ensure that no gaming of the other two occurs - in essence, that quality is maintained.
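As a rough illustration of how those three might be tracked sprint by sprint (entirely hypothetical field names and thresholds - a sketch of the idea, not a prescription):

```python
# Sketch of the three scrum metrics per sprint - hypothetical data, for illustration only.
from dataclasses import dataclass

@dataclass
class Sprint:
    committed_points: int     # 1. commitment - agreed at sprint planning
    delivered_points: int     # 2. delivered  - accepted by the customer at review
    done_criteria_met: bool   # 3. done criteria - the 'control': full definition of done applied

def healthy(sprint: Sprint, tolerance: float = 0.2) -> bool:
    """Delivered must be > 0, close to commitment, and not achieved by relaxing 'done'."""
    predictable = abs(sprint.delivered_points - sprint.committed_points) \
        <= tolerance * sprint.committed_points
    return sprint.delivered_points > 0 and predictable and sprint.done_criteria_met

print(healthy(Sprint(committed_points=40, delivered_points=38, done_criteria_met=True)))   # True
print(healthy(Sprint(committed_points=40, delivered_points=51, done_criteria_met=False)))  # False - gamed
```

The point is not the arithmetic; it's that the third value exists purely to stop the first two being gamed.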

With these three, I believe we have all we need to measure, surely?
