Friday, 16 March 2012

KPI KPI KPI

If you don't measure it, you can't manage it. How many of us have heard this?

Driving along in the car the other day, I heard a noise. I couldn't tell whether it was getting louder, so I rigged up a microphone and connected it to my laptop. Unfortunately there was a lot of background noise, so I kept my speed low. The noise seems to have disappeared.

Have I fixed the problem? Have I even measured it?

Some systems are acutely sensitive to measurement, or even to the attempt at measurement - you may be familiar with the observer effect: essentially, the act of observing something affects what you are seeing. To observe a dark room you may need to illuminate it - and then it's no longer a dark room. It's hard to measure the air pressure in your car tyres without letting some air out, so you've changed the very pressure you set out to measure.

And it's true of human systems and processes too. If you ask a team to report on the number of defects they have introduced, you may encourage a focus on defects, but you may also inadvertently encourage a failure to log trivial or quickly-fixed ones. I mean, nobody wants to be the team with the highest number of introduced defects, right? So we don't log the trivial stuff, perhaps we start forgetting about it, and we end up dropping defects into production.

So you then ask the team to report on defect fix times - surely an innocuous measure of how long a defect is outstanding? Yes, but longer is worse, so why raise the defect as soon as you know about it? Why not have a chat with a developer first, discuss the problem, make a note on a piece of paper, wait for the solution to be found, and then log the defect, swiftly followed by a closure? The problem here is that your defects aren't being logged, trends can't be discovered, and you may be using the team inefficiently - ironically, real fix times lengthen precisely because you've started to measure them.

How about measuring velocity? Certainly it's an 'output' measure - if a team delivers 400 story points in one sprint and 300 the next, they've delivered 'less'. Is that a problem? Well, it might be, so let's measure it. Hey presto, the next sprint they deliver 410, the following sprint 510 - excellent, we've got more output, right? Well, no - all they did was relax their 'done' criteria so they didn't have to do so much performance testing. That let them get on with more work, but the system is now 20% slower than it was before. A good result? Not really.

So we have to be careful about what we measure, thoughtful about the behaviour we believe it will encourage, and clear about what the purpose of the measurement is. I visited a company a couple of years ago that had transitioned to agile and asked them: how do you show that you are efficient? My question was clearly aimed at eliciting the metrics they gather. The development manager looked at me quizzically and replied: we don't need to prove anything - we deliver value to our business at the end of every sprint, and they are happy to pay our wages.

The KPI in this case is clear and simple: delivery of value on a regular basis. And how would we assure that this continues? I believe it may boil down to three key scrum metrics:

1. commitment
2. delivered
3. done criteria

#1 simply allows us to ensure that the team is delivering predictably (it should correlate closely with #2); #2 measures the value the team actually delivers - it must be greater than zero, and acceptable to the customer given our cost; and #3 is our 'control' measure, there to ensure that no gaming of the other two values occurs - in essence, that quality is maintained.
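To make that concrete, here's a minimal sketch of how those three measures might be derived from sprint history. The `Sprint` record and its field names are hypothetical - real data would come from whatever tracker the team uses - but the relationships are the ones described above: #1 and #2 should correlate, #2 must be positive, and #3 guards against gaming.

```python
# Hypothetical illustration: three scrum measures over a sprint history.
from dataclasses import dataclass

@dataclass
class Sprint:
    committed: int     # story points committed to (#1)
    delivered: int     # story points accepted by the customer (#2)
    done_intact: bool  # did every delivered story meet the 'done' criteria? (#3)

def predictability(sprints):
    """Delivered-to-committed ratio; close to 1.0 means #1 and #2 correlate."""
    total_committed = sum(s.committed for s in sprints)
    total_delivered = sum(s.delivered for s in sprints)
    return total_delivered / total_committed if total_committed else 0.0

def value_delivered(sprints):
    """#2: value delivered must be greater than zero in every sprint."""
    return all(s.delivered > 0 for s in sprints)

def quality_maintained(sprints):
    """#3: the control - relaxing 'done' would invalidate the other two."""
    return all(s.done_intact for s in sprints)

history = [
    Sprint(committed=40, delivered=38, done_intact=True),
    Sprint(committed=40, delivered=42, done_intact=True),
    Sprint(committed=45, delivered=41, done_intact=True),
]

print(round(predictability(history), 2))  # 0.97
print(value_delivered(history))           # True
print(quality_maintained(history))        # True
```

A team gaming velocity by relaxing 'done' would show up here as `done_intact=False`, flagging that the rising delivered numbers can't be trusted.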

With these three, I believe we have all we need to measure, surely?

Friday, 9 March 2012

In Pursuit of Perfection

Very often I'll hear the conversation in meetings turning naturally towards a consensus that something is missing. Something which, if it were to exist, would have prevented the situation, would have made things easier, would have saved us money, saved poor Jonny from falling down the well, and so on.

And most of the time, this is absolutely correct. I mean, given that extra £100K we asked for, given the extension on the project, given the physical redundancy that we asked for 'last year' or the magic document which would have clarified the situation, we wouldn't be in this mess.

But we need to stop and think carefully about why this didn't happen.

Friday, 2 March 2012

Windows 8

Is it just me, or is Windows 8 a massive disappointment?

I'm a recent Mac/iAnything convert and love the way Apple lag behind the trend, appear to consider the best way to implement something, then just make it all look so easy. The Mac OS X multi-desktop is a good example - there have been desktop managers around on Windows for years - I've tried many and none have lasted. I've bought bigger monitors or reduced my font size rather than struggle with an odd virtual desktop thing which never seems to behave rationally and consistently. But on the Mac, just do a three-finger swipe to the left or right and your other desktop slides into view - as natural as opening a drawer and looking at what is inside, or turning over a book to read the back cover.

So I rather expected something exciting and new from Microsoft with Windows 8 - something of a game-changer, something which would make OS X look old fashioned and perhaps show us that there is life in the 'old dog' Microsoft yet. But no - what we have is yet another 'skin' on the familiar Windows 95 UI with a fancy codename - Metro.