I like burn-down (or burn-up) charts. I like seeing them on walls and in conference rooms serving as information radiators. But some of you might be abusing your charts without realizing it. Or maybe your boss (or Product Owner) is using them to harm you and your team.
Here are some phrases I’ve heard which might signal trouble…
1. “Why didn’t we earn as many points this week as last?”
2. “Why didn’t Rob finish all his points this iteration?”
3. “We need to finish 10% more points each iteration.”
4. “Bugs shouldn’t get points because we don’t want to reward our developers for fixing their bugs.”
Software development progress can be hard to measure, and the Product Owner wants to know “When will it be done?” So we give stories point estimates, and we start expecting these “points” to take on a magical quality. We start to believe in them, have faith in them, and feel they are an absolute quantity. Like other estimates, we might expect them to reflect reality (and get upset when they don’t!).
We track them on charts, compare people and teams with them, and pledge allegiance to them. (“I pledge allegiance to a point…”)
I’ve even heard tech leads and CTOs comparing point velocity numbers at conferences, bragging about who has the highest speed. Foolishness.
Why do we do this?
I believe we do this because we are looking for a way to get a meaningful grip on our software production metrics. But in the process, we forget that points are inherently meaningless. Each team values them differently. Some think of them as “effort points,” others as “complexity points.” Some teams think an 8-point story is HUGE, yet others are comfortable with anything less than a 32.
Let me paint a broader picture: if you are grading your development teams using mystical points which have no direct business value, no clear definition, and no objective comparison across teams, you have a problem. Remember what W. Edwards Deming said: “People with targets and jobs dependent upon meeting them will probably meet the objectives – even if they have to destroy the enterprise to do it.” Yikes!
Your programmers are smart and motivated. You want them to use their talented, motivated brains to build expensive software. But as Deming points out, if you create targets that their jobs depend on, they will meet them, and the organization may pay dearly. So choose your targets carefully, and don’t abuse your people with them.
Don’t believe me?
I’ve seen smart, highly skilled programmers…
- Pad their estimates to ensure they hit their normal velocity.
- Split stories into multiple parts when management penalized them for updating a story’s size.
- Mark bugs as “features” to reduce bug count.
- Steadily increase their point estimates over time to accommodate management’s belief that velocity should increase steadily.
I don’t blame these programmers at all; I would do the same thing. I blame management, especially upper management, who want to manage by graphs and charts.
From measurement to usefulness
If this sounds familiar, it might be time to step back and look at the situation. If you like the charts, ask the team whether they find them useful, and what they actually use them for. Their answers may tell you something more important than whether the charts are accurate.
In your 1:1 meetings, you could ask team members whether they feel pressured to meet velocity expectations, and how they’ve reacted to that pressure. Ask your programmers how your company could improve its point practice, and how they feel about the way execs or clients measure them. You’d be surprised at the kinds of ideas they might have for improving things.
Ideas in hand, you can rethink how you track and report progress. Maybe you stick with points, but only report sliding-window velocity averages. Perhaps you decide to estimate in ranges instead of single numbers. Whatever you do, strive to build transparency throughout the system. Otherwise, your programmers will see the system as a game, and they will expend tremendous energy to win it. Or maybe they will leave.
Either way, you lose.
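To make the sliding-window idea concrete, here is a minimal Python sketch of reporting a rolling average of velocity rather than spotlighting any single iteration. The function name and the sample numbers are illustrative, not from any particular tool:

```python
from collections import deque

def sliding_velocity_averages(velocities, window=3):
    """Return the running average of the last `window` iterations'
    velocities, smoothing out per-iteration noise."""
    recent = deque(maxlen=window)  # automatically drops the oldest value
    averages = []
    for v in velocities:
        recent.append(v)
        averages.append(round(sum(recent) / len(recent), 1))
    return averages

# Hypothetical velocity per iteration:
print(sliding_velocity_averages([20, 26, 17, 23, 30]))
# → [20.0, 23.0, 21.0, 22.0, 23.3]
```

Reporting the smoothed number blunts the “why fewer points than last week?” question, because normal week-to-week variation mostly disappears from the chart.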