
The Value of a Blocker Bug

Last week I had the pleasure of attending and speaking at the Test Automation and Digital QA Summit in Phoenix.

Nate Custer, TTC Americas

I took notes on every other speaker's presentation, but the one that captured my imagination was Scott Shea's talk, "The Value of a Blocker Bug". Scott introduced a model for thinking about the potential lost value of each bug. I've reached out to Scott and asked him for a link to his slide deck, where he explains this formula in a bit more detail.

The fundamental idea is that if you can measure the value of your software, and a blocking bug prevents some number of users from accessing the software at all, then the potential value at risk due to the bug can be understood as:

Value at Risk for Bug = Total Value of Software * (Number Impacted Users / Total Users)
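
To make that concrete (the numbers here are mine, for illustration): a product worth $1,000,000 a year with a blocker locking out 50 of its 1,000 users has $1,000,000 * (50 / 1,000) = $50,000 of value at risk.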

For a slightly more complex formula, you can add factors for lower-priority bugs that scale the Value at Risk down accordingly, for example:

Blocker = 1
Severe = .8
Major = .4
Minor = .1
Trivial = .01
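
Here is a minimal sketch of the model in Python as I understood it; the function name and the example figures in the last two lines are my own illustrative assumptions:

SEVERITY_FACTOR = {
    "blocker": 1.0,
    "severe": 0.8,
    "major": 0.4,
    "minor": 0.1,
    "trivial": 0.01,
}

def value_at_risk(total_value, impacted_users, total_users, severity="blocker"):
    """Potential value at risk for a single bug, per Scott's formula."""
    return total_value * (impacted_users / total_users) * SEVERITY_FACTOR[severity]

# A $1,000,000 product where a major bug affects 200 of 5,000 users:
print(value_at_risk(1_000_000, 200, 5_000, "major"))  # 16000.0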

Scott acknowledged that this is a bit of a primitive model. It assumes every customer contributes the same revenue; the priority factors are arbitrary; and when considering many issues at once, the model might suggest there is more value at risk than the total value of the software project. However, he explained a few types of analysis it makes possible:

  • It allows for a simple numeric ranking of all bugs in the system. This may surface major defects that actually represent more Value at Risk than a severe defect experienced by a single user (see the sketch after this list).
  • For software vendors, it offers a numeric comparison between the potential revenue from implementing a new feature and the potential cost of leaving bugs unfixed. In some organizations, what is measured is what matters; this model provides a common way to analyze the choices in front of an organization.
  • It also provides a way to think about the total potential lost value across all currently known, unresolved defects. Tracking that number over time gives an organization insight into how the quality of the software it delivers is improving.
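
As a sketch of the first point, ranking a handful of invented bugs by Value at Risk might look like this (the bug list and figures are made up for illustration):

SEVERITY_FACTOR = {"blocker": 1.0, "severe": 0.8, "major": 0.4,
                   "minor": 0.1, "trivial": 0.01}

def value_at_risk(total_value, impacted_users, total_users, severity):
    return total_value * (impacted_users / total_users) * SEVERITY_FACTOR[severity]

TOTAL_VALUE, TOTAL_USERS = 1_000_000, 5_000

# Invented example bugs: (id, severity, number of impacted users).
bugs = [
    ("BUG-1", "severe", 1),      # severe, but only one user hit it
    ("BUG-2", "major", 400),     # less severe, but widely experienced
    ("BUG-3", "minor", 3_000),   # a minor annoyance for most users
]

for bug_id, severity, impacted in sorted(
        bugs,
        key=lambda b: value_at_risk(TOTAL_VALUE, b[2], TOTAL_USERS, b[1]),
        reverse=True):
    risk = value_at_risk(TOTAL_VALUE, impacted, TOTAL_USERS, severity)
    print(f"{bug_id} ({severity}, {impacted} users): ${risk:,.0f} at risk")

# BUG-3 (minor, 3000 users): $60,000 at risk
# BUG-2 (major, 400 users): $32,000 at risk
# BUG-1 (severe, 1 users): $160 at risk

Here the widely experienced minor and major defects outrank the severe defect that a single user hit, which is exactly the kind of surprise the ranking is meant to surface.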

Having thought about this some more, I'd offer a few refinements:

  • The core theory here is that customers will not renew or purchase new versions of software that is non-functional. I'd suggest adding some kind of decay factor, so that the longer a bug has been reported without actually leading to a customer leaving, the less Value at Risk we assign it.
  • Every defect reported also comes with a cost for the support interaction. Odds are that defects reported by many people will be experienced by even more users in the future. There is therefore a potential additional support cost, which might be represented by a small exponential factor that increases the Value at Risk score of a bug that goes unfixed (a sketch combining both refinements follows this list).
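
A sketch of how both refinements might bolt onto the base formula follows. The 90-day half-life, the $25 per-interaction support cost, the growth rate, and every figure here are assumptions of mine for illustration, not part of Scott's model:

import math

SEVERITY_FACTOR = {"blocker": 1.0, "severe": 0.8, "major": 0.4,
                   "minor": 0.1, "trivial": 0.01}

def refined_value_at_risk(total_value, impacted_users, total_users, severity,
                          days_open, reports):
    base = total_value * (impacted_users / total_users) * SEVERITY_FACTOR[severity]
    # Decay: halve the renewal risk for every 90 days the bug has been open
    # without actually costing us a customer (the half-life is an assumption).
    decay = 0.5 ** (days_open / 90)
    # Support cost: reports so far hint at more to come, so grow a small
    # per-interaction cost ($25, assumed) exponentially in the report count.
    support_cost = 25 * (math.exp(0.05 * reports) - 1)
    return base * decay + support_cost

# A blocker hitting 50 of 5,000 users, open 180 days, reported 40 times:
print(refined_value_at_risk(1_000_000, 50, 5_000, "blocker", 180, 40))  # ~2659.73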

As with most models for analysis, the usefulness in my mind lies mostly in the chance that the model will surprise you. We all have methods for deciding which issues to work on, and when I've been the one making that choice I was far from perfect. Models like this are not infallible, nor do they lead to perfect choices, but openness to different ways of looking at problems and to challenging your own approach has frequently helped me improve my decision making over time.