RICE scoring in Aha! for prioritisation

Working out what to build next with RICE prioritisation, a tried-and-tested quantitative method, using Aha!’s product management scoring.

There are many ways to work out the order in which your ideas should be brought to life and it’s soooo tempting to go with your gut. You know what’s important, it’s obvious, Client X mentioned it just the other day, so it must be the most important thing, right?!

Ok, maybe not. Regardless of whether scoring guides your prioritisation or is your prioritisation, scoring is a low-cost way to determine the relative value of any number of things you may work on.

What is RICE Scoring?

If you’re here you probably know what RICE scoring is, but here’s a quick summary:

  • Reach: How many customers will be impacted?
  • Impact: To what extent will each customer be impacted?
  • Confidence: How confident are we in the three other scores?
  • Effort: How much time/effort/complexity is required?
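
If it helps to see the moving parts together, here’s a minimal Python sketch of the four inputs and the textbook RICE calculation. The class and function names are mine, and nothing here is specific to Aha!:

```python
from dataclasses import dataclass


@dataclass
class RiceInputs:
    """The four RICE factors for a single idea (a hypothetical structure)."""
    reach: float       # how many customers will be impacted
    impact: float      # how much each customer will be impacted
    confidence: float  # how sure we are of the other scores, as a fraction (0.8 = 80%)
    effort: float      # how much work is needed to deliver it


def textbook_rice(r: RiceInputs) -> float:
    """The textbook formula: (Reach x Impact x Confidence) / Effort."""
    return (r.reach * r.impact * r.confidence) / r.effort
```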

Configuring RICE in Aha!

Aha! Scoring Metrics only work with whole numbers, so here’s how I configure RICE scoring:

  • Reach: Linear scale, 0–100, steps of 10
  • Impact: Custom scale, 25, 50, 100, 200, 300
  • Confidence: Linear scale, 0–100, steps of 10
  • Effort: Linear scale, 1–12
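
If you want to mirror these scales outside Aha!, say in a quick script or spreadsheet export, the configuration boils down to a handful of allowed values. A minimal sketch (the names are mine, not an Aha! API):

```python
# Allowed values for each metric, mirroring the Aha! scales above.
REACH_VALUES = list(range(0, 101, 10))       # 0-100 in steps of 10
IMPACT_VALUES = [25, 50, 100, 200, 300]      # custom relative scale
CONFIDENCE_VALUES = list(range(0, 101, 10))  # 0-100 in steps of 10
EFFORT_VALUES = list(range(1, 13))           # 1-12 months


def validate(reach: int, impact: int, confidence: int, effort: int) -> None:
    """Raise if any score falls outside its configured scale."""
    checks = [
        ("Reach", reach, REACH_VALUES),
        ("Impact", impact, IMPACT_VALUES),
        ("Confidence", confidence, CONFIDENCE_VALUES),
        ("Effort", effort, EFFORT_VALUES),
    ]
    for name, value, allowed in checks:
        if value not in allowed:
            raise ValueError(f"{name} must be one of {allowed}, got {value}")
```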

These numbers may seem plucked out of a hat, but each one has some reasoning behind it:

Reach

This one’s simple: it’s the percentage of customers who will be impacted by the work.

It might refer to your entire customer base, or to a specific segment you’re focusing on, in which case you’re asking “What percentage of these customers will be impacted?”.

Either way, it measures the proportion of the customers you most care about who will be impacted.
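
As a made-up worked example, if 3,200 customers in an 8,000-customer target segment would be affected, Reach comes out at 40:

```python
def reach_score(impacted_customers: int, segment_size: int) -> int:
    """Percentage of the target segment impacted, rounded to the nearest 10
    to fit the 0-100, steps-of-10 scale configured above."""
    pct = 100 * impacted_customers / segment_size
    return int(round(pct / 10) * 10)


print(reach_score(3_200, 8_000))  # -> 40
```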

Impact

Expressed as a percentage, Impact refers to how much each user will be impacted. If Reach is “How many?” then Impact is “To what extent?”.

I use the numbers 25, 50, 100, 200 and 300. They may look arbitrary, but they form a relative scale, so you need a baseline of a few example features to anchor it.

For example, 100 might be a feature in an existing module, whereas 50 could be a minor improvement to a feature and 25 may be a tweak or usability improvement. Moving up the scale, 200 could be a major refactoring of a feature/module and 300 might be a new module or set of features entirely.
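
One way to keep the scale honest is to write those baselines down next to the numbers, for example:

```python
# Baseline examples pinned to each point on the Impact scale
# (the descriptions are illustrative, not a fixed taxonomy).
IMPACT_BASELINES = {
    25: "tweak or usability improvement",
    50: "minor improvement to an existing feature",
    100: "new feature in an existing module",
    200: "major refactoring of a feature or module",
    300: "entirely new module or set of features",
}
```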

Confidence

Expressed as a percentage, how confident are you in the accuracy of the other three parts of the RICE equation?

I know, I know, we haven’t got to Effort yet and ideally Confidence would come last, but RIEC just doesn’t sound as good, and if we started estimating customers’ resultant Happiness we’d be nearing a whole different kettle of fish.

In practice this comes down to how much validation exists for the proposed work and how sure we are that it’s necessary. Implementing HTTPS may be 100% but a tweak a single customer suggested last year may be closer to 10%.
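
To stop the percentage being pure gut feel, a rough rubric along these lines can help; the thresholds here are illustrative rather than part of RICE:

```python
# A rough rubric for Confidence based on how much validation exists
# (the thresholds are illustrative, not part of RICE itself).
CONFIDENCE_RUBRIC = [
    (100, "non-negotiable or fully validated, e.g. implementing HTTPS"),
    (80, "validated with several customers and backed by data"),
    (50, "plausible, but based on internal opinion only"),
    (10, "a single anecdote, e.g. one customer's suggestion last year"),
]
```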

Effort

How many months the work requires from start to finish, including design, validation, development, testing and deployment. You could also think of this as delivery time.

Estimating effort has historically been a point of contention, particularly amongst Scrum Masters. There still doesn’t seem to be a consensus on whether a Story Point refers to time, effort, complexity or some other equally intangible thing.

Nevertheless, the most useful metric in my experience is the total number of months it will take to deliver an item. You can work on multiple things at once, of course, but I find this metric to be the most actionable when it comes to projecting how difficult and time-consuming something will be.
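
As a sketch, you could sum rough per-phase estimates and round up to fit the 1–12 scale; the phase breakdown below is made up:

```python
import math


def effort_months(phase_estimates: dict[str, float]) -> int:
    """Total delivery time in months, rounded up and clamped to the 1-12 scale."""
    total = sum(phase_estimates.values())
    return min(12, max(1, math.ceil(total)))


print(effort_months({"design": 0.5, "development": 2.0, "testing": 0.5}))  # -> 3
```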

How it works in Aha!

Here’s a screenshot of my configuration from Settings > Account > Scorecards:

The equation is ((Reach * (Impact / 100) * (Confidence / 10)) / Effort) / 30 which, using the scales above, gives you a number between 0 and 100. That’s useful because if all your scoring methods yield a similar range, scores can be compared between product lines.

In short: multiply Reach by Impact (as a fraction of 100) and Confidence (as a fraction of 10), divide by Effort, then divide again by 30.

One interesting property of this equation is that if Confidence = 0 then Score = 0, which totally makes sense: if you have no confidence in your estimates whatsoever then you shouldn’t proceed.
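
Here’s the same equation as a small Python function, with a few made-up inputs to show the range and the zero-confidence property:

```python
def aha_rice_score(reach: int, impact: int, confidence: int, effort: int) -> float:
    """((Reach * (Impact / 100) * (Confidence / 10)) / Effort) / 30"""
    return ((reach * (impact / 100) * (confidence / 10)) / effort) / 30


print(aha_rice_score(reach=100, impact=300, confidence=100, effort=1))  # 100.0 (best case)
print(aha_rice_score(reach=60, impact=100, confidence=80, effort=3))    # ~5.3 (typical item)
print(aha_rice_score(reach=100, impact=300, confidence=0, effort=1))    # 0.0 (no confidence)
```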

Where to use this?

RICE scoring works in most contexts but does particularly well for working out the order in which you should tackle Initiatives and the importance of each Feature, by which I mean which features are critical and which are not.

As we approach the end of December I’m turning a very critical eye to what I’m doing in the year ahead so now is the perfect time to be revisiting past assumptions and working out what’s really important for 2018.
