Prioritize Effectively with a Product Scorecard
How do you ensure that your team focuses on the most valuable improvements within your platform? On paper, every story, bug, task, epic or initiative can be justified via a business case, but in practice, as a product owner, you simply do not have the time for this. How do you ensure that you make well-considered decisions in forming and prioritizing your roadmap? A scorecard can offer a solution here, but where do you start?
Prioritizing features is at the heart of the product owner’s work. And yet, many product owners prioritize based on gut feeling, or on the needs of the noisiest person at the table. In this article I explain how you can use a scorecard to put all interests in the right context and prioritize features. This scorecard approach keeps you focused on customer satisfaction and business strategy, while reducing risk.
What is a product scorecard?
There are many definitions and applications of scorecards within product management. Since we are now focusing on prioritization, my definition is as follows:
A product scorecard is a system used by product owners to prioritize features, based on balanced KPIs that are in line with business strategy and product vision.
In this article I use the simplified example below:
Ingredients of a product scorecard
Based on the example above, I explain all parts of the scorecard:
Weighting factors and weights
The foundation of your scorecard is your weighting factors. These factors are a direct translation of your business strategy and your product vision. Determine the KPIs that your product can contribute to the most and where you think your product can make the difference. Some examples of weighting factors:
- Revenue increase: the feature contributes to increasing your turnover. Think of increasing traffic (e.g. through SEO), conversion (e.g. optimizing your checkout) or retention (e.g. improving your after-sales emails);
- Cost savings: the feature contributes to saving internal costs. Think of internal process optimization (e.g. automating manual work) and license savings (e.g. replacing costly external tooling with in-house development);
- Insight acquisition: the feature contributes to gathering insights. Think of obtaining qualitative insights (e.g. an online questionnaire) or quantitative insights (e.g. the implementation of an analytics tool);
- Customer satisfaction: the feature contributes to improving user experience or NPS. Think of improving your mobile experience (e.g. making your webshop responsive), launching useful tools (e.g. choice wizards, apps) or making functionality more intuitive (e.g. simplifying your checkout);
- Performance or stability improvement: the feature contributes to speeding up page load times or making your platform more stable. Think of optimizing your code (e.g. simplifying your CSS), performing platform updates (e.g. upgrading your PHP version) or implementing technical process improvements (e.g. setting up a DTAP pipeline: development, testing, acceptance, production);
Not every weighting factor will count equally in your prioritization. That is why we attach a weight to each factor. Expressed as a percentage, with all weights summing to 100%, the weights let you translate your business strategy and product vision even more precisely.
A dangerous, but sometimes necessary, factor is the wildcard. If a feature must be delivered 'whatever it takes', I use a wildcard. The wildcard sends the feature straight to the top by giving it the highest final score. Which features might earn a wildcard? Think of features that:
- are necessary to comply with a particular law;
- are necessary to prevent a major security risk;
- solve business-critical problems;
- are on the critical path of necessary internal business programs.
An important factor that is often forgotten is the size of the feature for your development team. If you do not include this value, you run the risk of placing giant features with a slightly higher final score above features that are many times smaller. Since estimating the size of a feature at this stage is usually still largely guesswork, I always ask the team to help me make a so-called "t-shirt estimation", using the values "S", "M", "L" and "XL". In my underlying formula, I assign an absolute number to each t-shirt size. An example:
| T-shirt size | Estimated number of sprints | Calculation value |
| --- | --- | --- |
Decide for yourself which t-shirt sizes you use and which sprint quantities and calculation values you assign to each t-shirt size. If features are larger than your largest t-shirt size, you may want to consider dividing the feature into two separate features.
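The t-shirt mapping can be sketched in a few lines of code. The sprint estimates and calculation values below are my own illustrative assumptions, not values from the article; pick numbers that match your team's velocity.

```python
# Illustrative mapping of t-shirt sizes to calculation values.
# Both the sprint estimates (comments) and the calculation values
# are assumptions for demonstration only.
TSHIRT_VALUES = {
    "S": 1,   # e.g. up to 1 sprint
    "M": 2,   # e.g. ~2 sprints
    "L": 4,   # e.g. ~4 sprints
    "XL": 8,  # e.g. ~8 sprints
}

def calculation_value(size: str) -> int:
    """Return the development-size divisor for a t-shirt size."""
    return TSHIRT_VALUES[size]
```

A feature larger than your largest t-shirt size is a signal to split it before it ever enters the scorecard.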
Based on all these factors, you calculate a final score per feature using a formula:
= ( ( [weight of weighting factor 1] times [feature score of weighting factor 1] ) plus ( [weight of weighting factor 2] times [feature score of weighting factor 2] ) plus ( [weight of weighting factor …] times [feature score of weighting factor …] ) ) divided by [development size] plus [100 if a wildcard applies]
Now that the column headings have been covered, it is time to fill the scorecard with features. For each weighting factor, you give each initiative a relative score (0 (low) to 100 (high)) compared to the other initiatives. In addition, you record whether a wildcard applies and what the t-shirt size of the feature is. The formula then produces the final score automatically. The higher the final score, the higher the priority on your roadmap.
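The formula above can be expressed as a short function. This is a minimal sketch: the weighting factors, weights and feature scores below are invented for illustration, and weights are written as fractions summing to 1.0 rather than percentages.

```python
def final_score(weights, scores, size_value, wildcard=False):
    """Scorecard formula sketch.

    weights: weighting factor -> weight (fractions summing to 1.0)
    scores: weighting factor -> relative score, 0 (low) to 100 (high)
    size_value: development-size divisor from the t-shirt estimate
    wildcard: if True, add 100 so the feature jumps to the top
    """
    weighted = sum(weights[f] * scores[f] for f in weights)
    return weighted / size_value + (100 if wildcard else 0)

# Hypothetical example: three weighting factors, one "M"-sized feature.
weights = {"revenue": 0.4, "cost_savings": 0.2, "satisfaction": 0.4}
feature = {"revenue": 80, "cost_savings": 20, "satisfaction": 60}
score = final_score(weights, feature, size_value=2)  # (32 + 4 + 24) / 2 = 30.0
```

Note how the division by `size_value` penalizes large features: the same weighted score for an "XL" feature would rank well below a "S" feature.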
Tips
- Ensure buy-in from senior management: your scorecard can only work if it is aligned at the top. Make sure senior management supports your interpretation and translation of the KPIs. Only then do you create support, and you have your answer ready for 'that noisy one' at the conference table;
- Determine the scope of each initiative in advance: it is very important to establish a clear scope for each feature up front. Only with that scope can you really determine the scores and the development size. This sounds obvious but is quickly forgotten in practice;
- Keep assessing objectively: this tip also sounds like a no-brainer but often fades into the background. Forget that annoying stakeholder, throw your personal preferences overboard and look only at facts and best practices;
- Estimate development size with developers: in the past I have made the mistake of estimating the development size myself, or together with an architect far removed from day-to-day practice. As with the final estimation, the same applies to these high-level estimates of development size: do this together with one or a few experienced developers. After all, they know the product best and know better than anyone what it takes to deliver the feature within its scope;
- Reevaluate scores along the way: the scores you assign to features are relative, meaning they are determined in relation to the other features. If you later add a feature that scores higher than a feature with a score of 100, you must adjust all features downwards (for the relevant weighting factor). In addition, new insights into a feature can always lead to a revaluation of its score.
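Adjusting all features downwards for one weighting factor amounts to rescaling so the highest-scoring feature is back at 100. A minimal sketch, with invented feature names and scores:

```python
def rescale(scores):
    """Rescale relative scores so the highest score becomes 100.

    scores: feature name -> relative score for one weighting factor.
    Useful when a newly added feature pushes past the current maximum.
    """
    top = max(scores.values())
    return {name: s * 100 / top for name, s in scores.items()}

# Hypothetical: "new_feature" was judged higher than the previous maximum.
scores = {"checkout": 100, "seo": 70, "new_feature": 125}
rescaled = rescale(scores)
# "new_feature" becomes 100; the others shrink proportionally.
```

Rescaling per weighting factor keeps the 0-100 scale meaningful without changing the relative order of your features.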