Delivering an AI-enabled experience can make stakeholders feel a little out of control, which makes the job of the product manager especially challenging. One way to solve this problem is to set clear success metrics and be transparent about any supporting metrics that affect the users' experience. This is often easier said than done, and that is why I'm sharing a few tips that I’ve distilled based on my experience shipping AI-enabled solutions.
Tip 1: Align your work with a short- or long-term product goal
Consider tying the success metrics of your project to a higher-level strategic goal your broader team is working towards. This helps your stakeholders to:
understand how your work is contributing to the goal (e.g. improve conversion, virality, etc.)
decide whether to continue investing in your work when things go south
evaluate certain tradeoff decisions at a higher level
provide more helpful feedback as to how you can improve
Tip 2: Track supporting metrics to surface problems early and often
Have you ever faced a situation like the one below?
“Data scientists reported an accuracy rate of 95%, but the stakeholders weren’t impressed and our users are not at all delighted.”
If so, you’re certainly not alone. While it’s essential to track success metrics just as you would for any other product or feature work, that alone isn’t enough when shipping an AI-enabled solution. This is because the input data almost always directly affects your end-user experience. If you aren’t tracking it, you risk delivering a less-than-ideal experience to your end-users. Instead, add supporting metrics (e.g. accuracy, recall) for each success metric you track, and make sure your data scientists know which metrics you’re optimizing for.
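To make this concrete, here is a minimal sketch of computing supporting metrics alongside a headline accuracy number. The prediction logs are hypothetical, and the plain-Python implementation is just for illustration; in practice your data scientists likely already compute these with a library.

```python
# Minimal sketch: accuracy alone hides whether errors are false positives or
# false negatives, so track precision and recall alongside it.
# The label lists below are hypothetical prediction logs (1 = relevant).
def supporting_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return {
        "accuracy": correct / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,  # of what we showed, how much was right
        "recall": tp / (tp + fn) if tp + fn else 0.0,     # of what was right, how much we showed
    }

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(supporting_metrics(y_true, y_pred))  # → {'accuracy': 0.75, 'precision': 0.75, 'recall': 0.75}
```

Reporting all three together is what lets a stakeholder ask the right question: a 95% accuracy with a 40% recall tells a very different story about the user experience than 95% accuracy alone.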
Tip 3: Pair accuracy with subjective data to guide benchmarking
Benchmarking on raw accuracy alone, or on what’s technically possible, puts you at risk of hitting a benchmark that doesn’t translate into a better user experience. Instead, try to guide that benchmarking decision with subjective data, such as user satisfaction or happiness. This helps ensure the benchmark you set reflects an experience that actually satisfies your users’ needs.
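One way to operationalize this pairing is to pick the accuracy benchmark from where user satisfaction crosses your product bar, rather than from what the model can maximally achieve. The (accuracy, satisfaction) pairs and the 4.0 target below are hypothetical user-study numbers, assumed purely for illustration.

```python
# Sketch: choose an accuracy benchmark from subjective data, not accuracy alone.
# Hypothetical user-study results: (model accuracy, mean satisfaction on a 1-5 scale).
studies = [(0.80, 3.1), (0.85, 3.9), (0.90, 4.4), (0.95, 4.5)]

TARGET_SATISFACTION = 4.0  # assumed product bar

def accuracy_benchmark(studies, target):
    """Lowest tested accuracy at which users already report the target satisfaction."""
    for acc, sat in sorted(studies):
        if sat >= target:
            return acc
    return None  # no tested accuracy level satisfies users yet

print(accuracy_benchmark(studies, TARGET_SATISFACTION))  # → 0.9
```

In this made-up data, pushing from 90% to 95% accuracy barely moves satisfaction, so 90% is the benchmark worth committing to, and the remaining engineering effort may be better spent elsewhere.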
Tip 4: Benchmark against a simpler solution
It’s easy to fall into the trap of solving a problem with machine learning when a simpler solution would have done the job better. Hence, try comparing the data from your AI-enabled solution against the simpler solution. Suppose you are replacing the rule-based recommendations in your @mention panel with AI-enabled recommendations: you should almost always benchmark your metrics against the rule-based solution. Only when your AI recommendations perform significantly better than before should you deploy them to your end-users. Otherwise, you’re investing significant effort into something that delivers an inconsistent experience without a clear gain.
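“Significantly better” can be checked with a standard statistical test. Below is a sketch using a two-proportion z-test on a hypothetical acceptance rate (suggestions accepted / suggestions shown) for the @mention example; all counts are invented, and your team may prefer a different test or metric.

```python
import math

# Sketch: compare an AI-enabled @mention ranker against the rule-based baseline
# with a two-proportion z-test on acceptance rate. All counts are hypothetical.
def z_score(successes_a, n_a, successes_b, n_b):
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    return (p_b - p_a) / se

# rule-based baseline: 1200 accepted of 10000 shown; AI variant: 1350 of 10000
z = z_score(1200, 10_000, 1350, 10_000)
ship_ai = z > 1.96  # only deploy if significantly better at ~95% confidence
print(round(z, 2), ship_ai)  # → 3.18 True
```

The point of the gate is asymmetry: if the test fails, you keep the cheaper, more predictable rule-based solution and nobody downstream inherits an inconsistent experience.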
Tip 5: Revisit your metrics often even after you’ve hit your goal
Setting the success metrics for your project is just the beginning. Once your solution is out in the world, its suggestions or recommendations change as new data comes in. Just because you hit your goal a few months ago doesn’t mean you’re still on track today. Make sure you go back to the dashboard you set up initially and check whether any adjustment is necessary.
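A lightweight way to make that revisit routine is a periodic drift check against the numbers you captured at launch. The baseline values, metric names, and 10% tolerance below are illustrative assumptions; in practice you would read the current values from your metrics store.

```python
# Sketch: a periodic check that flags metrics that have drifted below launch levels.
# Baseline values and the tolerance are illustrative assumptions.
BASELINE = {"acceptance_rate": 0.135, "recall": 0.81}  # captured when the goal was hit
ALERT_DROP = 0.10  # flag a metric that fell more than 10% relative to baseline

def drifted_metrics(current, baseline=BASELINE, tolerance=ALERT_DROP):
    return [
        name for name, value in current.items()
        if name in baseline and value < baseline[name] * (1 - tolerance)
    ]

# this month's numbers, e.g. pulled from the dashboard's data source
current = {"acceptance_rate": 0.118, "recall": 0.80}
print(drifted_metrics(current))  # → ['acceptance_rate']
```

Anything the check flags becomes the agenda for the revisit: investigate whether the data shifted, the model degraded, or the goal itself needs updating.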
All in all, while it may be easy to get away without setting success metrics for a typical software project, skipping them for an AI-enabled experience will show directly in the quality of what you ship to your end-users. I hope these tips help you in your product journey to deliver a great experience to your users.