A startup’s job is to (1) rigorously measure where it is right now, confronting the hard truths that assessment reveals, and then (2) devise experiments to learn how to move the real numbers closer to the ideal reflected in the business plan.
Simple questions to ask startups:
- Are you making your product better?
- How do you know?
- You need innovation accounting
Innovation accounting works in three steps:
- Use an MVP to establish real data on where the company is right now. You can use this baseline to start tracking progress
- Startups must then attempt to tune the engine from the baseline toward the ideal. This will involve micro-changes and product optimisations.
- Once the changes have been made, you reach a decision point: pivot or persevere
If the company is making good progress toward the ideal, that means it is learning and using that learning effectively.
MVPs provide the first example of a learning milestone: an MVP allows a startup to fill in real baseline data in its growth model – conversion rates, sign-up and trial rates, customer lifetime value, etc. – and this is valuable as the foundation for learning about customers and their reactions to a product, even if that foundation begins with extremely bad news.
Once your efforts are aligned with what customers really want, your experiments are much more likely to change their behaviour for the better. A healthy sign of success, or of a successful pivot, is that the new experiments you run are overall more productive than the experiments you were running before (which means you should keep track of your experiments and compare their performance).
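This bookkeeping can be as simple as a table of experiments per phase. A minimal sketch, with entirely made-up experiment names and numbers, just to show the before/after comparison:

```python
# Hypothetical experiment log: (name, phase, baseline_conversion, new_conversion)
experiments = [
    ("bigger signup button", "before_pivot", 0.050, 0.051),
    ("shorter onboarding",   "before_pivot", 0.050, 0.049),
    ("concierge follow-up",  "after_pivot",  0.050, 0.062),
    ("annual plan offer",    "after_pivot",  0.050, 0.058),
]

def mean_lift(phase):
    """Average improvement over baseline for experiments in one phase."""
    lifts = [new - base for _, p, base, new in experiments if p == phase]
    return sum(lifts) / len(lifts)

# A healthy pivot: experiments after it move the real numbers more
before, after = mean_lift("before_pivot"), mean_lift("after_pivot")
```

Here `after > before` would be the signal that the pivot made experimentation more productive.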
Poor quantitative results force us to declare failure and create the motivation, context, and space for more qualitative research. These investigations produce new ideas – new hypotheses – to be tested.
A solid process lays the foundation for a healthy culture, one where ideas are evaluated by merit and not by job title. Teams working in this system begin to measure their productivity according to validated learning, not in terms of the production of new features.
One way to achieve this is via the Kanban system, with four states – backlog, in progress, built, validated – and a limit of four stories in each state. As stories flow from one state to another, the buckets fill up. Once a bucket is full, it cannot accept more stories.
Only when a story has been validated can it be removed from the Kanban board. If validation fails and it turns out the story was a bad idea, the relevant feature is removed from the product. Validation means measuring, with usage data, whether what you built actually changed customer behaviour.
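The four-bucket flow above can be sketched in code; the class and method names here are invented for illustration, not from the book:

```python
WIP_LIMIT = 4
STATES = ["backlog", "in_progress", "built", "validated"]

class KanbanBoard:
    def __init__(self):
        self.buckets = {state: [] for state in STATES}

    def _place(self, state, story):
        if len(self.buckets[state]) >= WIP_LIMIT:
            return False  # bucket full: it cannot accept more stories
        self.buckets[state].append(story)
        return True

    def add(self, story):
        """New stories enter via the backlog, if there is room."""
        return self._place("backlog", story)

    def advance(self, story):
        """Move a story to the next state, blocked if that bucket is full."""
        state = next(s for s in STATES if story in self.buckets[s])
        nxt = STATES[STATES.index(state) + 1]
        if self._place(nxt, story):
            self.buckets[state].remove(story)
            return True
        return False  # downstream is full: go validate something first

    def remove_validated(self, story):
        """Only validated stories leave the board, freeing capacity."""
        self.buckets["validated"].remove(story)
```

The point of the full-bucket rule is that once "built" is full, the team cannot start new work; the only way to free capacity is to validate what has already shipped.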
The three A’s of metrics: Actionable, accessible, and auditable:
For a metric to be considered actionable it must demonstrate clear cause and effect. Actionable metrics are the antidote to reports where cause and effect is not understood.
Accessible: departments too often spend their energy learning how to use data to get what they want rather than as genuine feedback to guide their future actions. Make reports as simple as possible so that everyone understands them, and use tangible, concrete units – e.g. what exactly is a "website hit"?
Cohort-based reports are the gold standard of learning metrics: they turn complex actions into people-based reports. Each cohort analysis says: among the people who used our product in this period, here's how many of them exhibited each of the behaviours we care about.
The report deals with people and their actions.
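A minimal sketch of such a people-based report; the event data and behaviour names below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical events: (user_id, cohort_period, behaviour)
events = [
    ("u1", "2024-01", "registered"), ("u1", "2024-01", "activated"),
    ("u2", "2024-01", "registered"),
    ("u3", "2024-02", "registered"), ("u3", "2024-02", "activated"),
    ("u3", "2024-02", "paid"),
]

def cohort_report(events, behaviours=("registered", "activated", "paid")):
    """Among the people who used the product in each period,
    how many exhibited each behaviour we care about?"""
    cohorts = defaultdict(lambda: {b: set() for b in behaviours})
    for user, period, behaviour in events:
        cohorts[period][behaviour].add(user)
    # Count distinct people, not raw events
    return {period: {b: len(users) for b, users in row.items()}
            for period, row in cohorts.items()}
```

Each row of the result reads as a statement about people ("of the January cohort, how many activated?"), which is exactly what makes it auditable against real customers.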
Accessibility also refers to widespread access to the reports.
Reporting data and its infrastructure should be considered part of the product itself and owned by the product development team.
Auditable: you must ensure that the data is credible to employees. Reports should be drawn directly from the master data.