MVPverse — How much feedback is good feedback?
If you have been following this series of articles, which I am calling MVPverse, you will have caught that the sole reason for an MVP is to allow a team to collect the largest amount of validated learning about customers with the least effort.
The two things to take away here are:
- The largest amount of validated learning
- The least effort
The largest amount of validated learning — An MVP generally starts with an idea of the problem and a hypothesis for a solution. Since it is an MVP, we can assume the problem is narrow enough (not bringing about world peace) and that the hypothesis predicts a particular outcome. In the modern-day product world that outcome is expressed as metrics, so you already have a metric structure/framework ready.
The outcome governs how much learning you need in order to satisfy the hypothesis. If you want to learn more about hypothesis-driven MVP development, I am afraid you will have to wait for me, but if you are restless like me, you can listen to this brilliant talk — Hypothesis Driven Validation by Nate Archer.
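To make the idea of a metric structure a little more concrete, here is a minimal sketch in Python of how a hypothesis and the metrics that would satisfy it might be written down. The names and numbers (Hypothesis, Metric, the targets) are purely illustrative assumptions, not a prescribed framework.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str          # e.g. "trial-to-paid conversion"
    target: float      # the outcome the hypothesis predicts
    observed: float    # what the MVP actually measured

    def satisfied(self) -> bool:
        # A metric is satisfied once the observed value meets the target.
        return self.observed >= self.target

@dataclass
class Hypothesis:
    statement: str
    metrics: list[Metric]

    def validated(self) -> bool:
        # The hypothesis holds only if every metric hits its target.
        return all(m.satisfied() for m in self.metrics)

# Illustrative usage: has the feedback collected so far satisfied the hypothesis?
hypothesis = Hypothesis(
    statement="Freelancers will pay for automated invoicing",
    metrics=[
        Metric("trial-to-paid conversion", target=0.05, observed=0.07),
        Metric("weekly active usage", target=0.40, observed=0.35),
    ],
)
print(hypothesis.validated())  # False: one metric still falls short
```

The point of writing it down this plainly is that "enough feedback" stops being a feeling and becomes a check against targets you committed to before you shipped.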
The least effort — Well, an MVP is not a one-pot meal you can spin up after watching a YouTube video; in most cases even that fails. You will have to put in some effort, after all, it could be the next unicorn. But you can certainly minimise that effort by selecting the right early evangelists and the right method for collecting feedback.
So, in the end, feedback that satisfies your hypothesis and comes from your early evangelists is good feedback.