Build, Measure, Learn, Build

Eric Ries, in his book The Lean Startup, describes each iteration of product-market fit determination as a “build-measure-learn” cycle. The idea, apparent from the name, is to build an MVP of a solution to the problem-space, get it out there for a limited number of prospective customers to try, gather their feedback in a measurable form, and finally learn important lessons from the experiment and tune the MVP to better suit the needs of the market. If you haven’t read my previous post on the importance of finding an acceptable Product-Market Fit, now might be a good time to do so.

Dan Olsen offers a slightly different perspective on this “build-measure-learn” cycle. In fact, he comes up with a different name for his own extended version – the design-test-learn-hypothesize cycle. Each piece is interesting to think about.


The “build” phase is implicitly meant to be one in which the product, or an MVP of it, is actually being built. Olsen points out that it may not be necessary to go all the way and implement a version of the product before a set of hypotheses can be tested. Instead, a solution to the chosen problem-space can simply be designed, in the form of a UI/UX mockup or wireframes, and even that could be an acceptable representation to show some early-stage customers and gather feedback on. Building an MVP could deplete precious resources before a hypothesis has been verified. Especially during the early stages of a product or feature, it will probably be more reasonable to test the hypotheses with a design than with an actually implemented MVP. In fact, depending on the product or service, there could be even deeper levels of granularity: the first hypothesis test could simply be a conversation within the office to determine whether the idea is even worth designing. Quick iterations mean quicker feedback.


The “measure” phase of the cycle tends to suggest that this measurement is drawn from quantitative inputs. That need not be the only useful way to test a hypothesis; in fact, more often than not, it is only part of the whole picture. Depending on the stage and nature of the product or feature, qualitative feedback could be vital.

Although Lean processes advocate data-driven decision-making, data should not be the sole key to making decisions; instead it should be used as a source for generating insights about a product. Decisions need to be focused on understanding customers and the experience they have while using your product, and quantitative data alone just does not capture that. As Michael Bamberger explains during his guest appearance on This Is Product Management, A/B tests are not always reliable. His reasoning is that if you ran enough A/A split tests, at some point you would probably find that A is “outperforming” A. That obviously makes no real-life sense, since the two arms are exact replicas of each other – it is simply the false-positive rate of the test showing up over many repetitions. So then, is it safe to rely solely on A/B split tests to validate your design or hypothesis? A better approach seems to be to use A/B tests as a source of insight, and then use parts of those tests to understand the customer better so that you can further develop your hypothesis. In Olsen’s version, this is called the test phase.
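Bamberger’s A/A point is easy to demonstrate with a small simulation. The sketch below (my illustration, not from the podcast; the conversion rate, sample size, and significance test are all assumed for the example) runs many A/A tests where both arms have the identical true conversion rate, applies a standard two-proportion z-test at the 5% level, and counts how often the result looks “significant” purely by chance:

```python
import random

random.seed(42)

def aa_test(n_users=1000, true_rate=0.05):
    """One A/A split test: both arms draw from the SAME conversion rate."""
    a1 = sum(random.random() < true_rate for _ in range(n_users))
    a2 = sum(random.random() < true_rate for _ in range(n_users))
    p1, p2 = a1 / n_users, a2 / n_users
    # Pooled two-proportion z-test for the difference between the arms.
    p = (a1 + a2) / (2 * n_users)
    se = (2 * p * (1 - p) / n_users) ** 0.5
    z = abs(p1 - p2) / se if se > 0 else 0.0
    return z > 1.96  # "significant" at the 5% level

# Run 1,000 A/A tests and count the spurious "wins".
false_positives = sum(aa_test() for _ in range(1000))
print(f"'Significant' A/A results: {false_positives} out of 1000")
```

By construction, roughly 5% of these identical-arm tests come out “significant” – which is exactly why a single surprising A/B result should feed a hypothesis rather than settle a decision.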


Lastly, the “learn” phase in Ries’ cycle can be broken into two distinct phases – learn and hypothesize. At the end of a test, once there is new feedback on the current design, the product team will need to learn from it and figure out what it will take to address the issues and improve the design. Feedback can and will also be positive, and it is important for a product team not only to work on the weaknesses but also to capitalize on the strengths of the design that a test brings out. It is also important to hold onto the customers who gave positive feedback and keep reaching out to them, as they will comprise an important part of the initial customer-base. It is then time to go back to the drawing board and hypothesize. Note that the scope of this can go beyond the original value proposition. The new hypothesis can leverage the original value proposition, but if lessons were learnt about how the value proposition can be improved, or even about how the problem-space being targeted is not a good one, then it might be time to have the “persevere vs. pivot” debate. The point is, proactively reformulating hypotheses should also be part of this iterative process.

The Bottom Line

Iterating before implementing is crucial, and it is as important to have quick iterations as it is to make sure each one is insightful.
