
When to stop discovery

Article by Martin Blanquer

Discovery is the PM's best tool. It's also their best excuse to never ship.

Every product team that practices discovery has been through this moment: interviews pile up, insights accumulate, but no one says "OK, we know enough, let's go." There's always one more question to explore, one segment you haven't covered, one prototype to test. The cycle continues. The delivery backlog empties. And at some point, someone in the organization starts asking why nothing is shipping.

The problem is never knowing how to do discovery. Teresa Torres, Marty Cagan and dozens of frameworks cover that well enough. The problem is knowing when you know enough to decide.

This article proposes a decision framework for that question. Not a magic formula. A set of concrete signals that tell you it's time to ship, or to keep digging.

Reduce risk, don't chase truth

The often-cited figure of 5 interviews as a confidence threshold comes from a Nielsen Norman Group study on usability testing, not product discovery. It's a shortcut everyone repeats without questioning. Torres herself is clear on this: you'll never talk to enough customers to be certain of your decision.

What she proposes instead is more interesting. A product team's goal isn't to find the truth — it's to reduce risk to a level that's acceptable for the business. Nothing more. Discovery is a mitigation tool, not a quest for certainty. And like any tool, it has a point of diminishing returns.

Karl Weick defines wisdom as "the balance between confidence in what you know and doubt about what you know." That's exactly what discovery should be: not a tool to eliminate doubt, but a tool to learn how to decide with it.

The decision framework

In practice, "acceptable" remains a fuzzy concept. Discovery becomes a comfort zone. A space where you feel productive without making a decision. Herbig puts it well: swapping the number of features shipped for the number of user interviews is just changing hamster wheels. Different wheel, same hamster.

I've been there. We were rethinking the dashboard of a B2B product, the kind of topic where every stakeholder has their own vision. After 8 interviews in 3 weeks, the pattern was there: users wanted to see less data, not more. But the team wasn't comfortable with that answer. Too simple. So we kept looking for the nuance that would justify a richer dashboard. We shipped three weeks later, once we finally admitted the answer wasn't going to change. Discovery hadn't lasted too long because we lacked signal. It had lasted too long because we didn't like the signal.

Two questions are enough to cut through this. Crossing them gives you a matrix that covers the vast majority of cases.

Reversibility

This is the one-way door / two-way door framework popularized by Amazon and systematized at Stripe by Shreyas Doshi. The first question to ask yourself: can you go back?

Is your decision reversible?

Yes: two-way door, low cost of error.
No: one-way door, high cost of error.

Most features PMs hesitate on are two-way doors treated as one-way doors. The reverse is more dangerous: treating a real one-way door as something you can "iterate on later."

The opposite also happens. On a SaaS product, we had to choose an LLM provider to automate part of our analysis. The team wanted to ship an MVP fast. "We'll iterate." Except our enterprise clients had contractual clauses around data processing. Choosing an AI provider meant committing the company to a legal framework you couldn't undo in a sprint. One-way door. We took 4 weeks to validate the choice with legal and clients. Without that extra discovery, we would have spent 6 months renegotiating contracts. This is the case where slowing down was the right call.

Impact

A feature that touches 2% of your users and the launch of a new pricing plan don't require the same level of confidence. To evaluate impact quickly, three questions are enough: how many users are affected? How severe is the consequence if you're wrong? How long would it take to correct?

A color change on a CTA button touches 100% of users, but the consequence is minor and the correction immediate. Low impact. A pricing migration touches 30% of accounts, but the consequence is churn and the correction takes 6 months. High impact, even though reach is lower.

If a feature is estimated at one day of dev, spending a week in research makes no sense. Launch it, measure, iterate. The level of discovery should be proportional to impact, not uniform.
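The impact evaluation above can be sketched as a tiny function. The names and thresholds here are my own illustrative assumptions, not the article's; the point is only that severity and time-to-correct dominate reach.

```python
def impact(reach_pct: float, severe: bool, weeks_to_correct: float) -> str:
    """Rough impact call: high if the consequence is severe or slow to undo,
    regardless of how many users the change touches.
    (Illustrative sketch; the 4-week threshold is an assumption.)"""
    if severe or weeks_to_correct > 4:
        return "high"
    return "low"

# The article's two examples:
impact(100, severe=False, weeks_to_correct=0)   # CTA color change -> "low"
impact(30, severe=True, weeks_to_correct=26)    # pricing migration -> "high"
```

Note how reach alone never flips the answer: the CTA change reaches everyone yet stays low impact, which is the article's point about proportionality.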

The matrix

By crossing these two axes (reversibility and impact), you get four clear decision zones.

Reversibility × Impact:

High impact, reversible: quick validation. One or two tests to confirm the intuition.
High impact, irreversible: deep discovery. Every risk axis deserves an answer.
Low impact, reversible: ship now. Real feedback beats research.
Low impact, irreversible: targeted discovery. Investigate the main risk.
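The four quadrants reduce to two booleans. A minimal sketch (function and label names are mine, not from the article):

```python
def discovery_depth(reversible: bool, high_impact: bool) -> str:
    """Map a decision onto the reversibility x impact matrix."""
    if reversible and not high_impact:
        return "ship now"            # real feedback beats research
    if reversible and high_impact:
        return "quick validation"    # one or two tests to confirm the intuition
    if not reversible and not high_impact:
        return "targeted discovery"  # investigate the main risk
    return "deep discovery"          # every risk axis deserves an answer
```

The ordering matters: you check reversibility first, because a two-way door caps the cost of being wrong before impact even enters the picture.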

The signals

The matrix is set before you start: how deep should discovery go for this decision? The signals are read during: do I know enough now?

When to stop

Two signals are enough. If you have them, you can ship.

Interviews converge. You hear the same frustrations, the same workarounds, the same words. Interview #8 teaches you nothing that #4 hadn't already told you. That's saturation. You don't need a magic number to recognize it.

You can formulate the bet. If you can summarize in one sentence what you're betting on, which segment, and why — you know enough. If you can't, it's not more discovery you need. It's more clarity on what you're looking for.

Three secondary confirmations that reinforce the decision:

When to keep going

One signal is enough to not ship.

You can't name the main risk. Cagan identifies four risks: value, usability, feasibility, business viability (Productboard). If you don't know which one is your dominant risk, you're not ready. The absence of a named risk isn't a sign of safety. It's a sign of ignorance about what could go wrong.

Three complementary warning signals:

These signals can be summarized in three quick checks:

My interviews converge.
I can formulate the bet in one sentence.
I know my main risk.

The courage gap

In my experience, most overly long discovery cycles aren't caused by a lack of signal. They're caused by a lack of courage to decide. The opposite bias exists, and shipping too fast on a one-way door is costly. But the common imbalance leans heavily toward prolonged discovery.

Discovery is reassuring. It gives the impression of controlling uncertainty. But wisdom in product is the balance between confidence and doubt, not the elimination of doubt. "Let's do more research" sounds good. It decides nothing.

I've seen the opposite work. We were hesitating on an internal search engine overhaul. 6 interviews converged, reversible feature, main risk identified. The team wanted to prototype before building. The prototype would have taken longer than the dev. We shipped. Within 48 hours, usage data revealed a problem no interview had predicted: in production, users don't type clean queries — they copy-paste raw text from their emails. Result quality collapsed. Fixed in a few days. No prototype would have caught that.

The problem isn't courage or cowardice. It's the absence of tools to turn discomfort into decision. Three practices that help:

The PM who ships at 70% confidence and corrects in flight delivers more value than the one who reaches 95% confidence three months too late. Next time you're wondering if you know enough, try completing this sentence:

Takeaway

"We bet that [this change] will [produce this outcome] because [this insight]. We'll know in [this timeframe] by measuring [this metric]."

If you can, ship. If you can't, the problem isn't the volume of research. It's the clarity of what you're looking for.
