The team did everything right. They planned meticulously, executed on time, communicated clearly, hit every internal milestone. And the result was a disappointment — not a disaster, but a slow, creeping miss. The kind where nobody can point to a single failure because there was not one. The effort was excellent. The outcome was not.
You have lived this. Perhaps it was a product launch that landed quietly. A hire who looked perfect on paper and then struggled to function in your operating rhythm. A strategy that made complete sense at the planning stage and somehow dissolved on contact with reality. The instinct afterwards is to look for what went wrong in the execution — who dropped the ball, which process failed, where the timeline slipped. And sometimes that search finds something useful. But more often, it misses the actual problem entirely.
Because the execution was fine. The map was wrong.
Every strategy is a bundle of assumptions. Every decision rests on a set of beliefs about how the world works, what customers want, how people will respond, which constraints are real and which are imagined. Most of the time, those assumptions are invisible. They live underneath the plan, never named, never examined, treated as obvious. And when the outcome disappoints, the review process skips right past them. It asks “What did we do wrong?” instead of asking the harder, more productive question: “What did we believe that turned out not to be true?”
Every strategy is a bundle of assumptions. If you do not name them, you cannot update them.
Assumptions, Facts, and Hypotheses — A Useful Distinction
Before we go further, it helps to be precise about what an assumption actually is — because the word gets used loosely, and loose definitions produce loose thinking.
A fact is something observable. Revenue was $2.3 million. The product shipped on 14 March. The customer renewed. Facts are verifiable. You can point at them.
A hypothesis is a belief you know is uncertain. You have framed it as testable. “We believe that reducing onboarding steps from seven to three will increase conversion by 15%.” There is uncertainty, but it is conscious and structured. You know you are guessing, and you have built a way to check.
An assumption is the dangerous middle ground. It is a belief treated as fact — something you hold to be true without noticing that you are holding it. Assumptions are not tested because they do not feel like guesses. They feel like the way things are. “Our customers care most about feature depth.” “Experienced hires from our industry will outperform.” “More alignment meetings will reduce miscommunication.” Each of these might be true. Each might be false. The problem is not that you believe them. The problem is that you believe them without noticing that you are believing them.
The most costly assumptions are not the ones you get wrong. They are the ones you never realise you are making. An assumption that has been named can be examined, tested, and updated. An assumption that remains invisible just keeps running the show — shaping every decision downstream without anyone noticing it is there.
The goal of an assumption autopsy is to move beliefs from that invisible layer to the visible one. Not to prove them wrong. Not to punish anyone for holding them. Simply to name them, so they can be evaluated on their merits rather than accepted by default.
Why Postmortems Fail
Most organisations have some form of review process after things go wrong. Postmortems, retrospectives, after-action reviews, lessons-learned sessions. The names vary. The failure mode is remarkably consistent.
The typical postmortem focuses on three things: blame (who made the error), process (which step was missed), and surface fixes (what do we add to the checklist so this does not happen again). These are not useless questions, but they operate at the wrong level. They examine the actions without examining the thinking that produced the actions. They ask “What went wrong?” without asking “What did we believe that led us here?”
The result is a review that generates process patches — more steps, more sign-offs, more checkpoints — without ever touching the model underneath. The model stays intact, silently shaping the next round of decisions. And six months later, a different version of the same failure appears, wearing different clothes.
A technology company launches a new product feature. Adoption is well below forecast. The postmortem identifies several issues: the marketing launch was poorly timed, the documentation was incomplete, and the sales team was not briefed early enough. All true. All fixable.
What nobody names is the underlying assumption: “Our existing users actively want more features.” In reality, the customer base was experiencing feature fatigue — they wanted fewer, better-integrated capabilities, not more options. The assumption was never surfaced, never tested, never challenged. So the process gets fixed, and six months later the same team launches another feature into the same fatigue, with better timing, better documentation, better sales enablement — and the same underwhelming result.
The execution improved. The model did not.
This is the trap: postmortems that stay at the action layer produce better execution of the same flawed strategy. They make you more efficient at doing the wrong thing. And because the process visibly improved, everyone feels like learning happened — when in fact the most important lesson was never extracted.
The Assumption Autopsy Format
An assumption autopsy is not a replacement for a postmortem. It is a layer you add beneath it. Where the postmortem asks “What happened?” the assumption autopsy asks “What did we believe, and was it true?”
The format is deliberate and structured. It has five steps, and they work best in order.
Step 1: Outcome Recap
State what happened, in factual language. No narrative, no spin, no softening. This is data, not storytelling. “We launched the product on 15 September. Adoption in the first 60 days was 12% against a forecast of 35%. Revenue impact was $180K below plan.” Keep it short. Keep it honest.
Step 2: Key Decisions
List the three to five decisions that most shaped the outcome. Not every decision — the pivotal ones. “We chose to build for enterprise before SMB.” “We priced at $X because of competitor positioning.” “We hired for domain expertise rather than operating tempo.” These are the branching points where the path was set.
Step 3: Assumptions Underlying Decisions
This is the core of the autopsy. For each key decision, ask: What did we believe that made this feel like the right choice? The answers are your assumptions. They often come in forms like:
- “Enterprise clients will pay a premium for this capability.”
- “Our competitor’s pricing reflects the market’s willingness to pay.”
- “Industry experience translates directly to performance in our context.”
Notice that each of these sounds reasonable. That is what makes assumptions dangerous — they sound self-evident. The discipline is to write them down anyway, especially the ones that feel obvious, because obvious-sounding assumptions are the ones least likely to be examined.
Step 4: Evidence We Had vs. Evidence We Ignored
For each assumption, ask two questions. First: What evidence did we have at the time that supported this? Second, and more uncomfortable: What evidence existed that contradicted it, and did we pay attention?
This step is where the real learning lives. Almost always, disconfirming evidence was present — a data point that did not fit, a customer comment that was dismissed as an outlier, an internal voice that raised a concern and was talked past. The goal is not to assign blame for missing it. The goal is to understand why it was easy to miss — what made the confirming evidence feel more credible, more comfortable, more convenient.
Step 5: Update and Operationalise
For each assumption that proved inaccurate or incomplete, write the updated belief: What do we believe now? Then — and this is what separates an autopsy from a philosophical exercise — name the operational change. What are we doing differently as a result? Not “we will be more careful” (meaningless). Something specific: “We will run a pricing test with 50 prospects before committing to a price point.” “We will include a two-week operating rhythm trial before confirming any senior hire.”
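Written out as a fill-in skeleton, the five steps collapse into a simple record. This is a minimal sketch; adapt the labels to your own review format. The three worked examples that follow use the same skeleton.

```
Outcome: [what happened, in numbers and observable facts]
Key decision: [one of the 3-5 decisions that most shaped the outcome]
Assumption: [what we believed that made the decision feel right]
Supporting evidence: [what we had at the time that backed the belief]
Disconfirming evidence (ignored): [what existed that contradicted it]
Updated belief: [what we believe now]
Operational change: [what we are doing differently, with owner and date]
```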
Worked Example 1: The Feature Launch
Outcome: Feature X launched to 12% adoption vs. 35% forecast.
Key decision: Prioritised building Feature X over improving existing onboarding flow.
Assumption: “Users value feature breadth. More features = more retention.”
Supporting evidence: Feature requests in support tickets. Competitor had a similar feature.
Disconfirming evidence (ignored): NPS comments about complexity. Churn data showed users leaving after the third month, not the first. Internal UX researcher flagged “feature fatigue” in the quarterly review.
Updated belief: Retention is driven by ease and integration, not breadth. More features without simplification increases cognitive load and accelerates churn.
Operational change: Before building any new feature, run a 50-user prototype test measuring ease of adoption, not interest level. Add a “complexity cost” estimate to every feature proposal.
Worked Example 2: The Senior Hire
Outcome: Senior hire with strong industry background underperformed for 8 months before mutual exit.
Key decision: Hired for pedigree and domain experience over operating fit.
Assumption: “Someone who has done this role at a comparable organisation will translate immediately.”
Supporting evidence: CV, references, interview performance.
Disconfirming evidence (ignored): Culture interview flagged pace mismatch. The candidate asked repeatedly about reporting structure (a signal of hierarchy dependency). Previous environments were significantly more structured.
Updated belief: Domain knowledge is necessary but not sufficient. Operating rhythm compatibility — pace, autonomy tolerance, ambiguity comfort — predicts performance better than experience in a similar role.
Operational change: All senior hires complete a two-week paid trial project before a final offer. Evaluate for operating rhythm match, not just capability.
Worked Example 3: The Alignment Meetings
Outcome: Despite increasing meeting cadence by 40%, cross-team alignment did not improve. Decision velocity actually slowed.
Key decision: Added weekly sync meetings across three departments to “increase alignment.”
Assumption: “More meetings = better alignment. If people are talking more, they are coordinating better.”
Supporting evidence: Teams reported feeling “out of the loop” before the change.
Disconfirming evidence (ignored): “Out of the loop” feeling was about decision transparency, not conversation frequency. Meeting notes were rarely read. Key decisions were still being made in ad-hoc Slack threads after the meetings.
Updated belief: Alignment comes from decision visibility, not meeting frequency. People need to know what was decided and why, not to sit through the conversation that produced it.
Operational change: Replace three of the four weekly syncs with an async decision log. One meeting remains for genuinely contested decisions. Decision log reviewed in monthly strategy session.
The Assumption Register: Making It Systemic
An autopsy is retrospective — you do it after the outcome has landed. That is valuable, but it is not enough. The more powerful practice is to surface assumptions before they produce outcomes, so you can test them while the cost of being wrong is still low.
The tool for this is an assumption register — a living document that makes your team’s current assumptions explicit and trackable. It is not complicated. It does not require new software. It requires the willingness to write down what you believe and the discipline to check whether you are right.
The Assumption Register
A living document with five columns. Keep it visible. Review it regularly. The discipline is in the updating, not the creating.
| Assumption | Confidence | Supporting Evidence | Disconfirming Signal to Watch | Owner & Review Date |
|---|---|---|---|---|
| “Enterprise clients will convert at 8% from trial” | Medium | Industry benchmark; 3 warm leads | Fewer than 2 conversions from first 40 trials | JM · 15 Mar |
| “Async comms will reduce meeting load by 30%” | Low | Two case studies; internal pilot data | Meeting hours unchanged after 6 weeks | SR · 1 Apr |
| “New pricing tier attracts SMB without cannibalising mid-market” | Medium–High | Competitor pricing; 20 prospect interviews | Mid-market downgrades exceed 5% in Q1 | KT · 30 Apr |
The “disconfirming signal” column is the most important. It forces you to name, in advance, what it would look like if you were wrong. Teams that define disconfirming signals before the evidence arrives are dramatically faster at updating when the signal appears — because they are watching for it, not defending against it.
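Most teams keep the register in a spreadsheet, and that works fine. If you prefer something scriptable (for example, printing overdue reviews at the start of a weekly meeting), a minimal sketch of the same five columns might look like this. Everything here is illustrative: the field names, the helper function, and the dates are assumptions for the sake of the example, not a prescribed tool.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Assumption:
    statement: str             # the belief, written out as a plain sentence
    confidence: str            # "low" / "medium" / "high" -- conviction, tracked separately from evidence
    supporting_evidence: str   # what we have that backs the belief
    disconfirming_signal: str  # named in advance: what it would look like if we were wrong
    owner: str                 # who watches for the signal
    review_date: date          # when the belief is next re-examined

def due_for_review(register: list[Assumption], today: date) -> list[Assumption]:
    """Return every assumption whose review date has arrived or passed."""
    return [a for a in register if a.review_date <= today]

# One entry, mirroring the first row of the table above (the year is illustrative).
register = [
    Assumption(
        statement="Enterprise clients will convert at 8% from trial",
        confidence="medium",
        supporting_evidence="Industry benchmark; 3 warm leads",
        disconfirming_signal="Fewer than 2 conversions from first 40 trials",
        owner="JM",
        review_date=date(2025, 3, 15),
    ),
]

for a in due_for_review(register, today=date.today()):
    print(f"REVIEW DUE: {a.statement} (owner: {a.owner})")
```

The tooling is beside the point. What matters is that the disconfirming signal is written down before the evidence arrives, and that a review date forces the belief back onto the table.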
The Disconfirming Signal Discipline
This is worth dwelling on, because it runs against every natural instinct.
When you believe something, your brain looks for evidence that supports the belief. This is not a character flaw. It is how human cognition works. Psychologists call it confirmation bias, and it is as reliable as gravity. You do not overcome it with intelligence or willpower. You overcome it with structure.
Strong teams — the ones that learn fastest and waste the least — build a specific practice: they define, in advance, what would count as evidence against their current assumptions. They write it down. They assign someone to watch for it. And when it appears, they treat it as information, not as a threat.
Weak teams do the opposite. They hunt for confirming evidence. They dismiss contradictions as outliers. They explain away unexpected results. They protect the model at the expense of accuracy. Not out of dishonesty — out of the entirely human desire to feel right, to feel competent, to avoid the discomfort of admitting that the plan was built on sand.
The speed at which a team updates its assumptions is a better predictor of long-term performance than the accuracy of its initial strategy. Getting it right first time is luck. Getting less wrong faster is a system.
The goal is not to be right. The goal is to get less wrong faster.
Keeping It Blame-Free
None of this works if it becomes a blame exercise. The moment people feel that naming a wrong assumption means admitting a personal failure, the assumptions go underground and the learning stops.
Three rules make this sustainable:
- Assumptions can be wrong without people being bad. This is not a platitude. It is the operating principle. An assumption is a model of the world, not a measure of competence. Models get updated. That is how learning works. If updating a model carries a social cost, people will protect their models instead of testing them — and the organisation will learn slowly or not at all.
- Focus on model accuracy, not personal accountability. The question is never “Who believed this?” The question is “Was this accurate, and what do we believe now?” The shift from who to what changes the entire emotional tone of the conversation. People defend themselves. They do not need to defend data.
- Reward early truth-telling. The person who says “I think our assumption about X is wrong” three months into a project is doing the most valuable thing anyone can do in a strategy conversation. If that person gets silence, scepticism, or subtle punishment, they will not do it again. If they get genuine thanks and visible follow-through, they — and everyone watching — will do it more often. The incentive structure has to reward updating, not defending.
This is a cultural practice, not a one-off exercise. It takes time to build. But every time someone names an inaccurate assumption and the response is curiosity rather than blame, the culture shifts a fraction — and the next person finds it a little easier to speak up.
What Most Teams Get Wrong
- They review actions but not beliefs. The postmortem asks “What did we do?” but never “What did we believe?” The action layer gets examined in detail. The model layer — the assumptions that generated the actions — goes untouched. The result is better execution of the same flawed strategy.
- They treat assumptions as permanent. Once a strategy is set, its underlying assumptions become invisible infrastructure. Nobody revisits them until something fails badly enough to force a reckoning. By then, the cost of the wrong assumption has compounded for months.
- They confuse confidence with evidence. A strongly held belief feels like a well-supported belief. But conviction and data are different things. The assumption register forces the distinction: “How confident are you?” is a separate column from “What evidence do you have?” When those two columns diverge — high confidence, thin evidence — you have found a vulnerability.
- They make the review punitive. If the autopsy is experienced as a blame session dressed in learning language, people will either avoid it or game it. The assumptions named will be safe ones. The real models — the ones that actually drove the decisions — will stay hidden.
The fastest way to kill assumption-level learning is to punish the people who name inaccurate assumptions. The second fastest way is to skip the practice when things go well.
Cadence: Where This Lives in Your Operating Rhythm
The assumption autopsy is not a crisis response. If you only do it when something goes obviously wrong, you will miss the slow-moving assumption failures that erode performance over months without producing a single dramatic event.
Three integration points work well:
- Quarterly strategy review. Every quarter, pull out the assumption register and ask: Which of these are we still confident in? Which have new data? Which should we test? This is a fifteen-minute addition to a meeting you are already having. It keeps assumptions visible at the strategic level.
- Monthly learning review. Once a month, pick one decision from the past 30 days and run a lightweight autopsy. Not because it went wrong — because it went. Good decisions have assumptions too. Examining them when the outcome was positive prevents the “success = validation of all assumptions” trap, which is how stale models survive.
- After major launches or initiatives. Any significant project deserves a full five-step autopsy, regardless of outcome. The question is not “Did it work?” but “What did we believe, and was it accurate?” Successes built on wrong assumptions are more dangerous than failures built on right ones, because they teach the wrong lesson.
The cadence matters because assumptions decay. What was true about your market, your team, your customers, or your competitive position six months ago may not be true now. A fixed assumption in a changing environment is a vulnerability. Regular review is how you keep the model current.
Assumption Autopsy Session (45 Minutes)
- Outcome recap (5 min). One person presents the facts. No interpretation, no narrative. What happened, in numbers and observable outcomes.
- Key decisions (5 min). Identify the three to five decisions that most shaped the outcome. Write them on a whiteboard or shared document.
- Assumption extraction (15 min). For each decision, the group answers: What did we believe that made this feel like the right choice? Write every assumption down. Do not filter. Do not defend. Collect.
- Evidence audit (10 min). For each assumption: What evidence supported it? What evidence contradicted it? Did we pay attention to the contradiction?
- Update and commit (10 min). For assumptions that proved inaccurate: What do we believe now? What changes operationally? Name the change. Name the owner. Name the date.
Ground rule: “Assumptions can be wrong without people being bad.” Say it aloud at the start. It sounds performative. It changes the room.
Key Takeaways
- Most failures are assumption failures, not execution failures. The team did the work. The model underneath the work was wrong. Fixing the execution without updating the model produces better execution of the same flawed strategy.
- Name your assumptions before they name your outcomes. An assumption register makes beliefs visible and trackable. It forces you to articulate what you believe, how confident you are, what evidence supports it, and what would change your mind.
- Define disconfirming signals in advance. Decide what it would look like if you were wrong before the evidence arrives. Teams that do this update faster, because they are watching for the signal instead of defending against it.
- Build a blame-free learning culture. Assumptions can be wrong without people being bad. If updating a model carries social cost, people will protect their models instead of testing them — and the organisation will learn slowly or not at all.
The discipline is straightforward: name what you believe, test it, and update when the evidence shifts. That sounds simple, and it is. What makes it rare is not complexity but discomfort — the willingness to say “We were wrong about this, and here is what we believe now,” and to mean it without flinching.
The next place this often breaks down is not in the analysis but in the aftermath. You do the autopsy, you extract the learning — and then the mind keeps returning to it. Replaying the decision, rehearsing what you should have done, running the same mental loop long after the useful information has been extracted. That is rumination. It masquerades as learning, but it is a performance tax. That is where we go next.
If you want help installing an assumption register, building a learning cadence, or creating a team culture where updating models is rewarded rather than punished — that is the work I do.
Frequently Asked Questions
What is an assumption register?
A living document that lists your team’s current assumptions alongside their confidence level, supporting evidence, a disconfirming signal to watch for, and an owner with a review date. It makes invisible beliefs visible and trackable, so they can be updated as evidence arrives rather than defended by default.
How is an assumption autopsy different from a postmortem?
A postmortem typically focuses on what happened and who did what. An assumption autopsy adds a layer underneath: it asks what you believed that made those actions feel right. Postmortems fix processes. Autopsies update models. Both are useful, but only the autopsy prevents the same class of error from recurring in a different form.
How do you overcome confirmation bias when testing assumptions?
Structure, not willpower. Define in advance what would count as evidence against your assumption and assign someone to watch for it. When it appears, treat it as valuable information rather than a threat. Reward early truth-telling visibly. The practice shifts the culture incrementally — each time someone names a wrong assumption and the response is curiosity, not blame, the next person finds it easier.
Why do most postmortems fail to prevent repeat failures?
They stay at the action layer. They fix the process, update the checklist, add a sign-off step — and never examine the beliefs that generated the actions in the first place. The result is more efficient execution of the same flawed strategy. Better at doing the wrong thing is not progress.