There is a particular kind of stuckness that shows up in people who are used to getting things right. It does not look like failure from the outside. The work is strong. The decisions are defensible. The track record is genuinely impressive. And yet something is not landing the way it used to. Results that once came reliably now arrive inconsistently, or not at all. The strategy makes sense on paper. The execution is solid. And the outcome is quietly, persistently wrong.

When that happens, most people look at their effort first. Am I working hard enough? Then they look at their skills. Am I missing something technical? Then they look at the people around them. Is my team the problem? What they almost never look at — because it is invisible by design — is the layer underneath all of it: the assumptions they are building on.

Not opinions. Not theories. The things that feel so obvious they do not even register as beliefs. The conclusions you settled on years ago and never revisited because they worked, and working felt like proof.

That is where this starts. Not with a failure of intelligence or effort, but with a failure of maintenance. You have been making decisions on top of premises that were accurate once and may not be anymore. And the distance between what you treat as settled and what the evidence actually supports — that distance has a name. I call it assumption debt.

Assumption debt is the accumulated gap between what you treat as settled truth and what the current evidence actually supports. It builds silently. It compounds over time. And the first symptom is usually a result you cannot explain — despite doing everything “right.”

The Invisible Layer That Shapes Your Results

Every consequential decision you make sits on a foundation of premises. The premises are rarely spoken aloud. They operate like the axioms of an internal operating system — taken as given, built upon, never inspected. You do not decide from first principles each time. You decide from a position that was assembled over years of experience, and that position carries assumptions baked so deeply into it that they feel like common sense rather than conclusions.

These premises cluster into recognisable categories: beliefs about yourself (“I work best under pressure”), beliefs about other people (“I show up for them, and they show up for me”), and beliefs about how results are produced (“my intuition alone is sufficient”).

None of these are inherently wrong. Most of them were probably accurate when you first adopted them. The problem is not that you hold premises — you must; operating without them would paralyse you. The problem is that some of your premises expired and you did not notice, because the expiry was silent. The world around you shifted. Your internal map did not update. And now you are navigating with directions that no longer connect to where you are actually trying to go.

A blind spot is not a failure of character. It is a navigation chart that was accurate when it was drawn and has not been resurveyed since. The coastline changed. The chart did not.

Here is what this looks like from the inside. The world around you shifts — the people change, the context changes, the demands change — but your internal model of how things work stays fixed. The model was built on real data. It matched your experience beautifully at the time. But experience is not static, and a model that matched last year’s conditions can quietly fail on this year’s without announcing itself. Your conclusions still feel true. They just are not producing true results anymore.

Pattern in Practice

The Trusted Instinct: A leader who built a successful organisation in its early years made most key decisions by feel. They could read a room, read a person, read a situation — and they were right often enough that “trust your gut” became an operating principle. Fifteen years later, the organisation is different. The decisions are more complex. The people involved are more specialised. But the leader is still operating on the premise that their intuition alone is sufficient — because it was, once. Three significant misjudgements later, the premise remains unquestioned. Not because the leader is arrogant, but because the premise is fused with identity. It does not feel like a belief that could be wrong. It feels like a description of who they are.

Why Capable People Are Most Vulnerable

This is the part that tends to sting. The more capable you are, the more exposed you are to assumption debt. Not in spite of your track record, but because of it.

When things have gone well for a long time, past success functions as camouflage. Every time an assumption pays off, it gets a little more embedded, a little harder to see as a belief rather than a fact. The track record provides evidence — real evidence — that the premise works. And so you stop checking. Not out of laziness or overconfidence, but because checking feels unnecessary. The results speak for themselves.

Except results are always lagging indicators. They tell you how your assumptions performed against past conditions. They tell you nothing about whether those assumptions still hold in the present. By the time the results start showing cracks, the assumptions underneath have often been misaligned for months or years.

There is a specific failure mode here, and I see it regularly in people who are genuinely good at what they do. Confidence — which is a legitimate asset — gradually fuses with a particular set of conclusions. “I am good at reading people” starts as an observation based on evidence. Over time, it becomes part of identity. And once a conclusion is part of who you are, updating it does not feel like learning. It feels like losing something.

That is the trap. The gap between the confidence you invest in a premise and the evidence that currently supports it — that gap is where assumption debt lives. Not in the premises that are obviously uncertain. In the ones you feel most sure about. The ones that have stopped feeling like conclusions and started feeling like the ground you walk on.

Pattern in Practice

The Relationship Premise: A person built their closest relationships during a period when they were always available — generous with time, responsive, present. “I show up for people, and they show up for me” was the operating assumption, and it worked. Life got busier. Responsibilities multiplied. The availability dropped, but the premise stayed the same. When a close friendship began to strain, they could not see the connection. They were still the same person, still loyal, still caring. But the assumption — that showing up emotionally was enough, regardless of how often they were actually present — had drifted from the reality of how the relationship worked. The premise was not wrong in principle. It was wrong in its current application. And because it felt like a core value rather than a testable belief, it was the last thing they thought to examine.

The most dangerous assumptions are the ones that were right for a long time. They have the deepest roots. They carry the most accumulated evidence — all of it historical. They feel like hard-won wisdom rather than provisional conclusions with an expiry date.

The Assumption Audit

What follows is not a reflection exercise. Reflection asks “What do I think?” which is a fine question but not sufficient. The Assumption Audit asks something more precise: “What is the evidence quality behind the things I currently treat as settled?” The output is not a feeling of clarity. The output is a recalibrated confidence level attached to each premise that is actively driving your decisions.

Diagnostic Protocol

The Assumption Audit

  1. Assumption Inventory. Identify the five premises currently driving your most consequential decisions. Not your beliefs about the world in general — the specific, operative assumptions behind what you are actually doing right now. Write each one as a plain declarative statement.
    • Premise 1: _____
    • Premise 2: _____
    • Premise 3: _____
    • Premise 4: _____
    • Premise 5: _____
  2. Evidence Rating. For each premise, honestly categorise the evidence supporting it:
    • Direct — You have recent, first-hand data that confirms this premise. Something you observed or experienced in the past few weeks. A conversation. A measurable outcome. Current, specific, yours.
    • Indirect — You have supporting evidence, but it is dated, borrowed, or analogical. “This worked in a previous situation.” Something you read. Something a respected person told you. Reasonable but not current.
    • Felt Sense — You believe this because it feels right, because you have always held this position, or because the people you trust seem to agree. When pressed, no specific evidence comes to mind. Just a deep, settled conviction.
  3. Disconfirmation Search. For each premise, answer one question: What specific observation would prove this wrong? If you cannot articulate a clear disconfirming condition, the premise is not knowledge. It is something closer to faith — and faith is a fine thing to have, but it is a poor foundation for consequential decisions.
  4. Update Rule. Define your threshold: What would have to change for me to revise this premise? A specific pattern of results. A piece of feedback you have not yet received. A situation you have not yet encountered. Write it down. A premise without an update rule is a permanent fixture — and permanent fixtures in a changing environment are exactly how assumption debt accumulates.
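For readers who think in code, the four audit steps translate into a simple record plus a check. This is purely an illustrative sketch: the class name, field names, and the flagging rule are my own encoding of the protocol above, not part of its original wording.

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative sketch only: all names here are invented for this example.
@dataclass
class Premise:
    statement: str                       # step 1: a plain declarative statement
    evidence: str                        # step 2: "direct", "indirect", or "felt_sense"
    disconfirmer: Optional[str] = None   # step 3: what observation would prove it wrong
    update_rule: Optional[str] = None    # step 4: what would trigger a revision

def unexamined(premises: List[Premise]) -> List[str]:
    """Flag premises missing a disconfirming condition or an update rule.
    Per the protocol, those are faith or furniture, not knowledge."""
    return [p.statement for p in premises
            if not (p.disconfirmer and p.update_rule)]
```

A premise that passes the check is not thereby true; it is merely testable, which is the point of the audit.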

The Confidence–Evidence Matrix

  • High confidence, strong evidence: Calibrated. Your certainty matches the data. This is the target state. Keep monitoring for drift, but no immediate action needed.
  • High confidence, weak evidence: Assumption Debt. This is where the trouble lives. You feel very sure, but the evidence underneath is thin, dated, or borrowed. Audit this one first.
  • Low confidence, strong evidence: Under-leveraged. You have real evidence but you are not acting on it. The bottleneck is not information — it is willingness to commit. The data is there. Use it.
  • Low confidence, weak evidence: Open Question. Honest uncertainty. This is appropriate when conditions are genuinely unclear. The right response is to run small experiments, not to pretend you know.

The upper-right quadrant — high confidence, weak evidence — is where assumption debt concentrates. In my experience, most people who are good at what they do have at least two premises sitting there.

Notice the asymmetry. Calibrated positions (strong confidence, strong evidence) just need monitoring. Open questions (low confidence, weak evidence) are honest unknowns. Under-leveraged positions (low confidence, strong evidence) are a courage problem, not an information problem. But that upper-right quadrant — where you feel most certain and the evidence is thinnest — that is the specific failure mode this series addresses. That is where your internal model has locked onto conclusions that matched previous conditions beautifully and may be quietly failing on current ones.
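The matrix logic is simple enough to state as a function. A minimal sketch: the quadrant labels follow the matrix above, while the function name and the boolean inputs are my own framing.

```python
def quadrant(confidence_high: bool, evidence_strong: bool) -> str:
    """Place a premise in the Confidence-Evidence Matrix."""
    if confidence_high and evidence_strong:
        return "calibrated"        # certainty matches the data; keep monitoring
    if confidence_high:
        return "assumption debt"   # feels sure, evidence thin; audit first
    if evidence_strong:
        return "under-leveraged"   # the data is there; commit
    return "open question"         # genuinely unclear; run small experiments
```

The honest work, of course, is in grading the inputs, not in reading off the label.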

Pattern in Practice

The Pressure Story: “I work best under pressure” is one of the most common assumptions I encounter in people who perform at a high level. Run it through the audit. Evidence rating: felt sense. You remember the times pressure produced strong results. What you do not remember — because you never measured it — is the quality of work produced when you gave yourself adequate time. You rarely allow that condition to exist, so there is no comparison data. Disconfirmation search: you would need to systematically compare output across high-pressure and low-pressure conditions, and you have never done that. Update rule: none, because the premise was adopted as identity (“I am the kind of person who thrives under pressure”), not as a testable proposition. In many cases, what people call “working best under pressure” is actually a pattern of avoidance followed by forced action at the deadline — and the assumption is the story they tell about the pattern, not an accurate description of it.

Calibration: Confidence That Tracks Reality

The point of the Assumption Audit is not less confidence. It is proportional confidence — a state where your certainty about a premise actually reflects the quality of evidence behind it, and adjusts when the evidence changes.

This is worth saying plainly because it gets misread. Calibration is not hesitation. A calibrated person can act decisively under pressure. The difference is that their decisiveness is connected to evidence rather than to habit. They hold strong views — and those views carry explicit conditions for revision. They commit fully to a course of action and they know what they would need to see to change direction. These are not contradictions. They are the characteristics of someone whose internal model is well-maintained.

Poorly calibrated people are not uncertain. They are certain about things that used to be true, or certain about the right things for longer than the evidence supports. And that produces a very specific pattern: the strategy is sound, the execution is competent, the effort is genuine — and the results are consistently off. When you have ruled out effort and ability as explanations, what remains is the assumption layer. Something underneath the visible decisions is misaligned, and it has been misaligned long enough to compound.

Calibration is not the absence of conviction. It is conviction with a maintenance schedule. You would not navigate by a chart that was last updated three years ago. Do not make your most important decisions on premises with the same service history.

A complete calibration framework — including methods for scoring your own prediction accuracy and adjusting confidence in real time — is the subject of Post 4 in this series. For now, the principle: every premise in your operating system should have an evidence grade and an update rule. If it does not have both, it is not a considered position. It is furniture. And furniture does not move when the room changes shape.

Making It Stick: Systems Over Good Intentions

The failure mode of every audit is identical. You do it once. You feel a genuine shift in clarity. And then, within a week, you are back to operating on autopilot. The insight was real. The change did not last. Not because you lacked willpower or seriousness, but because insight without a system to sustain it is just a good conversation with yourself that has no consequences.

Three mechanisms convert the audit from a one-off moment of clarity into something that actually persists:

1. The Five-Minute Pre-Mortem

Before any consequential decision, spend five minutes with one question: “If this goes wrong, what assumption was the problem?” Not “what could go wrong” — that generates a risk list, which is useful but different. This question goes one layer deeper. It targets the premise layer specifically. You are not looking for external obstacles. You are looking for the internal conclusion that, if it turns out to be wrong, makes the entire plan incoherent regardless of how well you execute it.

Five minutes. No special tools. It surfaces the single most vulnerable assumption in the plan before the plan is in motion. The cost of skipping it is invisible until the assumption breaks — at which point the cost is obvious in retrospect and too late to prevent.

2. The Honest Friend

Find one person — a colleague, an advisor, a friend who knows you well — and give them a standing invitation: tell me what I am not seeing. Not in the theatrical devil’s-advocate sense, where someone argues the opposite position for intellectual sport. Something more specific than that. Their job is to identify the premise in your thinking that you are most attached to and least able to question. The premise you defend reflexively. The one that, if challenged, makes you feel something in your chest before a thought even forms in your head.

Their value is directly proportional to their willingness to be uncomfortable — which means the selection criterion is trust and directness, not agreeableness. You need someone who cares about you enough to risk annoying you. The reason this matters structurally, not just socially: you cannot audit your own blind spots with the same instrument that created them. Your perception is the problem. Your perception cannot simultaneously be the solution. An outside perspective is not a nice-to-have. It is an architectural requirement.

3. The Prediction Journal

Before a decision plays out, write down what you expect to happen and why. Include the key assumption behind the prediction. Then, when the outcome arrives, compare it to what you wrote. Not to judge yourself. To collect data on your own accuracy.

Over twenty or thirty entries, a pattern emerges. You will see which categories of assumptions you consistently over-weight or under-weight. You will see where your model drifts. You will have actual evidence about your calibration instead of a self-narrative about it. And that distinction — between “I think I am well-calibrated” and “here is my calibration data” — is the entire point of this series. One is a premise. The other is evidence.
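A prediction journal needs nothing more than a notebook, but its mechanics translate directly into code. A hedged sketch under my own naming assumptions (`Entry`, `PredictionJournal`, and `hit_rate` are all invented for illustration):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Entry:
    decision: str                     # what you are deciding
    predicted: str                    # what you expect, written before the outcome
    assumption: str                   # the key premise behind the prediction
    came_true: Optional[bool] = None  # filled in when the outcome arrives

class PredictionJournal:
    def __init__(self) -> None:
        self.entries: List[Entry] = []

    def predict(self, decision: str, predicted: str, assumption: str) -> Entry:
        entry = Entry(decision, predicted, assumption)
        self.entries.append(entry)
        return entry

    def resolve(self, entry: Entry, came_true: bool) -> None:
        entry.came_true = came_true

    def hit_rate(self) -> Optional[float]:
        """Calibration data, not self-narrative: the share of resolved
        predictions that actually held."""
        resolved = [e for e in self.entries if e.came_true is not None]
        if not resolved:
            return None
        return sum(e.came_true for e in resolved) / len(resolved)
```

The `hit_rate` number is exactly the difference the text describes: evidence about your calibration rather than a story about it.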

Key Takeaways

The navigation chart analogy holds all the way through. A chart is not wrong because it was drawn carelessly. It is wrong because the coastline changed and nobody went back to check. Your assumptions are the same. They were adequate when you adopted them. Possibly even excellent. The question is whether they have been maintained — and for most people who are good at what they do, the honest answer is that maintenance never happened, because the need for it was never felt. Until it was.

The next question is mechanical: how does your brain actually handle the shift between relying on an assumption and questioning it? That toggle — between operating on your premises and stepping back to examine them — is not a personality trait. It is a cognitive gear shift that can be understood and practised. Knowing how it works makes the audit faster and considerably less effortful.

Series boundary: This post covers the assumption layer. For how your brain toggles between Decision Mode and Discovery Mode when processing assumptions, see Post 2: Two Gears.

If you want help identifying the assumptions that are actually running your decisions — and a structured process for bringing them up to date — that is the work I do.
