Most strategic mistakes are model mistakes. Leaders don't fail due to lack of data. They fail because they apply the wrong template to the data.
A mental model is a simplifying template your brain uses to answer: What's happening? Why? What do we do next? Your organization runs models whether you admit it or not. The question is whether those models are explicit, tested, and updated.
This is the second foundation post. Post 3 established the decision operating system; this one addresses the template layer underneath it. The bias posts that follow become easier to understand once you accept one premise: you don't think from scratch. You run models.
Why Models Exist: Speed and Coordination
Mental models exist because thinking from scratch is expensive. Models save time and reduce uncertainty. They allow teams to act quickly and align around shared assumptions.
A model answers three questions simultaneously: What is this? What does it mean? What should we do? Without models, every decision would require full analysis. Models are compression. Compression is useful, until it isn't.
If a model can't be written down, it can't be tested. If it can't be tested, it becomes dogma.
The Danger: Models Become Dogma
When a model becomes identity, disconfirming evidence gets rejected. The model stops being a hypothesis and becomes a belief. Updating feels like betrayal.
This is how organizations become systematically wrong about things everyone inside considers obvious. The model was once accurate. Conditions changed. The model didn't. Now it generates consistent errors that feel like bad luck.
The Model Loop in Organizations
The organizational model loop works like this:
- Signal arrives from the environment
- Model interprets the signal ("This means X")
- Decision is made based on interpretation
- Outcome occurs
- Story is created about the outcome
- Model is reinforced, revised, or abandoned based on the story
The problem: if outcomes are noisy (and they usually are), teams mislearn. A bad model can produce a good outcome through luck, and the model gets reinforced. A good model can produce a bad outcome through variance, and the model gets abandoned.
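To make the noise problem concrete, here is a minimal, hypothetical simulation (a sketch; the win rates, update size, and run count are illustrative, not drawn from any real data). It applies a naive outcome-based update rule and counts how often a model that is wrong more often than right ends a 20-decision stretch more trusted than it began.

```python
import random

def final_belief(win_rate: float, seed: int, decisions: int = 20) -> float:
    """Naive outcome-based learning: confidence in the model rises on every
    good outcome and falls on every bad one, with no correction for noise."""
    rng = random.Random(seed)
    belief = 0.5  # neutral starting confidence in the model
    for _ in range(decisions):
        won = rng.random() < win_rate      # the outcome is partly luck
        belief += 0.05 if won else -0.05   # reinforce or discount the model
        belief = min(max(belief, 0.0), 1.0)
    return belief

# Across many 20-decision runs, count how often a model that wins only 40%
# of the time ends the period more trusted than when it started.
runs = 10_000
lucky = sum(final_belief(0.40, seed=i) > 0.5 for i in range(runs))
print(f"Weak model ends up more trusted in {lucky / runs:.0%} of runs")
```

Under these toy assumptions, a losing model ends up looking validated in a nontrivial share of runs, which is exactly how lucky playbooks get institutionalized.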
Three High-Cost Model Categories
Market Model: "Customers Buy Because..."
Every organization has an implicit model of customer behavior. The model explains why customers choose you, what they value, and what would make them leave. If the model is wrong, marketing, product, and pricing decisions will systematically miss.
Talent Model: "Great People Look Like..."
The talent model defines what competence looks like in your organization. It shapes hiring, promotion, and development. If the model is too narrow, you'll screen out people who would succeed. If it's outdated, you'll optimize for yesterday's requirements.
Risk Model: "What Can Kill Us Is..."
The risk model determines where you allocate defensive resources. It shapes what you monitor, what you insure against, and what you ignore. If the model is miscalibrated, you'll over-protect against familiar threats and under-protect against unfamiliar ones.
Go-to-Market Model Drift: A company's model says "outbound drives growth." This was true in 2018. By 2024, inbound dominates their category. But the model persists. They keep hiring SDRs and wonder why CAC rises. The model was never updated because it was never made explicit enough to test.
Model Drift: When the World Changes
Model drift happens when the environment shifts but the model doesn't update. Common triggers:
- Channel shifts: How customers find and evaluate solutions changes.
- Competitor moves: A new player changes the incentive structure.
- Regulation changes: What was allowed becomes restricted, or vice versa.
- Team composition changes: What the team can execute shifts.
The environment is always changing, so model drift is inevitable. The question is whether your models update at the same rate. The solution is not permanence. It's disciplined revision.
Why Leaders Resist Updating Models
Model updates face resistance because of:
- Sunk identity: The model is tied to the leader's narrative of how they succeeded.
- Fear of looking wrong: Updating implies the previous model was incorrect.
- Loss of status: The old model may have been the leader's competitive advantage.
- Cognitive ease: Running the old playbook requires less effort than building a new one.
These are human reactions. They're also strategic liabilities. The leader who updates models faster works from a more accurate map than the leader who cannot.
A Practical Principle: Make Models Explicit and Testable
If you can't write the model as a sentence and name the prediction, it's not governable. Implicit models can't be tested. Untested models can't be improved.
The discipline is simple: state the model, state the prediction, identify the disconfirming signal, run the test, update based on results.
Model Card + Disconfirming Test
For each critical assumption, create a Model Card:
- Model statement: "We believe X leads to Y because Z."
- Prediction: "If true, we should observe [specific, measurable outcome]."
- Disconfirming signal: "If false, we should observe [specific signal]."
- Test: What's the smallest experiment that would teach us?
- Decision: What do we do until we learn?
- Review date: When will we update the model?
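If you want these cards somewhere queryable rather than scattered across slide decks, a structure like the following sketch works. Everything here is hypothetical: the ModelCard class, its field names, and the example card are illustrative, mirroring the list above and the go-to-market drift example earlier in the post.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelCard:
    """One explicit, testable assumption, plus the trigger for revisiting it."""
    model_statement: str        # "We believe X leads to Y because Z."
    prediction: str             # what we should observe if the model is true
    disconfirming_signal: str   # what we should observe if the model is false
    test: str                   # smallest experiment that would teach us something
    decision: str               # what we do until the test reads out
    review_date: date           # when the model gets re-examined regardless

    def is_due_for_review(self) -> bool:
        """A card past its review date should be re-examined, not trusted by default."""
        return date.today() >= self.review_date

# Hypothetical card for the go-to-market drift example above.
card = ModelCard(
    model_statement="We believe outbound drives growth because our buyers do not self-educate.",
    prediction="SDR-sourced pipeline stays above half of new pipeline this quarter.",
    disconfirming_signal="Inbound-sourced pipeline overtakes outbound for two consecutive months.",
    test="Publish two comparison pages and track sourced pipeline for 60 days.",
    decision="Hold SDR headcount flat until the test reads out.",
    review_date=date(2025, 3, 31),
)
print(card.is_due_for_review())
```

The exact format matters less than the constraint it imposes: every field has to be filled with something observable, which is what makes the model governable.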
Common failure modes:
- Tests that are too slow to produce actionable learning
- No disconfirming signal, making the model unfalsifiable
- Politics determining which models get tested (only "safe" models)
- Review dates ignored, so models drift without update
Talent Model Rigidity: A company's implicit model says "top performers are loud and confident." This model shapes who gets promoted and who gets coached out. Meanwhile, quiet high-output operators leave for competitors who recognize their contribution. The model was never examined because it felt like common sense.
The Role of Safety Behaviors in Organizations
Just as individuals have safety behaviors that prevent model updating (avoidance, reassurance seeking), organizations have equivalents:
- Endless meetings: Create the feeling of progress without the risk of decision.
- Over-analysis: Delay action until certainty (which never arrives).
- Avoiding dissent: Maintain consensus comfort at the cost of accuracy.
- Delaying hard choices: Preserve optionality until external pressure forces decision.
These behaviors prevent model testing. If you never act, you never learn whether the model was right.
Installing an Update Culture
Governance structures can protect against model rigidity:
- Reward disconfirming data: Make it visible when someone's skepticism improved a decision.
- Celebrate model updates: Frame updating as learning, not failure.
- Keep a model changelog: Track what assumptions changed and why.
- Make it safe to say "I'm not sure yet": Uncertainty should be acceptable, not punished.
Weekly Practices
- One Model Card per week: Pick a core assumption and make it explicit.
- One disconfirming question per leadership meeting: Ask "What would change our mind about this?"
- Monthly model review: Revisit three core assumptions and check them against evidence.
Objections and Clarifications
"Isn't this just hypothesis testing?"
Yes. That's the point. Most teams don't do it reliably. Making it explicit and systematic is the intervention.
"We move too fast for this."
Then your tests must be small and fast. Speed doesn't excuse implicit assumptions. It makes explicit testing more important.
"What if leadership refuses to update?"
Then your organization has a governance problem, not a strategy problem. The solution is structural, not informational.
Disconfirming signals are gold. Teams don't fail from bad news. They fail from missing news. The update loop is the competitive advantage.
If your organization's models are drifting or untested, we can audit your core assumptions and build an update loop that keeps decisions calibrated.
Request Assessment

This content is educational and does not constitute business, financial, or medical advice.