Most strategic mistakes are model mistakes. Leaders don't fail due to lack of data. They fail because they apply the wrong template to the data.

A mental model is a simplifying template your brain uses to answer: What's happening? Why? What do we do next? Your organization runs models whether you admit it or not. The question is whether those models are explicit, tested, and updated.

This is the second foundation post in the series. Post 3 established the decision operating system; this one addresses the template layer underneath it. The bias posts that follow become easier to understand once you accept one premise: you don't think from scratch. You run models.

Why Models Exist: Speed and Coordination

Mental models exist because thinking from scratch is expensive. Models save time and reduce uncertainty. They allow teams to act quickly and align around shared assumptions.

A model answers three questions simultaneously: What is this? What does it mean? What should we do? Without models, every decision would require full analysis. Models are compression. Compression is useful, until it isn't.

If a model can't be written down, it can't be tested. If it can't be tested, it becomes dogma.

The Danger: Models Become Dogma

When a model becomes identity, disconfirming evidence gets rejected. The model stops being a hypothesis and becomes a belief. Updating feels like betrayal.

This is how organizations become systematically wrong about things everyone inside considers obvious. The model was once accurate. Conditions changed. The model didn't. Now it generates consistent errors that feel like bad luck.

The Model Loop in Organizations

The organizational model loop works like this:

  1. Signal arrives from the environment
  2. Model interprets the signal ("This means X")
  3. Decision is made based on interpretation
  4. Outcome occurs
  5. Story is created about the outcome
  6. Model strengthens or remains unchanged

The problem: if outcomes are noisy (and they usually are), teams mislearn. A bad model can produce a good outcome through luck, and the model gets reinforced. A good model can produce a bad outcome through variance, and the model gets abandoned.
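This mislearning dynamic is easy to simulate. The sketch below is illustrative, not from the post: it assumes a "good" model that succeeds 60% of the time and a "bad" one that succeeds 40%, and a team that abandons whichever model it is running the moment it produces a single bad outcome. Even starting from the good model, outcome-chasing leaves the team on the worse model a large fraction of the time.

```python
import random

def simulate(trials: int = 10_000, seed: int = 42) -> float:
    """Fraction of runs in which outcome-chasing ends on the worse model.

    Illustrative assumptions: the 'good' model succeeds 60% of the time,
    the 'bad' model 40%. The team starts on the good model but switches
    models after any single bad outcome -- learning from noisy outcomes
    rather than testing the models themselves.
    """
    rng = random.Random(seed)
    win_rate = {"good": 0.60, "bad": 0.40}
    ended_on_bad = 0
    for _ in range(trials):
        current = "good"
        for _ in range(20):  # 20 noisy outcomes per run
            success = rng.random() < win_rate[current]
            if not success:  # one bad outcome -> abandon the current model
                current = "bad" if current == "good" else "good"
        if current == "bad":
            ended_on_bad += 1
    return ended_on_bad / trials

if __name__ == "__main__":
    print(f"Ends on the worse model in {simulate():.0%} of runs")
```

Under these numbers the team ends up running the worse model roughly 40% of the time, despite the good model being genuinely better. The point is not the exact figure; it's that a switch-on-one-bad-outcome rule learns from variance, not from the model.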

Three High-Cost Model Categories

Market Model: "Customers Buy Because..."

Every organization has an implicit model of customer behavior. The model explains why customers choose you, what they value, and what would make them leave. If the model is wrong, marketing, product, and pricing decisions will systematically miss.

Talent Model: "Great People Look Like..."

The talent model defines what competence looks like in your organization. It shapes hiring, promotion, and development. If the model is too narrow, you'll screen out people who would succeed. If it's outdated, you'll optimize for yesterday's requirements.

Risk Model: "What Can Kill Us Is..."

The risk model determines where you allocate defensive resources. It shapes what you monitor, what you insure against, and what you ignore. If the model is miscalibrated, you'll over-protect against familiar threats and under-protect against unfamiliar ones.

Pattern in Practice

Go-to-Market Model Drift: A company's model says "outbound drives growth." This was true in 2018. By 2024, inbound dominates their category. But the model persists. They keep hiring SDRs and wonder why CAC rises. The model was never updated because it was never made explicit enough to test.

Model Drift: When the World Changes

Model drift happens when the environment shifts but the model doesn't update. The environment is always changing; the question is whether your models are updating at the same rate.

Drift is therefore inevitable, and when it goes uncorrected, decisions degrade. The solution is not permanence. It's disciplined revision.

Why Leaders Resist Updating Models

Model updates face resistance for reasons the earlier sections predict: the model has become part of identity, it was accurate once and that memory lingers, and updating feels like betrayal.

These are human reactions. They're also strategic liabilities. The leader who can update models faster has better information than the leader who cannot.

A Practical Principle: Make Models Explicit and Testable

If you can't write the model as a sentence and name the prediction, it's not governable. Implicit models can't be tested. Untested models can't be improved.

The discipline is simple: state the model, state the prediction, identify the disconfirming signal, run the test, update based on results.

Executive Tool

Model Card + Disconfirming Test

For each critical assumption, create a Model Card:

  1. Model statement: "We believe X leads to Y because Z."
  2. Prediction: "If true, we should observe [specific, measurable outcome]."
  3. Disconfirming signal: "If false, we should observe [specific signal]."
  4. Test: What's the smallest experiment that would teach us?
  5. Decision: What do we do until we learn?
  6. Review date: When will we update the model?
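For teams that track assumptions in a shared repo, the card can be sketched as a small data structure. The class and field names below are my own rendering of the six items above, and the sample card reuses the go-to-market drift example from earlier in the post; dates and percentages are hypothetical.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ModelCard:
    """One testable organizational assumption, mirroring the six-part card above."""
    statement: str             # 1. "We believe X leads to Y because Z."
    prediction: str            # 2. Observable if the model is true
    disconfirming_signal: str  # 3. Observable if the model is false
    test: str                  # 4. Smallest experiment that would teach us
    interim_decision: str      # 5. What we do until we learn
    review_date: date          # 6. When the model gets re-examined

    def is_due(self, today: Optional[date] = None) -> bool:
        """True once the card has passed its review date and needs an update."""
        return (today or date.today()) >= self.review_date

# Hypothetical card for the go-to-market example from the post
card = ModelCard(
    statement="We believe outbound drives growth because our buyers don't search.",
    prediction="Outbound-sourced pipeline grows at stable CAC.",
    disconfirming_signal="CAC rises while inbound-sourced deals close faster.",
    test="Shift 10% of SDR budget to inbound for one quarter.",
    interim_decision="Hold SDR headcount flat until the quarter ends.",
    review_date=date(2025, 3, 31),
)
```

The value of the structure is that every field is mandatory: a card without a disconfirming signal or a review date won't construct, which is exactly the discipline the tool asks for.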
Common Failure Modes

The failure modes recur across the patterns above: models harden into dogma, noisy outcomes reinforce bad models and discredit good ones, and drift goes unnoticed because the model was never written down in the first place.

Pattern in Practice

Talent Model Rigidity: A company's implicit model says "top performers are loud and confident." This model shapes who gets promoted and who gets coached out. Meanwhile, quiet high-output operators leave for competitors who recognize their contribution. The model was never examined because it felt like common sense.

The Role of Safety Behaviors in Organizations

Just as individuals have safety behaviors that prevent model updating (avoidance, reassurance seeking), organizations have equivalents: endless analysis in place of action, pilots that never graduate to real commitments, and consensus rounds that function as reassurance seeking.

These behaviors prevent model testing. If you never act, you never learn whether the model was right.

Installing an Update Culture

Governance structures can protect against model rigidity: give each critical model a named owner, give every Model Card a review date, and treat updating a model as a visible win rather than an admission of failure.

Weekly Practices

Each week, review one Model Card against its prediction and disconfirming signal: did either observable appear? Update, retain, or retire the model accordingly, and set the next review date.

Objections and Clarifications

"Isn't this just hypothesis testing?"

Yes. That's the point. Most teams don't do it reliably. Making it explicit and systematic is the intervention.

"We move too fast for this."

Then your tests must be small and fast. Speed doesn't excuse implicit assumptions. It makes explicit testing more important.

"What if leadership refuses to update?"

Then your organization has a governance problem, not a strategy problem. The solution is structural, not informational.

Disconfirming signals are gold. Teams don't fail from bad news. They fail from missing news. The update loop is the competitive advantage.

Previous: Concealed Biases Series Index Next: Survivorship Bias and Winner Stories

If your organization's models are drifting or untested, we can audit your core assumptions and build an update loop that keeps decisions calibrated.

Request Assessment

This content is educational and does not constitute business, financial, or medical advice.