A candidate interviews brilliantly. They're articulate, composed, and say exactly the right things. You hire them. Three months later, you're managing a performance problem you never saw coming.
A team sits in a meeting every week and reports green across the board. Milestones hit, no issues flagged. Then a project collapses six weeks before delivery, and the post-mortem reveals problems that existed from month one.
People report "all good" upward while risks quietly accumulate underneath. By the time the truth surfaces, the cost of fixing it has multiplied.
Three different failures. One underlying pattern: someone knew something important that you didn't — and the system you were operating in didn't surface it in time.
In many leadership decisions, the biggest problem isn't intelligence. It's hidden information. Game theory is useful here because it asks three questions that most leaders skip: Who knows what? What are they incentivised to reveal or hide? And how can you design the game so better information shows up earlier?
This post covers information asymmetry — hidden information and hidden action in hiring, leadership, and organisational life. For the full game theory framework, start with the Game Theory primer. For cheap vs costly signals specifically, see Signalling, Reputation, and Trust.
What Information Asymmetry Is
Information asymmetry exists when one side knows something important that the other side doesn't.
That's the whole concept. No complexity required. One person has information that would change the other person's decision — and the other person doesn't have it.
It's everywhere in professional life:
- A candidate knows their real capabilities, discipline, and weaknesses better than you do
- An employee knows their actual effort level, attention, and risk-taking better than their manager does
- A manager knows strategic context — budget cuts coming, restructures planned — better than the team does
- A vendor knows the real limits of their quality and capacity better than the buyer does
When information is uneven, decisions get distorted. Bad hires happen. Trust gets misplaced or withheld. Problems stay hidden until they're expensive. Leaders either over-monitor or under-monitor, and both make things worse.
The question isn't whether information asymmetry exists in your organisation. It does. The question is what you're doing about the game design.
Two Core Problems Leaders Face
Game theory splits information asymmetry into two distinct problems. They look similar on the surface, but they require different responses — and confusing them is one of the most common leadership mistakes.
Adverse Selection: The Problem Before You Decide
Adverse selection means you choose before you fully know what you're choosing.
The classic case is hiring. A candidate sits across from you and presents a version of themselves. They've rehearsed. They've selected the stories that show them at their best. They've hidden the gaps, the failures, the patterns you'd want to know about. This isn't necessarily deception — it's what the game rewards. Interviews test interview performance. The correlation with job performance is weaker than most leaders believe.
Adverse selection also shows up in vendor selection, partnership decisions, promotions based on perceived readiness, and any decision where the person being evaluated controls much of the information you're evaluating them with.
The danger of adverse selection isn't dishonest people. It's that the screening process itself rewards the wrong qualities. Polish gets over-weighted. Proof gets under-weighted. And urgency collapses whatever standards you had.
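The screening distortion can be sketched as a toy simulation. All the numbers below are illustrative assumptions, not empirical weights: suppose each candidate has independent "skill" and "polish" traits, but the interview score leans heavily on polish. Selecting the top scorers then selects mostly for polish.

```python
import random

random.seed(1)

# Toy model (illustrative assumptions, not empirical weights):
# each candidate has independent "skill" and "polish" traits,
# but the interview score leans heavily on polish.
candidates = [
    {"skill": random.gauss(0, 1), "polish": random.gauss(0, 1)}
    for _ in range(10_000)
]

def interview_score(c, polish_weight=0.7):
    # The screen observes a blend, not skill directly.
    return (1 - polish_weight) * c["skill"] + polish_weight * c["polish"]

# "Hire" the top 5% by interview score.
ranked = sorted(candidates, key=interview_score, reverse=True)
hired = ranked[: len(ranked) // 20]

avg_skill = sum(c["skill"] for c in hired) / len(hired)
avg_polish = sum(c["polish"] for c in hired) / len(hired)
print(f"avg skill of hires:  {avg_skill:.2f}")
print(f"avg polish of hires: {avg_polish:.2f}")
# Polish of hires far exceeds their skill: the screen selected
# for what it measured, not for what the role needs.
```

The point isn't the specific weights — it's that whatever the screen measures is what the top of the ranking will be rich in.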
Hidden Action: The Problem After You Decide
Once someone is hired, promoted, or trusted with a task, a different game begins. Now you can't fully observe what they're doing — their effort, their attention, the quality of their judgment, the shortcuts they're taking, and the risks they're sitting on.
This is the principal-agent problem in plain English: the person acting on your behalf has information about their own behaviour that you don't have access to.
It shows up constantly:
- People optimise metrics instead of outcomes
- Risks stay hidden until they're impossible to hide
- "Status green" reporting masks real issues because honesty has consequences
- Remote work paranoia leads to surveillance that measures activity instead of results
- Cross-team blame-shifting becomes the path of least resistance
Leaders don't just need better judgment. They need better game design.
Adverse selection is a screening problem. Hidden action is an incentive and monitoring problem. Both are information asymmetry, but the tools for each are different. Fix the wrong one and you've wasted your effort.
Hiring as a Game of Hidden Information
Hiring is a selection game under uncertainty. The candidate is trying to signal value. You're trying to screen for true fit and reliability. Both sides have incentives to present selectively. This doesn't make either side dishonest — it makes the game structurally prone to error.
Here's where most hiring processes go wrong:
Overweighting Cheap Signals
Confidence is free to produce. Polish costs nothing but practice. Prestigious labels — the right school, the right company on the resume — carry borrowed credibility. "Culture fit" often means "this person feels familiar," which is a vibe, not evidence. As we covered in the signalling post, cheap signals are easy to produce regardless of whether the underlying quality is real. They should carry correspondingly less weight.
Underweighting Costly Signals
Preparation depth — whether someone actually researched your organisation, your challenges, and your context — is costly to fake. Specificity of examples matters: vague generalities are easy; exact details with constraints, trade-offs, and mistakes are hard to fabricate. Follow-through on small pre-hire tasks reveals reliability in a way that interviews cannot. The quality of questions a candidate asks tells you more about their thinking than the answers they give.
Bad Screening Design
Most interviews test the ability to interview. They reward verbal fluency, impression management, and social calibration — qualities that may or may not predict performance in the actual role. If your screening process doesn't test the real work, you're selecting for interview skill and hoping it correlates with job skill. Sometimes it does. Often enough, it doesn't.
Rushed Hiring
Urgency is the greatest friend of adverse selection. When a role is critical and the seat is empty, standards compress. Red flags get reframed as "manageable concerns." The cost of a bad hire — months of underperformance, management time, team disruption, eventual replacement — always exceeds the cost of waiting another two weeks for a better screen.
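That trade-off can be made explicit with a back-of-envelope expected-cost comparison. Every figure below is a placeholder — substitute your own estimates of bad-hire probability, bad-hire cost, and the weekly cost of an empty seat.

```python
# Back-of-envelope comparison: hire now with a weak screen vs wait
# two weeks for a stronger one. All figures are illustrative
# placeholders -- substitute your own estimates.

def expected_cost(p_bad_hire, cost_bad_hire, delay_weeks, weekly_vacancy_cost):
    """Expected cost = chance of a bad hire times its total cost,
    plus the cost of leaving the seat empty while you screen."""
    return p_bad_hire * cost_bad_hire + delay_weeks * weekly_vacancy_cost

BAD_HIRE_COST = 120_000   # replacement, lost output, team disruption (assumed)
VACANCY_COST = 5_000      # output lost per week the seat stays empty (assumed)

rushed = expected_cost(p_bad_hire=0.40, cost_bad_hire=BAD_HIRE_COST,
                       delay_weeks=0, weekly_vacancy_cost=VACANCY_COST)
patient = expected_cost(p_bad_hire=0.15, cost_bad_hire=BAD_HIRE_COST,
                        delay_weeks=2, weekly_vacancy_cost=VACANCY_COST)

print(f"rushed screen:  expected cost {rushed:,.0f}")   # 48,000
print(f"patient screen: expected cost {patient:,.0f}")  # 28,000
```

Under these assumed numbers, two weeks of vacancy is cheap insurance. The structure of the calculation matters more than the inputs: urgency only wins when the vacancy cost genuinely dwarfs the bad-hire risk, which is rarer than it feels in the moment.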
Screening: How to Surface Better Information
You can't eliminate information asymmetry. The candidate will always know more about themselves than you do. But you can design screens that make important information harder to hide and more likely to surface.
Test Real Work, Not Verbal Skill
Work samples, simulations, case tasks, role-relevant problem-solving exercises. These cost more to administer than a conversation, but they test the thing that matters. A thirty-minute work sample predicts job performance better than a sixty-minute unstructured interview. This isn't controversial in the research. It's just inconvenient.
Look for Specificity Under Questioning
When someone describes a past success, push into the details. Ask for exact examples, the constraints they faced, the trade-offs they made, and the mistakes they corrected along the way. Specificity is harder to fake than polished generalities. The person who can describe what went wrong and how they adjusted is giving you a costly signal — it requires real experience to produce.
Use Consistency Checks
Do the person's examples, references, timeline, and claims align? Inconsistencies don't always mean dishonesty, but they're worth following up on. A pattern of small misalignments across multiple data points is more informative than any single answer.
Test Follow-Through
Even small pre-hire tasks reveal patterns that interviews hide: responsiveness, reliability, attention to detail, ownership. If someone misses a deadline or ignores instructions during the hiring process — when they're maximally motivated to impress — that's information. Weight it accordingly.
Screen for Incentive Fit
What motivates this person? What environment helps their performance and what environment hurts it? If someone thrives with autonomy and you're hiring them into a heavily structured, oversight-intensive role, the game will eventually break — no matter how good the interview was.
A practical screening stack might combine:
- Work sample before the final round — a realistic task that mirrors actual job demands
- Brief written summary — can they communicate clearly in the medium the job actually requires?
- Scenario with conflicting priorities — how do they think through trade-offs when there's no clean answer?
- Reference checks that ask for patterns — not "Was this person good?" but "What did their performance look like under pressure? When things went wrong, what did they do first?"
Leadership After Hiring: The Hidden-Action Problem
Hiring is the first game. Leadership is the longer one.
After the decision is made — someone is hired, promoted, or entrusted with a project — the information asymmetry shifts. Now it's less about what they are and more about what they do. Effort, quality, shortcuts, honesty, risk-taking — these are hard to observe directly, especially at scale and especially remotely.
The most common leadership response to this problem falls into one of two traps:
Naive trust: "I hired good people, I'm sure it's fine." This works when it works. When it doesn't, the leader is the last to know.
Excessive control: Monitor everything, check everything, require approval for everything. This feels like diligence but often produces the opposite of what it intends — compliance theatre, reduced ownership, and people who perform for the audit rather than for the outcome.
A better approach, from a game theory perspective, is to stop asking "Can I trust them?" and start asking different questions:
- What behaviours does my current system incentivise?
- What information surfaces early versus late?
- What signals in this environment are costly and hard to fake?
- What reporting structure reveals reality rather than managing impressions?
This reframes leadership from a character-judgment exercise to a game-design exercise. You're not trying to read minds. You're trying to build a system where good information flows naturally.
Incentive Design: You Get the Behaviour the Game Rewards
People are strongly shaped by incentives, especially under pressure. This isn't cynicism — it's physics. If you reward appearances, you'll get appearances. If you reward truth and ownership, you'll get better information. The design of the game determines the quality of the play.
Metric Gaming
When metrics become targets, people optimise the metric — not the outcome the metric was meant to represent. Vanity KPIs get inflated. Activity gets rewarded over results. Status reports stay "green" while actual risk climbs. The numbers look excellent right up until the moment the project fails. This doesn't mean metrics are useless. It means that any metric people know they're being judged on will eventually get gamed, and you need to account for that.
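A minimal Goodhart-style sketch makes the mechanism concrete. The model and its coefficients are illustrative, not calibrated: an agent splits effort between real work and making the metric look good; the dashboard counts both, the true outcome counts only the real work.

```python
# Toy Goodhart sketch (illustrative model, not calibrated):
# an agent splits effort between real work and "making the metric
# look good". The reported metric counts both; the true outcome
# counts only the real work.

def reported_metric(real_effort, gaming_effort):
    return real_effort + gaming_effort          # the dashboard can't tell them apart

def true_outcome(real_effort, gaming_effort):
    return real_effort - 0.2 * gaming_effort    # gaming actively costs a little

honest = (10, 0)
gamed = (4, 6)   # same total effort, reallocated toward the metric

# The dashboard reads identically in both worlds...
assert reported_metric(*honest) == reported_metric(*gamed)
# ...while the real outcome quietly diverges.
print("true outcome, honest:", true_outcome(*honest))  # 10.0
print("true outcome, gamed: ", true_outcome(*gamed))   # 2.8
```

Two teams, identical status reports, wildly different reality — which is exactly what a green-until-failure project looks like from the inside.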
Delayed Punishment for Honesty
If people get punished for surfacing problems, they'll hide them. Not because they're bad people, but because the game taught them to. The leader who reacts to bad news with visible frustration — even once — has changed the game for everyone watching. The next problem will arrive later, bigger, and more expensive.
Misaligned Cross-Team Incentives
Sales optimises for closing. Delivery optimises for quality. Operations optimises for efficiency. Clinical optimises for outcomes. Commercial optimises for revenue. When these teams are incentivised independently, the gaps between them become hiding places for problems. Nobody owns the space between the teams, and that's exactly where things break.
Rewarding Repair and Early Truth
Strong systems reward early escalation, issue ownership, transparent trade-offs, and correction speed. The question for any incentive system is: does this reward the behaviour I actually want, or does it reward the behaviour that looks like what I want? The gap between those two is where information asymmetry thrives.
If you keep being surprised by the same kinds of failures — late-stage project problems, performance issues that "nobody saw coming," risks that were hidden until they were crises — the problem is almost never that people are uniquely dishonest. The problem is that the game you designed rewards hiding and punishes disclosure.
Monitoring vs Trust: Designing for Visibility
This is a real tension, and pretending it isn't doesn't help. Leaders need information to make good decisions. But heavy monitoring can reduce ownership, increase political behaviour, signal distrust, and produce compliance theatre. The answer isn't to pick a side. It's to design smarter visibility.
Define Clear Outputs and Standards
Ambiguity is the breeding ground for information asymmetry. When people don't know exactly what "good" looks like, they optimise for what feels safe — usually activity, visibility, and impression management. Clear outputs and clear standards reduce the space where hidden action operates.
Create Structured Reporting
Short, consistent reporting formats reveal patterns better than random check-ins. A weekly five-line update with the same structure every time creates a track record you can read across weeks. Anomalies become visible. Trends become readable. And the format itself normalises transparency.
Ask for Risks, Not Just Status
"What's off track?" should be a normal question, not a career-threatening one. If the only question you ask is "How's it going?" the only answer you'll get is "Fine." Build risk-surfacing into the structure of every update, every review, every meeting.
Separate Discovery from Punishment
If every problem report triggers blame, information quality collapses. People learn fast. One leader who responds to a surfaced problem with "Good — now let's fix it" gets better information than ten leaders who respond with "How did you let this happen?" The cost of surfacing truth must be lower than the cost of hiding it, or the game will reliably produce hiding.
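The "cost of truth vs cost of hiding" logic can be written as a minimal payoff comparison. The numbers are illustrative assumptions: what matters is which side of the inequality the leader's reaction puts the employee on.

```python
# Minimal payoff sketch of the disclosure game (illustrative numbers).
# An employee who has found a problem chooses: surface it now, or hide it
# and hope it never resurfaces.

def payoff_surface(blame_for_bad_news):
    # Immediate social cost of delivering bad news, set by how the
    # leader reacts; the problem gets fixed while it is still small.
    return -blame_for_bad_news

def payoff_hide(p_discovered_late=0.7, late_blowup_cost=10):
    # Hiding is free if the problem never resurfaces,
    # expensive if it blows up later and gets traced back.
    return -p_discovered_late * late_blowup_cost

# Leader who punishes bad news (blame cost 8): hiding is the rational reply.
print(payoff_surface(8), payoff_hide())   # -8 vs -7.0 -> hide
# Leader who makes surfacing safe (blame cost 0): honesty wins.
print(payoff_surface(0), payoff_hide())   # 0 vs -7.0 -> surface
```

Nothing about the employee changed between the two scenarios — only the leader's reaction did. That's the sense in which information quality is a design property, not a character property.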
Use Strategic Spot Checks
Spot checks and audits aren't about catching people out. They're about maintaining the credibility of the system. Enough verification to keep the game honest; not so much that people perform only for the audit. The knowledge that checks happen — unpredictably, occasionally, without malice — changes behaviour more effectively than constant surveillance.
Common Mistakes
- Hiring for polish instead of proof — weighting the interview over the work sample
- Rushing decisions because of urgency — letting time pressure collapse screening standards
- Confusing trust with lack of verification — assuming good intentions means no structure is needed
- Building systems that punish bad news — reacting to problems with blame, teaching people to hide
- Over-monitoring visible activity — measuring hours, emails, and presence instead of output
- Rewarding outcomes while ignoring process risk — celebrating results without asking how they were achieved
- Using vague performance language — "step up," "take more ownership," "be more proactive" without concrete standards
- Ignoring incentive conflicts between teams — letting sales, delivery, and operations optimise against each other
- Expecting honesty without making it safe — wanting truth but creating consequences for truth-tellers
- Repeating the same "surprise" failures without redesigning the game — treating systemic problems as individual ones
The Information Asymmetry Audit
Use this for any decision where hidden information could distort the outcome — hiring, promotion, delegation, vendor selection, performance management.
- Name the decision. What game are you playing? Hiring, promoting, delegating, trusting, selecting a vendor, managing performance? Label it precisely.
- Map the asymmetry. Who knows what that you don't? What do you know that they don't? Where is the information gap widest?
- Identify what's hidden but high-stakes. Reliability? Judgment under pressure? Effort level? Motives? Capacity? Risk exposure? Not all hidden information matters equally — focus on what would change your decision if you knew it.
- Identify your current signals. Which signals are you relying on? Which are cheap (easy to produce regardless of quality) and which are costly (hard to fake)? What are you overweighting?
- Improve the screen (before the decision). What test, work sample, verification, or structured probe would surface better information before you commit?
- Improve the game design (after the decision). How will you structure incentives, reporting, accountability, and review cycles so that useful information emerges early rather than late?
- Make truth safer and more useful. How do you reduce the cost of surfacing problems? What would make honesty the easier path?
- Review lagging surprises. What bad surprises keep recurring? What does that pattern reveal about your system's blind spots? If the same type of failure keeps appearing, the game design is producing it.
Four Cases
A senior hire interviews beautifully. Articulate, confident, all the right stories. References are positive but generic. The team is under pressure to fill the role, so the process moves fast. Three months in, the pattern emerges: strong in meetings, weak on follow-through. Good at framing work, poor at completing it. The gap between presentation and execution was there all along — the screen just didn't test for it.
Diagnosis: Adverse selection. Cheap signals (charisma, verbal fluency) were overweighted. Costly signals (work samples, consistency checks, specific reference patterns) were skipped or rushed. Urgency collapsed standards.
Better game: A role-relevant work sample before the final round. References asked for concrete patterns rather than adjectives. A small follow-through task between interviews to test reliability. A better screen beats a better intuition.
A manager runs weekly status meetings. Every team reports green — on track, no issues. Six weeks before a major delivery, the project implodes. The post-mortem reveals that at least three teams knew about significant problems from month two. Nobody raised them.
Diagnosis: Hidden action combined with an incentive to avoid bad news. The reporting game rewarded optimism. Problems that got surfaced were met with frustration and extra scrutiny, so the system taught people that silence was safer than honesty.
Better game: A mandatory risk field in every status report. Explicit trade-off discussions built into the meeting structure. No punishment for early issue surfacing. Regular evidence-based reviews — not just verbal status, but deliverables against milestones. Information quality is designed, not wished for.
A leader shifts to remote management and feels uncertain about what people are doing. They increase check-ins, add time-tracking, require detailed daily updates. The team responds by optimising visible activity — long Slack presence, fast email replies, detailed reports on minor tasks. Actual output drops. Engagement drops further. The leader sees low performance and increases monitoring again. The spiral continues.
Diagnosis: Over-monitoring signals distrust. The team optimised for visibility rather than outcomes. Hidden action didn't disappear — it shifted from effort-hiding to performance theatre. The monitoring system measured the wrong thing and created the problem it was trying to solve.
Better game: Output-based expectations with clear deliverables. Weekly reviews focused on what was produced and what's blocked — not how many hours were logged. Autonomy within defined boundaries. Selective, infrequent spot checks. Over-monitoring often hides the real problem: weak game design.
A company selects a service provider based on a strong pitch, an impressive portfolio, and a competitive price. Delivery begins, and quality falls short almost immediately. Deadlines slip. Communication becomes evasive. By month three, the company is managing the vendor more than the vendor is managing the work.
Diagnosis: Pre-selection signalling failure — the pitch was a cheap signal. Contract incentives were weak (large upfront payment, no milestone gates). No early performance checkpoints. The game rewarded the vendor for winning the contract, not for delivering on it.
Better game: Milestone-based payments tied to deliverables. Proof of prior work verified independently, not just shown in a deck. References from similar-scale clients. Staged commitment — a paid pilot before a full contract. Exit clauses that make switching possible. Information asymmetry applies to vendors too, not just employees.
When to Increase Trust vs Increase Structure
When information is poor, leaders often feel forced to choose between "trust more" and "control more." Neither is a strategy. Both are reactions. The better question is: what structure would produce better information and better incentives?
Increase trust and autonomy when:
- Signals are strong and costly — the evidence is hard to fake and consistently positive
- Patterns are consistent over time — not just a good week but a reliable track record
- Ownership is visible — the person takes initiative, flags problems early, corrects without being asked
- Corrections happen quickly — when something goes wrong, they fix it before you have to intervene
Increase structure and screening when:
- Surprises keep recurring — the same type of problem appears repeatedly
- Signals are mostly cheap — you're getting reassurance but not evidence
- Incentives encourage hiding — the system makes honesty costly and concealment easy
- Accountability is unclear — nobody owns the outcome, so nobody owns the risk
This isn't binary. You can extend trust in areas where signals are strong while adding structure in areas where they're weak. The goal is calibration, not ideology. Trust is earned through costly signals and sustained patterns. Structure is appropriate wherever the information gap is high and the stakes justify the investment.
Reflection Prompts
Sit with these. Write on them if you can. The value isn't in the question — it's in what surfaces when you stop and actually answer honestly.
- Where in your work are you making decisions with hidden information — and pretending you're not?
- Which cheap signals do you overweight in hiring or delegation? What costly signals do you skip?
- What bad surprise keeps recurring in your team or organisation? What does the pattern tell you about your screening or incentive design?
- What does your current system actually reward — truth, appearance, speed, or blame avoidance?
- Where have you confused "trust" with "no screening"?
- Where have you confused "control" with "good leadership"?
- What single screening improvement would make your next hire or vendor decision meaningfully better?
- What single reporting change would surface risk earlier in your team?
- What truth is currently too costly for people to tell you?
- What game design change would improve information quality in your organisation this month?
The Game You're Already Playing
Hidden information is not a leadership failure. It's a built-in feature of every organisation, every hire, every delegation, every partnership. The question isn't whether information asymmetry exists — it always does. The question is whether your systems are designed to surface truth or suppress it.
Most leaders try to solve this with better judgment — sharper instincts, harder questions, more experience. And judgment matters. But judgment operates inside a game. If the game rewards hiding, punishes honesty, and screens for the wrong signals, even excellent judgment will be working with contaminated data.
Improve your next decision by upgrading the screen and the incentives, not just your intuition. Design the game so that better information is the path of least resistance — for you, for your team, and for the people you're evaluating.
Even with better information flowing, there's a separate problem: getting groups of people to coordinate on better patterns once they can see them. That's the challenge of coordination games — and it's where we go next. Because knowing the truth and acting on it collectively are two very different games.
If you keep being surprised by hires, team performance, or hidden problems surfacing too late, we can help you redesign the information game — better screens, better incentives, better decisions.
This content is educational and does not constitute business, financial, or medical advice.