“This project will take three months.” Six months later, it still isn’t finished. The budget has doubled. The team is surprised. But nobody should be. Because exactly the same thing happened with the five projects before this one.
This is not bad luck. This is not poor planning. This is optimism bias at work. And it is so persistent, so universal and so invisible that even the most experienced project managers fall into the trap time and again. Not because they are incompetent, but because the brain systematically paints reality in rosier colours than it deserves.
Optimism bias is the systematic tendency to overestimate positive outcomes and underestimate negative ones. It is closely related to the planning fallacy described by Kahneman and Tversky. In the workplace, it causes teams to underestimate timelines, overshoot budgets and ignore risks that should be built into planning. The solution is not more enthusiasm or harder work, but redesigning planning and decision-making processes. The SUE Influence Framework reveals why optimism feels so attractive - and why it is so difficult to address structurally.
What is optimism bias?
In 1980, psychologist Neil Weinstein published a study showing that people systematically believe they have a lower chance of negative events than others - and a higher chance of positive ones. He called this “unrealistic optimism about future life events”.[1] His original examples covered illnesses, divorces and accidents. But the same distortion turns out to be just as powerful when it comes to project deadlines, budget estimates and market forecasts.
Neuroscientist Tali Sharot demonstrated in 2011 that optimism bias is deeply embedded in the brain. People update their expectations asymmetrically: good news is absorbed easily, while bad news is systematically ignored or dismissed as an exception. It is a System 1 process - fast, automatic and entirely below the threshold of conscious thought.
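The asymmetry can be made concrete with a toy model. The sketch below is an illustration only: the learning rates are invented, not Sharot's measured values, and the scenario (updating an estimated overrun risk) is hypothetical.

```python
# Toy model of asymmetric belief updating: good news moves the
# estimate strongly, equally informative bad news barely moves it.
# Learning rates (0.6 vs 0.2) are invented for illustration.

def update(estimate, evidence, rate_good=0.6, rate_bad=0.2):
    """Move `estimate` toward `evidence`. Here the estimate is an
    overrun risk, so evidence BELOW the estimate counts as good news
    and gets the larger learning rate."""
    rate = rate_good if evidence < estimate else rate_bad
    return estimate + rate * (evidence - estimate)

# Estimated chance the project overruns, updated on two signals:
p = 0.30
p = update(p, 0.10)  # good news: pulled strongly toward 0.10
p = update(p, 0.60)  # bad news: the estimate barely moves
print(round(p, 3))   # ends up far below the bad-news signal of 0.60
```

After one piece of good news and one piece of bad news of equal size, the belief sits well below where a symmetric updater would land: exactly the pattern Sharot describes.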
In the workplace, optimism bias shows up in three forms I see repeatedly:
Timeline optimism. We estimate how long something will take based on how it would unfold in a perfect world: no unexpected obstacles, no sick leave, no dependencies that slip. That scenario almost never exists.
Budget optimism. The initial budget reflects the best version of the project. Complications, revisions and scope creep are not priced in because they simply are not on the radar when the estimate is made.
Risk blindness. We see our own project as the exception to the rule. Others overrun, others fail - but we have a better team, a clearer approach, better circumstances. That conviction is rarely grounded in evidence.
Optimism is not a virtue when it blinds you to reality.
Three scenarios
The IT migration that was “guaranteed” to take six months
A large financial institution decides to migrate its legacy systems to a new cloud platform. The project team estimates: six months. The CTO is enthusiastic. The management team is enthusiastic. Everyone is enthusiastic. The plan looks tight on paper.
Eighteen months later, the migration is still not complete. The budget has been exceeded by 140%. Three key employees have left the organisation out of frustration. And the strange thing is: if you ask everyone who was involved in the original planning, they actually knew it would probably take longer. But nobody said it out loud.
This is not an anecdote. This is the pattern. The NHS in the United Kingdom launched a national IT programme for healthcare in 2003 with an estimated price tag of £2.3 billion. When the programme was abandoned in 2011, costs had grown to more than £12 billion. The project had never achieved its intended functionality. The Sydney Opera House was delivered ten years after the original deadline and cost fourteen times the original budget. The Dutch Betuwelijn freight railway, designed to move goods from Rotterdam to Germany, ended up at more than double the original estimate. These are not incidents. This is the system.
The startup that was going to be the exception to the 90% rule
Ninety percent of startups fail. That is not opinion, that is data. But if you walk into a room full of startup founders and ask who expects to fail, almost nobody raises their hand. Every single one of them believes they are the exception. And they all have the same reasons: a better product, a stronger team, a deeper market insight, better timing.
This is optimism bias in its purest form. It does not feel irrational - it feels like confidence. It feels like the kind of unshakeable conviction you need to build a company. And perhaps a degree of optimism is genuinely necessary to survive the uncertainty of entrepreneurship.
But the problem is what happens when that optimism infiltrates business planning. Market size gets overestimated. Timelines to reach profitability get underestimated. Competitive threats get explained away with the conviction that the product is so good the market will simply follow. Investors are attracted on the basis of projections that have no historical grounding whatsoever. And when reality disappoints, the explanation is always external: the market wasn’t ready, the economy was bad, a competitor did something unexpected.
It is not a lack of intelligence. It is a cognitive mechanism that structurally filters out unfavourable information.
The strategic plan that systematically underestimated competitive threats
An established company in the media sector is working on a five-year plan. The market is shifting towards digital, but the management team believes their brand, their client relationships and their editorial quality protect them. The competitive analysis is thorough - on paper. But the assumptions about market share are based on what the company hopes to achieve, not on historical benchmarks from comparable market transitions.
Five years later, a newcomer with a fraction of the budget and a completely different business model has captured the vast majority of the digital market. The established company has gone through three restructurings. The signals were there in year one. They were literally in the market data the strategy team had analysed. But they were inconsistent with the story the team was telling itself, and so they were explained away rather than taken seriously.
This is how optimism bias and confirmation bias reinforce each other in strategic planning. Optimism bias causes you to underestimate the threat. Confirmation bias then ensures you only look for data that confirms that underestimation.
Why optimism stays so attractive: an IF analysis
When you analyse optimism bias through the SUE Influence Framework, you immediately understand why awareness alone cannot solve the problem. The forces that keep people locked in optimistic thinking are far stronger than the forces pulling them towards more realistic planning.
Comforts are the forces that hold people in their current behaviour. And this is where optimism bias is strongest. Optimism feels good. It generates energy. It attracts investors, motivates teams, wins management approval. An ambitious but streamlined plan communicates confidence. A realistic plan - with explicit buffers for what can go wrong - can feel like a lack of ambition, even defeatism. The social reward for optimism is enormous. So are the social costs of being openly pessimistic.
On top of that, it is cognitively more efficient to plan based on how you hope things will go than to systematically inventory everything that could go wrong. The brain takes the shortest path.
Anxieties compound this. Being realistic about risks feels threatening. What if your boss thinks you have no confidence in the project? What if investors back away if you are more honest about the probability of failure? What if your team loses motivation when you emphasise how difficult it is going to be? The fear of the consequences of realism traps people in optimistic narratives - even when they know deep down that the plan is too tight.
Pains are the losses people want to avoid. The real pains of optimism bias - cost overruns, delayed projects, frustrated teams, damaged reputations - are abstract and in the future at the moment the plan is being made. They feel less immediate than the direct comfort of an ambitious but compact schedule. People systematically discount future pain: that will happen to the version of you who has to deal with it later, not the version of you right now.
Gains are the benefits of new behaviour. The advantages of more realistic planning - better resource allocation, fewer fires to fight, greater stakeholder trust - are real. But they only become visible once the project is over. At the moment of planning, they are too abstract to override the Comforts of optimism.
The conclusion is uncomfortable but clear: you cannot convince people to be less optimistic by informing them about optimism bias. That solves nothing. You need to redesign the planning environment so that realism becomes the path of least resistance.
Five interventions that actually work
These are not recommendations in the category of “be more aware” or “ask more questions”. These are structural environmental interventions that do not eliminate optimism bias but do systematically constrain it.
1. Reference class forecasting. This is the most powerful and best-supported intervention. Grounded in Daniel Kahneman's work on the outside view and operationalised by infrastructure researcher Bent Flyvbjerg, the idea is simple: do not estimate your own project in isolation; look at a class of comparable projects. If 70% of IT migrations of comparable scale end up 40 to 150% over budget, that is your starting point. Not your own optimistic estimate. This shifts the question from “how long do we think it will take?” to “how long did comparable projects historically take?”
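The arithmetic behind reference class forecasting is simple enough to sketch in a few lines. The overrun ratios and budget figure below are invented for illustration; Flyvbjerg's published method is more elaborate, but the core move, replacing your own estimate with a percentile of the reference class, looks like this:

```python
# Reference class forecasting, sketched with invented numbers.
# Instead of trusting the team's own "inside view" estimate, scale it
# by the overrun distribution of comparable past projects.

# Overrun ratios from a hypothetical class of comparable IT
# migrations: 1.0 = on budget, 1.4 = 40% over budget, and so on.
reference_class = [1.1, 1.4, 1.5, 1.8, 2.0, 2.5, 1.3, 1.6, 1.9, 2.2]

def rcf_forecast(own_estimate, overruns, acceptable_risk=0.3):
    """Return the budget needed so that, judging by the reference
    class, the chance of still overrunning it is at most
    `acceptable_risk`."""
    ranked = sorted(overruns)
    # Take the overrun ratio at the (1 - acceptable_risk) percentile.
    index = min(len(ranked) - 1, int((1 - acceptable_risk) * len(ranked)))
    return own_estimate * ranked[index]

budget = rcf_forecast(500_000, reference_class, acceptable_risk=0.3)
print(f"Inside view: 500,000 - reference class forecast: {budget:,.0f}")
```

With these invented figures the team's 500,000 estimate doubles: not because anyone distrusts the team, but because that is what comparable projects historically cost.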
2. Pre-mortems. Before a project begins, ask your team: “Imagine it is two years from now and the project has failed. What went wrong?” This technique, developed by Gary Klein and championed by Kahneman, breaks the group dynamic of collective optimism. It gives people social permission to raise doubts without being branded a naysayer. In a normal planning meeting, saying the project is too ambitious is socially unwelcome. In a pre-mortem, it is exactly what is being asked of you.
3. The outside view. Ask someone who is not emotionally invested in the project to review the plan. An internal project reviewer, an external consultant, or simply a colleague from a different department. People who are not inside the project have no stake in optimism. They spot gaps in assumptions faster.
4. Mandatory buffers. Build it in as an organisational norm: every timeline and budget estimate automatically receives a buffer of at least 30%. Not as an opt-in, but as the default. This is an environmental intervention: you change the default so that realism becomes the path of least resistance. Teams that want to remove buffers must actively justify that - instead of teams that want to add buffers having to justify it.
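In an estimation tool, "buffer by default" might be encoded as below. This is a sketch under assumptions: the 30% figure follows the text, but the function name and the waiver mechanism are hypothetical.

```python
# "Buffer by default": every raw estimate automatically receives the
# organisational buffer. Removing it, not adding it, is the step
# that requires an explicit justification.

DEFAULT_BUFFER = 0.30  # organisational norm, not an opt-in

def planned_estimate(raw_estimate, waiver_justification=None):
    """Apply the mandatory 30% buffer unless a written waiver is
    supplied. The burden of proof is reversed: teams must actively
    justify removing the buffer."""
    if waiver_justification is not None:
        return raw_estimate  # buffer waived, justification on record
    return raw_estimate * (1 + DEFAULT_BUFFER)

print(planned_estimate(40))  # 40 person-days becomes 52.0
```

The point is not the multiplication but the default: realism happens unless someone does extra work to prevent it, which is the opposite of how most planning tools behave.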
5. Independent project audits. For large projects: require periodic audits by people who are not involved in the project and have no stake in a positive verdict. This is comparable to how financial institutions require external auditors. The value is not the audit itself, but knowing it is coming: it disciplines planning from the outset.
How optimism bias connects to other biases
Optimism bias rarely operates alone. In the workplace it combines with and amplifies other biases in ways that make it particularly difficult to recognise and break.
The Dunning-Kruger effect is the most direct amplifier. Overestimating your own capabilities leads directly to underestimating the complexity of the task. If you think you are better than you are, you automatically also think it will be easier than it is.
Confirmation bias then handles the information filtering that keeps optimism bias in place. You look for evidence that confirms your project is on track. You ignore or rationalise signals that suggest otherwise. The two biases reinforce each other in a cycle that is difficult to break from the inside.
The availability heuristic also plays a role: people estimate the probability of a risk based on how easily they can recall examples of it. If you have never seen a large-scale IT project fail within your own organisation, you automatically underestimate the chance it will happen to your project.
Anchoring bias worsens planning optimism: the first estimate stated in a planning meeting acts as an anchor for all subsequent discussion. If the first estimate is optimistic - and it almost always is - the entire group anchors to that optimism.
Understanding this is practical, not academic. Interventions that address only optimism bias often fail because the other biases fill the space. A project audit combined with reference class forecasting and a mandatory pre-mortem creates a system of interlocking safeguards that is far more robust than any single intervention.
Frequently asked questions
What exactly is optimism bias?
Optimism bias is the systematic tendency to believe that your chances of positive outcomes are greater than those of others, and your chances of negative outcomes are smaller. At work, this shows up as underestimating project timelines and costs, overestimating the chances of success, and ignoring risks that should be priced into planning.
What is the planning fallacy?
The planning fallacy is a specific manifestation of optimism bias, described by Kahneman and Tversky in 1979. It is the tendency to underestimate projects in terms of time, cost and risk, while simultaneously ignoring comparable projects from the past. The solution Kahneman recommends is reference class forecasting: instead of estimating your own project, look at how similar projects have performed historically.
How does optimism bias differ from confidence?
Healthy confidence rests on an accurate assessment of your own capabilities. Optimism bias is a systematic distortion that produces unrealistic expectations, independent of actual skill. An experienced project manager with healthy confidence knows that IT migrations almost always take longer than planned and builds that in. Someone with optimism bias believes this project will be the exception.
Can you eliminate optimism bias?
No. Optimism bias is a System 1 process that cannot be trained away. Awareness barely helps - even people who know about the bias remain susceptible. The solution lies in redesigning planning processes: reference class forecasting, pre-mortems, independent audits and mandatory buffers build in structural protection.
What is reference class forecasting?
Reference class forecasting is a technique to correct optimism bias in project planning. Instead of estimating your own project based on what you hope to achieve, you look at a class of comparable projects and use their historical performance as a baseline. If 80% of IT migrations end up 40 to 200% over budget, that is your starting point - not your own optimistic estimate.
Conclusion
Optimism bias may be the most costly cognitive error organisations make on a structural basis. Not because people are unintelligent or plan carelessly, but because the brain is simply built this way. The bias does not feel like an error - it feels like ambition, like confidence, like leadership. And that is precisely what makes it so difficult to break.
The solution is not to be less ambitious. The solution is to design your planning processes so that realism becomes the default rather than the exception. Reference class forecasting, pre-mortems, independent audits and mandatory buffers are not signs of a lack of confidence. They are signs of professional maturity.
Want to learn how to structurally improve decision-making in your organisation? In the Behavioural Design Fundamentals Course you learn to apply the Influence Framework and the SWAC Tool to diagnose and overcome cognitive biases. Rated 9.7 by 5,000+ alumni from 45 countries.
PS
At SUE, our mission is to harness the power of behavioural psychology to help people make better choices. Optimism bias may be the most seductive enemy of sound project management, precisely because it feels like a strength. But there is a fundamental difference between optimism as an engine - the energy to start something - and optimism as a blindfold, the conviction that the basic rules of complex projects do not apply to you. The first is valuable. The second is dangerous. Learning to see the difference is one of the most important things a leader can do.