We make an estimated 35,000 decisions every day, from the trivial (what shall I eat?) to the consequential (which strategy should we pursue?). Yet most of those decisions do not unfold consciously and deliberately - they happen fast, automatically and largely below the threshold of awareness. That is the central insight behavioural science offers us: we believe we are rational decision makers, but we are first and foremost creatures of habit, guided by a brain built to conserve energy.
The implications are significant - both at the individual level and within organisations. Because if the majority of our decisions are driven by mental shortcuts and automatic patterns, the probability of systematic error is substantial. And these errors are predictable. Which means they are correctable.
In this article I explain how decision making really works, which cognitive biases cause the most damage, what this means for decision making in organisations, and which practical tools you can deploy to make better decisions.
What is decision making?
Decision making is the cognitive process by which we choose between two or more alternatives. In the classical, rational view - dominant in economics and management science for decades - this process works as follows: you define the problem, gather all relevant information, weigh the options carefully and choose the best one.
The problem? Almost nobody actually works this way.
The groundbreaking insights of Nobel laureate Daniel Kahneman, distilled in his book Thinking, Fast and Slow (2011), show that our brain uses two fundamentally different systems for making decisions.
System 1 and System 2: the two engines of decision making
System 1 and System 2 are Kahneman's names for the two cognitive modes that underlie all our thinking:
- System 1 is fast, automatic, unconscious and emotionally driven. It decides in milliseconds, without deliberate effort. It recognises patterns, responds to intuition and relies on heuristics - mental shortcuts that conserve cognitive energy.
- System 2 is slow, conscious, analytical and rational. It is capable of deep analysis and careful deliberation, but it is also energy-intensive and slow to engage.
The crucial insight: approximately 95% of our decisions are made by System 1. We tell ourselves a story afterwards about how we thought it through carefully, but the decision was already made before System 2 was ever consulted.
We think we make decisions based on facts. But facts are always filtered through a brain that wants to save energy.
This has profound consequences for how we approach decision making - in our personal lives and in organisations. Because System 1 is efficient, but also systematically error-prone. It uses shortcuts that work well enough in most situations, but lead to serious mistakes in complex, high-stakes contexts.
The good news: these mistakes are predictable. Kahneman and his colleague Amos Tversky spent decades mapping the systematic errors we make - and their findings are now among the most cited insights in modern psychology and decision science.
Why we make bad decisions: the 5 most damaging cognitive biases
Cognitive biases are systematic errors in our thinking - predictable deviations from rational reasoning that arise from the shortcuts System 1 relies on. They are not random. They are patterned. That makes them recognisable and, in principle, manageable.
These are the five most damaging cognitive biases in decision making:
1. The availability heuristic
We judge the probability of events based on how easily we can recall examples. If something comes quickly to mind, we estimate it as more likely - regardless of actual base rates.
A classic example: after reading about a plane crash, people dramatically overestimate the risk of flying, even though driving is objectively far more dangerous. The plane crash is more vivid and therefore more "available" in memory. In organisations this means: when a colleague recently witnessed a project fail for a specific reason, they will weight that risk far more heavily in their decision making than statistics would justify.
2. The anchoring effect
The first piece of information we receive about a choice - the "anchor" - sets the reference frame for all subsequent information. We adjust our estimate relative to that anchor, but the adjustment is systematically insufficient.
The anchoring effect is one of the most powerful biases in negotiation and pricing strategy. An opening price of $100,000 makes an offer of $80,000 look attractive, even if the true value is $60,000. Whoever sets the anchor exerts outsized influence over the outcome of the negotiation - and over the decision making process that follows.
3. Confirmation bias
We actively seek information that confirms our existing beliefs and discount or ignore information that contradicts them. Confirmation bias is particularly dangerous in strategic decision making because it distorts our picture of reality - precisely when an accurate picture is most needed.
In organisations, confirmation bias shows up as selectively gathering data to support a predetermined business case, ignoring negative signals from the market, and surrounding ourselves with people who agree with us. The consequence: we amplify our own blind spots rather than correcting them.
4. The sunk cost fallacy
We allow decisions to be influenced by costs we have already incurred and cannot recover - "sunk costs". Rationally, sunk costs are irrelevant to future decisions: what is gone is gone. Yet they powerfully shape our behaviour.
We see this pattern everywhere: a project that has been running for three years and cost millions continues even when it is clear it will not succeed. "We've already invested so much in it" is the reasoning. The sunk cost fallacy costs organisations enormous sums annually in wasted resources - and in missed opportunities that are not pursued because organisations are locked into failing initiatives.
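The logic of ignoring sunk costs can be made concrete with a small sketch (the figures are hypothetical). Only future benefits and future costs enter the comparison; the millions already spent deliberately appear nowhere:

```python
# A minimal sketch of sunk-cost-free reasoning, with hypothetical numbers.
# Rationally, only future costs and benefits matter; money already spent
# must not appear anywhere in the comparison.

def best_option(options):
    """Pick the option with the highest expected future value.
    Each option is a tuple: (name, future_benefit, future_cost)."""
    return max(options, key=lambda o: o[1] - o[2])

# Hypothetical project: $3m already spent (sunk - deliberately absent below).
options = [
    ("continue project", 2_000_000, 1_500_000),   # future value: +$0.5m
    ("stop and reallocate", 1_200_000, 300_000),  # future value: +$0.9m
]

name, benefit, cost = best_option(options)
print(name, benefit - cost)  # "stop and reallocate" wins on future value alone
```

The point of the sketch is what is missing: the $3m already invested never enters the calculation, however painful that feels.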
5. Optimism bias
We systematically overestimate the probability of positive outcomes and underestimate the probability of negative ones. Research shows that people on average believe they are less likely than others to experience divorce, illness or business failure - and more likely to succeed.
In project planning this manifests as the "planning fallacy": projects almost always take longer and cost more than initially estimated, because we are systematically too optimistic about what will go well and insufficiently attentive to what might go wrong.
Decision making in organisations: three additional complications
Individual decision making is already complex enough. In organisations, three additional complications make the process even more vulnerable to error.
Decision fatigue at work
Decision fatigue occurs when we have to make too many decisions in succession. The quality of our choices declines as we make more decisions - not because we become less intelligent, but because our cognitive reserves become depleted.
A well-known study of parole rulings showed that judges made significantly more favourable decisions early in the morning and immediately after a break than late in the afternoon. The timing of the decision - something entirely irrelevant to its merits - influenced the outcome. For organisations the practical implication is direct: schedule the most consequential decisions at the beginning of the day or the meeting, not at the end.
Group dynamics and groupthink
Groups do not automatically make better decisions than individuals. Groupthink - the phenomenon whereby cohesive groups converge towards consensus and suppress dissenting views - can have catastrophic consequences for decision quality.
The 1986 Space Shuttle Challenger disaster is a textbook example: engineering objections were overridden under group pressure. Authority bias - the tendency to defer to the opinion of the most senior person in the room - amplifies this effect in hierarchical organisations. The practical implication: structure decision making processes so that dissenting views can be expressed safely, before the group has converged.
Time pressure and decision quality
Under time pressure we switch more rapidly to System 1. In acute situations this can be helpful. In complex strategic decisions it leads to oversimplification. Organisations that operate under chronic time pressure make chronically worse decisions.
A practical countermeasure: explicitly reserve time for "slow thinking" on important decisions. Build deliberate delays into the process. Ask yourself: "What information am I ignoring because I am in a hurry?"
The SUE | Influence Framework as a decision-making lens
One of the most effective ways to improve decision making is to analyse the unconscious forces surrounding a specific decision. That is precisely what the SUE | Influence Framework is designed to do.
The framework allows you to systematically map the forces at play in any decision - both for the people you want to influence and within your own decision making process. It always starts from the question: what keeps people locked in current behaviour, and what moves them towards new behaviour?
The framework distinguishes four forces, each addressing a different dimension of the decision making process:
- Pains: frustrations and pain points that motivate someone to change. In decision making: what problems make the current situation intolerable enough to prompt action?
- Gains: the positive outcomes of a decision. What value - functional, emotional or social - does the new behaviour or choice deliver?
- Comforts: habits and routines that keep people anchored in current behaviour. Which automatisms stand in the way of better decisions?
- Anxieties: fears, doubts and prejudices that block new decisions. What is stopping people from taking the step?
This framework helps you not only to influence others, but also to calibrate your own decisions. Before a major decision, ask yourself: what pains are driving me? What anxieties are holding me back? And which comforts - habits, routines, "the way we do things here" - are making me blind to better alternatives?
From a behaviour change perspective, it is critical to understand that decision making never occurs in a vacuum. Every decision is surrounded by a field of forces - motivations, habits, fears and expectations - and these are almost always unconscious.
Practical tools for better decision making
Now that we understand what goes wrong in decision making, the question is: what works? Here are five evidence-based tools you can apply immediately.
1. Pre-mortem analysis
The pre-mortem is a technique developed by cognitive psychologist Gary Klein. The method is simple but powerful: imagine your decision has been made and the project - one year later - has failed completely. Now write down why it failed.
By reasoning backwards from a failed outcome, you activate System 2 at a moment when optimism bias would otherwise dominate. You surface risks you would otherwise have ignored. The pre-mortem is particularly effective as an antidote to groupthink: everyone writes individually, so dissenting views emerge before the group has converged on a comfortable consensus.
2. Choice architecture
Choice architecture is the deliberate design of the environment in which decisions are made, so that better decisions become easier. It is not about coercion, but about intelligently arranging options.
The most powerful application is setting smart defaults - the standard choice that applies when someone does nothing. Research consistently shows that defaults are extremely powerful: most people choose the default, however it is set. Want employees to save more for retirement? Set automatic enrolment as the default. Want healthier choices in the canteen? Place healthy options at eye level and make them the standard.
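What a smart default looks like in practice can be sketched with a hypothetical pension enrolment setting: doing nothing leaves the employee enrolled, while opting out remains a single explicit action - choice architecture, not coercion.

```python
# Sketch of a smart default (hypothetical enrolment system): the
# no-action path leads to the beneficial outcome, and opting out
# stays available as one explicit step.

from dataclasses import dataclass

@dataclass
class PensionChoice:
    opted_out: bool = False  # the default: enrolled unless action is taken

    @property
    def enrolled(self) -> bool:
        return not self.opted_out

# An employee who never touches the form gets the beneficial default:
print(PensionChoice().enrolled)                 # True
# Opting out is still a free choice:
print(PensionChoice(opted_out=True).enrolled)   # False
```

The design choice sits entirely in that one default value: flip it to opt-in and, as the research on defaults predicts, participation collapses even though the options are formally identical.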
3. The checklist method
In his book The Checklist Manifesto (2009), surgeon Atul Gawande demonstrates how checklists can dramatically reduce the number of errors in complex decision processes. Even experienced professionals make predictable errors when relying on memory and intuition alone.
A simple decision checklist for organisations might ask: What information are we still missing? Who is affected by this decision but has not yet been consulted? What assumptions underlie our reasoning? And: what evidence would change our minds?
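Such a checklist can even be encoded as a simple gate in a decision workflow. A minimal sketch, using the four questions above as the checklist items:

```python
# A sketch of the decision checklist as a gating step: the decision
# only proceeds once every question has a written (non-empty) answer.

checklist = [
    "What information are we still missing?",
    "Who is affected by this decision but has not yet been consulted?",
    "What assumptions underlie our reasoning?",
    "What evidence would change our minds?",
]

def ready_to_decide(answers: dict[str, str]) -> bool:
    """True only when every checklist question has a non-empty answer."""
    return all(answers.get(q, "").strip() for q in checklist)

print(ready_to_decide({}))                                  # False
print(ready_to_decide({q: "discussed" for q in checklist})) # True
```

Forcing written answers, rather than a verbal "yes, we covered that", is what gives the checklist its bite: it makes skipped steps visible.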
4. Deliberate cooling-off period
For high-stakes decisions, it pays to deliberately build in a cooling-off period - a gap between initial analysis and the final decision. This gives System 2 the opportunity to review and, where necessary, correct the intuitive System 1 response.
The rule of "sleeping on it" is not a cliché: research confirms that sleep aids complex decision making because the brain consolidates information and recognises patterns that remain hidden under time pressure.
5. The outside view
One of Kahneman's most powerful recommendations for better decision making is actively seeking the "outside view": rather than reasoning from your specific situation (inside view), you look at base rates for comparable situations.
How many comparable projects were delivered on time and within budget? How many mergers of this type genuinely created value? The outside view corrects the optimism bias that inevitably colours our reasoning when we treat our own situation as uniquely special - which everyone tends to do.
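The outside view can be reduced to simple arithmetic, sometimes called reference class forecasting: scale your own estimate by the typical overrun observed in comparable past projects. The overrun figures below are illustrative, not real data.

```python
# A minimal reference-class sketch (hypothetical overrun data): instead of
# trusting your own estimate (inside view), scale it by the typical overrun
# observed in comparable past projects (outside view).

from statistics import median

# Duration overrun factors for comparable past projects (actual / estimated).
# Illustrative numbers, not real data.
past_overruns = [1.4, 1.1, 1.8, 1.3, 2.0, 1.5]

inside_view_estimate_weeks = 20
typical_overrun = median(past_overruns)   # the base rate for this class
outside_view_weeks = inside_view_estimate_weeks * typical_overrun

print(round(outside_view_weeks, 1))  # 29.0
```

The inside view says 20 weeks; the reference class says plan for roughly 29. Using the median rather than the mean keeps one extreme project from dominating the base rate.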
Conclusion: decision making can be improved
Decision making is not a rational process. It is a human process - and human means fallible, limited and shaped by context and unconscious forces. That is not discouraging. It is the starting point for improvement.
Behavioural science has taught us that we can design environments that elicit better decisions. That we can recognise and compensate for biases. That we can structure decision processes to make them less error-prone. And that, by understanding how System 1 and System 2 interact, we can consciously choose which system to engage when.
The key is not to try to switch off System 1 - that is both impossible and undesirable. System 1 makes us efficient across the thousands of small decisions we make every day. The key is to understand when System 1 is unreliable, and then deliberately engage System 2. Or better still: to design the environment so that System 1 automatically defaults to the better choice.
Want to apply this in your organisation or role? The Behavioural Design Fundamentals course gives you the complete toolkit of behavioural science - from System 1 and System 2 to designing choice architecture that elicits better decisions. The training is also available for teams, for organisations looking to systematically improve how decisions are made across the board.
Frequently asked questions about decision making
What is decision making in psychology?
In psychology, decision making is the cognitive process by which we select a course of action from multiple alternatives. The classical rational model assumes we weigh all information and choose the best option. Behavioural science - led by Kahneman and Tversky - shows that approximately 95% of our decisions are driven by System 1: fast, automatic and unconscious thinking. This makes us susceptible to predictable cognitive biases that can be understood, recognised and mitigated through deliberate design.
How do you improve decision making in an organisation?
Improving decision making in organisations requires a combination of awareness and structural measures. Effective interventions include: scheduling important decisions early in the day to avoid decision fatigue, conducting pre-mortem analyses before major decisions, setting smart defaults through choice architecture, using checklists for complex processes, and actively inviting dissenting views to prevent groupthink. The SUE | Influence Framework provides a structured method for mapping the unconscious forces that shape decisions in any specific context.
What are the most common decision making mistakes?
The five most damaging cognitive biases in decision making are: the availability heuristic (what is easily recalled seems more likely), the anchoring effect (the first piece of information sets the frame for all subsequent judgements), confirmation bias (we seek information that confirms existing beliefs), the sunk cost fallacy (we keep investing because we have already invested), and optimism bias (we systematically overestimate positive outcomes and underestimate risks).