A senior manager faces a difficult call: three candidates, two project directions, five stakeholders with conflicting priorities. She has more data than any manager in history. Dashboards. AI recommendations. Consultant decks. And yet the decision does not feel easier. If anything, it feels harder.

This is the paradox at the heart of modern decision-making: more information does not automatically produce better decisions. In some conditions it actively makes them worse. The field that takes this paradox seriously and tries to do something about it is decision science.

Decision science is the interdisciplinary field that studies how people make decisions and how those decisions can be improved. It draws on psychology, economics, statistics, neuroscience and computer science. It is broader than behavioural economics: it covers normative models (how people should decide), descriptive analyses (how they actually decide) and prescriptive approaches (how decisions can be improved in practice). Key figures: Herbert Simon, Gary Klein, Gerd Gigerenzer. See also: What is behavioural design?

What is decision science?

Decision science does not have a single owner. It is a crossroads of disciplines, each bringing its own lens to the same fundamental question: how do people make choices, and how can those choices be improved?

Classical economics offered the normative framework: this is how rational actors should decide, given complete information and unlimited cognitive capacity. Statistics and probability theory provided the mathematical instruments: here is how to calculate expected values and weigh risks. Psychology documented how people actually decide. Neuroscience began revealing what happens in the brain during decisions. And computer science added algorithmic decision-making both as an object of study and as a practical tool for supporting human judgement.

What distinguishes decision science from its contributing disciplines is the ambition to integrate these perspectives. Not just: how do people decide psychologically? But: how do people decide, what are the consequences of those decisions, when are they good enough, and how can we systematically improve them?

That last question is the one I find most useful in practice. When I work with organisations on behavioural design challenges, the question is rarely “how do people decide in theory?” It is always “how can we design environments and processes that produce better decisions?” That is a decision science question, not just a behavioural economics one.

Decision science asks not only how people decide. It asks when a decision is good enough and how the environment can be designed so that better decisions are the natural outcome.

Three questions: normative, descriptive, prescriptive

The richness of decision science comes from holding three complementary questions in tension.

The normative question: how would a perfectly rational decision-maker decide? This is the domain of classical decision theory: probability calculus, expected utility maximisation, game theory. It provides the reference point against which real decisions are measured. But it is also a fiction. No human being decides that way, and no human being ever will.
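To make the normative model concrete: expected utility maximisation reduces to a weighted sum over outcomes, computed for each option, with the highest sum winning. A minimal sketch, where the options, probabilities and utilities are invented for illustration:

```python
# Expected utility maximisation: for each option, sum the utility of
# each possible outcome weighted by its probability, then pick the
# option with the highest expected utility. All numbers are invented.

options = {
    "launch_now":   [(0.6, 100), (0.4, -50)],  # (probability, utility)
    "delay_launch": [(0.9, 40),  (0.1, -10)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(options, key=lambda name: expected_utility(options[name]))
print(best)  # launch_now (EU 40 vs 35)
```

The fiction, of course, is in the inputs: real decision-makers rarely know the probabilities or can state their utilities, which is exactly where the descriptive critique begins.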

The descriptive question: how do people actually decide? This is the domain of cognitive psychology and behavioural economics. It maps the heuristics, biases and systematic departures from the normative model that characterise real human judgement. Kahneman and Tversky were pre-eminently descriptive scientists: they documented with precision how human decision-making departs from rational models, giving us the catalogue of cognitive biases that became the foundation of behavioural economics.

The prescriptive question: how can decisions be improved in practice? This is the most applied part of decision science and the question most directly relevant to behavioural design. It is not about building a perfectly rational decision-maker. It is about designing processes, environments and tools that help real human decision-makers reach better outcomes. This is where decision science connects most directly to what we do at SUE.

Herbert Simon and the discovery of bounded rationality

The single most important contribution to decision science came from a man who rarely gets mentioned in the same breath as Kahneman or Thaler: Herbert Simon.[1]

Simon was a genuine polymath. He did foundational work in cognitive psychology, artificial intelligence, economics and organisational science. In 1978 he received the Nobel Prize in Economics for his research on decision-making processes within organisations. He remains the only person to have won both the Nobel Prize in Economics and the Turing Award in computer science.

His central contribution was the concept of bounded rationality. The classical economic assumption of unbounded rationality is, Simon argued, factually wrong. People are not irrational. But they are boundedly rational: constrained by the cognitive capacity of their brains, the information available to them, the time they have, and the complexity of the decision environment.

Faced with these constraints, people do not optimise. They satisfice. Simon coined the term by combining ‘satisfy’ and ‘suffice’: rather than searching for the best option, people search for the first option that meets an acceptable threshold and stop there. This is not a defect of the human mind. It is an adaptive strategy for navigating complex, uncertain worlds with limited cognitive resources. Given real-world time pressures and information overload, satisficing often produces better outcomes than exhaustive optimisation would.
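Satisficing is, at bottom, a stopping rule: examine options in the order they arrive and take the first one that clears an aspiration level, rather than scoring them all. A minimal sketch, with invented candidates and scores:

```python
# Satisficing (Simon): accept the first option that meets an aspiration
# threshold, instead of evaluating every option to find the best.
# Candidate names, scores and the threshold are illustrative.

def satisfice(options, score, threshold):
    """Return the first option whose score clears the threshold,
    or None if nothing does."""
    for option in options:
        if score(option) >= threshold:
            return option
    return None

candidates = [("A", 6), ("B", 8), ("C", 9)]
pick = satisfice(candidates, score=lambda c: c[1], threshold=7)
print(pick)  # ('B', 8): the first acceptable option, not the best ('C', 9)
```

Note the contrast with optimisation: `max` would have returned C, but only after scoring every candidate, which is precisely the cost satisficing avoids.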

Simon’s insight opened a fundamentally different way of thinking about organisational decision-making. Not: why do organisations fail to maximise? But: how do organisations actually make decisions given cognitive constraints, conflicting interests, and incomplete information? And what follows from that for how we design organisations, processes and systems?

Those questions are as live today as they were in 1955. Every organisation I work with is, in effect, a decision-making machine trying to make better decisions than its cognitive constraints naturally allow. Simon gave us the vocabulary to see that clearly.

Gary Klein and how experts really decide

The Kahneman and Tversky research programme documented human irrationality with striking clarity. But it had a limitation: it was built almost entirely on laboratory experiments with students facing artificial problems under controlled conditions. What happens when you study how skilled professionals make decisions in the field, under real time pressure, with real consequences?

That was the question Gary Klein set out to answer, and his findings were surprising.[2]

Klein developed what he called Naturalistic Decision Making (NDM): a research programme that studied firefighters, intensive care nurses, military commanders, chess grandmasters and other experienced practitioners in their actual working environments. His most important book, Sources of Power (1998), documented what he found.

Experts, it turned out, do not make decisions the way textbooks say they should. They do not generate a set of options, evaluate each against criteria, and select the best. They do not deliberate through logical steps. They barely compare options at all.

What they do instead is this: they look at a situation and they recognise it. Experience has given them a rich library of patterns, and when they encounter a new situation, they unconsciously match it to a pattern from that library. That pattern comes with a ready-made course of action attached. They then run a quick mental simulation of that action: will it work? If yes, they act. If the mental simulation surfaces a problem, they modify the plan or look for a different pattern. The whole process takes seconds.

Klein called this Recognition-Primed Decision (RPD) making. It is not random intuition. It is expertise made fast. The firefighter who senses that something is wrong about a burning building and orders her crew out before the floor collapses is not guessing. She is reading dozens of subtle signals simultaneously and matching them to a pattern that her twenty years of experience have made available to her.
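The RPD cycle, recognise, retrieve an attached action, mentally simulate, act or adapt, can be caricatured as a loop. Everything below (the cue sets, the pattern library, the simulation check) is a toy stand-in for tacit expertise, not an implementation of Klein's model:

```python
# A toy caricature of Recognition-Primed Decision making: match the
# situation's cues to a known pattern, retrieve its attached action,
# and "mentally simulate" it before committing. All data is invented.

pattern_library = {
    frozenset({"smoke", "heat_low_floor"}): "evacuate",
    frozenset({"smoke", "visible_flames"}): "attack_fire",
}

def simulate(action, situation):
    # Stand-in for the expert's mental simulation: reject an attack
    # when a structural-collapse cue is present.
    return not (action == "attack_fire" and "floor_sagging" in situation)

def rpd(situation):
    for cues, action in pattern_library.items():
        if cues <= situation:                # pattern recognised
            if simulate(action, situation):  # mental simulation passes
                return action
    return "evacuate"                        # no workable pattern: play safe

print(rpd({"smoke", "visible_flames", "floor_sagging"}))  # evacuate
```

The important structural point the sketch preserves: there is no side-by-side comparison of options anywhere in the loop. The first recognised pattern that survives simulation wins.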

The implication for decision science and behavioural design is significant. Novices need structured analytical processes because they lack the pattern libraries that experts have. Experts need something different: environments that protect their pattern-recognition from distortion, tools that surface information that might not fit their current pattern, and structures that create space for the mental simulation to surface potential failures before they act.

The same intervention that helps a novice decide better can actually degrade an expert’s performance by breaking the flow of pattern recognition. Designing good decision environments requires knowing who is deciding and how they are deciding.

Gerd Gigerenzer and the rehabilitation of heuristics

In the psychology of decision-making, the dominant narrative for three decades was that heuristics are errors: mental shortcuts that systematically produce bad decisions. This was the legacy of Kahneman and Tversky’s research programme.

Gerd Gigerenzer challenged that narrative head-on.[3] His research showed that simple heuristics outperform complex statistical models in many real-world situations. His concept of fast and frugal heuristics describes decision rules that are quick to apply and economical in the information they require, yet robust in their performance under uncertainty.

A classic demonstration: the gaze heuristic. When an outfielder in baseball runs to catch a fly ball, she does not calculate the trajectory, velocity and wind resistance of the ball. She fixes her gaze on the ball and runs in a direction that keeps the angle of gaze constant. This simple rule produces a near-optimal path without any of the physics. It is fast, it requires almost no information, and it works.

Another: the recognition heuristic. Given two options, one recognised and one not, choose the recognised one. In domains where recognition correlates with the relevant property (city population, stock performance, brand quality), this simple rule outperforms complex multi-attribute models. Not in spite of its simplicity, but because of it: complex models are sensitive to noise in the data; the recognition heuristic is not.
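The recognition heuristic is simple enough to state in a few lines of code. The city names and the "recognised" set below are invented for illustration:

```python
# Recognition heuristic (Gigerenzer): when exactly one of two options
# is recognised, infer that the recognised one scores higher on the
# criterion (here: city population). The data is illustrative.

recognised = {"Berlin", "Munich", "Hamburg"}

def which_is_bigger(city_a, city_b):
    """Guess which city is larger using recognition alone.
    Returns None when recognition cannot discriminate."""
    a_known, b_known = city_a in recognised, city_b in recognised
    if a_known and not b_known:
        return city_a
    if b_known and not a_known:
        return city_b
    return None  # both or neither recognised: fall back to other cues

print(which_is_bigger("Berlin", "Gütersloh"))  # Berlin
```

The `None` branch matters: the heuristic only applies when recognition discriminates between the options, which is part of what Gigerenzer means by a heuristic being matched to its environment.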

Gigerenzer’s deeper point is ecological rationality. A heuristic is not good or bad in the abstract. It is good or bad relative to the environment it operates in. The human cognitive toolkit evolved in specific environments, and many of its heuristics are well-adapted to those environments. The question is not “is this heuristic biased compared to the normative model?” but “does this heuristic produce good outcomes in the environment where it is used?”

More information and more complex analysis are not always better. In uncertain, complex environments, simpler rules are often more robust than exhaustive analyses. This has direct implications for how organisations structure decision processes, and for when adding more data to a decision is likely to improve it versus when it is likely to make it worse.

How experts decide vs how novices decide

One of the clearest practical insights from decision science is the systematic difference between how experts and novices make decisions. Understanding this difference is directly useful for designing better decision environments.

Novices work analytically. They break problems into components, generate options, evaluate each option against criteria, and select. This is slow and cognitively demanding, but it compensates for the lack of pattern libraries. It also makes the decision process visible and correctable. You can review a novice’s reasoning and identify where it went wrong.

Experts work through pattern recognition. They see a situation as an instance of a type they have encountered before, and they retrieve a ready-made response. This is fast and often highly effective, but it is largely invisible. Experts frequently cannot explain how they arrived at a judgement. The knowledge is tacit, embedded in years of accumulated experience, and not easily articulated.

This creates two different design challenges.

For novices, the challenge is scaffolding. Good decision design provides structure, breaks complex decisions into manageable steps, makes criteria explicit, and reduces the cognitive load of evaluation. Checklists, decision trees and structured protocols are tools for novice decision-makers. They work because they compensate for what novices lack: experience-based pattern recognition.

For experts, the challenge is different. The risk is not that they lack structure but that they over-apply familiar patterns to situations that are genuinely novel. The most dangerous decision-makers are not novices who know they do not know. They are experienced practitioners who have encountered something new but read it as familiar. Klein’s research on firefighter accidents found that most fatal errors occurred when experienced firefighters misclassified a novel situation as a routine one and applied the wrong pattern.

Designing for expert decision-makers means building in anomaly detection: structures that surface signals inconsistent with the current pattern recognition, creating space for the expert to update their mental model before acting. Pre-mortems, red teams and structured devil’s advocacy are all tools for doing this.

Decision science and the SUE Influence Framework

At SUE, we situate decision science within a broader analytical structure: the SUE Influence Framework. The framework maps the four forces that determine whether someone changes their behaviour: Pains (what they want to move away from), Gains (what they want to move towards), Comforts (what keeps them doing what they already do), and Anxieties (what stops them from trying something new).

If you apply this framework to the question of why bad decision processes persist in organisations, you get a very clear picture.

Pains: the costs of bad decisions are real but often delayed and diffuse. A poor decision made today has consequences that may not be visible for months, and by then the causal chain is hard to reconstruct. This significantly weakens the motivation to improve decision processes.

Gains: the benefits of systematically better decision-making are substantial but abstract. “Better decisions” is difficult to quantify in advance. It does not make the quarterly earnings call.

Comforts: the current decision process is habitual, socially embedded and cognitively comfortable. People have invested heavily in their own decision-making expertise and are reluctant to have it scrutinised. The informal process is also faster, at least in the short term.

Anxieties: an explicit decision process makes decisions visible and evaluable, which increases accountability. This is a genuine barrier for managers accustomed to making calls on the basis of authority and intuition. Making the process transparent also surfaces disagreement that the current informal process allows to remain hidden.

The SUE Influence Framework™: understanding the four forces that drive and block better decision-making in organisations.

This analysis tells you something important: the barrier to better organisational decision-making is rarely a lack of knowledge about what good decisions look like. It is the Comforts and Anxieties that keep people attached to the existing process. Any intervention that addresses only the Gains side will underperform. You need to reduce the Anxieties first.

The connection to behavioural design

In The Art of Designing Behaviour (2024), I argue that behavioural design is fundamentally in the business of improving decisions. Not by making people smarter or more rational. By designing environments, processes and interactions that make better decisions the natural outcome of ordinary human cognition.

Decision science is the theoretical foundation for that practice. It tells us three things that are directly actionable.

First, the quality of a decision depends as much on the decision environment as on the decision-maker. A good process in a bad environment produces bad decisions. This is Simon’s contribution: organisations are decision environments, and they can be designed to produce better outcomes.

Second, the right decision process depends on who is deciding and what they already know. Klein’s research tells us that tools designed for novices can harm experts, and vice versa. Designing decision support without knowing your decision-maker is like designing a product without knowing your user.

Third, more analysis is not always better. Gigerenzer’s research tells us that simple rules outperform complex models in many conditions. The instinct to add more data, more steps and more rigour to a decision process is not always correct. Sometimes the right design is to remove information, simplify the decision frame and trust the heuristic.

These three principles from decision science map directly onto the work of behavioural design: understand the decision-maker, design the environment, and choose the right level of analytical complexity for the situation.

Frequently asked questions about decision science

What is decision science?

Decision science is the interdisciplinary field that studies how people make decisions and how those decisions can be improved. It draws on psychology, economics, statistics, neuroscience and computer science. It is broader than behavioural economics because it also encompasses normative models of optimal choice and prescriptive approaches to improving real-world decisions.

What is the difference between decision science and behavioural economics?

Behavioural economics is a subfield of decision science with a specific focus on economic decisions and the psychological factors that influence them. Decision science is broader: it covers decisions outside economic contexts, mathematical and statistical decision models, algorithmic decision-making, and the normative question of how people ought to decide. Behavioural economics is largely descriptive. Decision science also asks the prescriptive question: now that we know how people decide, what do we do about it?

Who is Herbert Simon and why does he matter?

Herbert Simon was an American polymath who won the Nobel Prize in Economics in 1978 for his research on decision-making in organisations. He introduced the concept of bounded rationality: people are not irrational but are constrained by cognitive capacity, available information and time. Rather than choosing the best option, people choose the first option that is good enough. Simon called this satisficing. His work fundamentally changed how we think about organisations, economics and human cognition.

What is naturalistic decision making?

Naturalistic decision making (NDM) is the research programme developed by Gary Klein that studies how experienced practitioners make decisions in real-world conditions: under time pressure, with incomplete information and high stakes. Klein found that experts rarely compare options analytically. Instead, they recognise patterns from experience and mentally simulate a course of action before committing to it. His Recognition-Primed Decision (RPD) model describes this process. NDM has practical implications for how decision support should be designed for different levels of expertise.

What are fast and frugal heuristics?

Gerd Gigerenzer argues that simple mental rules outperform complex statistical models in many real-world situations. Fast and frugal heuristics are decision rules that are quick to apply and require little information, yet perform robustly under uncertainty. Rather than being errors to be corrected, Gigerenzer sees them as ecologically rational strategies: well-adapted to the specific decision environments humans typically face. The lesson is not to eliminate heuristics but to understand which heuristics work well in which environments.

Conclusion

Decision science is the widest lens through which to view human decision-making. It integrates the precision of mathematics, the empirical richness of psychology, the practical orientation of organisational science, and the technological possibilities of AI. And it asks the question that matters most to every organisation: how can we make systematically better decisions?

The three figures covered in this article represent three different but complementary answers. Simon tells us that bounded rationality is not a bug to be fixed but a constraint to be designed around. Klein tells us that expertise is real and consequential, and that the best decision support differs radically depending on the experience level of the decision-maker. Gigerenzer tells us that simplicity is underrated, and that the instinct to add complexity to a decision process is often wrong.

Together, they give decision science its practical power. And that power is directly available to behavioural designers who know how to use it.

Want to learn how to translate these insights into concrete interventions? The Behavioural Design Fundamentals Course teaches you the SUE Influence Framework and the tools to systematically analyse and influence behaviour and decision-making. Rated 9.7 out of 10 by more than 10,000 professionals from 45 countries.

PS

At SUE, our mission is to use the superpower of behavioural science to help people make better choices, for themselves and for the organisations and societies they are part of. Decision science teaches us that better decisions are not primarily a question of intelligence or effort. They are a question of environment, process and tools. That is exactly what behavioural design offers: a systematic method for designing those environments. Herbert Simon knew this in 1955. We are still learning what it means in practice.