Picture this. A senior consultant walks into the boardroom. She has 40 slides, a confident voice, and a strategy built on “best practices from leading organisations”. Nobody in the room asks which organisations. Nobody asks how “best” was measured. Nobody asks whether what worked for a Silicon Valley tech giant has any relevance for a mid-sized Dutch logistics company. The proposal gets approved. Three months later, nobody remembers who recommended it. Six months later, the project is quietly shelved.
You have witnessed this scene. We all have. Not because the people in the room were stupid. But because the bullshit was packaged in a way that made it feel like insight. And that is a behavioural design problem.
Critical thinking is the disciplined ability to evaluate claims, arguments and evidence by recognising the cognitive biases and social pressures that distort human judgement. In behavioural design, critical thinking means understanding how System 1 shortcuts make bullshit persuasive — and designing decision processes that compensate for these mental shortcuts. It is not about being the smartest person in the room. It is about being the hardest one to fool.
What makes bullshit so persuasive?
The philosopher Harry Frankfurt made a useful distinction between lying and bullshitting. A liar knows the truth and deliberately says the opposite. A bullshitter does not care about the truth at all. The goal is not to deceive you about a specific fact. The goal is to impress you, to sound authoritative, to get you to nod without thinking too hard. And that is precisely why bullshit is more dangerous than lies. Lies can be fact-checked. Bullshit operates beneath the threshold where people bother to check.
Bullshit is persuasive because it exploits the same cognitive biases that helped our ancestors survive. Your brain is running on System 1 most of the time: the fast, automatic, energy-saving mode that processes information through shortcuts rather than careful analysis. These shortcuts — heuristics — are efficient, but they are also hackable.
Authority bias makes you trust the person with the impressive title, the expensive suit, or the stage at the conference. Your brain reasons: if they got that position, they must know what they are talking about. This shortcut was useful in small tribal communities. In modern organisations, it means the loudest voice in the room often wins, regardless of evidence.
Social proof tells you that if everyone else in the meeting seems to agree, the proposal must be sound. You scan faces. Nobody looks uncomfortable. So you stay quiet too. This is how entire organisations march towards decisions that nobody privately believed in. It is also how the 2008 financial crisis unfolded: layer upon layer of professionals who assumed someone else had done the checking.
The confidence heuristic is the most dangerous of all. We systematically mistake confidence for competence. When someone delivers their opinion with certainty, your System 1 interprets that certainty as evidence. Neuroscience research suggests that confident delivery activates reward centres in the listener’s brain. It literally feels good to be told something with conviction. And that feeling overrides your analytical capacity.
Bullshit does not succeed because people are stupid. It succeeds because our brains are wired to reward confident simplicity over careful complexity.
Seven rules to detect bullshit
These rules are not about becoming a cynical sceptic who torpedoes every meeting. They are about building the mental habits that protect your decision-making from the most common forms of professional bullshit.
1. Do not mistake outcomes for good judgement
A company launches a campaign, and sales go up. The marketing team takes credit. But sales might have gone up anyway — because of seasonality, a competitor’s stumble, or a dozen other factors nobody measured. Outcome bias is the tendency to evaluate the quality of a decision based on its result rather than the quality of the reasoning that led to it.
The only way to confidently attribute an outcome to a specific action is through a randomised controlled experiment. When Uber tested this by turning off two-thirds of their digital advertising spend, they discovered it made virtually no difference to their install numbers. They had been spending over a hundred million dollars a year on advertising that did nothing. They only discovered this because they had the discipline to run a proper test rather than trusting the story their dashboards were telling them.
Rule of thumb: when someone claims their action caused a result, ask what would have happened without that action. If they cannot answer with evidence, they are telling you a story, not sharing a finding.
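To make the counterfactual question concrete, here is a minimal sketch in Python of what a randomised holdout test boils down to: one group is exposed to the campaign, a randomly assigned group is not, and you check whether the difference in outcomes is distinguishable from noise. The function and every number in it are invented for illustration; this is not Uber's actual analysis.

```python
# Minimal sketch of the counterfactual question behind Rule 1:
# compare a randomised holdout (no campaign) against the treated group.
# All numbers below are made up for illustration.

from math import sqrt
from statistics import NormalDist

def conversion_rate_difference(treated_n, treated_conversions,
                               holdout_n, holdout_conversions):
    """Return the lift and a two-sided p-value for the difference
    in conversion rates between treated and holdout groups."""
    p_t = treated_conversions / treated_n
    p_h = holdout_conversions / holdout_n
    # Pooled standard error under the null hypothesis of "no effect"
    p_pool = (treated_conversions + holdout_conversions) / (treated_n + holdout_n)
    se = sqrt(p_pool * (1 - p_pool) * (1 / treated_n + 1 / holdout_n))
    z = (p_t - p_h) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_t - p_h, p_value

# Hypothetical example: installs with ads on vs. ads off look almost identical.
lift, p = conversion_rate_difference(100_000, 2_050, 100_000, 2_000)
print(f"Observed lift: {lift:.4%}, p-value: {p:.2f}")
# A tiny lift with a large p-value means the campaign's "result"
# is indistinguishable from what would have happened anyway.
```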
2. Never confuse reasonable with rational
The most effective bullshit does not sound outrageous. It sounds reasonable. It follows a logical structure. It uses words like “therefore” and “clearly” and “the data suggests”. But sounding reasonable is a rhetorical skill, not an analytical one.
Your System 1 processes the flow of an argument rather than its substance. If the speaker sounds fluent and the reasoning feels smooth, your brain files it as “probably true” and moves on. This is why con artists rarely stumble over their words and why the most dangerous consultants are the most articulate ones.
The solution: always seek a second, independent opinion on important decisions. And critically — do not share the first opinion with the second source. The moment you say “a colleague suggested X, what do you think?” you have anchored their judgement. Give them the raw problem and let them reach their own conclusion.
3. Attack vagueness relentlessly
Vagueness is the oxygen of bullshit. When a strategist says “this initiative will significantly improve customer engagement”, they have said nothing. What does “significantly” mean? Improve by how much? Measured how? Over what timeframe?
Vague predictions are unfalsifiable. If they do not come true, the predictor can always say “just wait” or “it would have been worse without it”. Philip Tetlock’s research on expert forecasting found that the most famous pundits were less accurate than simple statistical models — precisely because they traded in sweeping, vague proclamations rather than specific, testable predictions.
When someone makes a claim, push for precision. What specifically will happen? By when? How will we know? If they cannot answer these questions, they are not making a prediction. They are performing confidence.
4. Always suspect confirmation bias
The business model of many consultancy firms is to provide elaborate justifications for decisions that have already been made. This is sometimes called Franklin’s Gambit, after Benjamin Franklin’s observation that a “reasonable creature” can find or make a reason for anything it has a mind to do: the process of finding reasons for what you have already decided to do. Kahneman would call it confirmation bias: the tendency to search only for evidence that supports your existing beliefs.
This is not dishonesty. It is how the brain works. Once you have formed a hypothesis, your System 1 automatically filters incoming information to confirm that hypothesis. Contradictory evidence gets downgraded. Supporting evidence gets amplified. You do not notice it happening.
The antidote is structural. Some investment firms use red team exercises: they assign a team whose explicit job is to destroy the case for a proposed deal. The red team gets rewarded for finding fatal flaws, not for being supportive. This creates a forcing function that counteracts the natural tendency to fall in love with your own ideas.
5. Check for skin in the game
When someone recommends a course of action, ask yourself: what happens to them if this goes wrong? A consultant who gets paid regardless of whether the strategy works has no skin in the game. An advisor who earns a commission on the product they recommend has misaligned incentives. A pundit who makes bold predictions on television suffers no consequences when those predictions are wrong.
Skin in the game is the single most reliable indicator of whether someone is genuinely confident in their recommendation or just performing confidence for personal benefit. If they can win big but lose nothing, their judgement is structurally compromised — even if they are not aware of it.
The practical test: would this person bet their own money on their recommendation? Would they accept a penalty clause if it fails? If not, treat their confidence as a signal of incentives, not of insight.
6. Expect Goodhart’s Law everywhere
Goodhart’s Law states: when a measure becomes a target, it ceases to be a good measure. The moment you tie a bonus to a KPI, people will optimise for the KPI rather than the outcome the KPI was supposed to measure.
Consider the Net Promoter Score. Companies use it as a proxy for customer satisfaction. But once NPS becomes a target, teams start gaming it: surveying only happy customers, timing the question at peak satisfaction moments, discouraging unhappy customers from responding. The number goes up. The customer experience stays the same or gets worse. But the dashboard looks great.
This pattern is everywhere. Police officers given arrest targets start harassing people for minor offences. Universities ranked by research output reduce investment in teaching. Hospitals measured on waiting times reclassify corridors as “treatment rooms”. Whenever you encounter a metric being used to justify a decision, ask: who controls this metric, and what incentive do they have to make it look good?
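The NPS example above is, at bottom, simple arithmetic, which makes it easy to show how gaming the sample moves the number without moving reality. A hypothetical sketch, with invented scores and an invented sampling rule:

```python
# Hypothetical sketch of Goodhart's Law applied to the Net Promoter Score:
# the underlying customers do not change, only who gets asked.

def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Invented customer base: a mix of unhappy, neutral and happy customers.
all_customers = [2] * 30 + [7] * 40 + [10] * 30

# Honest measurement: survey everyone.
print(nps(all_customers))   # 0.0

# "Gamed" measurement: only survey customers right after a positive
# interaction (crudely modelled here as those already scoring 7 or higher).
surveyed = [s for s in all_customers if s >= 7]
print(nps(surveyed))        # ~42.9

# The dashboard number jumps; the customer experience is unchanged.
```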
7. Account for status anxiety in every decision
Most bad decisions in organisations are not caused by stupidity or malice. They are caused by status anxiety: the fear of being blamed, looking foolish, or violating an unspoken norm. Rory Sutherland captured this perfectly: it is much easier to be fired for being illogical than for being unimaginative.
This means that in any meeting, the safest option always has an unfair advantage. Not because it is the best option, but because nobody gets blamed if the safe option fails. “We went with the market leader” is an acceptable excuse. “We tried something different” is not.
When you evaluate a recommendation, consider the forces shaping the recommender’s behaviour. Use the Influence Framework to map their pains, gains, habits and anxieties. What are they accountable for? Whom do they need to impress? What is the personal downside of being wrong? Only when you understand the social web around the decision-maker can you assess whether their recommendation reflects genuine analysis or self-preservation.
Why smart people fall for bullshit
There is a comforting assumption that intelligence protects you from bullshit. It does not. In many cases, intelligence makes you more vulnerable, not less.
The Dunning-Kruger effect shows that the people who know the least about a subject overestimate their competence in it the most. They lack the knowledge required to recognise the limits of their knowledge. Meanwhile, genuine experts tend to express more uncertainty, more caveats, more “it depends”. In a world that rewards confidence, the person who knows least speaks loudest.
But the Dunning-Kruger effect is only half the story. Smart people have a different vulnerability: they are exceptionally good at constructing post-hoc rationalisations for beliefs they already hold. A brilliant mind is brilliant at finding evidence for whatever it has already decided. This is sometimes called the intelligence trap: the smarter you are, the more sophisticated your self-deception becomes.
Research in behavioural science has consistently shown that cognitive ability does not predict resistance to cognitive biases. High-IQ individuals are just as susceptible to confirmation bias, anchoring effects, and motivated reasoning as everyone else. Sometimes more so, because their verbal fluency allows them to construct more convincing arguments for flawed positions.
The trouble with the world is that the stupid are cocksure and the intelligent are full of doubt. — Bertrand Russell
This is why critical thinking is not a personality trait. It is a practice — a set of deliberate habits and structures that compensate for the built-in flaws of human cognition. You cannot think your way out of cognitive biases. You have to design your way out.
Critical thinking at work: meetings, pitches, and strategy
Knowing the rules is one thing. Applying them in the heat of an actual meeting, pitch, or strategy session is another. Here is what critical thinking looks like in practice.
In meetings, bullshit thrives in the gap between what is said and what is tested. The antidote is simple questions. When a colleague says “research shows that...”, ask which research. When a vendor says “companies like yours typically see...”, ask for specifics. Which companies? What was the comparison? What was the timeframe? These questions are not aggressive. They are the minimum standard of evidence that any professional decision deserves.
In pitches, watch for the confidence heuristic at work. The slicker the presentation, the less rigorous the thinking often is. Genuinely strong proposals can afford to be understated because they rely on evidence, not performance. When someone is working hard to impress you, ask yourself what they are compensating for.
In strategy, demand falsifiable claims. A strategy that cannot fail is not a strategy — it is a wish list. Ask: what would have to be true for this strategy to work? What could we observe in six months that would tell us it is not working? If the strategist cannot articulate the conditions for failure, they have not done the thinking. They have done the storytelling.
In hiring, beware of the narrative fallacy. Candidates who tell a compelling story about their career arc are not necessarily better performers. They are better storytellers. Structured interviews with standardised questions are demonstrably better predictors of job performance than unstructured conversations — precisely because they reduce the space for bullshit to operate.
How to build a bullshit-resistant culture
Individual critical thinking is necessary but insufficient. If your organisation punishes people for challenging the status quo, no amount of personal scepticism will make a difference. Building a bullshit-resistant culture requires structural interventions.
Normalise dissent. The most dangerous sentence in any organisation is “we all agree”. If everyone agrees immediately, it means either the proposal is genuinely brilliant or nobody feels safe enough to disagree. The second scenario is far more common. Appoint a devil’s advocate for every major decision. Not informally, but as an explicit role with explicit permission to challenge.
Conduct pre-mortems. Before launching a project, run a pre-mortem: imagine that the project has failed spectacularly. Now work backwards. What went wrong? This technique, developed by psychologist Gary Klein, exploits the same imaginative capacity that makes hindsight bias so powerful — but applies it before the decision rather than after. It gives people permission to voice doubts they would otherwise suppress.
Reward the update, not the prediction. In most organisations, changing your mind is seen as weakness. But in reality, updating your beliefs in response to new evidence is the single most important indicator of good judgement. Philip Tetlock’s research on superforecasters found that the best predictors were not the ones who made the boldest initial predictions. They were the ones who updated most frequently and most accurately when new information arrived.
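If you want a concrete way to reward the update rather than the prediction, one option is to score forecasts the way Tetlock's tournaments did, with the Brier score (lower is better). A minimal sketch with invented forecasts and an invented outcome:

```python
# Minimal sketch of "reward the update, not the prediction".
# Forecast accuracy is scored with the Brier score (lower is better),
# the measure used in Tetlock's forecasting tournaments.
# All forecasts and the outcome below are invented for illustration.

def brier_score(forecasts, outcome):
    """Mean squared error between a series of probability forecasts
    and the eventual outcome (1 = event happened, 0 = it did not)."""
    return sum((p - outcome) ** 2 for p in forecasts) / len(forecasts)

outcome = 0  # the confidently predicted event never happened

# A bold pundit: 90% sure from day one, never updates.
pundit = [0.9, 0.9, 0.9, 0.9]

# A careful forecaster: starts uncertain, revises downward as evidence arrives.
updater = [0.6, 0.5, 0.3, 0.1]

print(f"Pundit Brier score:  {brier_score(pundit, outcome):.2f}")   # 0.81
print(f"Updater Brier score: {brier_score(updater, outcome):.2f}")  # ~0.18
# The updater is rewarded not for the initial call, but for moving
# towards the truth as new information arrived.
```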
Demand evidence, not anecdotes. One vivid success story does not validate a strategy. One angry customer does not invalidate a product. The plural of anecdote is not data. Create a norm where claims need to be supported by systematic evidence, not cherry-picked examples. And make this expectation explicit — not as an attack on the person presenting, but as a standard of quality for the decisions you make together.
Watch for perverse incentives. Every KPI, every bonus structure, every reporting metric creates incentives for bullshit. If your sales team is measured on revenue and not on customer retention, they will over-promise to close deals. If your marketing team is measured on leads and not on lead quality, they will optimise for volume. Map the incentives. Then ask: where does this structure reward looking good over being good?
Frequently asked questions
What is critical thinking in the workplace?
Critical thinking at work is the ability to evaluate claims, arguments and proposals by looking beyond confidence, authority and social proof. It means asking for evidence, testing assumptions, and recognising the cognitive biases that make bad ideas sound convincing. It is not about being negative or cynical — it is about making decisions based on evidence rather than persuasion.
Why do smart people fall for bullshit?
Intelligence does not protect against bullshit. Smart people are often more skilled at constructing elaborate justifications for beliefs they already hold. The Dunning-Kruger effect shows that the least competent people are the most confident, while genuine experts express more doubt. Combined with authority bias and social proof, this creates environments where confident nonsense consistently outperforms careful analysis.
How can you tell if someone is bullshitting in a meeting?
Watch for these signals: vague claims that cannot be falsified, confident predictions without specific timelines or metrics, arguments backed only by anecdotes rather than data, resistance to being pinned down on specifics, and the use of jargon to obscure rather than clarify. The strongest signal is when someone becomes defensive rather than curious when challenged.
What is Goodhart’s Law and why does it matter for decision making?
Goodhart’s Law states that when a measure becomes a target, it ceases to be a good measure. In practice: when you tie bonuses to a KPI like Net Promoter Score, people will game the metric rather than improve the underlying reality. This is critical for decision making because the numbers you rely on may have been optimised for appearance rather than truth.
How do you build a culture of critical thinking in your organisation?
Start by rewarding people who challenge assumptions rather than punishing them. Introduce red team exercises where someone is tasked with finding the flaws in a proposal. Demand that all major decisions include a pre-mortem: imagine the decision failed, then work backwards to identify why. Make it safe to say “I do not know” and normalise expressing uncertainty in probabilities rather than false certainties.
Conclusion
Bullshit is not a communication problem. It is a behavioural design problem. The same System 1 shortcuts that help you navigate daily life — authority bias, social proof, the confidence heuristic — are precisely the mechanisms that let bullshit pass unchallenged through boardrooms, strategy sessions and hiring committees.
Critical thinking does not mean becoming a sceptic who challenges everything. It means building the habits and structures that compensate for the known weaknesses of human cognition. Test outcomes against counterfactuals. Demand precision over vagueness. Seek independent opinions. Check incentives. And above all, create environments where challenging bad ideas is rewarded, not punished.
Want to learn how to apply behavioural science to sharpen your thinking and your team’s decision-making? In the Behavioural Design Fundamentals Course you learn to use the SUE Influence Framework to understand what drives behaviour — including the behaviour of people trying to sell you bullshit. Rated 9.7 by 10,000+ alumni from 45 countries.
PS
At SUE our mission is to harness the superpower of behavioural psychology to help people make better choices. The greatest obstacle to better choices is not a lack of information. It is the overabundance of confident-sounding nonsense that crowds out careful thinking. The seven rules in this article will not make you immune to bullshit — nobody is. But they will make you significantly harder to fool. And in a world swimming in bullshit, that is a competitive advantage.