Only 26% of Dutch employees use AI daily in their work.1 That means 74% of the people in your organisation are leaving the tools you rolled out largely untouched. Not because the tools are bad. Not because employees are lazy or cannot see the future. But because AI adoption in organisations is not a technical problem - it is a behavioural challenge. And behavioural challenges are not solved with better software or more training.

In this article I will show you what is really happening with AI adoption in organisations, which psychological forces are holding employees back, and how - using insights from behavioural science - you can design an approach that actually sticks.

The AI adoption paradox: tools are rolled out, behaviour does not change

Most organisations have by now invested heavily in AI. Microsoft Copilot, GitHub Copilot, ChatGPT Enterprise, Gemini Workspace - the tools are there. There have been kick-off sessions, e-learnings sent out, training offered. And yet: six months later, AI usage in practice is structurally low. The dashboards show that perhaps 20 to 30% of employees are actively using the tools, and even then often hesitantly and superficially.

We recognise this pattern in virtually every AI journey we support. The tools are in place; the psychological layer is missing. Organisations roll out technology and expect behaviour to follow naturally. But it does not. Behaviour change requires a fundamentally different approach than a product launch.

You do not change behaviour by working on the behaviour itself. You change the context within which the behaviour takes place.

This is not just a problem with AI. As I describe in my earlier blog post about why software adoption fails, the phenomenon of low adoption in large technology implementations is painfully structural. McKinsey research shows that more than 70% of large transformation programmes fail to meet their stated goals - and that human behaviour is consistently the bottleneck, not the technology.4 With AI there is one extra complicating factor: it touches how people think about themselves.

What holds employees back? An Influence Framework diagnosis

To understand why employees do not embrace AI, we use the SUE Influence Framework™. This model analyses four forces that determine whether people move forward or stay where they are.

The SUE Influence Framework™: two driving forces (Pains and Gains) and two restraining forces (Comforts and Anxieties). In AI adoption, the restraining forces are systematically stronger than the driving forces.

Let us apply the four forces to AI adoption in organisations.

Pains - the frustrations in current behaviour that create readiness to change - are actually abundantly present in AI adoption. Employees experience information overload, time pressure, repetitive tasks and inefficient work processes. In principle, these are precisely the problems AI can solve.

Gains - the positive consequences of the desired behaviour - are also clear: working faster, producing higher quality output, having more time for meaningful tasks, a competitive advantage. The business case for AI is there.

And yet behaviour does not change. Why? Because the restraining forces are systematically stronger.

Comforts - the positive aspects of current behaviour that keep people where they are - are particularly powerful in AI adoption. Employees are good at their jobs. They have built up years of expertise. Their work processes are established. They know how to deliver quality. "I have been doing this for ten years and my approach works" is an enormously strong comfort. There are also social comforts: the colleagues around you are not using AI either, so why would you?

Anxieties - the fears, doubts and barriers toward the new behaviour - are by far the biggest barrier in AI adoption. And the most underestimated.

The biggest barrier: competence anxiety

In virtually every AI adoption journey we diagnose, the number one anxiety is not "AI is going to take my job." That is the anxiety that gets all the media attention, but it rarely explains the behaviour. The real barrier is subtler and more personal:

Employees avoid AI not out of stubbornness - but out of fear of appearing incompetent in front of their colleagues while learning a new tool.

Every new behaviour brings feelings of uncertainty with it. That is normal. But in a professional environment there is an extra dimension: you bring that uncertainty literally to the meeting table. You ask AI to draft an email and the result is mediocre. You ask for a summary and it is not quite right. And that - that moment of incompetence - happens in full view of your colleagues, your manager, perhaps your entire team.

This is what behavioural scientists call social performance anxiety. The pain of losing face is stronger for most people than the potential gain of the new behaviour. People are naturally strongly loss averse.2 And reputational loss is one of the greatest losses we can experience.

What you then see in organisations is a hidden pattern: people do experiment with AI - but at home, alone, in the margins. Never publicly. Never in a way that colleagues can see. This is why adoption rates remain low even when people are curious. The anxiety beats curiosity the moment social costs are attached.

Standard onboarding programmes never address this anxiety. They give employees prompts, show how the tools work, perhaps include exercises. But they do not design psychological safety for publicly practising something new. And without that safety, behaviour does not change.

What does work: AI adoption as behavioural design

The solution to low AI adoption in organisations does not lie in more training, better communication or pushing harder. That approach backfires - more pressure from above triggers psychological reactance: people dig their heels in even more firmly. As I describe in my blog post about reducing resistance to change, the judo approach to influence is always more effective than the karate approach.

The effective approach starts with designing behaviour using what we call the SWAC approach: Spark, Want, Again and Can.

SWAC: four conditions that must be met for sustainable new behaviour. In AI adoption, it fails most often on ‘Can’ (too difficult) and ‘Again’ (no habit).

CAN: make it easier, not harder

"Simplicity eats willpower for breakfast." This is perhaps the most important principle in behavioural design, inspired by the work of BJ Fogg.3 If the new behaviour requires too much cognitive effort, it will always give way to automatic behaviour.

For AI adoption this means concretely: make it so easy that there are no excuses not to try.

WANT: motivate at the right moment

Motivation for AI use is present more often than we think - but at the wrong moments. The January kick-off session is enthusiastic, but three months later that energy is gone and there is no longer any memory of the "why."

Effective motivation works at moments of high pain. When an employee is summarising the same meeting for the fourth time, or writing a report that looks very similar to last month's report - those are the moments when AI use resonates. Design motivational interventions specifically for those pain moments.

Moreover, social proof only works when it comes from the right source. Not from management, not from IT, but from respected peers - the colleague whom everyone sees as excellent at their job, and who openly shares how AI helps them. That is the most powerful source of motivation for AI adoption. It neutralises the comfort of "I am good at my job without AI" by showing that the best people also use AI.

SPARK: design trigger moments

Behaviour always starts with a trigger. Without a trigger there is no behaviour, however motivated or capable someone is. This is one of the most underestimated insights in change programmes. We expect people to start on their own after a kick-off. But without designed trigger moments, nobody starts.

In AI adoption, triggers work best when they are linked to existing routines and high-pain moments - the recurring meeting that needs summarising, the monthly report that needs writing. Design triggers at exactly those moments.

Compare this with the approach Uber used to remove taxi anxiety: real-time tracking, fixed prices upfront, driver ratings. Each piece of anxiety required a specifically designed intervention. AI adoption demands the same: every barrier needs a specific answer, not a generic one.

AGAIN: build habits, not events

Behaviour change is not an event. It is a process of months. Research on habit formation shows that it takes 2 to 8 months for new behaviour to become automatic - not the well-known 21 days that circulate in popular self-help thinking.3

Most AI adoption programmes treat the launch as the endpoint. The tools go live, there is a kick-off, there are e-learnings - and then it is up to employees themselves. This is the surest path to failure. Behaviour needs repeated triggers over a longer period. Rituals, team habits, regular check-ins - this is the cement of sustainable behaviour change.

Further reading: why change programmes fail - the same patterns that derail large organisational changes also apply to AI adoption.

Three common mistakes in AI adoption

Based on the programmes we support, we see the same mistakes time and again. Not because organisations are unwise, but because the instinctive response to low adoption is the wrong one.

Mistake 1: Informing instead of designing

The most common mistake: deploying more communication to increase adoption. Newsletters, intranet posts, videos of the CEO explaining how important AI is. This is the inside-out approach: reasoning from the product rather than from the person. Information rarely changes behaviour. People have known for a long time that AI is useful. They still do not use it. The barrier is psychological, not informational.

Mistake 2: Mandating from above

When adoption disappoints, the second instinctive response is: mandate it. "AI use is now a mandatory part of your work process." This triggers what psychologists call reactance: when people feel their autonomy is being threatened, they actively resist the desired behaviour. They may then use the tools formally, but not in a way that adds value.

Autonomy is one of the strongest human motivators. Design space for autonomy into your AI adoption approach: give people a choice in how they use AI, not whether they use it.

Mistake 3: A single launch moment instead of repeated sparks

A kick-off, a training session, perhaps a second training two months later. This is the standard programme. But behaviour that is not fed with repeated triggers quickly fades. After the kick-off, usage drops week by week. After six weeks there is barely any measurable difference from the situation before the launch.

As described in the blog post about System 1 and System 2: the fast, automatic system in our brain drives the vast majority of our behaviour. New behaviour only becomes automatic (System 1) after much repetition over a longer period. Until then it requires conscious effort. And conscious effort always loses to habit.

Conclusion: AI adoption is a behavioural challenge

Organisations that take AI adoption seriously do not start with the tools. They start with the people. They diagnose which specific fears, habits and social thresholds are holding employees back. They design the context so that using AI is easier than not using it. They create psychological safety for publicly practising something new. And they keep going over a period of months, not weeks.

This is not a soft approach. It is the most strategic approach. Organisations that have the technical layer right but skip the behavioural layer fall behind organisations that design both layers. And the gap grows by the month.

The question is not whether your employees can learn to use AI. They can. The question is whether you have designed the context within which that behaviour can emerge and stick.

Frequently asked questions about AI adoption in organisations

How long does successful AI adoption in an organisation take?

Expect a minimum of six months for structural behaviour change - and twelve months for genuine embedding in daily working behaviour. Organisations that work to shorter timelines often measure peaks around launch events and mistake those for sustainable adoption. Habit formation requires time and repeated triggers, not one big push.

What is the best way to remove employee resistance to AI?

Start by diagnosing which specific anxieties exist in your organisation - these are almost always different from what management assumes. Then: tackle the biggest anxiety with a specific intervention. Competence anxiety calls for safe practice spaces and shared learning. Job anxiety calls for honest communication about the role of AI in the organisation's future. Generic "AI is good for all of us" messages never work.

Should we mandate AI use to accelerate adoption?

Almost never. Mandating triggers psychological reactance and leads to superficial, disengaged use. More effective is designing the situation so that using AI becomes the path of least resistance: give employees the tools, the safety and the trigger at the right moment. Behaviour that is internalised because it adds value sticks. Mandated behaviour stops the moment the pressure is removed.