The biggest bottleneck in behavioural design has always been time. Not the time it takes to design an intervention, but the time it takes to understand what is actually going on inside someone's head. Six behavioural interviews. Transcription. Analysis. Spotting the patterns. Mapping the pains, gains, comforts and anxieties. In a Behavioural Design Sprint at SUE, that phase alone used to take a week. Sometimes more.
AI has changed that. Not by replacing the human insight work, but by accelerating it dramatically. What used to take a week now takes a day. And that is not a marginal improvement. That is a structural shift in how deep you can go and how fast you can act on what you find.
This article is about that shift. Not about the general promise of AI, but about five specific things AI does for behavioural insight work, and why the combination of artificial intelligence and behavioural science is more powerful than either one alone.
The insight bottleneck
The SUE Influence Framework starts with a simple premise: before you can design behaviour, you need to understand what is driving it. That means finding the Job-to-be-Done, mapping the pains in current behaviour, surfacing the anxieties that block new behaviour, identifying the comforts that keep people stuck, and understanding the gains that could pull them forward.
The way we have always done this is through deep qualitative interviews. Six interviews with your target group, mapped carefully and analysed thoroughly. The quality of the analysis determines the quality of everything that follows. A shallow analysis leads to shallow interventions. A shallow intervention changes nothing.
The problem was never the method. The problem was the sheer amount of time it consumed. By the time you had transcribed six interviews, coded the themes, spotted the patterns and translated everything into a clear forces map, half the sprint week was gone. And that left less time for the most important part: designing the actual interventions.
AI solves exactly this. Not by making the analysis automatic, but by making the time-consuming parts fast so that you can spend your energy where it actually matters.
Application 1: interview transcription and first-pass analysis
You run six interviews. Each one an hour. You have six hours of conversation full of signals about pains, anxieties, comforts and jobs-to-be-done, buried under small talk, tangents and people saying what they think you want to hear.
You used to transcribe these by hand. Or outsource it. Either way: slow and expensive. Tools like Otter.ai, Whisper or Grain now transcribe audio in minutes with accuracy that is genuinely good enough for analysis. That alone saves hours.
But the bigger shift is what happens next. You can paste those transcripts into a model like Claude or GPT-4 and ask it to do a first-pass behavioural analysis. Not "summarise this interview." Something far more specific: "Map the pains the speaker expresses in their current behaviour. What are the frustrations they mention? What do they avoid? What do they say they wish was different?"
The model will not get everything right. But it surfaces the signals fast, and it gives you a starting point for your own deeper reading. What used to take an afternoon now takes twenty minutes.
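That first-pass prompt can be scripted, so that every transcript gets analysed against the same framework rather than an ad-hoc question. A minimal sketch in Python, assuming transcripts are plain text and leaving the actual model call as a commented stub (the client and function names there are placeholders, not a specific vendor's API):

```python
# Build an identical, framework-driven first-pass prompt for every transcript,
# so all six interviews are analysed against the same four forces.

FORCES = ["pains", "gains", "comforts", "anxieties"]

def build_first_pass_prompt(transcript: str) -> str:
    """Assemble a specific behavioural-analysis prompt (not 'summarise this')."""
    instructions = (
        "You are assisting with a behavioural analysis of one interview.\n"
        "Map the pains the speaker expresses in their current behaviour:\n"
        "- What frustrations do they mention?\n"
        "- What do they avoid?\n"
        "- What do they say they wish was different?\n"
        "Then do the same for the gains, comforts and anxieties.\n"
        "Quote the speaker verbatim as evidence for every point you make.\n"
    )
    return f"{instructions}\nTRANSCRIPT:\n{transcript}"

# The prompt would then go to whichever model you use (hypothetical client):
# reply = client.chat(prompt=build_first_pass_prompt(open("interview_1.txt").read()))

prompt = build_first_pass_prompt("I keep putting it off because the form is endless.")
print(all(force in prompt for force in FORCES))  # True: all four forces are covered
```

The point of scripting it is consistency: when all six transcripts get the same prompt, differences in the output reflect differences between interviewees, not differences in how you happened to phrase the question that day.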
Application 2: pattern recognition across multiple interviews
Six interviews produce a lot of material. The skill of a behavioural researcher is finding the pattern that runs through all six, not just the loudest theme in the most memorable interview.
This is where recency bias and the availability heuristic do a lot of damage to insight work. You finish your last interview and that person's story is vivid in your mind. Their anxiety becomes the anxiety. Their pain becomes the pain. The quieter signals from earlier interviews fade.
AI has no recency bias. Feed it all six transcripts and ask it to find the themes that appear across the majority of conversations, and it will give you a genuinely cross-interview pattern. It will notice that three people mentioned the same friction in passing without it being the main topic of the conversation. A human analyst would likely miss that.
We use this at SUE to build a richer forces map faster. The AI gives us a first draft of the pattern. We then interrogate that draft, argue with it, add our own reading of the material. The result is better than what either one of us would produce alone.
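The logic of that cross-interview step can be shown with a toy sketch. Assume each transcript has already been coded into a set of theme labels (the labels below are invented for illustration); the question is then simply which themes recur across a majority of conversations, regardless of how loud they were in any single one:

```python
from collections import Counter

def majority_themes(coded_interviews: list[set[str]]) -> list[str]:
    """Return themes appearing in more than half of the interviews,
    even if they were never the main topic of any single conversation."""
    counts = Counter(theme for themes in coded_interviews for theme in themes)
    threshold = len(coded_interviews) / 2
    return sorted(t for t, n in counts.items() if n > threshold)

# Six coded interviews. "form friction" is never the headline topic,
# but four of six people mention it in passing.
interviews = [
    {"form friction", "status anxiety"},
    {"time pressure"},
    {"form friction", "time pressure"},
    {"form friction"},
    {"status anxiety", "form friction"},
    {"peer pressure"},
]
print(majority_themes(interviews))  # ['form friction']
```

The toy makes the contrast with human recall concrete: the vivid last interview ("peer pressure") contributes exactly one count, the same as everyone else's passing remark. A language model fed all six transcripts applies the same flat weighting, which is precisely why its first-draft pattern is worth arguing with.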
Application 3: generating and stress-testing hypotheses
One of the hardest moments in a Behavioural Design Sprint is the transition from insight to hypothesis. You have a forces map. You understand the pains and anxieties of your target group. Now you need to hypothesise: what intervention could actually move the needle?
Teams get stuck here. They keep producing solutions that look like their existing product features in a slightly different wrapper. Or they jump too quickly to a single hypothesis and fall in love with it before they have stress-tested it.
AI is very good at generating a wide range of alternatives fast. Give it your forces map and ask it to generate ten possible interventions that specifically address the anxiety you have identified. It will produce options you would not have thought of. Some of them are useless. Some of them are the starting point for something genuinely good.
The even better use is stress-testing. You have a hypothesis you like. Ask the model to argue against it. What would need to be true for this to fail? What is the strongest objection someone from the target group would raise? This is something teams rarely do well when they are emotionally invested in their own idea. The model is not invested. It will argue against you honestly.
Application 4: behavioural literature research
Every Sprint involves some degree of desk research. What do we already know about this behaviour? Is there existing research on this pain? Are there proven interventions for this type of anxiety?
The behavioural science literature is vast. Kahneman, Thaler, Cialdini, Fogg, Ariely, Sunstein, Dolan. Journals like Behavioural Public Policy, Journal of Behavioral Decision Making, decades of nudging experiments from the UK's Behavioural Insights Team. Staying across all of it is impossible.
AI can synthesise this literature fast. Not perfectly, and you should verify anything that matters, but as a starting point for research it is genuinely useful. You ask: "What does the literature say about the relationship between social norms and recycling behaviour?" And you get a structured summary in two minutes instead of an hour of reading.
More useful still: you can ask the model to suggest relevant behavioural mechanisms for the specific pattern you have found in your interviews. It connects your empirical finding to the theoretical underpinning. That connection used to require a senior researcher. Now it is a prompt.
Application 5: rapid intervention prototyping
The final application is the one closest to the actual design work. Once you know what force you want to address, you need to design the intervention. What exactly does the nudge look like? What does the message say? What is the choice architecture?
AI is a fast partner for this kind of rapid prototyping. You describe the anxiety you want to remove, the target group, the channel, the context. You ask for ten different versions of the intervention at varying levels of directness. You get material to react to immediately, rather than staring at a blank page.
The key word is react to. The AI's output is raw material, not finished design. The skill of the behavioural designer is in reading what the model produces and knowing which of the ten options has the right psychological logic, which ones miss the point, which one has a seed of something genuinely interesting. That judgment is human. The generation is AI.
The AI generates. The behavioural designer judges. Neither one alone is enough.
What AI cannot do
There is one thing AI genuinely cannot replace in behavioural insight work, and it is worth being specific about what that is.
When you sit across from someone in a behavioural interview and they pause before answering a question about their behaviour, that pause is data. The slight flush of embarrassment when they describe a habit they know is irrational. The moment they start to justify themselves without being asked. The story that starts one way and ends somewhere completely different. These are the signals that reveal the real unconscious driver, the one the person themselves may not be aware of.
No AI reads that. No AI can. That is why the six interviews are not optional. That is why you cannot replace them with a survey or a chatbot. Behavioural design starts with understanding the human being behind the behaviour. That requires a human on the other side of the conversation.
What AI does is everything around that conversation: the preparation, the transcription, the analysis, the synthesis, the hypothesis generation, the literature lookup, the intervention design. All of that is now faster. The human work, the actual listening, has become more valuable as a result.
The combined advantage
I have been running Behavioural Design Sprints for over fifteen years. The method has not changed. You still start with the Job-to-be-Done. You still map the four forces. You still interview real people. You still design interventions that address the specific force that is holding behaviour in place.
What has changed is the speed at which you move through each of those phases. And speed, in behavioural design, is not just an operational advantage. It changes the quality of what you produce. When you spend less time transcribing and coding, you spend more time thinking. When you have a first-draft forces map in an hour instead of a day, you have more time to challenge it. When you can generate and stress-test ten hypotheses before lunch, you go into the afternoon with better raw material.
The combination of AI and behavioural science is not about AI doing the behavioural thinking for you. It is about AI doing the time-consuming work that supports the behavioural thinking, so that you can do more of the thinking that actually matters.
The insight work gets faster. The insight itself gets deeper.
Where to start
If you want to apply this, start with the interview analysis. Run your next round of qualitative research using a transcription tool. After each interview, paste the transcript into a language model and ask it to map the pains and anxieties the speaker expresses. Compare its analysis with your own reading. Notice where it catches things you missed, and where your reading catches things it missed.
That comparison is a lesson in itself. It teaches you what AI is good at seeing, and what you are good at seeing that it cannot. Once you understand that distinction, you can combine the two intelligently.
The applications of AI in behavioural work go far beyond what most people have discovered so far. The professionals who understand both the behavioural science and the AI tools will be the ones who produce the best insight work in the years ahead. Not because they have more data. Because they have more clarity about what the data means.
PS
At SUE we have made the integration of AI and behavioural science central to how we teach the Fundamentals Course. Not as an add-on, but as part of the core method. Because the method becomes significantly more powerful when you know how to use both. If you want to experience that combination in practice, the online course is the fastest way to get there.