How to use scenario-based learning to engage learners (without a big budget)

Most training scenarios fail because they're too predictable. Learn how to design stories that engage through ambiguity, emotion, and realistic workplace pressure.

You're building a course on workplace conflict. You write a scenario where someone clearly crosses a line, add three response options where one is obviously correct, and call it "realistic." But when learners click through, they're not learning how to handle conflict. They're learning how to spot the right answer in your multiple-choice question.

Most training scenarios fail because we design them for clarity when the actual workplace runs on ambiguity.

Stine Snekkenes, a learning designer and pedagogue with years of experience creating everything from Disney-branded Kahoots to corporate training in Norway, walked through how to build scenarios that engage through story rather than test-taking patterns. She covered why stories stick when information slides off, how emotional engagement drives application, and why the gray zones matter more than the black-and-white examples we default to designing.

The session above is from our recent workshop with Stine Snekkenes. Below are my takeaways on what it means for your approach to learning design when click-next courses fail to create lasting impact.

TL;DR

  • Stories work because they trigger emotional memory, not just cognitive processing
  • Obvious scenarios train pattern recognition, not decision-making under pressure
  • Gray zones and ambiguity create the cognitive load that builds actual competence
  • Active involvement through choice transforms passive consumption into applied learning
  • Simple text-based scenarios can be more effective than expensive video productions

1. Click-next courses fail because they ignore how memory works

Every L&D team knows the pattern. You launch a course, track completion rates that look decent, then watch as people make the same mistakes the training was supposed to prevent. The disconnect isn't that they didn't complete the content. It's that nothing stuck.

Stine opened by addressing what most of us see daily: text-heavy, click-next courses where learners hunt for the fastest path to completion rather than actual learning. She pointed to the fundamental question we avoid asking: where is the learning and knowledge retention in all of this? Do they remember anything days or weeks later?

The answer lies in how memory actually works. Stories activate different neural pathways than pure information delivery, which is why humans have used myths, fables, and legends to pass knowledge across generations long before PowerPoint existed. When you hear "once upon a time," your brain shifts into story mode, creating context and emotional anchors that information alone can't match.

But here's what most L&D teams miss: engagement isn't just about attention. It's about creating conditions where people actually apply what they learn.

Stine broke this down into three essential ingredients. First, content needs to be realistic—tied to actual workplace challenges, not sanitized case studies. Second, learners need to be active, making choices and driving their own path rather than passively consuming. Third, and this is where most training falls apart, there needs to be emotional appeal. We focus obsessively on the cognitive side while ignoring that humans make decisions through both thinking and feeling.

That third ingredient matters more than most instructional designers want to admit. You can have perfectly structured content that teaches the right information, but if there's no emotional resonance, it won't translate into changed behavior when someone faces that situation in real time.

The workplace doesn't operate like a training module. It operates like a story with competing priorities, time pressure, and incomplete information. Your training should reflect that reality.

Ways to build scenarios that mirror actual workplace conditions:

  • Design for time pressure and competing priorities by creating scenarios where the "right" answer depends on contextual factors like deadline urgency, stakeholder relationships, or available resources rather than universal best practices.
  • Include incomplete information that requires judgment calls by presenting situations where learners don't have all the data they'd ideally want and must make decisions based on partial information, just like real work.
  • Add emotional stakes through realistic consequences by showing how choices impact team dynamics, customer relationships, or project outcomes rather than just marking answers right or wrong.
  • Layer multiple correct approaches instead of single solutions by building scenarios where different strategies could work depending on priorities, requiring learners to articulate their reasoning rather than just select the "correct" option.


2. Your scenarios are too obvious (and that's why they don't work)

During Q&A, someone asked Stine about common errors in storytelling. Her answer cuts to why so many scenarios feel like busywork: they're too obvious. Learners know where the story is going, spot the pattern, and mentally check out because they've seen this template dozens of times.

This connects to a deeper problem with how we approach instructional design. We're trained to create clarity, to remove ambiguity, to make learning paths straightforward. But that training instinct actively works against scenario effectiveness.

When everything is obvious, you're not building decision-making competence. You're building pattern recognition for your specific course format.

Stine gave the example of sexual harassment training, which she's designed scenarios for previously. This topic works perfectly for story-based learning precisely because it lives in gray zones. It's not about obvious violations where everyone agrees on the response. It's about navigating the uncomfortable middle ground where people aren't sure what to say, to whom, or how to approach the situation.

That ambiguity isn't a design flaw to fix. It's the actual learning opportunity.

Think about the difference between these two approaches: You could create a harassment scenario where someone makes an overtly inappropriate comment and give learners three response options where one is clearly professional and two are obviously wrong. Or you could create a scenario where someone makes a comment that might be innocent or might be testing boundaries, where the power dynamics are complicated, and where different response strategies each carry different risks and benefits.

The first version teaches people how to pass your quiz. The second version teaches people how to navigate the actual complexity they'll face.

Most training scenarios fail this test. We create situations where the answer is obvious because we're afraid of ambiguity, worried that learners will be confused or make the "wrong" choice. But confusion in a safe learning environment is exactly how people develop the judgment they need for messy real-world situations.

This is especially critical for topics that involve human interaction, judgment calls, or situations where context matters more than rules. Technical procedures with clear right/wrong answers are fine for straightforward instruction. But leadership communication, customer service, conflict resolution, ethical decision-making—these live in the gray zones where obvious scenarios actively undermine learning.

Tactics for creating effective ambiguity in your scenarios:

  • Build situations with competing valid priorities by designing scenarios where multiple stakeholders have legitimate needs that conflict, requiring learners to weigh tradeoffs rather than identify the single correct answer.
  • Include realistic workplace politics and relationship dynamics by adding context about team history, past conflicts, or organizational culture that shapes how different approaches might land.
  • Present options that are all partially correct by crafting response choices where each option has merit and drawbacks, forcing learners to think through implications rather than spot the obviously wrong answers.
  • Show consequences that aren't immediately obvious by revealing how choices play out over time, including second and third-order effects that weren't apparent in the initial decision moment.
  • Test scenarios with actual practitioners who work in gray zones by having people who do this work daily review your scenarios and point out where you've oversimplified or created false clarity.

3. Active involvement changes everything (but most courses fake it)

Stine emphasized putting learners in the driving seat, making them choose their path and be involved rather than passively consuming. She referenced Cathy Moore's work on engagement and action, but then went further into what "active" actually means beyond clicking buttons.

Most courses claim to be interactive because they include knowledge checks or branching scenarios. But that's not the same as genuine involvement.

Real involvement means learners are making decisions that matter, where different choices lead to meaningfully different outcomes and where they have to live with the consequences of their selections. It means they're thinking from someone else's perspective, navigating that person's constraints and priorities rather than just selecting from a menu of options you created.

This distinction matters enormously for learning transfer. Research on active learning shows engagement works when it requires genuine cognitive processing, not just interactivity for interactivity's sake. Clicking through branching paths doesn't automatically create deep learning if those branches don't require real thought about implications and tradeoffs.

Stine gave an example from the We Are Learning tool she works with, where learners can actually speak or type responses and AI routes them to appropriate branches based on what they said. That's active in a way that multiple choice questions aren't, because it requires formulating a response rather than evaluating pre-written options.

But you don't need sophisticated technology to create genuine involvement. What you need is scenario design that forces actual decision-making.
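
To make that concrete, here is a minimal sketch of what "type a response and get routed to a branch" can look like with almost no technology: plain keyword matching in Python. The branch names, keyword sets, and the route_response function are invented for illustration; this is not how We Are Learning or any other platform actually does it.

```python
# Minimal sketch: route a learner's typed response to a scenario branch with
# plain keyword overlap. Branch names and keyword sets are hypothetical.

BRANCHES = {
    "escalate_to_hr": {"hr", "escalate", "report", "policy"},
    "direct_conversation": {"talk", "speak", "conversation", "directly"},
    "wait_and_observe": {"wait", "observe", "monitor", "nothing"},
}

def route_response(response: str) -> str:
    """Pick the branch whose keywords overlap most with the learner's own words."""
    words = set(response.lower().split())
    scores = {branch: len(words & keywords) for branch, keywords in BRANCHES.items()}
    best_branch, best_score = max(scores.items(), key=lambda item: item[1])
    # If nothing matches, ask the learner to say more instead of guessing.
    return best_branch if best_score > 0 else "ask_to_elaborate"

print(route_response("I would talk to them directly before involving anyone else"))
# -> direct_conversation
```

The matching logic is beside the point. What matters is that formulating a response in your own words is a different cognitive act than evaluating three pre-written options.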

Consider these two approaches: A compliance training module that presents a policy violation, asks "What should you do?" and gives three options where one is correct. Versus a scenario that puts you in the shoes of a manager who discovered the violation, requires you to decide who to inform first, what to say, and how to balance investigation needs with team dynamics—then shows you how your choices ripple through the organization.

The first is a knowledge check disguised as a scenario. The second requires you to think through a complex situation from a specific perspective with competing priorities.

That's the shift from fake interactivity to genuine involvement. And it's what determines whether people can actually apply what they learned when facing the real version of that situation.

During the session, someone mentioned creating "choose your own adventure" style scenarios where healthcare workers might explore one path while retail employees explore another, letting curiosity drive exploration. That works because it respects that different contexts require different approaches and lets learners discover that through their choices rather than being told.

Approaches that create genuine cognitive engagement:

  • Design from a specific character's perspective with real constraints by giving learners a role with defined responsibilities, relationships, and limitations that shape what options are even available.
  • Require learners to prioritize competing goals under pressure by building scenarios where they can't satisfy everyone and must make conscious choices about what matters most in this specific situation.
  • Show how choices cascade through systems and relationships by revealing second and third-order consequences that emerge from decisions, not just immediate outcomes.
  • Let learners explore different paths without penalty by enabling replay and comparison of different choices so they can test assumptions and see how alternative approaches might have played out.
  • Build reflection points that ask learners to articulate reasoning by including moments where they explain why they chose a particular approach, making their decision-making process explicit.

4. Your production budget isn't the problem—oversimplified scenarios are

One of Stine's most practical points came during Q&A when she addressed the assumption that stories need to be complex, lengthy, or expensive to produce. She pushed back hard on this: a story doesn't need to be an animated video or anything elaborate. It can just be text from an employee describing their experience or how they felt in a situation.

This matters because the perceived complexity barrier stops most L&D teams from ever trying scenario-based learning.

You convince yourself that stories require video production, professional voice actors, branching scenario tools with licensing fees, or weeks of development time. So you default back to information delivery with knowledge checks because that's faster and cheaper to build.

But you're comparing the wrong things. The question isn't whether stories take more time than simple info delivery. It's whether they create better learning outcomes that justify the investment. And the answer is yes, by a significant margin, even when the stories are simple.

Stine's examples ranged from sophisticated 3D-animated characters on the We Are Learning platform to basic text-based scenarios. The sophistication of the production didn't determine effectiveness. What mattered was whether the scenario captured realistic complexity and required genuine decision-making.

Some of the most effective scenarios I've seen were built in Google Forms with simple "if this, then that" branching logic. No fancy authoring tools, no animation, just well-designed situations with meaningful choices and consequences that reflected real workplace dynamics.
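
To show how little structure that takes, here is a minimal sketch of a two-step text scenario as plain data plus a small loop, built around an invented conflict situation. None of this comes from the session; it only illustrates that "if this, then that" branching needs no special tooling.

```python
# A text-based branching scenario as plain data. The situation, choices, and
# consequences are invented placeholders.

SCENARIO = {
    "start": {
        "text": ("A teammate misses a deadline that blocks your deliverable. "
                 "Your manager wants a status update in ten minutes."),
        "choices": {
            "a": ("Flag the delay and name the teammate", "blame"),
            "b": ("Report the delay, own the recovery plan, talk to the teammate after", "own"),
        },
    },
    "blame": {
        "text": ("Your manager moves on, but the teammate hears about it and "
                 "stops sharing early warnings with you."),
        "choices": {},
    },
    "own": {
        "text": ("The update lands fine. The teammate later explains why the "
                 "deadline slipped, and you catch the next risk earlier."),
        "choices": {},
    },
}

def play(node_key: str = "start") -> None:
    node = SCENARIO[node_key]
    print(node["text"])
    if not node["choices"]:
        return  # end of this path
    for key, (label, _next_key) in node["choices"].items():
        print(f"  [{key}] {label}")
    picked = input("Your choice: ").strip().lower()
    while picked not in node["choices"]:
        picked = input("Pick one of the listed letters: ").strip().lower()
    _label, next_key = node["choices"][picked]
    play(next_key)

if __name__ == "__main__":
    play()
```

The same structure translates directly to Google Forms sections, Typeform logic jumps, or whatever branching your LMS already supports.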

The production value serves the learning goal, not the other way around. If a simple text-based scenario forces better thinking than a polished video that makes everything obvious, the text version is the superior learning experience.

This connects back to what makes stories work in the first place. People don't remember training because of production quality. They remember it because they felt something, made a difficult decision, or saw themselves in the situation. You can achieve that with minimal resources if you design for psychological engagement rather than visual polish.

The test isn't "would this win an eLearning award?" The test is "will people remember this next month when facing the actual situation?"

Most L&D teams would get better results by spending less time on production polish and more time on scenario authenticity. Interview people who actually deal with the situations you're training for. Capture their specific language, the details they notice, the tradeoffs they navigate. Then build the simplest possible scenario structure that preserves that authenticity.

Start small. Pick one module where click-next isn't working and convert it to a basic branching scenario. Test whether people can apply it better. Iterate based on what you learn. Then expand to other content areas where decision-making matters more than information recall.

Methods to implement scenario-based learning without massive resources:

  • Start with text-based scenarios using existing tools by building simple branching structures in Google Forms, Typeform, or your current LMS before investing in specialized scenario authoring platforms.
  • Capture authentic language from practitioner interviews by recording conversations with people who handle these situations daily and pulling their exact phrases, concerns, and decision factors into your scenarios.
  • Use stock photos or simple illustrations instead of custom animation by finding images that establish context and character perspective without requiring video production or 3D rendering.
  • Build modular scenario components you can recombine by creating reusable character profiles, common situation setups, and consequence templates that work across multiple scenarios in your content library.
  • Test effectiveness with small pilot groups before scaling by running basic scenarios with 10-20 learners to see if they demonstrate better application than previous versions, then refine based on actual performance data.

5. Emotional engagement is the missing ingredient (and it's not soft)

Stine made a point that most instructional designers are trained to ignore: beyond realistic content and active involvement, you need emotional appeal. She specifically called out that we tend to focus too much on the cognitive side while missing the human element that actually drives behavior change.

This isn't about making learning "feel good" or adding emotional manipulation to your scenarios. It's about recognizing that people make decisions through both thinking and feeling, and training that only addresses the cognitive side will fail when emotions enter the picture in real situations.

Think about the workplace challenges that generate the most training requests: difficult conversations, handling upset customers, addressing performance issues, navigating conflicts, making ethical decisions under pressure. Every single one of these is emotionally loaded. People struggle with them not because they don't intellectually understand what to do, but because the emotional weight of the situation interferes with execution.

Your training needs to account for that emotional dimension, not pretend it doesn't exist.

When Stine talked about putting learners in someone else's shoes, she was describing perspective-taking that includes emotional states, not just cognitive information. What does it feel like to be the manager having this conversation? What does it feel like to be the employee receiving feedback? What does anxiety, defensiveness, or anger do to how both people hear and respond to each other?

These aren't peripheral details to mention briefly then move on from. They're central to whether someone can actually use what you taught them when facing the real situation.

Most training treats emotions as obstacles to clear thinking rather than as integral parts of the decision-making context. We design scenarios that present information cleanly, give people time to think, and assume they'll apply rational decision frameworks. But real situations don't work that way. Real situations involve time pressure, incomplete information, emotional reactions from multiple people, and the need to respond before you've fully processed everything.

If your scenarios don't include that emotional dimension, you're training people for a sanitized version of work that doesn't exist.

This connects directly to why obvious scenarios fail. When everything is clean and clear, there's no emotional weight. No anxiety about making the wrong choice. No concern about damaging relationships. No pressure to balance competing needs. You've removed the very elements that make the real situation difficult.

The scenarios that stick are the ones where learners feel something. Where they're genuinely unsure what to do. Where they worry about the consequences of their choices. Where they can imagine being in that situation and feeling the weight of the decision.

That emotional resonance is what creates the memory anchor that makes learning transferable. Without it, you're just teaching information that will evaporate under the pressure of actual application.

Strategies to incorporate emotional dimensions into scenario design:

  • Include character emotional states explicitly in scenario setup by describing not just what's happening but how different people in the situation are feeling, what they're worried about, and what matters to them personally.
  • Show emotional consequences alongside task outcomes by revealing how choices affect relationships, trust, team morale, and individual confidence rather than just project success or failure.
  • Build time pressure and discomfort into decision points by creating scenarios where learners must respond without perfect information or where delaying the decision has its own consequences.
  • Use perspective shifts to reveal multiple emotional viewpoints by showing the same situation from different characters' points of view so learners understand how their actions land emotionally for others.
  • Create reflection moments that ask about emotional response by including questions like "How do you think this felt for the other person?" or "What emotions might be driving their reaction?" to make emotional processing explicit.

6. Testing reveals whether your scenarios actually work

Stine mentioned something in passing that deserves deeper attention: the value of disagreement. When someone asked what to do if the audience doesn't accept the story you created, her response was that disagreement can be valuable because it makes people involved and gets them reflecting.

This flips the typical instructional design mindset. We tend to view disagreement as a sign that our content missed the mark. But disagreement often signals that you've hit on something real and complex rather than simplified and obvious.

If everyone immediately agrees with your scenario and the response you position as effective, you might have created something too clean. The workplace situations that actually challenge people are the ones where reasonable people disagree about the best approach. If your scenario doesn't spark that kind of debate, it's probably not capturing real complexity.

Someone in the session made a great connection to quiz design: people remember the questions they got wrong more than the ones they got right. That same principle applies to scenarios. When learners disagree with your portrayed outcome or feel that the situation doesn't match their use case, they're engaging more deeply than when everything aligns perfectly.

The test of effective scenario design isn't whether everyone agrees. It's whether people can articulate why they'd approach it differently and whether that reflection helps them think more clearly about their own decision-making patterns.

This suggests a different approach to scenario validation. Instead of polishing until no one objects, test with actual practitioners and pay attention to where they push back. Those friction points often reveal the gray zones and contextual factors you need to build into the scenario more explicitly.

Stine also mentioned the idea of letting learners rewrite the story themselves—asking "How would you do this differently?" That's a powerful learning activity precisely because it requires them to think through the situation from a design perspective, considering what factors matter and what approaches might work better in their specific context.

The goal isn't to create scenarios everyone accepts. The goal is to create scenarios that make people think deeply about how they'd actually handle the situation and why.

This connects to why simple scenarios can be more effective than polished productions. When something looks professionally produced, people are less likely to critique or question it. When it's clearly a draft or prototype, they feel more permission to engage with it critically and suggest improvements. That critical engagement is often more valuable for learning than passive acceptance.

The most effective scenario-based learning often comes from iterative development where you test early, gather pushback and disagreement, then refine based on what those debates reveal about the actual complexity of the situation.

Methods to validate scenario effectiveness through practitioner feedback:

  • Test scenarios with people who do this work daily before finalizing by running draft versions past practitioners and specifically asking where the scenario misses important nuances or oversimplifies complex dynamics.
  • Pay attention to where people disagree with outcomes you've written by noting which scenario branches generate debate or pushback, as these often indicate important contextual factors you haven't made explicit enough.
  • Ask practitioners to describe how they'd handle the situation differently by including open-ended questions that let people articulate their own approaches rather than just selecting from your predetermined options.
  • Create space for learners to share their own similar experiences by adding reflection activities where people describe situations they've faced that connect to the scenario, building a library of real examples over time.
  • Track which scenario paths get taken most frequently by monitoring data on learner choices to identify patterns that reveal assumptions or gaps in your scenario design (a small counting sketch follows this list).
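
On that last tactic, path tracking doesn't require an analytics platform. Here is a minimal counting sketch, assuming a hypothetical export of (learner, ordered choice labels) pairs from whatever tool you use; the labels are invented.

```python
# Count how often each choice sequence appears in a hypothetical export of
# learner decisions. The log format and choice labels are invented examples.
from collections import Counter

choice_log = [
    ("learner_01", ["flag_delay", "name_teammate"]),
    ("learner_02", ["report_delay", "own_recovery"]),
    ("learner_03", ["report_delay", "own_recovery"]),
    ("learner_04", ["flag_delay", "own_recovery"]),
]

path_counts = Counter(" -> ".join(choices) for _learner, choices in choice_log)

for path, count in path_counts.most_common():
    print(f"{count:>3}  {path}")

# A path almost nobody takes may be an option that reads as obviously wrong;
# a path everybody takes may mean the scenario is still too clean.
```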

Closing

Stop designing scenarios that make everything obvious. The workplace doesn't operate that way, and your training shouldn't either. Start with one module where click-next isn't working, build a basic scenario that captures real ambiguity and emotional stakes, then test whether people can actually apply it better than what you had before.

The stories don't need to be complex. They need to be real.

💡 Please note: I utilized AI to help me brainstorm, research, structure, write, and enhance the content of this resource. While I aim to highlight key points and offer valuable takeaways, it may not capture all aspects or perspectives of the original material. I encourage you to engage with the resource directly to form your own understanding and draw your own conclusions.
About the author
Brandon Cestrone
