I watched Amanda Wing present her session and something clicked when she mentioned building a "cult" around retrospectives at The Iconic. Not because cults are good (they're not), but because her team was so obsessed with improving their programs that it became contagious across the organization.
Most L&D teams I know are drowning in feedback requests, stakeholder demands, and the constant pressure to prove ROI. You launch a leadership program, send out a survey, get some generic feedback, and move on to the next fire drill. Sound familiar?
Amanda's approach flips this script entirely. Instead of broad "what should we stop, start, continue" retrospectives that tell you nothing actionable, she created a focused framework that dissects your programs piece by piece.
Above is the recording from the L&D Shakers APAC session. Below are my takeaways. *This video is recommended for its educational value and is not an original work of EDU Fellowship. All rights belong to the original creators.*
The missing ingredient in your L&D strategy
Amanda started with something most teams skip: defining what your L&D "bakery" stands for. She's not talking about corporate values plastered on walls. She means the operational principles that determine how you design every single program.
Here's what this looks like in practice. When Amanda worked at Canva, one of their core values was making complex things simple. This wasn't just a nice phrase - it shaped every design decision. Instead of creating six e-learning modules for connection training, they focused on community-building between sessions.
✅ Your audit: List your team's current programs. For each one, identify which underlying principle drove the design decisions. If you can't articulate this clearly, you've found your first problem.
The reality is that without clear principles, every program becomes a Frankenstein of random features. You add gamification because someone read an article. You include videos because stakeholders like them. You create assessments because that's what programs have.
Your move: Run a 30-minute team session using Amanda's template. Define your "why" (values) and "how" (principles). Don't make this academic - connect each principle to a specific design choice you'd make differently.
Break the "stop, start, continue" trap with component-focused retrospectives
Traditional retrospectives are too broad to be useful for L&D programs. Amanda's donut model changes this by focusing on five specific components: content (the dough), engagement (toppings), impact (feeling), innovation (flex), and delivery (packaging).
Instead of asking "what should we improve about our leadership program," you ask targeted questions about each component. For the content component: "Which areas could be simplified for better understanding?" For engagement: "How could the learning environment better foster participation?"
This approach revealed gaps Amanda's team had missed for two and a half years in their onboarding program at Canva. They were so busy moving new hires through the same process that they stopped gathering component-specific feedback. Critical gaps in new processes and organizational changes went unnoticed.
Your reality check: Most L&D teams do retrospectives at the wrong time - after programs end when memories are fuzzy and stakeholders have moved on. Amanda recommends pulse checks during programs and component-focused sessions that can happen in 15-minute team meetings.
→ Pick one current program
→ Choose two components from the donut model
→ Write three specific questions for each component
→ Schedule a 30-minute focused retrospective next week
The engagement component isn't about fun activities - it's about workflow integration
When Amanda talks about "toppings" (engagement), she's not referring to icebreakers or gamification. She means the elements that connect learning to actual work performance. The key insight: motivation in organizational learning isn't about badges or rewards - it's about career impact.
Her team discovered this when analyzing why completion rates varied dramatically across different programs. The high-performing programs weren't more entertaining; they were better integrated into participants' daily workflows and career goals.
❌ Stop thinking: "How do we make this more fun?"
✅ Start asking: "How does completing this help them do their best work?"
This connects to the broader challenge of remote and hybrid work environments. Your retrospectives need to examine whether learning fits into people's actual work patterns. If 90% of feedback mentions technology barriers or timing conflicts, you're not designing for your learners' reality.
Your move: For your next program retrospective, interview three participants about their typical workday. Map where learning fits (or doesn't fit) into their flow. Use this data to redesign engagement touchpoints.
Connect learning impact to business outcomes through people leaders
The "feeling" component of Amanda's model focuses on measurable impact, not learner satisfaction. She emphasized partnering with people leaders because they see behavioral changes that L&D teams miss from Learning Management System data.
This approach transforms how you measure success. Instead of relying on Kirkpatrick Level 1 (reaction) surveys, you build systematic feedback loops with managers who observe skill application daily. Amanda's teams at Canva would regularly connect with people leaders to gather evidence about performance changes.
🫰 The overlooked opportunity: Most L&D teams avoid people leaders because managers are busy. But managers are your most reliable data source for behavioral change - they just need the right framework to provide useful feedback.
Set up quarterly 15-minute conversations with key managers using specific questions:
- Which skills from recent training are you observing in practice?
- What performance gaps remain after training completion?
- How are team dynamics changing based on new behaviors?
Your framework:
- Before training: Establish baseline performance metrics with managers
- During training: Monthly pulse checks on skill application
- After training: Quarterly impact reviews tied to business outcomes
This systematic approach helped Amanda's teams track onboarding effectiveness over two and a half years, identifying patterns that survey data never revealed.
Innovation doesn't mean new technology - it means strategic experimentation
Amanda's innovation component addresses the pressure many L&D teams feel to adopt the latest tech trends. Her insight: innovation should align with your learners' digital literacy and organizational context, not industry hype.
She gave the example of healthcare environments with low digital literacy. Virtual reality training might sound innovative, but practical guides and explainer videos create more impact. Innovation means finding better ways to achieve learning outcomes, not implementing flashy solutions.
Your strategic filter:
- Does this innovation solve a real learner problem we've identified?
- Do our participants have the digital skills to use this effectively?
- Will this create measurable improvement in our core metrics?
The best innovations often come from cross-industry inspiration rather than EdTech vendors. Amanda draws from design thinking (Double Diamond method) and marketing (user personas) to create more targeted learning experiences.
✅ Your experiment: Pick one underperforming program component. Research how other industries solve similar problems. Test one small change for 30 days and measure the specific impact.
Design your measurement strategy during learning design, not after launch
This insight emerged when Amanda discussed embedding evaluation processes upfront rather than scrambling to prove impact after programs launch. Most teams design learning experiences first, then figure out how to measure success.
Amanda flips this sequence. During the strategy phase with stakeholders, she maps success metrics to organizational goals and performance outcomes. This prevents the common trap of measuring easy metrics (completion rates) instead of meaningful ones (skill application).
Her approach maps onto the Kirkpatrick model, but moves beyond the Level 1 (reaction) surveys she cautions against:
- Level 2 (Learning): knowledge and awareness through targeted questions
- Level 3 (Behavior): skill application and behavioral change through manager observation, peer feedback, and performance metrics
- Level 4 (Results): business impact through organizational outcome tracking
Your implementation plan:
- Strategy phase: Define success metrics before design begins
- Design phase: Build measurement touchpoints into the learning experience
- Launch phase: Activate manager partnerships and feedback systems
- Retrospective phase: Use component-focused analysis to improve
The teams Amanda works with now spend 30% of their design time on measurement strategy. This upfront investment eliminates the scramble to prove impact and creates data-driven improvement cycles.
Turn your L&D team into strategic partners through contextual business intelligence
The overarching theme in Amanda's approach is shifting from order-takers to strategic consultants. When stakeholders request specific training programs, she asks detective-level questions to understand the real business challenge.
This investigative approach revealed that nine times out of ten, stakeholders don't need the program they're requesting - they need something different entirely. By gathering context about organizational goals, performance gaps, and strategic direction, L&D teams position themselves as problem-solvers rather than content creators.
Your detective framework:
- What business outcome is driving this request?
- How does this connect to organizational strategic goals?
- What performance evidence supports this training need?
- Who else is impacted by this challenge?
- What changes if we don't address this?
Amanda's teams became so effective at this contextual intelligence that they evolved from reactive service providers to proactive strategic partners. Leaders started involving them in business planning conversations rather than just execution requests.
Your move: For the next three training requests, spend twice as long on discovery as you normally would. Document the business context and present alternative solutions based on your findings. Track how this changes your stakeholder relationships over six months.
The donut model isn't just another framework to add to your toolkit. It's a systematic approach to building continuous improvement into your L&D team's DNA. Start with one component, run focused retrospectives, and use the insights to create programs that measurably impact your organization's success.
Most importantly, stop measuring everything and start measuring what matters. Your stakeholders will notice the difference.