Your facilitation sessions feel great but change nothing.

New data from 714 facilitators reveals a field stuck measuring applause instead of outcomes. Here's what L&D professionals need to change about proving impact.

Your team walks out of the workshop buzzing. High-fives in the hallway. Someone says it was the best session they've attended all year. The facilitator nails a 4.8 out of 5 satisfaction score. Everyone feels great.

Three months later, nothing has changed. The same silos exist. The same decisions stall in the same meetings. The same people dominate the same conversations.

You've seen this before. Probably more than once.

SessionLab's 2026 State of Facilitation report surveyed 714 facilitators across 60 countries, and the data confirms what many L&D professionals already suspect: the field is measuring how sessions feel, not what they change. Only 1 in 3 facilitators has agreed on measurable indicators with their clients before the work begins. And 43.9% say their main reason for evaluating is to improve their own delivery. Not to prove impact. Not to justify the budget line. To get better at facilitation.

That's admirable. It's also not enough.

State of Facilitation 2026 – Report and Expert Insights
The 2026 edition of SessionLab’s State of Facilitation report focuses on Impact.

The 2026 State of Facilitation report above is from SessionLab. Below are my takeaways on what this data means for L&D professionals who commission, design, or deliver facilitated sessions and need to start connecting those sessions to business results.

TL;DR

  • Most facilitators measure satisfaction and engagement, not behavior change or business outcomes.
  • Impact assessment fails because goals aren't defined at the contracting stage.
  • The biggest barrier to the impact of facilitation is what happens after the session ends.
  • Facilitators and clients describe value in completely different languages.
  • AI has become a regular prep tool for facilitators, but live facilitation use remains low.
  • Experienced facilitators evaluate differently than beginners, and the gap is wide enough to matter.

1. Satisfaction scores are the comfort food of L&D evaluation, and they're hiding your real problem

The report found that 71.8% of facilitators measure participant satisfaction and 69.8% track engagement levels during the session. These are the two most common success indicators in the field.

Compare that to the 33.1% who have agreed on measurable performance indicators with clients before the session starts. Or the 19.1% who evaluate with the explicit goal of proving impact to stakeholders.

The gap between those numbers is where L&D credibility goes to die.

Melanie Martinelli, CEO at the Institute for Transfer Effectiveness, sees the disconnect clearly. Satisfaction has no correlation with application, she argues, citing research by Alliger and Janak dating back to 1989. You can love a session and never apply a single thing you learned. Chris Taylor at Actionable.co agrees, pushing for a clearer definition: most existing evaluation data measures how well a facilitator engages participants, not the change the facilitator creates for the organization.

This isn't news to most L&D professionals. You've probably quoted Kirkpatrick's levels in a dozen different presentations. But knowing the four levels and acting on them are different things. The report makes that painfully clear.

The problem isn't ignorance. It's inertia. Satisfaction surveys are fast to deploy, easy to interpret, and almost always produce positive results. They make everyone feel good. The facilitator feels validated. The sponsor sees smiling faces. The participants leave on a high. Nobody has to have the uncomfortable conversation about whether anything will stick.

This lands differently for L&D teams because you're often the ones commissioning facilitation work, which means you're setting the evaluation criteria. If you default to satisfaction scores because they're easy, you're building a reporting structure that will never justify your budget when someone eventually asks what all those workshops accomplished.

How to break the satisfaction dependency:

  • Require outcome definitions in every facilitation brief by adding a mandatory section to your intake process that asks: "What will be different 90 days after this session? How will we know?"
  • Replace at least one satisfaction question with a readiness question like "How confident are you that you can apply what you learned this week?" Confidence and intent to apply are better predictors of behavior change than enjoyment.
  • Add a 30-day check-in to every facilitation engagement, even if it's a five-question pulse survey. The data won't be perfect, but it shifts the conversation from "Did they like it?" to "Did anything change?"
  • Track repeat bookings alongside satisfaction scores because if a client keeps coming back, that's a market signal worth more than any 4.8 rating.

2. Impact measurement fails because nobody plugs in the destination before the trip starts

Fewer than 1 in 3 facilitators have agreed on measurable indicators with their clients. The report's expert contributors keep circling back to the same conclusion: most evaluation challenges stem from evaluation starting too late.

Chris Taylor uses an Uber driver analogy: the driver can't take you anywhere if you don't plug in the destination. Melanie Martinelli goes further, arguing that meaningful evaluation is only possible when you define where you're going before you leave. Without that clarity, you end up trying to measure impact after the fact, which is both unreliable and unnecessarily difficult.

The report calls this the "Return on Expectations" concept. Before you design a session, before you pick activities or build slides, you need answers to three questions: How did this initiative come about? What business metrics should it influence? And if the program achieves its goals, how will participants behave differently afterward?

Most L&D teams skip this step. Not because they don't know better, but because contracting conversations tend to focus on logistics. How many people? What date works? Can we do it in half a day instead of a full day? The conversation about outcomes gets crowded out by the conversation about calendar slots.

That's a design problem, not an information problem. And design problems have design solutions.

To go a step further than the report does: the reason L&D professionals struggle with upfront outcome definition is that many facilitation engagements start as event requests, not change requests. A leader says, "We need a team offsite," or "Can you run a workshop on collaboration?" Those requests come pre-framed as activities. Reframing them as outcomes requires pushing back on the initial ask, which feels risky when the person making the ask is your stakeholder.

But consider what happens when you don't push back. You deliver a well-designed session with no agreed-upon success criteria. Six months later, someone in finance asks about the ROI. You have satisfaction scores and a few anecdotes. That's not a conversation you want to be in.

How to build outcome definition into your contracting process:

  • Add three "destination" questions to your facilitation intake form that mirror the ROE framework: What triggered this request? What business outcomes should it influence? What does success look like in participant behavior 60 days out?
  • Use a one-page impact agreement that both the facilitator and the sponsor sign before design begins, listing the 2-3 measurable outcomes the session is designed to support.
  • Refuse to finalize session design until outcomes are defined, and frame that stance as quality assurance, not bureaucracy. "I want to make sure we're designing for the right outcomes" is a harder request to deny than "fill out this form."
  • Create a shared language for outcome types across your L&D team so everyone distinguishes between reaction outcomes (how it felt), learning outcomes (what they know), and performance outcomes (what they do differently).

3. The real session happens in the 90 days after everyone leaves the room

The report's most striking finding might be this: 43.5% of respondents said the main barrier to the impact of facilitation is a lack of follow-up conversations. Almost half of facilitators are pointing at the same problem, and it's not about design or delivery. It's about what happens next.

47.6% of facilitators run their evaluations immediately after a session. Another 40.9% check in within one week. After that, the numbers drop fast. By the time you'd expect behavior change to show up (30, 60, 90 days out), almost nobody is looking.

Romy Alexandra, a Chief Learning Officer who specializes in behavior change, sees co-creation as the answer. Just as we co-create session designs for better results, we need to co-create the impact, she argues. And not just with participants. With the organization's leaders and decision-makers who control the environment participants return to after the workshop ends.

The environmental question isn't new. Kurt Lewin figured this out in the 1930s: behavior is a function of the person and their environment. You can give someone new skills, new mindsets, and new motivation. But if they walk back into the same meetings, the same incentive structures, the same leadership habits, the environment wins. Every time.

This creates a specific challenge for L&D teams. You're usually responsible for the learning experience but have limited influence over the work environment. The managers who need to reinforce new behaviors weren't in the room. The systems that could support practice haven't been updated. The follow-up that would cement learning isn't anyone's explicit responsibility.

The report suggests this is where experienced facilitators diverge from beginners. Experienced practitioners are significantly more likely to integrate follow-ups, debriefs, and pre-session needs analysis into their process. They've learned that the session is the middle of the story, not the whole thing.

For L&D professionals, this means the scope of a facilitation engagement needs to expand. A workshop isn't a two-hour event. It's a two-hour event with a pre-engagement, a follow-up cadence, and a stakeholder communication plan. If you're only budgeting for the time in the room, you're funding the least impactful part.

How to design for what happens after the session:

  • Build follow-up into every facilitation contract as a deliverable rather than an optional add-on. Include at least a 30-day pulse check and a 60-day stakeholder debrief in the scope of work.
  • Brief managers before the session happens with a one-page guide that tells them what participants will learn, what to watch for, and one specific question to ask in their next 1:1 to reinforce the content.
  • Create accountability structures inside the session, like commitment partners, public action plans, or calendar-blocked practice time, that don't depend on the facilitator's continued involvement.
  • Track the "transfer rate" for your programs by comparing the number of action commitments made in sessions against the number implemented at 60 days. Even rough data creates a baseline you can improve on.

4. Facilitators talk about craft. Clients talk about meetings. The gap is costing both sides.

The report reveals a communication disconnect that should alarm anyone in L&D. When asked how they describe the value of facilitation, practitioners talk about alignment, inclusion, psychological safety, and the deep craft of designing productive group processes. When asked how clients describe that same value, the answers shift to concrete, surface-level language: good meetings, smoother discussions, decisions reached, conflict eased.

One anonymous respondent answered the value question with brutal honesty: "I don't. I probably should, but I don't."

The data on visibility makes this worse. 51.4% of facilitators rely primarily on word of mouth to communicate their impact. Only 17% publish articles. Only 15.1% create external reports. The visible footprint of facilitation is tiny compared to the amount of work happening.

Tim Leake, who's been facilitating executive workshops for over 17 years, thinks the profession has a language problem. Facilitators love to talk about what they do and how they work, he argues. But that's not the impact. That's how they get there. The value proposition needs to focus on the transformation: the group is currently "here," and we will get them to "there."

For L&D professionals, this communication gap has budget implications. When your CFO asks what facilitation does for the organization, you need an answer in language they recognize. "We created psychological safety" doesn't land as well as "We reduced decision-making time on the product roadmap from six weeks to two" or "Cross-functional teams resolved three blocked initiatives in a single day."

The report found that only 26% of facilitators highlight reaching business goals or KPIs when promoting their work. Most promote behavioral and collaborative shifts (40-63%), which are real and important, but don't translate into the ROI language required for budget decisions.

This is where L&D teams have an advantage over independent facilitators. You sit inside the organization. You know which metrics matter to leadership. You can bridge the gap between what facilitation creates (relational shifts, alignment, better thinking) and what the organization measures (speed, revenue, retention, decision quality).

But only if you build that bridge deliberately.

How to close the communication gap:

  • Create "impact translations" for your most common facilitation engagements that connect the facilitation outcome (e.g., "aligned on strategic priorities") to a business metric (e.g., "reduced project rework by streamlining cross-functional decisions").
  • Collect one quantified example from every major engagement, even if it's approximate. "The team resolved a backlog of 12 pending decisions in one session" is more powerful than "participants reported feeling more aligned."
  • Build a case study habit by writing up one facilitation success story per quarter in business language. Even if you never publish it externally, having it ready for budget conversations changes the dynamic.
  • Ask your stakeholders how they would describe the value after the session and use their language in your future proposals. Clients will sell facilitation better than facilitators will.

5. AI became a prep partner in 2025, and that's exactly where it should stay for now

Regular AI use among facilitators nearly doubled year-over-year, jumping from 21.7% to 38.7% in the "often" category. The "never" group dropped from 25.1% to 15.2%. AI isn't experimental anymore. It's becoming standard workflow.

But here's where the usage data gets interesting: 85.2% of facilitators use AI for session preparation. 67.5% use it for idea generation and brainstorming. 53.9% use it for post-workshop activities such as summarizing and creating reports. Only 23.5% use it during live facilitation. And ChatGPT dominates the tool landscape at 82.9%, far ahead of any facilitation-specific AI tool.

The report's sentiment analysis shows a "cautiously positive" field. Practitioners describe AI as a useful helper but flag concerns about reliability, generic outputs, and the risk of losing the human quality that makes facilitation work.

Myriam Hadnes, who holds a PhD in Behavioural Economics and runs a leadership training agency, takes a clear position: AI optimizes for speed and confirmation. It doesn't sense when someone's breathing has changed or interpret why a room has gone quiet. The embodied, intuitive work of reading a group in real time is where human facilitators are irreplaceable.

She also sees something deeper. Participants in 2026 aren't arriving at workshops ready to think. They're arriving exhausted, politically overwhelmed, running on stress hormones. Creating the conditions for regulated, productive collaboration is neurological work, not logistical work. And that's a job AI can't do.

For L&D teams, the practical takeaway is clear. AI should handle the repetitive, time-consuming parts of the facilitation workflow: drafting agendas, generating scenario variations, summarizing session outputs, and creating follow-up materials. That frees your facilitators (and your budget) for the relational work that drives impact.

The risk isn't that AI will replace facilitators. The risk is that non-facilitators will use AI to generate session designs, skipping human expertise entirely. One respondent flagged this directly: they're seeing non-facilitators use AI for session design and worrying that the craft is being lost. That concern is worth taking seriously, especially if your organization is looking to scale facilitation without investing in facilitator development.

How to integrate AI into your facilitation workflow without losing what matters:

  • Use AI for the prep work that doesn't require facilitation expertise, like drafting initial agendas, generating discussion questions, creating participant pre-reads, and building post-session summary templates.
  • Keep AI out of real-time facilitation decisions until the tools prove they can read group dynamics, not just engagement metrics. A Mentimeter poll is not a substitute for a skilled facilitator noticing that the room went quiet.
  • Build AI prompt libraries for your facilitation team that capture your organization's context, terminology, and preferred session structures so every facilitator isn't starting from scratch with generic prompts.
  • Establish quality gates for AI-generated session designs, with a human facilitator reviewing every AI-drafted agenda before it goes live, specifically checking for flow, pacing, emotional arc, and group dynamics that AI consistently misses.

6. Experienced facilitators evaluate completely differently, and your team might not be learning fast enough

The report identifies a maturity pattern with direct implications for L&D team development. Beginners measure success through immediate participant satisfaction. Intermediates start reusing agendas and applying structured feedback. Experienced facilitators adopt multi-dimensional evaluation practices, including long-term follow-ups, stakeholder interviews, and data review, that appear far more often in their responses than in any other group's.

The co-facilitation data reinforces this. About 72% of beginners rely on unstructured post-session conversations to improve. Experienced facilitators are significantly more likely to use structured retrospectives (44% vs. 18% of beginners) and to review evaluation data together (23% vs. under 10% for beginners). They also stand out for consistently refining and reusing past session designs.

Anna Gullstrand, Chief People & Culture Officer at Mentimeter, sees co-facilitation as a bridge in this developmental journey. It moves practitioners from "How do I think it went?" to "What are we seeing together?" That shift from subjective self-assessment to shared evidence-based reflection is where professional growth accelerates.

For L&D leaders building facilitation capability, this data raises an uncomfortable question: how are your less experienced facilitators supposed to develop these habits if they're working alone, evaluating alone, and never seeing how a senior practitioner approaches evaluation?

The report also found that 47% of facilitators have no formal certification. The field has multiple entry points and learning pathways rather than a single unified route. That's liberating in some ways, but it also means there's no standard curriculum pushing practitioners toward impact evaluation earlier in their careers.

If you manage a team of facilitators or regularly contract with external ones, you're essentially managing a portfolio of practitioners at different maturity levels. The beginner who's crushing satisfaction scores might be completely lost on how to define outcomes upfront or follow up at 60 days. The experienced practitioner you hired for a single engagement might be running circles around your internal team's evaluation practices.

The developmental gap between "good session facilitator" and "impact-oriented learning partner" is real, and it doesn't close on its own. It closes through deliberate practice, structured reflection, and exposure to more mature evaluation habits.

How to accelerate evaluation maturity across your facilitation team:

  • Pair junior facilitators with experienced ones for at least two engagements per quarter with structured debriefs that focus specifically on "what did we learn about impact?" not just "what went well in the room."
  • Create an evaluation playbook for your team that establishes minimum standards: every engagement gets outcome definitions upfront, at least one follow-up touchpoint, and one data point beyond satisfaction.
  • Run quarterly "evaluation labs" where your facilitation team reviews real evaluation data together, discusses what the data says about impact, and identifies patterns across engagements.
  • Stop treating evaluation as the facilitator's solo responsibility and assign an L&D team member to own the follow-up cadence for major engagements, freeing the facilitator to focus on design and delivery.

The report's closing questions are worth borrowing. How can outcomes be defined more clearly? How might follow-up become part of the workflow rather than an optional extra? Where will the story of what changed be captured and shared?

These aren't facilitation questions. They're L&D strategy questions. And the data from 714 practitioners across 60 countries suggests the answers won't come from the facilitation community alone. They depend on L&D professionals bringing the evaluation rigor, organizational context, and business language that turn great sessions into measurable impact.

The tools and frameworks already exist: the Kirkpatrick model, Theory of Change, Return on Expectations. What's missing is the habit of using them before the session starts, not after it ends.

💡 Please note: I used AI to help me brainstorm, research, structure, write, and enhance the content of this resource. While I aim to highlight key points and offer valuable takeaways, it may not capture all aspects or perspectives of the original material. I encourage you to engage with the original report directly to form your own understanding and draw your own conclusions.
About the author
Brandon Cestrone
