This article is the sixth in a series on preparing L&D for the AI-powered workforce, and highlights how a robust foundation enables real-time assistance directly in the flow of work.
In this playbook, we have outlined a critical path to rethinking, rescoping, and reimagining enterprise learning in the AI era. We’ve discussed the need for a disciplined Governance Framework that provides essential guardrails for AI, and why and how to create Capability Academies that serve as hubs for building deep, foundational skills. We see these as powerful parts of the engine for building AI-powered workforce readiness. Yet they alone do not solve the final, most critical challenge: closing the execution gap in the last mile between formal knowledge and real-world performance.
The pace of workplace transformation imposes a significant cognitive load on employees: constant context switching, memory decay, and information overload. As a result, knowledge acquired through just-in-case training has an ever-shorter half-life. To achieve true business agility, a just-in-time performance support layer is no longer an advantage; we see it as an operational necessity. Solving the last-mile challenge requires more than simply procuring new tools. It demands a fundamental rethinking of how we architect workforce enablement. This strategic imperative has given rise to a new, more powerful operating model for continuous capability building.
Breaking the silo: moving to an agentic performance engine
For decades, training, assessment, and performance support (core functions of workforce enablement) have operated in costly and inefficient silos. Performance support is grounded in the principle of just-in-time enablement. It delivers relevant information, tools, or guidance at the moment of need, so employees can take action without stepping away from the work itself. Common applications of performance support include:
- Embedding support seamlessly into the tools, applications, or processes that workers use daily. For example, a retail associate opening a store or an airline worker conducting a step-by-step procedure.
- Enabling successful execution of tasks so the worker can improve their on-the-job performance. For example, troubleshooting an equipment malfunction by consulting an interactive guide or accessing real-time product knowledge documentation to support a customer.
- Ensuring the support is designed with the user’s experience, needs, and workflow in mind to minimize cognitive load and reduce reliance on memory. For example, acknowledging how the work gets done and providing simple, relevant, and effective access to reinforcement and quick answers.
As work evolves, it will rely more on teams, yet we typically focus on skilling individuals. As AI better understands the context of the work, it can surface and strengthen each individual’s skills, helping teams dynamically distribute expertise and improve their overall performance. AI thus provides the opportunity to shift from a narrow focus on individual capability to an approach that adapts performance support to team-based contexts when appropriate. It enables a new, integrated paradigm we call the agentic performance engine.

Architecturally, this engine acts as an intelligent layer above the enterprise tech stack. From this vantage point, it can observe and interpret the context of work as it unfolds across otherwise disconnected systems, from the CRM to internal messaging platforms. Experientially, it is delivered directly into the employee’s flow of work. It is not another application to open; it is a single, consistent performance sidekick that provides seamless support within the user’s immediate environment. Its defining characteristic is its native ability to fuse its core functions into a single interaction, enabling true cross-functional performance.
This fusion expands traditional testing to create a continuous, passive process of assessment that provides a real-time, multi-dimensional view of capability. It validates skills based on actual task performance, identifies where support is needed, tracks progress over time, and can correlate employee actions to business KPIs. This authentic, in-the-workflow validation then personalizes both training and performance support, creating a truly adaptive and integrated operating system for workforce performance.
The new capability: architecting performance
The agentic performance engine is a system to be designed, integrated, and continuously improved. This new class of solution requires a new role for L&D: the Performance Architect. The role moves beyond traditional instructional design, combining systems thinking, user experience design, and strategic change leadership. Key responsibilities include:
- Diagnosing performance barriers. Collaborating directly with business units to identify the precise points of friction in critical workflows where employees get stuck, make errors, or lose momentum.
- Designing intelligent interventions. Architecting the AI-driven support and learning experiences that solve these friction points in the moment of need.
- Curating enterprise knowledge. Partnering with subject matter experts to ensure the knowledge that feeds the AI engine is accurate, relevant, and structured for optimal retrieval.
- Driving human adoption. Designing and executing the human-centered strategy for user adoption, building trust in the system through transparent communication and hybrid models that blend AI with human oversight.
The evolutionary deployment model: a pragmatic path to the vision
A vision for such a powerful system can seem daunting, and in many organizations an enterprise-wide rollout is impractical as a first step. We recommend piloting the implementation and iterating toward the vision. This evolutionary deployment model unfolds in three manageable, value-creating steps, based in part on Trish Uhl’s AI Maturity Continuum:
- Crawl. The journey begins not with an enterprise-wide overhaul, but with a single, focused project designed to test the systemic and cultural efficacy of the model. The CLO, a sponsoring business leader, and a technology partner identify a critical workflow and then deploy a minimum viable agent to work on the problem and deliver a measurable win against a business KPI. The outcome is a case study that provides the evidence and ROI needed to justify further investment.
- Walk. Leverage the findings from step one to secure funding and authority to build the central, reusable infrastructure. This hub is the core of the performance operating model, comprising the reusable AI platform, the formal governance processes managed by the AI Accelerator Group, and the dedicated team of Performance Architects. This phase is about abstracting the learnings from the pilot project into a scalable, enterprise-grade asset.
- Run. With the hub established, the organization systematically scales the solution across the enterprise using a hub-and-spoke model. The central hub provides the platform and standards, while each functional capability academy acts as a spoke, owning the responsibility to curate the domain-specific knowledge that makes the agent intelligent for their teams. This creates a virtuous cycle where the central platform becomes more powerful with each new spoke that is added.
The strategic prize: a new business model for L&D
This evolutionary deployment enables the ultimate strategic prize: the transformation of L&D from a support function into an internal performance-as-a-service provider. This new value creation engine fundamentally changes the economics and strategic positioning of workforce development.
Here’s how: funding is no longer a top-down overhead allocation. Instead, it is secured through direct, ROI-driven investments from business units, justified by a specific business case tied to that unit’s KPIs. This transforms the conversation between the CLO and their C-suite peers from one of cost containment to one of co-investment in value creation. The efficiency dividends created by the system — the measurable savings in time and cost — are then reinvested to improve the core engine, creating a self-funding performance flywheel. This model elevates the CLO to a true business peer, managing a portfolio of value-creating services and reporting on their direct impact to the bottom line.
The ethical imperative and the role of governance
The power to analyze in-workflow performance carries with it significant responsibility. Without a robust ethical framework, the agentic performance engine risks becoming a tool of surveillance, eroding the very trust it needs to function. Principles of transparency in data use, employee consent, and clear anonymization standards are non-negotiable prerequisites. To ensure consistency and prevent a new form of ethical fragmentation, the governance of these solutions must fall under the direct purview of a centralized oversight body, which we call the AI Accelerator Group. This cross-functional team, responsible for the strategic governance of all enterprise AI, ensures that the same rigorous standards applied to large-scale platforms are also applied to these in-workflow tools. Because these tools sit so close to the employee, responsible and ethical implementation is even more critical.
Next steps for executives
The journey begins with a Crawl. To launch your first project:
- Identify the pain and form a coalition. Find a critical business process causing recognized pain and partner with its executive owner and a technology leader. This is your founding team.
- Scope the minimum viable agent and define the KPI. Ruthlessly define the simplest solution for the highest-value problem and the single business metric it will improve.
- Build the investment-grade business case. Co-author the business case with your coalition partners, framing it as a direct investment to move the chosen KPI.
The future of organizational agility is not built in a classroom; it is forged in the workflow. The performance-as-a-service operating model provides a credible, strategic path to get there. It integrates a visionary product, a pragmatic process, and a sustainable business model to finally close the gap between knowledge and action. We have now architected a powerful, centrally governed system for individual and team performance. What remains is the critical question of scale and autonomy: how do we distribute control so departments can innovate within this framework to meet the unique demands of their business?
In the final article of this series, we’ll tackle the next frontier: how to distribute centralized control without losing alignment, empowering departments with the autonomy to innovate while keeping the entire enterprise moving in the same strategic direction.
About the author
Brandon Carson
Brandon Carson is a globally recognized leader in learning and currently serves as Chief Learning Officer at Docebo. He has held prominent roles such as CLO at Starbucks, where he led their global Learning Academies and the Future of Work practice, and Vice President of Learning and Leadership at Walmart, where he was responsible for global leader development and corporate onboarding. Brandon is the author of Learning In The Age of Immediacy: Five Factors For How We Connect, Communicate, and Get Work Done and L&D’s Playbook for the Digital Age, both from ATD Press. He is also the founder of L&D Cares, a nonprofit that offers no-cost coaching, mentoring, and resources to L&D professionals, empowering them to grow and thrive in their careers.