Buyer’s Guide to AI Learning Products

01

Contributors

Michael Chong
Senior Data Scientist (Docebo)

Rebecca Chu
Machine Learning Analyst (Docebo)

Vince De Freitas
Product Marketing Manager, Content & AI (Docebo)

Maija Mickols
AI Product Manager (Docebo)

Giuseppe Tomasello
Vice President of AI (Docebo)

Renee Tremblay
Senior Content Marketing Manager (Docebo)

02

How to use this guide

Artificial Intelligence (AI) has become increasingly essential for L&D teams seeking to enhance training programs, engage employees, drive organizational success, and stay competitive. But are all AI learning products created equal? Probably not. 

So how do you differentiate between what’s hype and what’s actually helpful? 

That’s where this guide comes in. The Buyer’s Guide to AI Learning Products is intended to help L&D professionals navigate the learning AI landscape, offering valuable insights so you can make informed decisions when considering the purchase of AI solutions for your learning initiatives.

Whether you’re exploring AI’s potential to personalize learning experiences, automate administrative tasks, or optimize content delivery, this guide will provide you with a roadmap to identify and select AI tools that align with your organization’s unique learning and development needs.

03

The evolving landscape

AI is quickly becoming an integral part of business. According to recent research, 35% of global companies report using AI in their business.

But it isn’t just availability of AI (or the novelty) that’s driving this rapid adoption. AI is quickly becoming a necessity due, in part, to two macro trends that are significantly impacting the workforce.

  1. The relationship between the population growth rate and unemployment
  2. Drastically changing demands and expectations of L&D teams

Let’s start with the annual population growth rate—or, more specifically, the lack thereof.

Global birth rates have dramatically declined. According to The Economist, the largest 15 countries (by GDP) all have a fertility rate below the replacement rate, which means people are aging out of the workforce faster than we can replace them.

It used to be that there were more unemployed people than job openings. But over the past few years, that paradigm has flipped, and now there are more open jobs than people to fill them. All this adds up to a labor shortage, which leads to increased competition for skilled workers.

Demographics aren’t the only thing dramatically evolving. The learning landscape is also changing. 

It’s become much bigger.

Ten years ago, the majority of use cases for training were internal (e.g. onboarding, talent development, compliance, etc.). That meant fewer programs being delivered to a smaller number of learners. Today, more organizations are extending learning outside of the enterprise. Docebo’s internal customer data reflects this evolution.

This isn’t unique to our organization. It’s indicative of a larger trend. Studies, including a recent one from Brandon Hall Group, show that more than 50% of organizations deliver learning to external, non-employee groups. These include customers, channel partners, distributors, value-added resellers, and franchisees. 

Once training extends outside of the organization, the number of learners (along with the programs and content needed to support them) grows exponentially. 

04

The promise of Artificial Intelligence (AI)

Fewer people. More work. That’s our current reality. And it’s a huge part of what’s making AI such an appealing solution.

As the diagram below shows, AI is outpacing human performance in many areas, including image recognition, reading comprehension, and language understanding. This presents organizations with a solution to offset talent shortages, while increasing efficiency by offloading a lot of the repetitive, uncreative tasks.

AI outperforms humans

Forrester predicts that enterprise AI initiatives will boost productivity and creative problem-solving by 50%. Not only can incorporating AI into the business help mitigate the labor shortage, but it can also deliver competitive advantage. According to 2023 data, AI saves an employee 2.5 hours per day on average. 

While AI solutions hold a great deal of potential and promise, it’s important to remember that an AI product for one part of the business might not be helpful for another. For example, an AI-powered contract review solution might increase productivity and efficiency for your Legal team, but it isn’t going to do much to help the Learning and Development team. In other words, not just any AI solution will do. You’ll need purpose-driven AI products across your organization, and L&D teams will need learning AI solutions.

Did you know

In 2008, developers released a chess engine called Stockfish, built on human-programmed rules and strategies of the game. Running at full power, it’s nearly impossible for a human to win against Stockfish. In December 2017, a competing AI called AlphaZero was introduced. AlphaZero was never taught chess strategy; given only the rules, it trained itself by playing a massive number of practice games against itself over a few days. With comparable computational power behind each engine, AlphaZero beat Stockfish consistently.

05

What is AI for Learning?

When we talk about “AI for Learning,” or “Learning AI,” we’re talking about AI products, models, and agents made specifically to support instructional designers, L&D teams, and other learning professionals. In the larger context of AI, today’s learning AI products generally fall under the Natural Language Processing (NLP), speech, and vision branches—all of which make use of machine learning.

When evaluating learning AI solutions, a more helpful and perhaps practical way to look at learning AI is in terms of what it does—the tasks it can carry out. More specifically, what tasks it can do that are relevant to your L&D organization. That’s where the true opportunity lies. 

Maybe you’re having a hard time hiring Instructional Designers, so your team is falling behind in content creation. Perhaps the demands on your team outpace your production capacity. You could have a skills gap on your team, so you’re unable to analyze data. Or maybe you’re scaling your programs or business and require translation capabilities. Ultimately, a learning AI should help you solve a particular problem. Therefore, your decision should be problem driven, not product driven.

Here’s a tip

A good approach is to treat a potential AI purchase like you would a job candidate. (You’re not buying a product; you’re hiring help!) You wouldn’t hire someone just because they’re well-dressed and available. So you shouldn’t buy an AI product because it’s smart and shiny. Like humans, AI tools have limitations. With candidates, you uncover these limitations during the interview process. The product evaluation is your opportunity to suss out potential limitations in the AI and make sure your processes include checks and balances that mitigate risks associated with those limitations. 

06

Basic concepts of Learning AI

In order to make an informed decision when buying learning AI products, there are a few basic concepts you should be familiar with. 

01

AI models

An AI model is the core component of an AI system that is responsible for the actual processing and decision making. It’s designed to perform a specific task (e.g. image recognition, natural language understanding, recommendation, etc.). AI models are the engines behind the solution. They take input data, process it, and produce an output, such as a prediction or classification.

AI models are trained on massive datasets to identify patterns and information relevant to their task. Training is a resource-intensive process that typically occurs before the model is deployed. Even the most popular AI models today have gaps in their training data.

Because AI models can be complex, containing numerous parameters, layers, and algorithms (particularly in deep learning models like neural networks), deploying them at scale can be resource-intensive and may require powerful hardware, driving up the cost to generate content for every prompt or request.
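
To make the “input in, prediction out” idea concrete, here is a minimal, hypothetical sketch in Python using scikit-learn: a tiny classifier is trained up front on labeled course descriptions, then produces predictions for new inputs. The data, labels, and topic names are illustrative only; real learning products train far larger models on far larger datasets.

# A toy "AI model" in the narrow sense described above: trained once on labeled
# examples, then given new input to produce a prediction (classification).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: course descriptions and their topic labels.
courses = [
    "Introduction to workplace safety and hazard reporting",
    "Quarterly compliance refresher for data privacy",
    "Negotiation tactics for enterprise sales teams",
    "Prospecting and cold outreach fundamentals",
]
topics = ["compliance", "compliance", "sales", "sales"]

# Training happens up front and is the expensive step; inference is cheap by comparison.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(courses, topics)

# Inference: input goes in, a prediction comes out.
new_course = ["Handling customer objections during renewal calls"]
print(model.predict(new_course))        # e.g. ['sales']
print(model.predict_proba(new_course))  # class probabilities, not certainty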

Thankfully, AI models are just one part of the ever-changing landscape. Most AI learning products will leverage additional strategies and technologies, like AI agents, to improve upon general AI models and deliver more specialized, powerful, and agile solutions.

02

AI agents

An artificial intelligence agent is a specialized AI-powered program designed to operate autonomously while guided by user input. AI agents leverage large language models (LLMs), like those behind OpenAI's GPT (and others), to understand a user's intention before accomplishing tasks and objectives. Agents can adapt to new information or to changes in the scenario their task is based in.

Unlike traditional AI models, which are generally static and limited to the tasks for which they were explicitly programmed, AI agents are dynamic and flexible. Moreover, they can be given access to tools (such as the ability to query a data source, or to interact with other AI models) that they decide autonomously if and when to use.

AI agents are designed with a degree of autonomy that allows them to operate in unpredictable or changing environments. They can make decisions, alter their course of action, and even learn from the outcomes of these decisions. This adaptability is crucial in fields like robotics (like self-driving cars), where agents must navigate real-world environments, or in virtual learning environments for tasks such as learner interactions, where they interact with humans in fluid, often unpredictable conversations. A key feature that differentiates AI agents from models is their ability to engage in goal-oriented behaviors. They not only process and respond to inputs but also actively work towards achieving objectives. When we think about the opportunities for adaptive, personalized learning, AI agents will likely play a huge role in working directly with learners—adapting to gaps in learning that are identified through interactions with the AI agent.
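
As a rough sketch of the agent pattern described above, the following Python snippet shows an LLM-driven loop that decides on its own whether to call a tool before answering. The llm() function is a stub standing in for a real large language model API, and the quiz-score tool and learner IDs are hypothetical.

def llm(prompt: str) -> str:
    """Placeholder for a real LLM API call; decisions are faked with keyword checks."""
    if prompt.startswith("Tool result:"):
        return "ANSWER You scored below the 70% pass mark, so start by reviewing module 3."
    if "quiz score" in prompt.lower():
        return "CALL_TOOL lookup_quiz_score"
    return "ANSWER I can help with that directly."

def lookup_quiz_score(learner_id: str) -> str:
    """Hypothetical tool; a real agent might query the LMS reporting API here."""
    fake_scores = {"learner-42": 58}
    return f"Learner {learner_id} scored {fake_scores.get(learner_id, 'N/A')}% on the last quiz."

TOOLS = {"lookup_quiz_score": lookup_quiz_score}

def run_agent(user_request: str, learner_id: str) -> str:
    # The model, not the developer, decides whether a tool is needed for this request.
    decision = llm(f"User request: {user_request}")
    if decision.startswith("CALL_TOOL"):
        tool_name = decision.split()[1]
        observation = TOOLS[tool_name](learner_id)
        # Feed the tool's result back so the model can ground its final answer.
        decision = llm(f"Tool result: {observation}. Now answer: {user_request}")
    return decision.removeprefix("ANSWER ").strip()

print(run_agent("What was my last quiz score, and what should I review?", "learner-42"))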

In the context of learning, an AI agent can be designed to focus on remediation when supporting team members who are off-track. They can power virtual chat or video roleplay scenarios that create a safe space for repeated practice, with a partner who never tires and can adapt as learners improve. In cases where optimization is central, an AI agent can be trained to specialize in aggregating and understanding large sets of data and in looking for opportunities to tighten processes or stages towards a final goal or KPI. If focused on mastery or certification, AI agents can be developed to drive learners through stages of learning towards true proficiency, using pedagogically sound frameworks like Bloom’s Taxonomy or backward design.

As with all interactions with AI, agents, their work, and their outputs should be monitored and contextualized by humans, especially in high-stakes scenarios and tasks.

Levels of learning design automation

As you introduce AI into your organization, it’s important to think about the intersection between AI and human-driven learning design. As the table illustrates, the deeper the level of automation you seek, the more the balance between the human and the machine shifts when it comes to control, personalization, and outputs.

On the left, the human keeps a heavy hand of control over the system. Actions and decisions are driven by the content or instructional designer, and the enhancements and automation from AI are limited at best. As you move across the scale to the right, however, a greater degree of action-taking and decision-making is released to the AI system. In return, you get a much greater depth of personalization, automation, time savings, and scalability in your learning program.

The benefits of deeper automation go beyond the items listed above. In most business functions, there is always a certain amount of repetitive, manual work that actually interferes with productivity. This could be summarizing or reorganizing data. It could be writing quiz questions. It could be that one time you had to spend hours and hours relabelling metadata because one broken table threw everything off.

Toil is an invisible, insidious part of your work life that artificial intelligence can manage with ease. If you spend 20% of your day wrestling with tedious toil, AI can help you regain that time and help you get ahead or, sometimes, just catch up.

None of that happens at Level 1 on the table above, though. To regain your time and delegate to AI, you must first understand the problem that you’re trying to solve, and then find a solution (like machine learning-powered content curation, data analysis, AI-based skills mapping, or Generative AI content creation tools) that addresses that problem.

Speaking of Generative AI, now would be a good time to unpack what it is and how it works at a high level.

07

Generative AI (GenAI) explained

Prior to Generative AI, artificial intelligence typically specialized in recognizing and predicting things: turning the squiggles from a scanned document into editable text, or transcribing words from the sound waves of an audio file. On smartphone keyboards, basic types of artificial intelligence could attempt to predict or guess your next word. With Generative AI, however, artificial intelligence creates things based on patterns it recognizes in source data.

Did you know

Generative AI is able to write a sonnet in the style of William Shakespeare. Shakespeare was incredibly consistent, and it helps that he put his name on most of the sonnets he wrote. GenAI can also create art in the style of Vincent Van Gogh. This opens up a wide array of exciting possibilities, but also ethical considerations around the copyright of the artists whose work ends up being used in these AI models without explicit consent.

At its core, GenAI leverages deep learning AI models to study vast libraries of data. This could be every image and piece of written text in the public domain, all public-facing social media, news articles, and more. As AI companies feed more data into their models, the accuracy and reliability of these models increases. As a model ingests data, it begins to recognize patterns. These patterns can include things that humans take for granted, or feel like we just know.

It could include the way light is supposed to bounce off a person’s eye in a photograph. It could be the predictable structure that nouns and verbs fall into in most sentences. It may even be the proper way to pronounce “emphasis.” These are the types of patterns that we don’t think of consciously, but will immediately recognize if something isn’t the way we expect it. After training on millions, if not billions, of documents, images, and videos, it turns out AI can recognize those patterns too, especially when the source material is consistent in how it does those things.

Generative AI interprets the consistency of these patterns as an expectation of what humans do and expect to see. In a lot of ways, it just wants to give us what we want. If we’re intentional about the data used to train the AI, then once it has processed it all, we often get something really, really close to what we expected or could have done ourselves.

Here’s a tip

Not sure if the image you’re looking at is real or AI-generated? If there’s a person in it, check out their hands. Reference data for hands is typically limited in non-specialized image-generation models, so hands are often one of the first things to look off, or funky. If an image features text or logos, take a closer look to see if they appear unnatural. AI-generated text can appear pixelated or stretched, and logos may be altered.

08

Potential challenges with AI solutions

AI can feel a bit like magic. You can get results without knowing or understanding how the AI arrived at them. That’s not ok. This isn’t Oz. We need to pay attention to what’s happening behind the curtain—or, in the case of AI, what’s happening inside the black box.

01

The black box problem

As French philosopher Bruno Latour pointed out, “the more science and technology succeed, the more opaque and obscure they become.” This is especially true of machine learning, where often even the AI’s designers can’t explain why or how the AI arrived at a decision.

This lack of insight is referred to as “the black box problem.” It refers to the challenge of understanding and interpreting the decision-making processes of complex machine learning models, as they often operate with intricate internal mechanisms that are difficult to explain or predict.

There are a few key aspects at the heart of the black box problem:

Lack of transparency: Deep neural networks that power Generative AI, for instance, consist of many layers and parameters (often billions). How each parameter influences the final output is often unclear. (Even if the system explained itself, you wouldn't have time to understand its decision making before you die!) This lack of transparency makes it difficult to determine how the model arrived at a specific decision.

Complexity: Machine learning models can capture intricate patterns and relationships in data, but these patterns may be too complex for human understanding. The models might use features or connections that are not immediately obvious to humans (like implicit biases or omissions inherent in the collected data).

Difficulty in troubleshooting: When an AI model makes an error or a biased decision, it can be challenging to identify the root cause, leading to difficulties in debugging and improving the system.

Ethical concerns: In certain applications where AI is used to make critical decisions (like finance, healthcare, or legal matters), the black box problem raises ethical concerns. Stakeholders should not trust decisions made by models they cannot interpret or explain.

Addressing the black box problem is an ongoing area of research and development in AI. Researchers are working on methods to improve model interpretability and transparency, such as creating techniques to visualize model behavior, feature importance, and decision rationales. This is particularly important for ensuring the fairness, accountability, and transparency of AI systems in real-world applications. (We’ll dive into this in the Frameworks for successful AI section.)

02

Hallucinations

AI has come far. But it’s far from perfect.

AI models can generate or produce information, content, or outputs that are not based on actual data or real-world knowledge. This is known as a hallucination and can range from minor inconsistencies to factual inaccuracies to complete fabrications. Because they’re generally coherent and grammatically correct, hallucinations can be very convincing and can be mistaken for fact or believed to be accurate.

Hallucinations in AI models can occur due to various factors, including errors or biases in training data, model complexity, a lack of control and supervision during training, or the prompt itself. In natural language processing, for example, language models sometimes generate text that appears coherent but is entirely fabricated and lacks factual basis. For instance, a language model might generate a paragraph of text that describes a fictional event or scenario as if it were real.

03

Bias in AI

Bias in AI refers to the presence of unfair and unjust prejudices in the decisions or predictions made by artificial intelligence systems. These biases can arise from various sources and can lead to discriminatory or unequal outcomes.

While the stakes might not be as high in learning as they are in healthcare or criminal justice, biases can still undermine your programs and can manifest in the following ways:

Training data bias: One of the primary sources of bias in AI is biased training data. If the data used to train an AI model contains biases (e.g. historical, sampling, proxy, etc.) or reflects existing societal inequalities and stereotypes, the model is likely to learn and perpetuate those biases. Designing a system so that it produces gender-balanced examples (regardless of the existing gender split in any examples it's trained on) would counteract this.

Algorithmic bias: Certain algorithms may inherently favor one group or produce biased results due to the way they process data. A facial recognition system, for example, might be less accurate for people with darker skin tones, or a voice recognition system might not work as well with regional dialects.

Labeling bias: The people responsible for labeling may unconsciously introduce their biases when tagging images or text. As discussed earlier, AI simply wants to meet the expectations of humans as defined by the patterns in the source data. Therefore any biases represented in the human tagging will be carried through in the statistically-based output of the AI system. It's learned to hold exactly the same amount of bias as a human, which can be problematic.

Feedback loops: AI not only learns from its dataset, but from the interaction it has with humans. If users consistently engage with or provide positive feedback on biased content, the AI system may adapt and reinforce these biases. In other words, AI systems that interact with users may learn biases from user interactions.

In the context of AI for learning, bias is a huge risk to be aware of. No two learners are the same. Effective AI implementation brings in a degree of personalization based on what’s unique about each learner, while also leveraging pooled data within your learning platform. The combination of these two strategies can allow AI to adapt to a learner's needs on an individual level.

Mitigating bias in learning AI is critical to ensure that systems are equitable, ethical, and impartial in their decision making and that the learning experience is personalized to the individual (and not prejudiced against them).
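
As one concrete example of the mitigation mentioned under training data bias, here is a minimal Python sketch that oversamples under-represented groups so each group contributes equally to a training set. The records and group labels are hypothetical, and real audits would cover many more attributes, with human review of the results.

# Rebalance training examples across groups before they are used to train or
# fine-tune a model, so no group dominates the patterns the model learns.
import random
from collections import defaultdict

def rebalance(examples, group_key, seed=0):
    """Oversample under-represented groups so each group contributes equally."""
    random.seed(seed)
    by_group = defaultdict(list)
    for example in examples:
        by_group[example[group_key]].append(example)
    target = max(len(items) for items in by_group.values())
    balanced = []
    for items in by_group.values():
        balanced.extend(items)
        balanced.extend(random.choices(items, k=target - len(items)))  # oversample
    random.shuffle(balanced)
    return balanced

training_examples = [
    {"text": "Scenario featuring engineer Ana", "gender": "female"},
    {"text": "Scenario featuring engineer Ben", "gender": "male"},
    {"text": "Scenario featuring engineer Chris", "gender": "male"},
    {"text": "Scenario featuring engineer Dev", "gender": "male"},
]
balanced = rebalance(training_examples, group_key="gender")
print({g: sum(1 for ex in balanced if ex["gender"] == g) for g in ("female", "male")})
# Expected: {'female': 3, 'male': 3}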

Did you know

According to a recent study, 86% of users surveyed have experienced AI hallucinations when they use chatbots like ChatGPT and Bard. Despite this, 72% still trust the AI.

09

Frameworks for successful AI

There’s no quick fix for the black box problem, hallucinations, bias, and other issues inherent in AI, but thankfully there are a number of smart people and organizations working on this problem. While developing AI-powered solutions, the Docebo team dug deep into our own research and found these principles, techniques, and frameworks to be helpful when thinking about effective, learner-centric, and reliable solutions.

01

Inspectable, explainable, overridable

In the U.S. Department of Education, Office of Educational Technology’s report, AI and the Future of Teaching and Learning, the authors outline criteria for “good AI models” and introduce the idea that effective AI for learning needs to be three things: Inspectable, explainable, and overridable.

Inspectability: Refers to a user's ability to monitor the inner workings of how an AI is making decisions and creating outputs.

Explainability: Refers to the ability to provide human-readable explanations for the decisions made by AI models.

Overridability: Refers to a user's ability to replace or remove elements of an AI's output or decision making to have a greater amount of control over the final output.

Together, these principles of AI product development provide the transparency and control humans need to be able to validate and adapt the AI model’s decisions, if needed. If a user is expected to release control over actions and decisions to automation, these principles help to preserve overall control over the system’s output. So you can benefit from AI without having to have blind faith in the system.
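
A minimal sketch of what inspectable, explainable, and overridable can look like in practice: the system returns not just an output but the sources and rationale behind it, and a reviewer can replace the output while keeping an audit trail. The field names and review flow below are illustrative assumptions, not any specific product's API.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AIDecision:
    output: str                                                # what the AI produced
    source_passages: List[str] = field(default_factory=list)   # inspectable: what it looked at
    rationale: str = ""                                        # explainable: human-readable reason
    overridden_by: Optional[str] = None                        # overridable: record of any human change

    def override(self, reviewer: str, new_output: str, reason: str) -> None:
        """Let a human replace the AI's output while keeping an audit trail."""
        self.rationale += f" | Overridden because: {reason}"
        self.output = new_output
        self.overridden_by = reviewer

decision = AIDecision(
    output="Assign the advanced negotiation course.",
    source_passages=["Q3 skills assessment", "Manager feedback notes"],
    rationale="Learner scored 85% on fundamentals and asked for stretch content.",
)
decision.override(
    reviewer="l_and_d_admin",
    new_output="Assign the intermediate negotiation course.",
    reason="Learner is new to the role; start one level down.",
)
print(decision)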

02

Retrieval Augmented Generation (RAG)

Another way for developers to enhance the reliability of an AI model’s output is with a strategy called Retrieval Augmented Generation (RAG).

RAG is an AI framework that uses a trusted knowledge base to enhance Large Language Models (LLMs). It retrieves accurate facts to improve the generation process and ensures that the context is grounded in the latest information.

Remember: the parameters and data initially fed into an AI model are static, and to update them, the model must be retrained. RAG allows language models to skip retraining and provides access to up-to-date, reliable information. This approach not only enhances the trustworthiness of the information provided but also reduces the likelihood of creating false or misleading content, making it a valuable technique for applications where factual accuracy and context are paramount.
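
Here is a minimal RAG sketch in Python, assuming a stubbed generate() call in place of a real LLM: relevant passages are retrieved from a trusted knowledge base and the generation prompt is grounded in them. The knowledge-base entries are hypothetical, and production systems typically use embedding-based vector search rather than the simple TF-IDF ranking shown here.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A tiny stand-in for a trusted, up-to-date knowledge base.
knowledge_base = [
    "Expense reports must be submitted within 30 days of purchase.",
    "New managers complete the leadership essentials path in their first quarter.",
    "Security awareness training is renewed every 12 months.",
]

def retrieve(question, k=2):
    """Rank knowledge-base passages by similarity to the question and keep the top k."""
    vectorizer = TfidfVectorizer().fit(knowledge_base + [question])
    scores = cosine_similarity(vectorizer.transform([question]),
                               vectorizer.transform(knowledge_base))[0]
    ranked = sorted(zip(scores, knowledge_base), reverse=True)
    return [passage for _, passage in ranked[:k]]

def generate(prompt):
    """Stub standing in for an LLM call; a real system would send this prompt to a model."""
    return f"[model answer grounded in a prompt of {len(prompt)} characters]"

question = "How often do employees need to redo security training?"
context = "\n".join(retrieve(question))
answer = generate(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
print(context)
print(answer)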

The most effective and personalized AI systems for learning will connect to and pull from your company’s internal and external knowledge base to ensure learning content is not only comprehensive, but also specific to your business. But, for this to be effective, you need to get your house in order because, as the saying goes, garbage in, garbage out.

A re-emphasis on Knowledge Management

Most L&D professionals already understand the vital importance of knowledge management—yet, it may sometimes feel like an uphill battle to build cross-functional alignment across your business around what is essentially a company-wide project.

The output of any AI model is only as good as the quality of the data. And when the data being referenced by RAG and your AI solution is your company’s internal and external knowledge base, knowledge management becomes mission critical for everyone in the organization.

The referenced knowledge base serves as the backbone for fact-checking and validation, allowing the model to generate accurate and reliable information. If this knowledge base (your organization’s knowledge) is old, incomplete, or inaccurate, the RAG process can lead to the dissemination of incorrect or outdated information.

If you’re investing in an AI product that will reference your knowledge base, knowledge management will be critical for ensuring the quality and relevance of the output.

Consider building (or rebuilding) a cross-functional team around knowledge management in your business. Your team should involve the following groups, and any others unique to your business who are critical in defining truths:

  • Product
  • Product Marketing
  • Legal
  • Enablement
  • Knowledge Management / Help Desk team

03

Risk mitigation through pedagogy

Say, for example, that you adopt an AI for learning whose model has been trained on the majority of the internet and the free (and maybe some premium) courses available out there. You’ve got to ask yourself a few uncomfortable questions:

  • Are these courses even good, or pedagogically sound?
  • Do the training data and patterns feeding the AI represent the bar I set for my learning programs?
  • Can you ever know whether a course is effective without also seeing how learners performed and retained afterwards?

When it comes to AI for learning, GenAI’s greatest strength can also represent one of its greatest vulnerabilities. When a model relies on the patterns present in its training data, it can’t make context-rich decisions that disagree with those patterns. A library of learning theories does not make one a practitioner of learning, or a master facilitator.

Without also understanding specific learner outcomes after a training program, Generative AI’s pattern-based understanding of how to create learning content relies only on whether a course is theoretically effective, and most L&D professionals know how quickly a learning program can fall off track when the learner is not central to its design.

When pedagogy meets AI

A built-in pedagogical model or framework is the linchpin of effective learning AI. It bridges the gap between AI’s technological capabilities and educational effectiveness by aligning the AI model with established educational theories and practice. It provides the foundational principles and frameworks necessary for creating effective, engaging, and impactful educational experiences. Which is important, because the more impactful the learning experience, the more impact learning will have on your business. (Because, as we all know, learning doesn’t just drive personal growth—it also drives business growth.)

Embedding pedagogical strategies can enhance the quality of the learning experience by providing tailored, goal-aligned, engaging, measurable, and personalized learning content and guide the AI processes to ensure learning objectives are met efficiently and effectively.

04

Human-in-the-loop (HITL) systems

There are likely a number of low-stakes tasks that we should fully hand over to artificial intelligence, and never look back. We’ve always done this with technology. (Nobody manually operates the city’s telephone switchboard anymore, do they?)

However, there continue to be a number of significant tasks and functions that will always require a human in the loop (HITL). If we look outside of learning to something as fundamentally important as food, we see that AI is already present in agriculture. From weed detection to growth analysis to health monitoring, machine learning models are helping augment and improve the way we grow food and feed our people. But that doesn’t mean we don’t, or won’t, still need farmers to make the crucial decisions in these critical systems. Learning is similarly critical, and regardless of how dramatically artificial intelligence improves, it will always require a human-in-the-loop system.

AI systems make predictions and decisions, but they rarely do it with 100% confidence or absolute certainty. In fact, the concept of absolute certainty is very human. That’s why HITL systems and processes are critical. We play a central role in handling the nuances and contextual factors that AI may not fully grasp, ultimately striking a balance between AI-driven insights and human wisdom.

The most powerful AI solutions won’t eliminate the need for human interaction. They’ll enable it at every stage so we can validate and oversee the output (in the broader context in which the AI model is operating), provide feedback and directions, and apply ethical and moral judgment to the decision.
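
A minimal human-in-the-loop sketch: low-confidence or high-stakes outputs are routed to a person for review instead of being auto-published. The confidence threshold, the stakes flag, and the review queue below are illustrative choices, not a prescribed workflow.

# Route AI outputs: auto-publish only when confidence is high and the task is low-stakes.
REVIEW_QUEUE = []

def publish(item: str) -> str:
    return f"published: {item}"

def route(ai_output: str, confidence: float, high_stakes: bool,
          threshold: float = 0.85) -> str:
    """Send anything high-stakes or low-confidence to a human reviewer."""
    if high_stakes or confidence < threshold:
        REVIEW_QUEUE.append(ai_output)
        return f"queued for human review: {ai_output}"
    return publish(ai_output)

print(route("Quiz question on PPE requirements", confidence=0.92, high_stakes=False))
print(route("Certification exam item on lockout/tagout", confidence=0.97, high_stakes=True))
print(route("Course summary paragraph", confidence=0.61, high_stakes=False))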

Did you know

Andragogy or Pedagogy? Andragogy refers to the best practices and research-backed methods for teaching adult learners, and pedagogy typically refers to the best practices and research-backed methods for teaching children. However, pedagogy is often used in a broad context to describe methods for both adults and children alike (e.g. colleges and universities often use pedagogy over andragogy). When we use the term pedagogy, it’s in the broader sense and acknowledges that our learners are adults.

From human processes to AI frameworks

Whether you work with graphics, authoring, or instructional design, there’s a creative process behind your craft. These processes are repeatable, situational, and foundational to the types of artifacts that you create. They can also be replicated and trained to an AI to help improve your workflow, giving you time back to focus on bigger picture strategic decision making. Less toil, more productivity.

The diagram below examines one way to translate some of the strategic decisions that an instructional designer makes while writing multiple-choice questions (MCQs) into a framework that AI can follow to deliver similar or improved results in a fraction of the time.

While writing MCQs, there are a lot of different factors that instructional designers take into account that affect the quality of assessment.

Using Docebo’s methodology, we capture all these factors and use them as input variables to generate an output that embodies all these otherwise human decisions. Note that the sequence of decisions is also in line with the order of the human authoring process and the dependencies of each decision. For example, when writing a reading comprehension activity, human authors would write a reference text first, then write the stem (question), and finally the options. This is exactly the order in which the prompt chains are executed in the generation process.
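
To illustrate the sequencing idea only (this is not Docebo’s actual methodology or prompts), here is a simplified Python sketch of a prompt chain that follows the same authoring order: reference text first, then the stem, then the options. The llm() function is a stub so the chain is runnable.

def llm(prompt: str) -> str:
    """Stub for a real LLM call; returns a canned string so the chain is runnable."""
    return f"[generated from: {prompt[:60]}...]"

def generate_mcq(topic: str, difficulty: str, learning_objective: str) -> dict:
    # Step 1: write the reference text first, just as a human author would.
    reference = llm(f"Write a short reference passage about {topic} "
                    f"supporting the objective: {learning_objective}")
    # Step 2: write the stem (question), which depends on the reference text.
    stem = llm(f"Write one {difficulty} multiple-choice question answerable "
               f"only from this passage: {reference}")
    # Step 3: write the options last, which depend on both the passage and the stem.
    options = llm(f"Write one correct answer and three plausible distractors "
                  f"for the question: {stem}, based on: {reference}")
    return {"reference": reference, "stem": stem, "options": options}

print(generate_mcq("data privacy basics", "medium",
                   "Identify when customer data may be shared with third parties"))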

Today’s AI for learning still needs to be coached by the crafter. As such, the most effective learning products will have human methodologies (like the one illustrated above) behind them to allow them to think and make decisions like an instructional designer.

10

Next steps

Now that you know more about AI for learning, you’re ready to start your search. But, before you begin, we suggest you answer these six questions to help guide you and your team as you evaluate potential vendors and shop for AI learning products.

6 big questions to answer before engaging with vendors

  1. What’s driving your buying decision? Is this product acquisition based on solving an existing problem, or do you have leftover budget and the demo was shiny?
  2. How will this impact the user experience? Does this product build on existing user behavior, or does it expect users to learn new skills? (And what does this mean for onboarding and time-to-value?)
  3. Who is in control of data and process? Who decides what data is relevant to the AI model? Where will your data be stored and processed?
  4. How will you avoid the black box problem? Can you inspect, explain, or override the AI model’s decisions, if needed?
  5. Is learning central to the AI model? Will pattern recognition drive the creation of learning materials or are pedagogically effective strategies built in to guide the AI processes?
  6. How will you manage this product? Artificial Intelligence (AI) products rely on the quality of their inputs. How will you ensure that the content they consume is real, relevant, and reliable?

11

About Docebo AI

We believe that artificial intelligence provides one of the most powerful opportunities for innovation in learning and development. As leaders in this space, we don’t follow trends, we create them. Docebo’s learning platform is powered by artificial intelligence, with several AI features included throughout the system to enhance the way businesses and enterprises deploy, manage, and scale their learning programs. 

Here are a few guiding principles for how we think about AI in our products and services, and what you can expect from Docebo AI. 

A pedagogy-first approach: When it comes to learning and development, an AI framework that doesn’t include pedagogy and established best practices in teaching and learning just won’t cut it. Our AI product team includes pedagogical experts who use a pedagogy-first approach to designing our AI solutions to maximize learner outcomes while minimizing hallucinations and inaccurate results.

Continuous assessment: We’re building a flywheel of robust continuous learning and assessment to help inform our AI models. When AI models rely solely on pattern recognition and replication, the results are not suitable for learning content. By understanding and leveraging learner performance through integrated assessment, the learning flow is never interrupted and a more comprehensive perspective is maintained. Docebo AI is connected to the bigger picture and provides the fundamental backbone for hyper-personalized learning. 

Reaching towards individualized learning: Hyper personalization and learning in the flow of work are inevitable outcomes of AI in learning. At Docebo, we’re focused on delivering this to our customers through smart implementation of safe and effective AI. 

Inspectable, Explainable, Overridable: An effective AI for learning solution never removes humans from the process and its outcomes. At Docebo, we design with inspectability, explainability, and overridability at the heart of our products and services, empowering the user to be in control and have confidence in their work.

Freeing you up to focus on what matters: Docebo’s intent is to design AI learning solutions that offload much of the repetitive and time-consuming work, so learning designers can focus on what really matters, like data interpretation, content governance, strategic decision making, and holistic learning design.

Giving you control: Organizations should be in control of how AI functions within their business. Docebo’s solutions provide businesses with control over their own data so they can decide how (and if) it is used. We also provide data anonymization on our own LLM to ensure data privacy and control.

Want to explore how a GenAI learning platform can transform your business?