How To Conduct A Learning Audit
by Will Thalheimer, PhD
Introduction
I’ve been conducting learning audits for over a decade. Over the years, I’ve learned a
ton about what works—and what doesn’t. I’ve had a number of great successes and I’ve
made some substantial mistakes as well. I’ve audited many forms of learning, including
classroom training, elearning, and on-the-job learning. I’ve worked with many types of
organizations, including huge multinationals, small elearning shops, trade associations,
foundations, and universities. In this short report, I will share my “lessons learned” and
provide you with recommendations on how you can audit your own learning
interventions.
Why do all the best writers use editors? Why does software development require such
exhaustive quality-control reviews? Why do all skyscraper projects require extensive
engineering oversight?
Human learning is very complex—infinitely more complex than rocket science. That is
why it’s critical that we support our learning-development efforts with periodic
systematic reviews. The need is made greater by the fact that many learning programs
have substantial deficiencies in terms of learning effectiveness.
Learning programs can be deficient for a number of reasons. Here’s a short list:
A learning audit uncovers deficiencies that can then be targeted for improvement. Sometimes the learning-design team can make these improvements itself. Other times the audit gives us the ammunition to convince stakeholders that learning-design improvements are both possible and important.
You wouldn’t evaluate the engineering integrity of a modern skyscraper using criteria
developed by a high-school study group, nor would you use a 1947 nurses' manual to set
guidelines for today’s sophisticated nursing tasks. It’s the same with a learning audit.
The key is to start with a valid set of standards.
While it might be tempting to rely on our “common sense” in developing standards, too
often our common sense in the learning field leads us astray. Until recently, all of the following were deemed simple common sense. Yet, for each one, common sense was shown to be dead wrong:
Fortunately, over the past several decades, learning researchers have codified an array
of factors that enable learning to be effective. These can be found in the Decisive Dozen,
in the principles set out in the Serious eLearning Manifesto, and in books like Make It
Stick: The Science of Successful Learning. While research, too, can have blind spots, it
gives us our best benchmark—as long as it is compiled with practical wisdom.
Here are just a few of the many things our learning audits ought to assess:
Footnotes:
1. (Pashler, McDaniel, Rohrer, & Bjork, 2008)
2. (Thalheimer, 2008a, 2008b)
3. (Kirschner & van Merriënboer, 2013)
4. (Alliger, Tannenbaum, Bennett, Traver, & Shotland, 1997; Sitzmann, Brown, Casper, Ely, & Zimmerman, 2008)
5. (Carpenter, Cepeda, Rohrer, Kang, & Pashler, 2012)
6. (Salas, Tannenbaum, Kraiger, & Smith-Jentsch, 2012)
Learning audits can target the inputs and/or outputs of a learning program. The inputs
are the factors baked into a learning program’s design and delivery. The outputs are the
results obtained from deploying the learning program to learners. The following chart
lists some common inputs and outputs:
The most comprehensive learning audits will examine both inputs and outputs. For
inputs, a full learning audit will research-benchmark the program, comparing its
instructional design approaches to research-based best practices. In addition to auditing
fully-developed programs, we can also audit designs and prototypes.
For outputs, the most comprehensive learning audits will examine (1) comprehension
gains due to the learning program, (2) decision-making competence, (3) level of
remembering, (4) amount of actual performance improvement, (5) the strengths and
weaknesses of the learning-measurement system, and (6) the learning program’s fit
with the organization’s business models.
Of course, such fully comprehensive audits are costly and time-consuming. More importantly, depending on your goals for the audit, it is very unlikely that you will need to be so comprehensive. Indeed, I have never been asked to conduct a learning audit comprising all the assessments that might be done.
The advantage of research benchmarking on the input side of the ledger is that it focuses on design elements that are modifiable and can be targeted for improvement. The advantage of doing an evaluation study on the outputs—if these evaluations are well done—is that we are able to examine the learning program's actual
results. Ideally, an audit that examines both inputs and outputs will connect the dots
between the output results and the input design elements. For example, the audit might
point out that the after-training decline in decision-making competence (as measured at
the one-month mark) is probably due to the learning program’s lack of support for
remembering—or, more specifically, that the program didn’t provide enough realistic
decision-making practice spaced over time. Obviously, there are many other ways to
connect the dots.
Learning audits will be more successful if they are conducted using a proven process.
Some variant of the following process is likely to enable success:
1. Solidify Sponsorship
Find a sufficiently powerful decision maker to sponsor the effort. Without a
sufficiently powerful person—a person who can make decisions, rally political
support, provide time and resources—no change effort is likely to be successful.
Learning audits are change efforts, or they are not worth the investment!
2. Enlist Stakeholders
Enlist a sufficient number of key stakeholders to agree to, or at least accept, the audit process. While it is desirable to get these stakeholders' full support rather than just their acquiescence, a well-conducted audit review meeting can subsequently win their support.
While these level-setting events may seem superfluous, they are absolutely critical to ensure that everyone is on the same page. Although they do take extra time, they enable much more productive conversations when the audit findings and recommendations are presented and when solution brainstorming is conducted.
Many years ago, I tried to share audit findings without first providing these
educational sessions. The results were disastrous! My clients couldn’t
understand where my recommendations were coming from. They disagreed
with each other about how learning worked. Our discussions got hijacked by concerns over minimally important learning factors when we should have been focusing on the factors that mattered most. In short, my early audits failed
to communicate with clarity. They created situations where my audit
participants were all over the place in their conclusions, making solution
brainstorming and prioritization a most difficult endeavor.
These educational sessions are critical for three primary reasons. First, the
research-based standards you use may not be fully understood by all your
stakeholders. As we’ve already seen, some of the recent learning research is not
intuitive. Also, you may have stakeholders who are not immersed in learning as
their profession. Second, because the learning field does not always have a clear
and consistent body of knowledge, practitioners in the field often approach
learning from widely disparate perspectives. By providing educational sessions,
you can heighten attention toward the standards you are using. Finally, some
learning factors are more important than others. It helps to get everyone
focused on factors that are of the highest importance rather than factors that
are less important.
I hope I’m making a strong case here for the importance of these educational
sessions. Let me add one final thing to help persuade you. These educational
sessions are so important that I will NOT do a learning audit if a client doesn’t
agree to include them!
Now that you’re persuaded, there are a few other things you absolutely must
know about how to prepare for these educational events. First, you have to give
people more than just recipes or rules. You have to provide them with a deep
understanding of human learning. You want them to be able to think for
themselves about learning—to flexibly and creatively apply learning knowledge
in making instructional-design decisions. Second, to support the first goal, you've got to give them time to think through the factors you present—preferably by having them work through realistic instructional-design decisions, reflect on their own learning approaches, and learn from a breadth of different perspectives.
B. Individual Brainstorming
Begin by having each participant individually write down a list of
solution ideas. Alternatively, you (as the auditor) can prepopulate
specific solution ideas from your list of recommendations and ask the
participants to add to the list. Also, you can offer a set of categories and
ask the participants to generate solution ideas within each of those
categories. The key here is that you may need to support the
participants in thinking through all the possibilities that might be
available. You want to avoid having them go down one or two paths and
forget other important solution opportunities. On the other hand, you
don’t want to be so structured that your participants aren’t asked to
make substantive contributions.
C. Group Brainstorming
Only after individual brainstorming should you initiate group
brainstorming. Research shows that individual brainstorming generates
more creative ideas and ideas along more varied dimensions—so it’s
critical to do individual brainstorming first.
D. Categorizing
Because you’re going to get redundant ideas and ideas that overlap, it’s
helpful to get the participants to group the ideas into conceptual
categories. This can easily be accomplished by placing all the ideas on
individual stickies, placing all the stickies on a wall, and having
participants as a group move the stickies around to form clusters of
ideas. It can be helpful toward the end—when the clusters have been
more-or-less finalized—to add labels and connecting lines to the various
clusters to help make sense of them.
E. Reflection
It’s really helpful at this point to have the group sit down again and
spend some time reflecting on their initial clustering scheme.
There are many ways to gather data for learning audits. Here is a brief list to give you a
sense of the variety of methods available to you.
This list could go on and on, but a longer list makes choosing more difficult—not easier. The key is deciding where the most value comes from. This will depend partly on the goals you
have for your learning, your constraints, and a host of other factors—but there are
some generalizable truths here.
Research Benchmarking
The most important thing you can do—in terms of data gathering—is to research-
benchmark your learning program. Research benchmarking is at the top end of potency
because (1) it targets learning-design issues that really make a difference in learning
effectiveness; (2) it targets learning-design issues that are under our control as learning professionals; and (3) it is based on research-validated learning factors.
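To make research benchmarking concrete, here is a minimal sketch, in Python, of how an auditor might record factor-by-factor ratings and roll them up into a summary. The factor names, weights, and ratings below are hypothetical placeholders for illustration only; they are not the Decisive Dozen or any published rubric, so substitute your own research-based standards.

    # Minimal sketch of a research-benchmarking rubric (hypothetical factors and weights).
    # Each factor is rated 0-4 against research-based standards; a weighted average
    # produces a rough summary score and flags the weakest design elements.

    RUBRIC = {
        # factor name: (weight, rating 0-4 assigned by the auditor)
        "Realistic decision-making practice": (3, 1),
        "Spacing of practice over time":      (3, 0),
        "Feedback on learner responses":      (2, 2),
        "Alignment with on-the-job tasks":    (3, 3),
        "Support for later remembering":      (2, 1),
    }

    def summarize(rubric):
        """Return a weighted 0-4 summary score and the lowest-rated factors."""
        total_weight = sum(weight for weight, _ in rubric.values())
        weighted = sum(weight * rating for weight, rating in rubric.values())
        score = weighted / total_weight
        weakest = sorted(rubric.items(), key=lambda item: item[1][1])[:3]
        return score, [name for name, _ in weakest]

    if __name__ == "__main__":
        score, weakest = summarize(RUBRIC)
        print(f"Overall benchmark score: {score:.1f} / 4")
        print("Priority improvement targets:", ", ".join(weakest))

However you record the ratings, the point is the same: each score points to a modifiable design element that can be prioritized for improvement.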
One statement like this may not register but, after about five of them, organizational decision-makers sit up and pay attention.
Of course, given the subjective nature of these responses, they must be augmented with
other data and viewed based on research-based perspectives on learning.
One of the key goals in learning is to help learners associate situational cues with
desired actions.
Situation – Action
Here are some examples. We want our supervisor learners to know which situations
require them to seek input from their direct reports. We want our salesperson learners
to know which conversational response to make to each potential objection they hear
from their customers. We want our statistics students to know which statistic to use
given the research question at hand. If our learning designs don’t provide learners
practice in situation-action pairs, then our learning is not supporting performance.
If we’re auditing learning, one of the strongest ways to determine whether the learning
is providing sufficient practice is to measure our learners’ ability to make scenario-based
decisions. In our audit work, we can present learners with various situations and ask
them what actions should be taken. The key, of course, is that our scenario-based
questions have to be well conceived to provide the most critical realistic cues, and they
have to be set in backgrounds that are appropriate to learners’ future performance
situations. How to design well-crafted scenario-based questions is beyond the scope of
this report—but, suffice it to say that, while they are easier to develop than full-blown
simulations, designing scenario-based questions still requires substantial know-how.
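For illustration only, here is a minimal sketch of how a scenario-based decision question might be represented and scored, with partial credit for plausible but weaker choices. The scenario, options, and credit values are invented for this example and are not drawn from any actual audit.

    # Illustrative sketch: representing and scoring scenario-based decision questions.
    # Options carry partial credit so that plausible-but-weaker decisions can be
    # distinguished from both best actions and clearly poor ones (values are invented).

    from dataclasses import dataclass

    @dataclass
    class ScenarioItem:
        situation: str                 # realistic cues the learner must read
        options: dict[str, float]      # action -> credit (1.0 best, 0.5 acceptable, 0.0 poor)

    ITEM = ScenarioItem(
        situation=("A long-time customer objects that your proposal costs more "
                   "than a competitor's. What do you do first?"),
        options={
            "Ask questions to understand what is driving the concern": 1.0,
            "Immediately offer a discount to keep the business":       0.5,
            "Restate the product's features in more detail":           0.0,
        },
    )

    def score_responses(item: ScenarioItem, responses: list[str]) -> float:
        """Average credit earned across a group of learner responses."""
        return sum(item.options.get(choice, 0.0) for choice in responses) / len(responses)

    if __name__ == "__main__":
        learner_choices = [
            "Ask questions to understand what is driving the concern",
            "Immediately offer a discount to keep the business",
        ]
        print(f"Average decision score: {score_responses(ITEM, learner_choices):.2f}")

The design work, of course, lies in writing situations and options that capture the real decision contingencies of the job, not in the scoring mechanics.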
Data Triangulation
Because all data gathering is probabilistic—that is, it is only approximating the reality on
the ground—it is best to use more than one source of data in a learning audit. While
research benchmarking can stand alone as an audit method, none of the other data
gathering approaches should be utilized on their own. Even research benchmarking
should be augmented when time and resources allow.
Having multiple data sources not only enables you to corroborate your findings; it also
helps you make a stronger case to your stakeholders.
Given that the goal was performance improvement as supported by the whole learning
function, this learning audit looked not only at elearning programs, but also at the
whole learning ecosystem in the retail stores. eLearning programs were audited based
on a rubric of learning-research considerations. A classroom-based course for assistant
managers was observed over several days. Structured interviews and focus groups were conducted with different learner groups. Beyond learners, other key stakeholder groups were interviewed, including senior leaders, key managers, and learning professionals. Employees were observed in their jobs, and some were job shadowed. Finally, company communications were evaluated as learning tools,
including such artifacts as the company magazine, bulletin boards, and the company’s
website portal.
Among many findings, one of the most notable was that employees reported that they learned from the people they worked with the most. This finding had significant implications. First, it showed that the way people interact with each other can significantly impact learning—and that more attention might be paid to helping people learn from each other. Second, although the organization had been sending itinerant trainers around to the stores, this strategy was not working as well as it might because of the relatively few touch points between the trainers and employees.
The audit also found that on-the-job coaching was not always effective. Employees
tended to overuse “telling” as a coaching technique and failed to engage their coachees
in practice. As the research makes clear, real-world practice and feedback are especially
beneficial in supporting comprehension and remembering.
The audit also gave the organization a wake-up call on their elearning. Whereas previously they had accepted low elearning ratings as routine, getting the research benchmarking
results and witnessing actual comments from disgruntled learners made the learning
team realize that significant changes were needed. Indeed, when the head of elearning
assisted in the focus groups, she was completely taken aback by the harsh criticism the
elearning received. To her credit—and the organization’s—this feedback propelled the
team to rethink its elearning designs.
The Foundation’s previous practice was to get feedback from faculty members on the
program. They often did this by inviting faculty members to the Foundation’s offices for
meetings and by keeping in close contact with faculty members who were teaching with
their materials. They also surveyed faculty members periodically.
The learning audit was aimed at getting an overall evaluation of the new program. To
keep it consistent with the Foundation’s past learning-measurement practices, faculty
members using the new elearning/classroom hybrid course were surveyed using the
same questions asked of those who taught with the classroom-only materials. Faculty
members were also interviewed.
In addition to seeking faculty input, many other data gathering efforts were made. The
elearning and classroom portions of the course were research-benchmarked. Scenario-
based questions on the learning program’s key concepts were validated with about ten
subject-matter experts and then deployed using cloned items on pretests, immediate
after-learning tests, and delayed after-learning tests. Finally, student business plans
were evaluated by expert business-plan reviewers.
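As a rough illustration of how such pretest, immediate posttest, and delayed posttest scores can be compared, here is a short sketch. The numbers are invented; a real evaluation would also attend to item cloning, sample sizes, and statistical significance.

    # Illustrative only: comparing pretest, immediate posttest, and delayed posttest
    # means to estimate learning gain and later forgetting (all numbers invented).

    def mean(scores):
        return sum(scores) / len(scores)

    pretest   = [0.42, 0.38, 0.51, 0.45]   # proportion correct per learner (hypothetical)
    immediate = [0.47, 0.40, 0.55, 0.44]
    delayed   = [0.41, 0.36, 0.49, 0.43]   # several weeks later

    gain      = mean(immediate) - mean(pretest)   # did the program teach anything?
    retention = mean(delayed) - mean(immediate)   # how much was lost over time?

    print(f"Pre -> immediate gain: {gain:+.2f}")
    print(f"Immediate -> delayed:  {retention:+.2f}")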
Among many findings, one of the most notable was that, while faculty members rated
the course materials very highly—consistent with the Foundation’s previous evaluations
of their classroom programs—learners rated the learning as mediocre. Even more
problematic was that learners did not improve from pretest to posttest and their
business plans were poorly rated.
The research benchmarking review found that the program did not sufficiently utilize
learning methods focused on realistic decision-making, tending to focus on providing
learners with a large amount of knowledge dissociated from business decision making.
The learning auditor engaged with the MOOC as a learner would, augmenting this by doing activities more than once and attempting both usual and unusual actions, capturing screenshots and adding commentary along the way. A full audit process was not utilized. Instead, the auditor was tasked with reviewing the MOOC based on his background knowledge, which included expertise in both learning research and elearning. After generating an initial review, the auditor used the Decisive Dozen to double-check whether he had missed any concerns. He then added some additional insights, and the audit review, an annotated PowerPoint deck with screenshots and commentary, was provided to the stakeholders.
The audit revealed several things that could be improved based on the pilot. First, there were some minor issues of the sort inherent in any pilot program: navigation that didn't work quite right, feedback attached to the wrong response, confusing content, and so on. Second, a few instructional-design elements were missing or underutilized, including job aids, additional practice, and encouragement to try the new skills back on the job.
On the other hand, the program was led by a brilliant, highly-credible instructor—and,
most importantly, someone who understood the situation-action contingencies within
her area of expertise. She nicely used her situation-action knowledge to highlight best
practices and common mistakes to avoid. This “next-generation” MOOC went beyond
most MOOCs by presenting a very rich video-based case—with professional production
and acting. The case asked learners to make decisions, providing them with realistic
practice. The MOOC also engaged learners with well-designed social-media elements:
for example, having learners respond with specific insights about decisions in the case.
Classroom Training
A large government agency was concerned that its subject-matter experts were doing
an inadequate job training their field professionals. An outside auditor was called in to
analyze one of its courses. The auditor sat in the four-day course as a learner, examined
the learning materials, interviewed instructors, and spoke with students during breaks
and after the course.
The course was research-benchmarked against the Decisive Dozen. It was also
benchmarked against notions found in the Situation-Based Instructional-Design
Approach. Finally, the course was evaluated based on the Full-Source Learning-
Evaluation approach, which focuses on the learning measurement methods used to
provide feedback for course improvement. A written report was developed based on
the audit findings.
Among many other findings, the audit found that the course instructors were highly
credible and well organized. Unfortunately, too much material was presented and not
enough realistic practice was included in the course. While the instructors attempted to
increase “interactivity” through periodic quizzes focused on knowledge from the course,
these quizzes did not focus on real-world decision-making. The quizzes did have their
intended effect: they encouraged learners to pay attention to individual knowledge
morsels, with some learners even studying their notes overnight to do well on the
quizzes, which had been set up as a competition between teams. However, while this did reinforce knowledge acquisition, it hindered learners from thinking about overarching themes and situational decision making.
To get your organization up to speed on how to conduct learning audits, the following
process will be helpful.
1. Gather your key learning professionals together to discuss the audit idea.
2. Have them read this document and discuss thoroughly, raising both benefits
and obstacles inherent in learning auditing.
3. Get general agreement among your key stakeholders to explore the audit idea
further.
4. Have one or more of your organization’s learning programs audited by a trusted
outside learning auditor.
5. Discuss and decide whether to create an in-house learning-audit capability.
6. Select your initial team of in-house auditors.
7. Get your team trained in auditing, and particularly in research benchmarking.
8. Select one or two courses to pilot your team’s new skills. Utilize a trusted
outside learning auditor to coach your team through their first audit. Get
feedback and develop lessons learned.
9. Ramp up your new team’s capability and develop an auditing schedule for your
organization’s learning programs.
10. Periodically—perhaps every other year—have your in-house team audit a
learning program in parallel with an outside learning auditor. This will ensure
your team is continuing to produce the highest-quality audits and it will give any
new members of your audit team feedback from an expert auditor. Consider
using multiple outside learning auditors to get varied perspectives.
Final Thoughts
We humans are imperfect. Even when we do our best, we introduce flaws into our
work. When our work is complex, we need others to help us see what we can’t see.
This document was edited by a professional copyeditor, because it’s good to get an
expert review of one’s work.
For over a decade and a half, Work-Learning Research has been creating almost-
perfect research-to-practice reports and sharing research-based wisdom through
keynotes, conference presentations, workshops, blog posts, articles, and more—very
often offering free research-based information to the learning field.
You can make it possible for us to continue our work by hiring us and by letting others
know about the great work we do.
How to be in touch:
Phone: 888-579-9814 (United States)
Email: info@work-learning.com
Sign Up for News: http://www.work-learning.com/sign-up.html
Main Blog: http://willatworklearning.com
Twitter: @WillWorkLearn
LinkedIn: http://www.linkedin.com/in/willthalheimer
Websites:
Work-Learning Research
LearningAudit.com
SubscriptionLearning.com
AudienceResponseLearning.org
Willsbook.net
SmileSheets.com (in Development)
In 1998, Dr. Thalheimer founded Work-Learning Research to bridge the gap between
research and practice, to compile research on learning, and to disseminate research
findings to help chief learning officers, instructional designers, trainers, elearning
developers, performance consultants, and learning executives build more effective
learning-and-performance interventions and environments. He is one of the authors
of the Serious eLearning Manifesto.
Will holds a BA from the Pennsylvania State University, an MBA from Drexel
University, and a PhD in Educational Psychology: Human Learning and Cognition
from Columbia University.
Research
Alliger, G. M., Tannenbaum, S. I., Bennett, W. Jr., Traver, H., & Shotland, A. (1997). A
meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341-
358.
Brown, P. C., Roediger, H. L. III, & McDaniel, M. A. (2014). Make it stick: The science of
successful learning. Cambridge, MA: Belknap Press of Harvard University Press.
Carpenter, S. K., Cepeda, N. J., Rohrer, D., Kang, S. H. K., & Pashler, H. (2012). Using
spacing to enhance diverse forms of learning: Review of recent research and
implications for instruction. Educational Psychology Review, 24(3), 369-378.
Kirschner, P. A., & van Merriënboer, J. J. G. (2013). Do learners really know best? Urban
legends in education. Educational Psychologist, 48(3), 169-183.
Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: Concepts
and evidence. Psychological Science in the Public Interest, 9, 105-119.
Salas, E., Tannenbaum, S. I., Kraiger, K., & Smith-Jentsch, K. A. (2012). The science of
training and development in organizations: What matters in practice. Psychological
Science in the Public Interest, 13(2), 74-101.
Sitzmann, T., Brown, K. G., Casper, W. J., Ely, K., & Zimmerman, R. D. (2008). A review
and meta-analysis of the nomological network of trainee reactions. Journal of Applied
Psychology, 93, 280-295.