Unpacking Kirkpatrick: Data-Driven Decision Making in L&D
Inevitably, someone in a leadership position will ask to see how the investment into employee development is
paying off, and unless you’ve taken an intentional approach to measuring the outcomes of your training efforts,
you’ll struggle to paint an accurate portrait of the impact your program has on your organization!
If your organization is acquired or merges with another company, you will absolutely need real statistics that prove
the value of your program. Some stats, like training usage, indicate strong program health, but they aren’t
evidence of meaningful business impact.
When you can demonstrate to leadership that your training efforts have reduced turnover by 13% and saved the
company $40,000 in recruiting costs, your training program can credibly claim to contribute to true business
results.
Fortunately, a guy named Donald Kirkpatrick came along and modeled a solution to this dilemma: a training
measurement framework called the Kirkpatrick Model.
This model contains four levels: reaction, learning, behavior, and results.
We’re going to go into each of those levels in more depth, but at a high level, the model asks you to
measure how learners reacted to their training, how their attitudes or skills changed, whether that change
stuck long-term, and whether that change led to a desired business outcome.
[Figure: the four levels of the Kirkpatrick Model (Reaction, Learning, Behavior, Results)]
Behavior: Measure how much your training has influenced the behavior of the participants and evaluate how they apply this information on the job.
Results: Measure and analyze the impact your training has had at the business level, and be sure to tie it to the individual program.
Keeping statistics that prove the value of your training program should become a part of your training process, and following the Kirkpatrick Model makes
that possible!
Level 4 asks trainers to measure and analyze the impact that your training has on your business’ desired
outcomes. Defining those desired outcomes must be step one before you assign any training.
We’ll use a sales rep as an example. Many people in this position spend hours on end dialing phone
numbers, talking to prospects, and qualifying them.
A few of your sales reps are struggling to meet their goals, and leadership recognizes that your training
program and great content can develop their skills and help them meet quotas.
Unfortunately, a one-size-fits-all approach will rarely work here. Ask questions and identify why they’re
underperforming. If you’re not entirely sure what the necessary skills are, work with their managers to
identify skill gaps.
Find out if they need help finding the motivation to make more calls, or if they struggle to gain a prospect’s
attention quickly. Whatever they need, identify a specific set of skills that will lead
to a desired business outcome.
If you train for those skills and see that they are now able to make more calls per day, and that has
improved your revenue growth, then you’ve succeeded! In the same vein, if there’s a discrepancy
between the actual result and your desired outcome, the Kirkpatrick Model can help you understand
why!
Kirkpatrick should become a part of your training process: as you identify training needs and form a plan
to fill those needs, build each of the four levels into that plan.
For instance, level 1 calls for program managers to measure a learner’s reaction. As you decide on a
delivery method, you should also decide how you’ll measure reaction.
Kirkpatrick is part of building a learning strategy, so it requires pre-planning. Before you create your next
training event, read through levels 1-4, and you'll be in a strong position to record meaningful statistics for
your program!
Recent research from Training Industry outlines a few facts that training program managers need to keep in mind as they deliver training:
Training is far more effective when it’s delivered according to a learner’s preference, so understanding how your learners want to learn will make your
training efforts significantly more effective.
Furthermore, because the training topic affects which modality employees prefer, seeking feedback is always necessary, even if you believe
you already understand a learner’s preference.
For example, your customer support team may prefer online learning when training on phone skills, but that doesn’t mean they’ll
still prefer online learning when it’s time to train on a new product launch.
When you do inevitably fail to reach a desired outcome, returning to level 1 and evaluating whether training was well received will often illuminate
where the breakdown occurred.
We know that the way we deliver training, or the modality, matters, and that makes it important to measure. The question now is “how?”
Make sure that the questions you ask will help you determine whether training was in tune with your learners’ preferences.
When it comes to surveys, the more specific feedback you get, the better. And make sure it’s written down!
BAD
Learners seemed to enjoy today’s training. Most feedback was positive.
BETTER
Learners seemed to enjoy online learning more than the classroom training we conducted as a follow up. One learner
said “I liked having the video option because I can go back and watch as many times as I need.”
BEST
Today’s classroom training on emotional intelligence was a follow-up to our week-long video training process. We asked
employees whether they thought that our blended approach was an improvement on our typical process, and 82% (41/50)
said that it was. We had some negative feedback: many agreed that they didn’t like starting on a Thursday, because the
weekend interrupted the learning process. Starting on a Monday and ending on a Thursday may be a better approach in the
future. All the surveys said that learners did appreciate the shorter content spread over many days. We attribute the
preparation and engaging short-form content to a very engaging classroom session, where discussion flowed very naturally
and lots of questions were asked.
In other words, the more specific you are, the better positioned you are to identify why
training was successful or unsuccessful.
Level 1 is important because the most effective training needs to be delivered the right way, and when your training is unsuccessful, the
feedback you receive on level 1 of the Kirkpatrick Model will help you understand why.
If you showed your learners Disney’s The Lion King, they may tell you they enjoyed their time.
But if your goal is to increase productivity on a factory line, the Disney classic will prove useless
to achieving results. Measuring learning follows immediately after the event, and level 2 will help
you evaluate how much your learners actually learned.
If you’re investing time into creating your own training, or money into third-party content, it’s
vital to the success of your program that what you offer your learners actually works.
Testing not only measures how effective training is, it also reinforces training, because learners are asked to recall what they’ve learned. Research shows
that testing after a training event leads to higher retention over time, so test early and test often!
Testing also provides usable data about the effectiveness of your program. Asking ten-to-twenty questions that measure a learner’s recall gives you an
easy-to-calculate percentage of how much information your learners are retaining.
At BizLibrary, our videos come with quizzes that require 80% accuracy to complete. That’s a great benchmark to look for in your training. If your learners can
average 80% or better on tests, it’s great evidence that your training program is doing a good job of teaching your learners.
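To make that benchmark concrete, here is a minimal sketch of averaging a cohort’s quiz results. The scores are made up for illustration; the 80% threshold is the only figure taken from the text.

```python
# Hypothetical quiz scores (percent correct) from one training cohort.
scores = [85, 90, 70, 95, 80, 75, 88, 92, 78, 84]

# Cohort average and the share of learners at or above the 80% benchmark.
average = sum(scores) / len(scores)
pass_rate = sum(1 for s in scores if s >= 80) / len(scores)

print(f"Cohort average: {average:.1f}%")          # 83.7%
print(f"Share at or above 80%: {pass_rate:.0%}")  # 70%
```

A cohort average at or above 80% suggests the training is teaching effectively; a low pass rate points you back to the content or delivery.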
Poor performance may be disappointing, but it’s constructive feedback indicating your training needs improvement. If you believe this is the case, be sure
to view our free ebook “Off-the-Shelf Content: Your Secret to Optimizing Employee Training.”
Testing isn’t the only way to measure the effectiveness of training. Some
professionals advocate hands-on assignments, and that’s a great way to
evaluate how a learner might perform following a training event.
Regardless of how you measure the effectiveness of a given training event, make sure your data
is reliable and recorded somewhere.
Measuring behavior change requires you to track behaviors and collect data before training, then compare after.
Let’s use the example of a sales team. Leadership wants to see your company’s revenue grow, and after talking to sales reps and
sales managers, you and the managers believe that the biggest obstacle to your sales growth is that sales reps aren’t asking
for the sale.
You develop and execute a strategy, and learners both respond positively and test well.
At this point, you can identify whether a behavior change has occurred. If you can see that sales reps are actively asking for the
sale, you’ve created meaningful behavior change.
Just because your behavior change is what you set out to accomplish doesn’t mean that your training led to your desired
business outcome.
If you’re closing more deals, but revenue growth isn’t occurring, it may indicate that you misidentified the cause of your
stagnant revenue. Perhaps low client retention is the cause of your slowed revenue growth, and closing more deals hasn’t
closed the gap.
With data, it’s easy to be misled, or to focus on the wrong data set, so consider several attributions when analyzing data. If you
haven’t scrutinized your final data, someone else will, and you will be blindsided.
At this point, you may be noticing that even before we calculate the final-outcome statistics behind
training, following the Kirkpatrick Model will almost instantly improve the way you deliver training.
By breaking the process into stages, you get a clear picture of which stages are working and which
need improvement.
Let’s use employee turnover as an example. After talking to leadership, management, and employees, and after
analyzing exit interviews, you believe that your high turnover can be attributed to struggling middle
management.
Specifically, data that you’ve analyzed lead you to believe that your managers don’t always provide meaningful
feedback, and sometimes fail to delegate effectively, creating stress during projects.
After you deliver the management training and survey the team, employees respond that their managers now provide meaningful feedback, and project
management is a much better experience for all employees.
This may seem like enough data to report back to leadership, but leaders don’t care if managers are providing meaningful
feedback unless that meaningful feedback leads to a decrease in turnover, which is what we set out to reduce in the first
place.
Telling your leaders that turnover was costing your organization $80,000 every quarter, and that you were able to cut that cost
down by $25,000 each quarter through training, is convincing, meaningful, and measurable with the Kirkpatrick Model!
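The arithmetic behind a claim like that is simple enough to script as a sanity check. This sketch uses only the figures from the example above ($80,000 quarterly turnover cost, $25,000 quarterly savings):

```python
# Figures from the turnover example above (assumed, for illustration).
cost_before = 80_000       # quarterly cost of turnover before training
quarterly_savings = 25_000 # quarterly reduction attributed to training

annual_savings = quarterly_savings * 4
reduction_pct = quarterly_savings / cost_before

print(f"Annual savings: ${annual_savings:,}")            # $100,000
print(f"Quarterly cost reduction: {reduction_pct:.2%}")  # 31.25%
```

Framed this way, the same result reads as either $100,000 saved per year or a 31% cut in turnover cost; pick the framing your leadership responds to.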
When there is a discrepancy between your desired outcomes and your achieved outcomes, work backwards to discover what that might mean.
Ultimately, the best stat to present to leadership is ROI. We know you know what it is, but
just so we’re all on the same page, we’ll write it as a formula here:

ROI = (Return − Investment) / Investment, i.e., (Benefit − Cost) / Cost
In this scenario, 50 more water bottles every hour increases hourly profits in your factory by $8.00/hour. If you work 8-hour days, your
factory now generates an extra $64/day. Standardized to a year of 52 five-day weeks, your company has experienced a
return of $16,640.
There was also a cost to training: the cost of building it plus the cost of your employees’ time.
Let’s say you spent $2,000 in time and resources building training, and over ten hours of training time your factory produced 50,000 fewer water bottles.
If each water bottle represents $0.16 in profit, those ten hours cost your factory about $8,000.
Now we have both the return, $16,640, and the investment, $10,000. We can now plug those numbers into the ROI equation!
In this scenario, you’ve created a 66.4% return! You can now confidently claim that every dollar invested in this training returned
about $1.66 that year.
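The same calculation, scripted. The figures are the ones from the water-bottle example; the `roi` helper is just a name for the formula above, not an established API:

```python
def roi(return_, investment):
    """Net return on investment as a fraction: (benefit - cost) / cost."""
    return (return_ - investment) / investment

# Figures from the water-bottle example.
annual_return = 16_640  # extra profit: $64/day * 260 working days
investment = 10_000     # $2,000 to build training + $8,000 in lost production

print(f"ROI: {roi(annual_return, investment):.1%}")  # 66.4%
```

Keeping the calculation in a small script like this makes it easy to re-run each quarter as the return and cost figures update.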
START: Did you achieve your desired outcome?
- YES: Great work! Time to calculate your ROI.
- NO: Did your employees’ behavior change?
  - YES: If behavior changed but the desired outcome did not, your hypothesis failed! That’s okay, make a new one!
  - NO: Did effectiveness measurements like testing indicate that training was effective?
    - YES: Was the content you trained on relevant to the behavior change you want to see?
      - YES: Was the feedback you gathered positive or mostly positive?
        - YES: Make sure that your training is reinforced through boosts, testing, and repetition.
        - NO: Work to seek feedback and determine learner preferences, and build a plan around finding training that works for your workforce!
      - NO: Re-evaluate your training, find out what worked and what didn’t, and try again!
    - NO: Re-evaluate your training, find out what worked and what didn’t, and try again!
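If it helps to operationalize that troubleshooting flow, here is one way to sketch it as a function. The parameter names and advice strings paraphrase the flowchart and are purely illustrative:

```python
# A sketch of the troubleshooting flow as a function. Each boolean answers
# one question in the flowchart, in order.
def diagnose(outcome_achieved, behavior_changed, tests_passed,
             content_relevant, feedback_positive):
    if outcome_achieved:
        return "Great work! Time to calculate your ROI."
    if behavior_changed:
        return ("Behavior changed but the outcome didn't: "
                "your hypothesis failed. Make a new one!")
    if not tests_passed or not content_relevant:
        return "Re-evaluate your training, find out what worked, and try again."
    if feedback_positive:
        return "Reinforce training through boosts, testing, and repetition."
    return "Seek feedback, determine learner preferences, and rebuild your plan."

# Example: outcome missed, but behavior did change.
print(diagnose(False, True, False, False, False))
```

Walking the same questions in the same order every time keeps post-mortems consistent across programs.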
The Kirkpatrick Model is easy to use, provides clear and reliable results, and can transform the conversation around your training program!
Recommended Resources
Using data to transform your program isn’t easy, and a lot of legwork needs to be done before your program can mature to this stage.
If you’re not quite there, but excited to get there, check out these resources to help you accelerate the growth of your training program!
BizLibrary is a leading provider of online learning for growing organizations. Our award-winning microlearning video library engages employees of all levels,
and our learning technology platform is a progressive catalyst for achievement. Partnered with our expert Client Success and Technical Support teams,
clients are empowered to solve business challenges and impact change within their organizations. To learn more, visit www.bizlibrary.com.