
Unpacking Kirkpatrick

DATA-DRIVEN DECISION MAKING IN L&D


CONTENTS
Unpacking Kirkpatrick: Data-Driven Decision Making in L&D
The Kirkpatrick Model
Working Backwards
When to Use Kirkpatrick
Level 1: Reaction
Level 2: Measuring Learning
Level 3: Behavior Change
Level 4: Measuring the Results
Putting Kirkpatrick to Practice
Calculating ROI
Outcome Flowchart
Recommended Resources

Unpacking Kirkpatrick: Data-Driven Decision Making in L&D
When it comes to building an employee development plan, a lot of time and attention goes into the prep work:
program managers identify skill gaps, develop a training strategy, deliver training, reinforce training, and then the
cycle starts again.

Inevitably, someone in a leadership position will ask to see how the investment into employee development is
paying off, and unless you’ve taken an intentional approach to measuring the outcomes of your training efforts,
you’ll struggle to paint an accurate portrait of the impact your program has on your organization!

If your organization is acquired or merges with another company, you will absolutely need real statistics that prove
the value of your program. Some stats, like training usage, indicate strong program health, but that doesn’t mean
those statistics are evidence of meaningful business impact.

When you can demonstrate to leadership that your training efforts have reduced turnover by 13% and saved the
company $40,000 in recruiting costs, your training program can credibly claim to be contributing to true business
results.



Getting to those specific numbers can be challenging. Training outcomes are often nebulous, which means hard
numbers are rarely easy to calculate.

Fortunately, Donald Kirkpatrick developed a solution to this dilemma: a training evaluation framework called the
Kirkpatrick Model.

This model contains four levels: reaction, learning, behavior, and results.

We’re going to go into each of those levels in more depth, but at a high level, the model essentially asks you to
measure how the learners reacted to their training, how much their attitudes or skills changed, whether that change
was long-term, and whether that change led to a desired outcome.



The Kirkpatrick Model

REACTION: Measure your participants’ initial reaction to gain an understanding of the training program and valuable insights into material quality, the educator, and more.

LEARNING: Measure how much information was effectively absorbed during the training and map it to the program or individual learning objectives.

BEHAVIOR: Measure how much your training has influenced the behavior of the participants and evaluate how they apply this information on the job.

RESULTS: Measure and analyze the impact your training has had at the business level, and be sure to tie it to the individual program.

Keeping statistics that prove the value of your training program should become a part of your training process, and following the Kirkpatrick Model makes
that possible!



Working Backwards
Training should never be done for the sake of training; rather training should always be tied to a
business goal. For this reason, when it comes to the Kirkpatrick Model, it’s recommended to start at the
end.

Level 4 asks trainers to measure and analyze the impact that your training has on your business’ desired
outcomes. Defining those desired outcomes must be step one before you assign any training.

We’ll use a sales rep as an example. Many people in this position spend hours on end dialing phone
numbers, talking to prospects, and qualifying them.

A few of your sales reps are struggling to meet their goals, and leadership recognizes that your training
program and great content can develop their skills and help them meet quotas.

Unfortunately, a one-size-fits-all approach will rarely work here. Ask questions and identify why they’re
underperforming. If you’re not entirely sure what the necessary skills are, work with their managers to
identify skill gaps.

Find out if they need help finding motivation to make more calls, or if they struggle to capture a prospect’s
attention quickly. Whatever they need, identify a specific set of skills that will lead to a desired business
outcome.



If you identify that your rep is struggling specifically with making enough dials a day, find out what is
hindering them. Maybe they struggle to end conversations that aren’t productive. Perhaps they struggle to
enter information into your customer tracking software and need training on that specific software.

If you train for those skills and see that they are now able to make more calls per day, and that has
improved your revenue growth, then you’ve succeeded! In the same vein, if there’s a discrepancy
between the actual result and your desired outcome, the Kirkpatrick Model can help you understand
why!

When to Use Kirkpatrick


This ebook is not a workbook. If you’re reading this in hopes that it will help you evaluate your last training
event, it probably won’t help unless you’ve already worked through levels 1-3.

Kirkpatrick should become a part of your training process. As you identify training needs and form a plan
to fill those needs, each step should become a part of your process.

For instance, level 1 calls for program managers to measure a learner’s reaction. As you decide how training
will be delivered, you should also decide how you’ll measure reaction.

Kirkpatrick is part of building a learning strategy, so it requires pre-planning. Before you create your next
training event, read through levels 1-4, and you'll be in a strong position to record meaningful statistics for
your program!



Level 1: Reaction
The first level of the Kirkpatrick Evaluation Model should be planned before you deliver training, but it takes effect the moment training has been delivered.
It’s important to capture your learners’ reactions, and that’s where level 1 begins.

Recent research from Training Industry outlines a few facts that training program managers need to keep in mind as they deliver training:

• Different employees require different types of training
• The training topic affects what modality employees prefer
• Using multiple modalities (blended learning) makes you more likely to meet learner preferences
• Training delivered through a preferred modality is more effective

Understanding how learners want to learn will make your training efforts significantly more effective: training delivered according to a learner’s preference
simply works better.

Furthermore, knowing that training topics affect what modality the employees prefer means that seeking feedback is always necessary, even if you believe
you understand a learner’s preference.

That means that when you train your customer support team on phone skills, they may prefer online learning, but that doesn’t mean they’ll still prefer
online learning when it’s time for training on a new product launch.

When you do inevitably fail to reach a desired outcome, returning to level 1 and evaluating whether training was well received will almost always illuminate
why there was a breakdown.

We know that the way we deliver training, or the modality, matters, and that makes it important to measure. The question now is “how?”



Here is where we begin data collection, starting with the oldest trick in the data-lover’s book: the survey.

Make sure that questions you ask will help you determine whether training was in-tune with your learners’ preferences.

Some questions that we recommend discovering answers to are:

• Was the training engaging?
• Did the training teach you something new?
• Did you like the style of this training?
• How would you change this training for future learners?
• Did you like the method of this training?
• Are there any resources that you think would help reinforce this training?

When it comes to surveys, the more specific feedback you get, the better. And make sure it’s written down!



Here are some examples of feedback, from bad, to better, to best:

BAD
Learners seemed to enjoy today’s training. Most feedback was positive.

BETTER
Learners seemed to enjoy online learning more than the classroom training we conducted as a follow up. One learner
said “I liked having the video option because I can go back and watch as many times as I need.”

BEST
Today’s classroom training on emotional intelligence was a follow-up to our week-long video training process. We asked
employees whether they thought that our blended approach was an improvement on our typical process, and 82% (41/50)
said that it was. We had a bit of negative feedback: many agreed that they didn’t like starting on a Thursday, because the
weekend interrupted the learning process. Starting on a Monday and ending on a Thursday may be a better approach in the
future. All of the surveys indicated that learners appreciated the shorter content spread over several days. We attribute the
very engaging classroom session, where discussion flowed naturally and lots of questions were asked, to that preparation
and the engaging short-form content.

In other words, the more specific you are, the better positioned you are to identify why
training was successful or unsuccessful.
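A quick way to produce that kind of specificity is to tally responses as data rather than from memory. Here is a minimal sketch, in Python, of rolling survey answers up into the kind of “82% (41/50)” figure reported above; the question and responses are hypothetical examples, not a prescribed tool.

from collections import Counter

# Hypothetical yes/no answers to "Was the blended approach an improvement?"
responses = ["yes", "yes", "no", "yes", "yes", "no", "yes", "yes", "yes", "no"]

counts = Counter(answer.strip().lower() for answer in responses)
positive = counts["yes"]
total = len(responses)

# e.g. "7/10 learners (70%) said the blended approach was an improvement"
print(f"{positive}/{total} learners ({positive / total:.0%}) "
      "said the blended approach was an improvement")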

Level 1 is important because the most effective training needs to be delivered the right way, and when your training is unsuccessful, the
feedback you receive on level 1 of the Kirkpatrick Model will help you understand why.



Level 2: Measuring Learning
Learners need to be taught according to their learning preferences, but training needs to be
effective for ROI to exist.

If you showed your learners Disney’s The Lion King, they may tell you they enjoyed their time.
But if your goal is to increase productivity on a factory line, the Disney classic will prove useless
for achieving results. Measuring learning follows immediately after the event, and level 2 will help
you evaluate how much your learners actually absorbed from your training.

If you’re investing time into creating your own training, or money into third-party content, it’s
vital to the success of your program that what you offer your learners actually works.

We’re left with a question: how do we measure the effectiveness of training?



Fortunately, there are plenty of tools available to measure the effectiveness of your training. The simplest and most common is to offer a quiz or test
following your training.

Testing not only measures how effective training is, it also reinforces training, because learners are asked to recall what they’ve learned. Research shows
that testing after a training event leads to higher retention over time, so test early and test often!

Testing also provides usable data about the effectiveness of your program. Asking ten to twenty questions that measure a learner’s recall gives you an
easy-to-calculate percentage of how much information your learners are retaining.

At BizLibrary, our videos come with quizzes that require 80% accuracy to complete. That’s a great benchmark to look for in your training. If your learners can
average 80% or better on tests, it’s great evidence that your training program is doing a good job of teaching your learners.
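As a rough illustration of that kind of reporting, here is a minimal sketch in Python that scores a short recall quiz and checks the group average against an 80% benchmark. The answer key, learner submissions, and threshold constant are hypothetical stand-ins, not BizLibrary’s actual grading logic.

PASSING_BENCHMARK = 0.80  # assumed benchmark, mirroring the 80% figure above

def quiz_score(answers: list[str], key: list[str]) -> float:
    """Fraction of questions answered correctly."""
    return sum(a == k for a, k in zip(answers, key)) / len(key)

# Hypothetical 10-question answer key and three learners' submissions
key = ["a", "c", "b", "d", "a", "b", "c", "d", "a", "b"]
learners = {
    "learner_1": ["a", "c", "b", "d", "a", "b", "c", "d", "a", "b"],  # 10/10
    "learner_2": ["a", "c", "b", "d", "a", "b", "c", "a", "a", "c"],  # 8/10
    "learner_3": ["b", "c", "b", "d", "a", "c", "c", "a", "a", "c"],  # 6/10
}

scores = {name: quiz_score(answers, key) for name, answers in learners.items()}
average = sum(scores.values()) / len(scores)

for name, score in scores.items():
    print(f"{name}: {score:.0%}")
print(f"Group average: {average:.0%} "
      f"({'meets' if average >= PASSING_BENCHMARK else 'falls below'} the 80% benchmark)")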

Poor performance may be disappointing, but it’s constructive feedback indicating your training needs improvement. If you believe this is the case, be sure
to view our free ebook “Off-the-Shelf Content: Your Secret to Optimizing Employee Training.”

Testing isn’t the only way to measure the effectiveness of training. Some
professionals advocate hands-on assignments, and that’s a great way to
evaluate how a learner might perform following a training event.

Regardless of how you measure the effectiveness of a given training event, make sure your data
is reliable and recorded somewhere.



Level 3: Behavior Change
We mentioned in the introduction that training should be tied to a business goal. In this phase, we measure behavior changes,
and in the next phase, we determine whether that behavior change led to the business outcome that we hoped it would.

Measuring behavior change requires you to track behaviors and collect data before training, then compare after.
Let’s use the example of a sales team. Leadership wants to see your company’s revenue grow, and after talking to sales reps and
sales managers, you and the managers believe that the biggest obstacle to your sales growth is that sales reps aren’t asking
for the sale.

You develop and execute a strategy, and learners both respond positively and test well.

At this point, you can identify whether a behavior change has occurred. If you can see that sales reps are actively asking for the
sale, you’ve created meaningful behavior change.

Just because you achieved the behavior change you set out to accomplish doesn’t mean that your training led to your desired
business outcome.

If you’re closing more deals, but revenue growth isn’t occurring, it may indicate that you misidentified the cause of your
stagnant revenue. Perhaps low client retention is the cause of your slowed revenue growth, and closing more deals hasn’t
closed the gap.

With data, it’s easy to be misled, or to focus on the wrong data set, so consider several attributions when analyzing data. If you
haven’t scrutinized your final data, someone else will, and you will be blindsided.



Attributing Your Data Sets
If your company installs pools for customers, and you begin your training program in March and
train through June, you’re almost guaranteed to see a sales increase following your training; but that
doesn’t mean your training led to that sales increase. It’s harder to sell a pool in the fall than it is in the
summer, so timing must be a part of your attribution. If you sell a product that has a “busy season,”
consider comparing your data year over year rather than month over month.
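Here is a minimal sketch in Python of what that year-over-year comparison could look like for the pool example; the sales figures are invented purely for illustration.

# Hypothetical pools sold per month: (same month last year, this year with training)
pools_sold = {
    "March": (14, 16),
    "April": (22, 26),
    "May": (31, 37),
    "June": (38, 44),
}

for month, (last_year, this_year) in pools_sold.items():
    yoy_change = (this_year - last_year) / last_year
    print(f"{month}: {last_year} -> {this_year} ({yoy_change:+.0%} year over year)")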

You know your company best, so keep several factors in mind when you conduct your level 3 evaluation.

At this point, you may be noticing that even before we calculate the final-outcome statistics behind
training, following the Kirkpatrick Model will almost instantly improve the way you deliver training.
By breaking the process into stages, you get a clear picture of which stages are working and which
need improvement.



Level 4: Measuring the Results
It’s at this stage that you’ll finally discover measurable numbers showing the impact your training has had on your
company’s business outcomes.

You started by identifying this outcome, so it shouldn’t be difficult for you to determine what outcome you’re actually measuring.

Let’s use employee turnover as an example. After talking to leadership, management, and employees, and after
analyzing exit interviews, you believe that your high turnover can be attributed to struggling middle
management.

Specifically, the data you’ve analyzed leads you to believe that your managers don’t always provide meaningful
feedback, and sometimes fail to delegate effectively, creating stress during projects.



In response, you create a management bootcamp where all managers take courses and are given 1-on-1 attention to improve
their feedback and project management with an emphasis on delegation. Tests indicate that your training is effective, and
after a few months, employee feedback has transformed.

Employees respond that their managers provide meaningful feedback, and project management is a much better experience
for all employees.

This may seem like enough data to report back to leadership, but leaders don’t care if managers are providing meaningful
feedback unless that meaningful feedback leads to a decrease in turnover, which is what we set out to reduce in the first
place.

Telling your leaders that turnover was costing your organization $80,000 every quarter, and that you were able to cut that cost
by $25,000 each quarter through training, is convincing, meaningful, and measurable with the Kirkpatrick Model!



Putting Kirkpatrick to Practice
Now that you have a clear understanding of the four levels of training evaluation, it’s a good time to put your
new knowledge into practice. The great thing about Kirkpatrick is that it’s easy to understand and proven to
work. Here are some things to keep in mind that will improve the reliability, accuracy, and ease of your
statistical calculations:

Important Things to Keep in Mind

• There may be multiple attributions that factor into an outcome.
• Behavior change is not the same thing as a desired outcome.
• Use data to make decisions. This is most easily done by conducting surveys and interviewing employees and managers to identify training needs.
• Before designing how you will improve an outcome, find a baseline for where that outcome currently stands. This will help you know whether or not your efforts led to an improvement.
• Testing is an important part of learning evaluation.

When there is a discrepancy between your desired outcomes and your achieved outcomes, work backwards to discover what that might mean.



Calculating ROI
If your training is tied to a business challenge, there should be a number involved at some
point. If your goal is to increase productivity at a factory, you will have a benchmark KPI to
compare against.

Ultimately, the best stat to present to leadership is ROI. We know you know what it is, but
just so we’re all on the same page, we’ll write it as a formula here...

ROI = (Return (Benefit) - Investment (Cost)) / Investment (Cost) × 100%


Examples of Calculating an ROI
In this example, you work in a water bottle factory and your employees produce 500 water bottles every hour. If, after 10 hours of
training on lean manufacturing, your employees are able to increase productivity by 10% and now make 550 water bottles every
hour, you have a return.

Let’s first calculate return.

In this scenario, 50 more water bottles every hour increases hourly profits in your factory by $8.00 per hour. If you work 8-hour days, your
factory now generates an extra $64 per day. Standardized to a year of 52 five-day weeks, your company has experienced a
return of $16,640.

There was also a cost to the training, which is primarily the cost of building it plus the cost of the production time lost.

Let’s say you spent $2,000 in time and resources building training, and over ten hours, you lost 5,000 water bottles.
If each water bottle represents $0.16 in profit, the ten hours cost your factory about $800 in lost profit.

The investment in training cost your company a total of $2,800.

Now we have both return, $16,640, and investment, $2,800. We can now plug those numbers into the ROI equation!
In this scenario, you’ve generated roughly a 494% return! You can now confidently report that every dollar invested in this
training returned almost five dollars in additional profit that year.
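To keep this calculation repeatable, here is a minimal sketch in Python that reproduces the water bottle numbers above; the helper function and figures simply illustrate the formula rather than an official calculator.

def training_roi(benefit: float, cost: float) -> float:
    """ROI as a fraction: (return - investment) / investment."""
    return (benefit - cost) / cost

# Return: 50 extra bottles/hour x $0.16 profit/bottle x 8 hours/day x 5 days x 52 weeks
annual_return = 50 * 0.16 * 8 * 5 * 52      # about $16,640
# Investment: $2,000 to build the training + 5,000 bottles of lost production at $0.16 each
investment = 2_000 + 5_000 * 0.16           # about $2,800

roi = training_roi(annual_return, investment)
print(f"Return: ${annual_return:,.0f}  Investment: ${investment:,.0f}  ROI: {roi:.0%}")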



Outcome Flowchart
Here is an easy flowchart to help you work backwards and diagnose where a training effort broke down:

START: Did you achieve your desired outcome?
• YES: Great work! Time to calculate your ROI.
• NO: Did your employees’ behavior change?
   • YES: Behavior changed but the desired outcome did not, so your hypothesis failed. That’s okay, make a new one!
   • NO: Did effectiveness measurements like testing indicate that the training was effective?
      • YES: Was the content you trained on relevant to the behavior change you want to see?
         • YES: Was the feedback you gathered positive or mostly positive?
            • YES: Make sure that your training is reinforced through boosts, testing, and repetition.
            • NO: Work to seek feedback and determine learner preferences, and build a plan around finding training that works for your workforce!
         • NO: Re-evaluate your training, find out what worked and what didn’t, and try again!
      • NO: Re-evaluate your training, find out what worked and what didn’t, and try again!



Through accurate data collection, evaluation, and objective analysis, you can prove the value of your training program, earn new champions to help fight for
training, and make a meaningful, statistically provable difference in your organization!

The Kirkpatrick Model is easy to use, provides clear and reliable results, and can transform the conversation around your training program!

Recommended Resources
Using data to transform your program isn’t easy, and a lot of legwork needs to be done before your program can mature to this stage.
If you’re not quite there, but excited to get there, check out these resources to help you accelerate the growth of your training program!


BizLibrary is a leading provider of online learning for growing organizations. Our award-winning microlearning video library engages employees of all levels,
and our learning technology platform is a progressive catalyst for achievement. Partnered with our expert Client Success and Technical Support teams,
clients are empowered to solve business challenges and impact change within their organizations. To learn more, visit www.bizlibrary.com.

