Wednesday, May 11, 2011

How to Evaluate a Training Program


Since the dawn of time, when early trainers were teaching their clan members how to improve their hunting and gathering skills, training organizations have struggled with how to measure the impact of their training programs.

Since then, thanks to pioneers in the training field like Donald Kirkpatrick and Jack Phillips, we now understand that training evaluation needs to be more than administering “smile sheets” (“did you like the food?”) after a program.

Ask any training professional at a training conference how to measure training, and most of them will be able to recite the industry-standard “Kirkpatrick Four-Level Model”. Kirkpatrick taught us that we should measure:

1. Reaction. These are the old “smile sheet” questions, e.g., “on a scale of 1-10, please rate the instructors, materials, food, pre-work, etc.”. These kinds of questions are still important – they are a measure of client satisfaction. Let’s face it, a hot, noisy room can kill even the best training program. Instructors love getting these too, because most instructors have huge egos and want to read about how wonderful they are, and a few of them even use them to make improvements.

2. Learning. We need to measure if the participants learned anything. Learning could be knowledge or skills.

3. Transfer. Did the participants actually behave differently back on the job?

4. Results. Did the training have an impact on business results, e.g., improved customer satisfaction, increased revenues, reduced costs, etc.?
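For the code-minded, here's a rough sketch of how the four levels might be organized as a standard question bank. This is a minimal illustration in Python; only the level names come from the model, and the question wording is made up:

```python
# A minimal sketch: the four Kirkpatrick levels as a standard question bank.
# The level names come from the model; the sample questions are invented.

KIRKPATRICK_LEVELS = {
    1: ("Reaction", [
        "On a scale of 1-10, rate the instructor.",
        "On a scale of 1-10, rate the materials.",
        "On a scale of 1-10, rate the facility and food.",
    ]),
    2: ("Learning", [
        "Estimate your % increase in the knowledge/skill this course targets.",
    ]),
    3: ("Transfer", [
        "Estimate how much your on-the-job behavior will change (%).",
    ]),
    4: ("Results", [
        "Project your % influence on the business results this course targets.",
    ]),
}

for number, (name, questions) in KIRKPATRICK_LEVELS.items():
    print(f"Level {number} ({name}): {len(questions)} standard question(s)")
```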

However… if you ask these same trainers how much of this stuff they have actually implemented in their organizations, that’s when someone shifts the discussion to whether leaders are born or made.

Why not? Why the gap between knowing and doing? I believe there are two main reasons why organizations are not implementing this model:

1. It’s hard to do. It takes a lot of effort, time, and resources to administer all this stuff. You’d pretty much need a full-time person or department. Most training teams, when faced with the choice of using resources to measure training vs. actually doing training, would rather do the training.

2. You can get away with not doing it. Most executives don’t ask for it, and if they do, it’s probably too late to do anything about it anyway. Training evaluation is one of those things that help you win training awards, but it’s not top of mind for most line managers.

I happen to believe it’s important to measure the impact of training… but not that important. That is, let’s do it right, but not overdo it by spending a lot of money, time, and resources.

Here’s a relatively simple, yet effective system that I’ve seen work and that more and more companies are using:

1. First of all, trainers should not design or administer their own course evaluations, in order to remove any conscious or unconscious bias. However, they should be doing ongoing “plus deltas” at the end of each course, perhaps even every day, especially for new courses.

2. All courses should be evaluated using a common platform, centrally administered (by one person), with some questions being standard, so comparisons can be made. Either buy a software program or create your own using a tool like Survey Monkey or Zoomerang. Some Learning Management Systems have measurement built into their platforms as well.

3. Questions should address all four levels of evaluation (Kirkpatrick model): Level 1, participant reaction; Level 2, learning; Level 3, transfer; Level 4, business results.

4. For level 1, use the same basic questions for all courses (instructors, food, conference center, materials, etc.).

5. For levels 2 and 3, ask participants to estimate their increase in the knowledge or skill the course was designed to improve (e.g., 10%, 20%, etc.). I know, I know, it’s not the same as a test, but for higher-level skills like “ability to see the big picture”, it’s a reasonable measure.

6. For level 4, identify a list of key business results (e.g., speed to market, cost reduction, client satisfaction, etc.). For each course, pick out the business results the course was designed to address and ask participants to project how much attending the course will enable them to influence each business result.

7. Administer these questions to participants immediately after the course (online).

8. Administer the same survey to participants 90 days later, but instead of asking them to project their learning, behavior change, and business-result impact, ask them to look back and estimate actual results (see the sketch after this list for one way to set this up).

9. Administer the same 90-day survey to the participants’ managers, asking them to assess the training’s impact on their employees.
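Here's the sketch promised above: a rough illustration, in Python, of the three survey waves that steps 7-9 describe — day-0 projections from participants, 90-day look-backs from participants, and 90-day assessments from managers. The course name and field names are invented:

```python
# Rough sketch of the administration schedule: the same question set goes
# out three times per course. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class SurveyWave:
    course: str
    respondent: str   # "participant" or "manager"
    day: int          # 0 = immediately after the course, 90 = follow-up
    framing: str      # "projected" (looking ahead) or "actual" (looking back)

def build_waves(course):
    """Return the three administrations that steps 7-9 call for."""
    return [
        SurveyWave(course, "participant", 0, "projected"),
        SurveyWave(course, "participant", 90, "actual"),
        SurveyWave(course, "manager", 90, "actual"),
    ]

for wave in build_waves("Strategic Thinking 101"):
    print(wave)
```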

Although this may sound complicated, it’s really fairly simple to design and administer. With a good techie geek, you can use the data to produce some pretty slick training dashboard reports. With the level 4 questions, you can begin to show training’s impact on business results, something that’s historically been very hard to do outside of areas like sales training. Of course, you would want to compare participants’ and managers’ estimates with actual business results. If the numbers don’t match up, that’s often an indication that training is not the problem, and that there may be other factors coming into play.
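As one small, hypothetical illustration of that comparison: average the day-0 projections against the 90-day estimates from participants and managers, and flag a large gap (all numbers below are invented):

```python
# Hypothetical dashboard math: compare day-0 projections against 90-day
# look-back estimates. A large gap often points to non-training factors.

def average(scores):
    return sum(scores) / len(scores)

# Made-up "% influence on cost reduction" responses for one course
projected = [20, 30, 25, 15]      # participants, day 0
actual = [5, 10, 10, 5]           # participants, day 90
manager_view = [10, 5, 10, 10]    # managers, day 90

gap = average(projected) - average(actual)
print(f"Projected: {average(projected):.1f}%  Actual: {average(actual):.1f}%  "
      f"Managers: {average(manager_view):.1f}%  Gap: {gap:.1f} points")

if gap > 10:  # threshold is arbitrary -- tune it to your own data
    print("Large gap: check for non-training factors (tools, incentives, support).")
```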

Has anyone used a system like this, or something better? What do you think, is it worth the bother?

21 comments:

Helen Antholis said...

Dan,

I enjoyed reading about your survey ideas for measuring the effectiveness of training. Typically, I develop an assessment instrument based on behavioral and knowledge outcomes and administer it pre-training (it provides a framework for the learning to come and identifies perceptions of abilities on a 1 to 10 scale). Then I administer the same instrument post-training, asking for two ratings. One is their post-view of their pre-view (i.e., what's your perception now of what you thought you knew/did) and the other is their post-view of their current perception (at the end of class). I compare the results anonymously to document perceived learning gain. Your ideas about 90-day-later assessments administered to participants and their managers are superb for taking this forward to obtain transfer and business-results data. Thanks so much for this post. Found you on Twitter today and will follow you!

Helen
Performance Advantage Inc., NJ
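Helen's approach is a classic retrospective "then-test": re-rating your pre-course self after the course corrects for not knowing what you didn't know going in. A minimal sketch of the gain calculation, with invented ratings on her 1-to-10 scale:

```python
# Sketch of Helen's pre/then/post comparison (all ratings invented).
# "pre"  = self-rating before class
# "then" = post-class re-rating of where you really were before class
# "post" = self-rating at the end of class

participants = [
    {"pre": 6, "then": 4, "post": 8},
    {"pre": 7, "then": 5, "post": 8},
    {"pre": 5, "then": 4, "post": 7},
]

n = len(participants)
naive_gain = sum(p["post"] - p["pre"] for p in participants) / n
adjusted_gain = sum(p["post"] - p["then"] for p in participants) / n

print(f"Naive gain (post - pre):     {naive_gain:.1f} points on a 10-point scale")
print(f"Adjusted gain (post - then): {adjusted_gain:.1f} points")
```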

Danny McCraine said...

My organization struggled for years to find ways to measure our effectiveness using Kirkpatrick and Phillips-style ROI methodologies, without any luck. In many cases, the business unit we were supporting had no way to tie the business metrics they tracked back to the individual contributor; therefore, we couldn't prove that training had any effect on the business results. We've switched to using Fort Hill's 6 Ds methodology now, and the difference has been night and day! It seems so simple - during the interview, ask the client what business need they want addressed by the training. Ask what metric shows the need for training. Build the training to move that metric. Measure the metric after the training event, and celebrate success!

We still care about the Kirkpatrick levels, but now we have a much easier time convincing executives that the training department adds value to the organization!
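Danny's approach boils down to a before/after check on the client's own metric. A toy illustration (the metric name and every number are invented):

```python
# Toy sketch of the "pick a metric, move it" approach Danny describes.
# Metric name and values are made up.

baseline = 72.0   # e.g., first-call resolution % before training
target = 80.0     # the level the client says would justify the training
after = 78.5      # measured some months after the training event

change = after - baseline
print(f"Metric moved {change:+.1f} points (target was {target - baseline:+.1f}).")
if after >= target:
    print("Celebrate success!")
else:
    print("Partial progress -- revisit the design or look for other blockers.")
```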

Lorne said...

Starting where you started - training the hunters and gatherers - you either do a good job or go hungry.

Somewhere, somehow you've got to get to results or explain to the village all the advantages of going hungry and why it wasn't your fault. Let's hope they're not cannibals.

Lorne
ArmstrongResults.com

Tim G said...

One thing we've found is that, in order to be able to effectively measure success, we have to be REALLY clear about the goals of the training PRIOR to the actual training.

Training is often done with fuzzy goals in mind (e.g. "more teamwork," etc), which makes measuring success really tough.

More clarity about what we're really trying to accomplish with the training (e.g. "trainees will demonstrate the 5 high-trust behaviors more frequently when interacting with the team") makes follow-up measurement a lot easier.

That said, I'm a firm believer that many trainers also need to be students of Organizational Development (OD) - most initiatives that address soft skills and behavior change (vs. technical training) require environmental alignment and reinforcement in order to make the desired goals happen - training alone is unlikely to create the entire desired change.

Whew! Hope that's not too long :)

Dusty The Ocoee Rafting Guy said...

As of today, I have used the "1 and none" system of effectiveness. Since no one in this world has the time to measure effectiveness effectively, every day you get a 1 or a 0, and you rate yourself also. Then at the end of the month, the managers go through and look them over. You can broaden this into 3 categories. I understand that there is no feedback, but the managers can make notes for one-on-ones at the end of the quarter.

Dan McCarthy said...

Danny -
I like it! It's simple, yet seems to get at ROI. Of course, that's assuming the client can give you a metric. Thanks.

Lorne -
Ah, when life was so much less complicated. (-:
Thanks.

Tim G. -
Not at all, thanks. I agree 100% on both points.

Rafting guy -
hmmm, wonder if your system would work for training programs?
Thanks.

Dan McCarthy said...

Helen -
I like it! Sounds like we've evolved in this biz, and are coming up with better ways to evaluate training. Sounds easy and credible.

HRMexplorer said...

A great process. What I would add is simply this: before any training is even undertaken, there has to be a process of understanding what it is you are going to evaluate. I developed a process to identify the key measures/metrics before any training took place and aligned them to the person's objectives.

Wally Bock said...

I've been training, in one situation or another, for almost all of my working life, and I've noticed two things. One, most of the "evaluation instruments" ("smile sheets") ask people to evaluate the wrong things. The other thing I've noticed is more substantive: the vast majority of organizations I've experienced make no effort, none, to evaluate the actual impact of training. You seem to have come to similar conclusions, but you're far more knowledgeable than I am about training issues, so you've got good advice about what you ought to do.

That's why I included this post in my weekly selection of top leadership posts from the independent business blogs.

http://blog.threestarleadership.com/2011/05/18/51811-a-midweek-look-at-the-independent-business-blogs.aspx

Dan McCarthy said...

HRMexplorer -
Thanks, great suggestion!

Wally -
Thanks, I'm honored!

Sharon Sarles said...

Yes, seldom are the times that business objectives are matched with training outcomes. Instead, most of the time the decision is pushed down to buying "training." When I am with a small-business client, this usually happens in the name of cost savings; they go for just the program rather than the pre- and post-research and follow-up that could dramatically improve their business.

Therefore, I think it would be very counterproductive to make a rule that trainers don't formulate their own courses. This is a furthering of the same public-school assumptions over college assumptions -- with analogous outcomes. Let me explain.

"Trainers" teaching "curriculum" are purchased units of training time, with virtually no business results. An expert in the field who investigates the social system of the company, devises a unique strategy to make the one tweak that will create the most positive change (as determined by business objectives), and delivers it to humans (who will listen once listened to), coming out with a changed social system -- IS NOT THE SAME THING! Typically buyers will choose low cost, never realizing the enormous difference in value.

So while getting the smile sheet stuff right (instead of setting it up for trainer success OR mindlessly for their inevitable failure) is okay, looking at the big picture for business objectives is a great deal more important.

Simon Meadon said...

Dan,

You make some really interesting points. People often perceive training evaluation as something either incredibly easy (just write a few questions on a form) or incredibly hard (it's a precise science that only a few experts really grasp). In truth it's neither of these. It's not a difficult task, but it does need some structure (as you very ably point out!).

In my view, the key element that has been missing until the last year or so is that there have been no really suitable tools for the job. Without wishing to take anything away from them, those you've mentioned are either very expensive (and, dare I say it, from experience somewhat over-elaborate!) approaches or just simple survey software where you have to invent your own evaluation questions.

In our company (logistics) we use a website called TrainingCheck.com for our evaluations and have found it to be very good. It's basically survey software, but it has ready-made Kirkpatrick evaluations at the (updated) four levels, a 'Return on Investment Calculator', etc. The evaluations need a bit of tweaking, but generally they do the job well, and the analysis and reports are good too.

I don't want to sound like an advert for them, but I think your readers should know that the proper tools for the job are now becoming available.

Simon

Dan McCarthy said...

Sharon -
Sounds like you've had a lot of experience with this, thanks for your insights.

Simon -
That's OK, as long as the plug is relevant. Thanks.

Richard Eason said...

Hi Dan, thanks for your post. I echo the thoughts of many of the others that have commented that identifying what business objective the training will be linked to is vital (before the training is designed if possible).

Hopefully we won't get too distracted by creating slick dashboard reports - we need to be able to rely on the data behind those reports if we're going to feel confident about the decisions that we make.

In my experience, it can be a good idea to target your high-value training programmes and conduct a comprehensive evaluation on these. This has the potential to add more business value than applying a wishy-washy approach across the board.

Regards, Richard

Dan McCarthy said...

Richard -
I like your approach of just targeting certain programs (high value, pilots, etc.). Thanks.

Team Building Experts said...

Well, that makes a lot of sense. Thanks for sharing and pointing out the weaknesses and highlights of effective leadership training. Keep it up!

Traininghat said...

This is a great article. Many times, I have run into challenges finding time to evaluate effectively. Eventually, I designed a streamlined process called "Stoplight Evaluations."

For a quick overview and starter template, visit the link below:

http://www.traininghat.com/blog/bid/131735/Technical-Training-Evaluation-at-the-Speed-of-Light