Wednesday, November 30, 2011

Isolating the Results of eLearning Impact

By Shelley A. Gable

A recent project renewed my interest in Level 3 and Level 4 evaluation methodologies. That led me to purchase the book, Isolation of Results, by Jack Phillips and Bruce Aaron. Since factors beyond a training effort can influence employee performance – such as marketing campaigns, hiring strategy, and other business initiatives – this book describes ways to calculate how much credit a training effort can claim for improved performance.

To make sure we’re on the same page, Level 3 evaluation refers to measuring transfer of training to the job in terms of observable behaviors. Level 4 is about organizational impact, including return on investment. For more detail, skim a quick review of Kirkpatrick’s four levels of evaluation.

I initially read the book rather casually, at the pace I might read a novel, simply for the sake of getting the gist of the authors’ ideas. With that approach, the 121-page book is a relatively quick read.

Here’s a summary...

Like many books on training evaluation, the authors begin by making a case for the importance of evaluation. The idea is that if we cannot show our clients how training impacts the organization’s bottom line, we risk losing influence, credibility, and possibly funding.

The bulk of the book describes variations of three main approaches to isolating the impact of training.

1. Control groups. The book explains that using a control group approach tends to be the most accurate way to isolate training results. Put simply, using a control group involves comparing the performance of two groups: one that receives training and another that does not. That’s the idea...though honestly, that explanation oversimplifies it.

Thankfully, the book acknowledges the challenges many organizations face with using a control group approach, such as the difficulty in forming two equal yet randomly selected groups and the eagerness of clients to apply a training solution broadly in the organization. With that in mind, the authors not only describe the ideal approach to using a control group, including what to keep in mind when selecting individuals for those groups, but they also describe alternative control group approaches. Even if you’re already familiar with the basic concept of a control group, you might pick up some new ideas from this book.

2. Trend lines. This approach could work rather well or be incredibly unreliable, depending on the organization you are working with. The first step is to gather historical performance data on the group receiving training and plot performance over time on a graph. For instance, you might plot monthly sales figures from the past two years. Based on that, calculate a trend line to predict what performance would likely be in the future. The book explains the mathematical model for this, and many data-oriented applications (such as Microsoft Excel) can calculate the trend line for you.

Next, identify other factors that might influence performance and determine their projected impact. For instance, if a marketing campaign is scheduled, find out how much of a lift it is expected to produce. Once the anticipated impacts of those other factors are accounted for, training can take credit for any remaining improvement in performance.

Consider a quick example: suppose that a sales team is scheduled to complete an eLearning course on sales skills in June. Your initial performance trend line predicted that sales in July would be $20M. The company is running a marketing campaign that is expected to increase sales by $2M in July. If actual sales for July were $23M, it stands to reason that training can take credit for $1M of the increase.

Admittedly, this is another oversimplified explanation. But if this approach interests you, the authors describe it well in the book.

3. Expert estimates. This approach involves simply asking people, such as training participants and/or their managers, to estimate the extent to which training improved their performance. It can also involve having them identify other factors that influenced their performance and estimate how much influence those factors had as well.

While the authors admit that this is a controversial approach, they also offer arguments in favor of its credibility. They go on to describe ways to obtain data, such as through surveys and focus groups. They emphasize the importance of making training impact estimates conservative, and they explain how using confidence ratings can help you make estimates appropriately conservative.
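The confidence-rating adjustment works by discounting each estimate in proportion to how sure the respondent is of it, which pushes the claimed impact in a conservative direction. Here is a small sketch of that calculation; the response figures are invented for illustration.

```python
# Each respondent reports: the dollar value of their performance gain,
# the share of that gain they attribute to training, and how confident
# they are in that attribution (all figures fabricated).
responses = [
    # (gain in $, share attributed to training, confidence in estimate)
    (10_000, 0.60, 0.80),
    (8_000,  0.50, 0.70),
    (12_000, 0.40, 0.90),
]

# Multiplying by confidence shrinks each claim, so the total is a
# deliberately conservative estimate of training's contribution.
adjusted_total = sum(gain * share * confidence
                     for gain, share, confidence in responses)
print(f"conservative training impact: ${adjusted_total:,.0f}")
```

An 80% confidence rating thus cuts a $6,000 claim to $4,800; the less sure respondents are, the less of their estimate survives.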

A couple thoughts for application...

In the spirit of scenario-based instruction, the authors provide several case studies that illustrate the various approaches they describe. At times, the explanations of the approaches can seem complex, but the scenarios illustrate how to make those approaches feasible.

It’s also worth noting that the book assumes you are reasonably well-grounded in Level 3 and Level 4 evaluation. You certainly don’t have to be an evaluation expert to understand the logic presented in the book…but if you have had little exposure to those levels, applying the authors’ ideas will likely be more challenging.

What have you read about evaluation?

Have you read this book? If so, what did you think of it? Have you applied the ideas from it? Or, are there other books about evaluation methodology you would recommend?


  1. Hi Shelley. I haven't read that book, but it seems to share many of the themes that I've only just blogged about myself in "The unscience of evaluation."

    I'm putting "Isolation of Results" on my Christmas list!

  2. Great post, Ryan! And you're right - the book and your post do share many themes...that central theme being causality.
