Wednesday, November 30, 2011

Isolating the Results of eLearning Impact

By Shelley A. Gable

A recent project renewed my interest in Level 3 and Level 4 evaluation methodologies. That led me to purchase the book, Isolation of Results, by Jack Phillips and Bruce Aaron. Since factors beyond a training effort can influence employee performance – such as marketing campaigns, hiring strategy, and other business initiatives – this book describes ways to calculate how much credit a training effort can claim for improved performance.

To make sure we’re on the same page, Level 3 evaluation refers to measuring transfer of training to the job in terms of observable behaviors. Level 4 is about organizational impact, including return on investment. For more detail, skim a quick review of Kirkpatrick’s four levels of evaluation.

I initially read the book rather casually, at the pace I might read a novel, simply for the sake of getting the gist of the authors’ ideas. With that approach, the 121-page book is a relatively quick read.

Here’s a summary...

Like many books on training evaluation, the authors begin by making a case for the importance of evaluation. The idea is that if we cannot show our clients how training impacts the organization’s bottom line, we risk losing influence, credibility, and possibly funding.

The bulk of the book describes variations of three main approaches to isolating the impact of training.

--1-- Control groups. The book explains that using a control group approach tends to be the most accurate way to isolate training results. Put simply, using a control group involves comparing the performance of two groups: one that receives training and another that does not. That’s the idea...though honestly, that explanation oversimplifies it.

Thankfully, the book acknowledges the challenges many organizations face with using a control group approach, such as the difficulty in forming two equal yet randomly selected groups and the eagerness of clients to apply a training solution broadly in the organization. With that in mind, the authors not only describe the ideal approach to using a control group, including what to keep in mind when selecting individuals for those groups, but they also describe alternative control group approaches. Even if you’re already familiar with the basic concept of a control group, you might pick up some new ideas from this book.
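Stripped of the group-selection challenges, the core comparison is simple arithmetic. Here's a minimal Python sketch; the groups and sales figures are hypothetical, invented purely for illustration:

```python
from statistics import mean

# Hypothetical post-training monthly sales (in $K) per rep for each group
trained_group = [112, 108, 120, 115, 118]  # completed the eLearning course
control_group = [101, 104, 99, 106, 102]   # did not receive training

# With comparable, randomly assigned groups, the difference in average
# performance is the improvement attributable to training
training_impact = mean(trained_group) - mean(control_group)
print(f"Impact attributable to training: ${training_impact:.1f}K per rep")
```

Of course, the hard part isn't the arithmetic; it's forming the comparable groups in the first place, which is exactly what the book spends its time on.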

--2-- Trend lines. This approach could work rather well or be incredibly unreliable, depending on the organization you are working with. The first step is to gather historical performance data on the group receiving training and plot performance over time on a graph. For instance, you might plot monthly sales figures from the past two years. Based on that, calculate a trend line to predict what performance would likely be in the future. The book explains the mathematical model for this, and many data-oriented applications (such as Microsoft Excel) can figure this out.

Next, identify other factors that might influence performance and find out their projected impact. For instance, if an upcoming marketing campaign is scheduled, find out what its anticipated impact is. After accounting for the anticipated impact of those other factors, training can take credit for any remaining improvement in performance.

Considering a quick example, suppose that a sales team is scheduled to complete an eLearning course on sales skills in June. Your initial performance trend line predicted that sales in July would be $20M. The company is running a marketing campaign that is expected to increase sales by $2M in July. If actual sales for July were $23M, it stands to reason that training can take credit for $1M of the increase.

Admittedly, this is another oversimplified explanation. But if this approach interests you, the authors describe it well in the book.
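To make the arithmetic concrete, here's a minimal Python sketch of the trend-line approach using the sales figures from the example above; the six months of historical sales are invented for illustration:

```python
# Hypothetical monthly sales (in $M) for the six months before training
months = [1, 2, 3, 4, 5, 6]
sales = [17.0, 17.5, 18.0, 18.5, 19.0, 19.5]

# Fit a least-squares trend line: sales = slope * month + intercept
n = len(months)
mean_x = sum(months) / n
mean_y = sum(sales) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(months, sales))
         / sum((x - mean_x) ** 2 for x in months))
intercept = mean_y - slope * mean_x

# Predict July (month 7) from the trend line
predicted = slope * 7 + intercept            # $20M from this data

# Subtract the projected impact of other known factors
marketing_impact = 2.0                       # $2M expected from the campaign
actual = 23.0                                # actual July sales
training_credit = actual - (predicted + marketing_impact)
print(f"Training can claim ${training_credit:.1f}M of the increase")
```

Excel's TREND and FORECAST functions perform the same least-squares fit, which is the "mathematical model" the book walks through.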

--3-- Expert estimates. This approach involves simply asking people, such as training participants and/or their managers, to estimate the extent to which training improved their performance. It can also involve having them identify other factors that influenced their performance and estimate how much influence those factors had.

While the authors admit that this is a controversial approach, they also offer arguments in favor of its credibility. They go on to describe ways to obtain data, such as through surveys and focus groups. They emphasize the importance of making training impact estimates conservative, and they explain how using confidence ratings can help you make estimates appropriately conservative.
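The confidence adjustment works by discounting each respondent's estimate by how sure they are of it, which keeps the final figure conservative. Here's a small sketch of that calculation; the survey responses are made up:

```python
# Hypothetical survey responses: each pair is
# (% of improvement the respondent credits to training,
#  % confidence in that estimate)
responses = [
    (40, 80),   # credits training with 40%, is 80% confident
    (60, 50),
    (30, 90),
]

# Conservative adjustment: discount each estimate by the respondent's
# confidence, then average across respondents
adjusted = [est * conf / 100 / 100 for est, conf in responses]
conservative_share = sum(adjusted) / len(adjusted)
print(f"Conservative share of improvement credited to training: "
      f"{conservative_share:.0%}")
```

A respondent who guesses wildly (high estimate, low confidence) thus pulls the final number down less than a respondent who is certain.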

A couple thoughts for application...

In the spirit of scenario-based instruction, the authors provide several case studies that illustrate the various approaches they describe. At times, the explanations of the approaches can seem complex, but the scenarios illustrate how to make those approaches feasible.

It’s also worth noting that the book assumes you are reasonably well-grounded in Level 3 and Level 4 evaluation. You certainly don’t have to be an evaluation expert to understand the logic presented in the book…but if you’ve had little exposure to those levels, it’ll likely be more challenging to apply their ideas.

What have you read about evaluation?

Have you read this book? If so, what did you think of it? Have you applied the ideas from it? Or, are there other books about evaluation methodology you would recommend?

Wednesday, November 16, 2011

Custom Branching Navigation with PowerPoint

By Joseph Suarez

PowerPoint-based rapid eLearning tools, such as Articulate Presenter and Snap by Lectora, allow branched navigation, meaning you can create a non-linear navigation path still controlled through the course player’s next and back buttons. But did you know it’s also possible to add your own custom navigation buttons to a slide? Not only is it possible, it’s really simple!

There are just three steps involved. With your course open in PowerPoint 2007 or 2010:
  1. Create a text box, insert a shape, or add an image onto a slide
  2. With the new item selected, click “Action” on the Insert ribbon menu
  3. From the Action Settings box, check “Hyperlink to” and choose a desired action 

Here is the full list of available actions (some may not apply to course development):
  • Next Slide
  • Previous Slide
  • First Slide (restarts course)
  • Last Slide
  • Last Slide Viewed
  • End Show (closes course)
  • Custom Show
  • Slide (links to a specified slide)
  • URL
  • Other PowerPoint Presentation
  • Other File

It’s also possible to add most of the items on this list to an object by clicking “Hyperlink” (the button to the left of “Action” on the ribbon). But since actions are simpler to add and offer more options, you might as well use them instead.

Be sure your button text describes the action, written from the course taker’s point of view. Here are some examples:
  • “Review Course” - returns to first slide
  • “Exit Course” - uses “End Show” action
  • “Retry Quiz” - links to a specific slide with a quiz

Taking the idea one step further, it’s possible to combine these custom navigation buttons with the rapid eLearning tool-controlled branching. A perfect example would be a quiz branching to pass and fail slides as shown in the basic example below.

First, create the necessary pass and fail slides after the quiz slide.

Next, set up the slide branching with your rapid eLearning tool. Articulate Quizmaker can branch to different slides on pass/fail directly. Snap allows the same ability through the “Slide Explorer” screen.

Then add your links or buttons with custom actions to the corresponding page. In this example, the fail page doesn’t allow the user to exit the course. Instead, a user must either retry the quiz or review the course material (return to slide 1), which eventually leads to another attempt to pass the quiz. Once the quiz is passed, the “Exit Course” link uses an End Show action to close the course.

Techniques like this can be employed throughout a course to both enable and enhance branching navigation. Just be sure not to go overboard with the idea. Too much jumping around can disorient the user taking the course.

Wednesday, November 9, 2011

Building eLearning Scenarios in Working Sessions with SMEs

By Shelley A. Gable

We know that scenarios benefit performance by immersing learners into workplace situations within training. The storytelling quality of scenarios helps make the lessons learned in training memorable. And there are many ways to incorporate stories and scenarios into eLearning.

But how do you write these scenarios in the first place?

After all, crafting a realistic scenario requires leveraging tacit knowledge that only a subject matter expert (SME) might possess – knowledge that is often not documented, even in the most comprehensive knowledge management systems.

Solution: Schedule a working session to partner with a SME on scenario writing.

By a working session, I mean a meeting where you and a SME draft the text of the scenario together.

Since I work with clients virtually, for me this means sharing a document in a web-conferencing session and typing out the details of the scenario as we discuss it.

From your analysis efforts at the start of the project, you likely have a sense of how a particular scenario should be structured and what skills it should prompt learners to exercise. You might even know which situations to base the scenarios on.

You probably know enough to build a basic structure, but you need the help of a SME to fill in blanks with realistic details.

Here’s a quick example...

I recently designed training on negotiation. I knew that the scenarios needed to follow the basic formula below.

  • Learners start with minimal background information about the other party

  • Learners ask questions to learn about the other party’s needs

  • Learners position an offer

  • Learners resolve objections by asking additional questions to clarify concerns

  • Learners either reposition the benefits of the original offer or modify the offer

  • Learners confirm and set up the agreed-upon resolution

  • Learners must be able to do the above with people of varying levels of cooperation

Based on prior conversations with the client, I knew what types of scenarios to create. What I needed help with was writing realistic dialog.

Before meeting with the client, I created a storyboard of the eLearning lesson to lay out the intended structure of the scenario. I also depicted the steps of the scenario in a flowchart, so the client could easily understand and validate the scenario’s flow.

During the meeting, I simply asked the SME questions to create dialog for the scenario. I asked questions such as:

  • In this situation, what questions would your best performers ask the other party?

  • What questions do less experienced or struggling performers ask?

  • How does a typical person respond to each of those questions?

  • How does an uncooperative person tend to respond to each question?

  • How would your best performer position the offer?

  • How would your less experienced or struggling performers position the offer?

  • What objections do you typically hear in response to an offer like that?

  • And so on...

I typed the dialog into the storyboard while the SME answered my questions. Dialog for the best performers became the correct answer for each step in the scenario. Dialog for the struggling performers became the distracters (i.e., incorrect options) in the scenario. Typical responses from the other party became part of the feedback for each step in the scenario.
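One way to picture the result is as a simple record per scenario step, with the SME's dialog slotted into options and feedback. Everything below is a hypothetical illustration, not taken from the actual project:

```python
# Hypothetical structure for one step of a branching scenario:
# best-performer dialog -> correct option, struggling-performer dialog ->
# distracter, the other party's typical reply -> feedback for each option
scenario_step = {
    "prompt": "The customer objects that the price is too high. What do you ask?",
    "options": [
        {"text": "What budget range were you hoping to stay within?",
         "correct": True,   # drawn from best-performer dialog
         "feedback": "The customer shares a target figure, opening the door "
                     "to repositioning or modifying the offer."},
        {"text": "Would you like me to lower the price?",
         "correct": False,  # drawn from struggling-performer dialog
         "feedback": "The customer says yes, and you concede margin without "
                     "ever learning their real concern."},
    ],
}

correct_options = [o for o in scenario_step["options"] if o["correct"]]
print(correct_options[0]["text"])
```

Keeping the structure this explicit during the working session also makes it easy for the SME to spot a distracter that's implausibly wrong or feedback that doesn't ring true.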

Is this how you write scenarios?

Some who read this may shrug their shoulders and think, “This seems basic – this is what I’ve always done.” If you’ve conducted a task analysis, the logic above is likely familiar.

But I never used to do this.

After conducting the analysis, acquiring access to relevant information, and gaining buy-in from the client for the training design, I would try to write the training materials as independently as possible. That included writing scenarios myself, perhaps just asking a SME a few clarifying questions when needed. And I know at least some of the other instructional designers I’ve worked with have taken this more independent approach.

Working independently on scenarios may work if the instructional designer is also a SME. However, when the instructional designer is not a SME, there’s a risk the scenarios will lack the level of detail needed to make them realistic. These types of details often emerge through conversation, but might not come up with an approver who is simply reviewing the training materials for accuracy.

So what do you do? Are you among those who try to write scenarios as independently as possible? Do you take a collaborative approach with SMEs? Or do you do something else? Please share!