Wednesday, April 27, 2011

eLearning for Leadership Training – Making it Effective

By Dean Hawkinson

Are you in a position where you develop instructional materials for a leadership/management audience? Typically, this “soft skill” type of training is delivered through instructor-led training (ILT).

But what about eLearning?

Can we use eLearning to teach some of the soft skills required of leadership in a corporate environment? I would argue that we can, provided the eLearning includes certain elements that make the interaction successful.

Let’s say you are creating a course that teaches managers how to have a successful performance management coaching discussion with employees. In a typical ILT course, this might be taught using video demonstrations or role play exercises among participants with a debrief discussion facilitated by the instructor around what was done well and what could have been improved on.

Let’s take the same scenario and think about how an eLearning course could teach the same thing. Here are some suggestions:

  • Video - Much of today’s eLearning development software allows easy integration of video that can be viewed on any computer. Keep your video clips short (to hold the learner’s attention) and show both correct and incorrect interactions. Follow each clip with knowledge check questions that ask learners to evaluate what they have just viewed, and provide appropriate feedback on their responses. Also, keep in mind that video files can be quite large, which can present storage issues on your server or in your Learning Management System (LMS).

  • Interactive Simulations - Simulations can replace the traditional role-play exercises used in ILT. Think about the coaching example above. You can present the employee’s side of the conversation in text or as a voice recording, then give the learner multiple-choice options for responding as the manager. The course then provides feedback based on the learner’s response. This can be effective; however, multiple-choice answers limit learners to selecting a response rather than formulating one on their own and receiving feedback, as they would in a classroom role play. To soften this limitation, you can use a branching scenario in which each step of the conversation depends on the previous response, with feedback for both correct and incorrect choices (see the sketch after this list).

  • Scenario-based Knowledge Checks - Like the simulations, knowledge check questions should be scenario-based, requiring the learner to think through a response. Since your objectives most likely target the application level of Bloom’s taxonomy or higher, design your questions to align with those levels as well. They should have “real world” application rather than simply test recall of facts.

  • Social Media - Using social media tools for collaboration among participants taking the eLearning course would be an effective way to replace the discussion and debrief that is part of classroom training. Ideally, a subject matter expert could interact regularly with the participants here and answer any follow-up questions. Keep the learning going!

  • Partnering with an on-the-job (OJT) Mentor - For leadership/management training, there is nothing like “real world” experience. Any eLearning course (or ILT, for that matter) should be paired with a mentorship/OJT/nesting period on the actual job.
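
To make the branching idea from the simulations bullet concrete, here is a minimal sketch of how such a conversation could be represented as data. This is illustrative Python only, not output from any authoring tool; every prompt, choice, and piece of feedback below is invented for the example.

```python
# A minimal sketch of a branching coaching scenario as plain Python data.
# All node names, prompts, and feedback strings are hypothetical examples.

scenario = {
    "start": {
        "prompt": "Employee: 'I don't see why my rating dropped this year.'",
        "choices": [
            # (learner's response, feedback shown, next node or None to end)
            ("Walk through the rating criteria and ask for their view.",
             "Good choice: inviting dialogue keeps the conversation two-way.",
             "dialogue"),
            ("State that the rating is final and move on.",
             "This shuts the employee down; acknowledge the concern first.",
             None),
        ],
    },
    "dialogue": {
        "prompt": "Employee: 'I thought the Smith project went really well.'",
        "choices": [
            ("Agree, then tie your feedback to specific observed behaviors.",
             "Specific examples make coaching feedback actionable.",
             None),
            ("Change the subject to next year's goals.",
             "Unaddressed concerns tend to resurface; close this topic first.",
             None),
        ],
    },
}

def run(node_key="start"):
    """Play one path through the scenario at the console."""
    while node_key:
        node = scenario[node_key]
        print(node["prompt"])
        for i, (text, _, _) in enumerate(node["choices"], 1):
            print(f"  {i}. {text}")
        pick = int(input("Your response: ")) - 1
        _, feedback, node_key = node["choices"][pick]
        print(f"Feedback: {feedback}\n")

if __name__ == "__main__":
    run()
```

An authoring tool hides this plumbing behind its interface, but the underlying structure is the same: each learner choice carries its own feedback and points to the next step of the conversation.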


I would argue that an eLearning course by itself would not be an effective way to train the leaders and managers in your business. A good leadership training program takes a blended learning approach that combines ILT, eLearning modules, and “real life” OJT/mentorship with more experienced managers.

What experience have you had using eLearning to train managers and leaders in your organization? Feel free to share your experiences.

Wednesday, April 20, 2011

What Makes eLearning Boring?

By Shelley A. Gable

Most posts on this blog focus on what to do and how to do it – providing navigational cues, designing with social media, stimulating recall, forming sticky ideas, and so on.

In this post, we’ll look at what we do too much of, resulting in boring eLearning.

Too much text.

It’s popular to hate PowerPoint because of the way a slide full of bullets strangles the life out of a presentation. But it doesn’t have to be that way.

Have you watched a TED talk? Those presenters use text sparingly and, more often than not, convey their messages with images. Though it may not be practical to avoid text completely in eLearning, we can minimize it with diagrams, images that communicate, and the occasional video. Think visual design.

Discovery learning can help too. Instead of telling learners all they need to know, pull them into an activity, or a problem to solve, early in the lesson. Gradually provide the information they need through coaching along the way. This helps shift the focus from reading to doing. Even if the “doing” still requires reading, it’s likely to feel more purposeful.

Though it’s not our only option, we can accomplish this in PowerPoint. The slides are simply a blank canvas – we make them interesting or boring.

Too much detail.

The reminder here is to weed out nice-to-know from need-to-know information. For instance, I’ve seen many systems training modules that list the steps of a system-driven procedure and offer tips for completing it correctly.

I say “system-driven” because I’ve seen this with procedures where the system literally leads the user through the steps and controls for certain types of errors. In these cases, filling eLearning slides with detailed steps and tips is probably unnecessary. The learner just needs to know when to use the procedure and how to start it (perhaps followed by a simulation to give a feel for the flow)...which requires much less text.

Too much repetition.

We know that reviewing content and repeating important points helps solidify information in memory. But what about other forms of repetition?

How much variation do you design into your eLearning activities and knowledge checks? Do your knowledge checks always take on the structure of a short scenario followed by a multiple choice question? Do they maintain the same level of difficulty, even as the learner progresses through training?

Even rapid authoring tools generally have a variety of interaction types available. And in many cases, a smaller number of rich and highly interactive activities may be more impactful than numerous short, similarly structured knowledge check questions.

Too much formality.

Although we should write training materials concisely, they don’t have to lack personality. Stale writing becomes boring fast.

We’ve probably all had the experience of reading a textbook, only to reach the end of a page and realize that we remember nothing from the past couple of minutes. We were reading, but we were not engaged.

It’s possible to write professionally and conversationally. Think about blogs – many opt for an informal tone, yet they communicate professionally. Telling stories can help, and so can an occasional note of enthusiasm. Write to inform...and even to entertain from time to time.

What else do we do too much of?

In what other ways do we challenge learners to stay awake? Add your observations in the comments!

Wednesday, April 13, 2011

In Defense of the Four Levels

By Shelley A. Gable

Over the past year or so, I’ve noticed several comments about how Kirkpatrick’s model of four levels of evaluation is outdated.

I don’t agree.

The debate reappeared on my radar in a Twitter #lrnchat session a couple weeks ago. Evaluation was the discussion topic, and several tweets mentioned that the model originated in the 1950s, a lot has changed since then, and we ought to follow a more current model.

Before digging into its pros and cons, let’s do a quick review of the model.

  • Level 1: Learner opinions. Did they like the training?

  • Level 2: Performance during training. Are learners meeting the stated objectives (typically via quizzes, practice activities, skill assessments, etc.)?

  • Level 3: Performance on the job. Are they demonstrating the expected behaviors on the job? Are they meeting expectations in terms of quality, quantity, etc.?

  • Level 4: Organizational impact. How did training outcomes impact the organization, quantitatively and qualitatively? What was the return on investment?
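
As a quick illustration of the level 4 ROI question (using the commonly cited ROI formula and invented numbers, not anything prescribed by the model itself): if a training initiative costs $20,000 and produces $50,000 in measurable benefits, then ROI = ($50,000 − $20,000) ÷ $20,000 × 100 = 150%.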


So now let’s dig in....

Does the model include anything irrelevant?

I don’t think so.

An initial gap analysis should identify specific business needs (level 4) and what is required to fulfill those needs (level 3). So it makes perfect sense that we’d evaluate those same things later and report results when we can.

It also makes sense to assess learners’ knowledge and performance during training (level 2), for the sake of corrective coaching, encouragement, and potentially offering additional support to help learners prepare for on-the-job application.

When it comes to learner satisfaction (level 1), there’s a lot of talk about the tendency to focus too much on this and too little on the other levels. I agree that’s a mistake. That said, I still want to know how learners felt about their training, for the sake of improving the experience, working out bugs, and potentially helping to identify the causes of any results gaps.

Does the model leave anything out?

I can’t think of anything.

Though it’s common to track “butts in seats” and other attendance-related metrics not accounted for in the model, these measures seem more related to staffing and forecasting than to training results.

We talk a lot about the field’s need for greater diligence in measuring job performance and business results. I agree that we should do this consistently. And so does the model (levels 3-4).

Shortcomings?

Like any model, Kirkpatrick’s four levels have limitations.

A few disclaimers: I’m not trying to suggest that it’s perfect. And I’m not trying to suggest that it covers everything we need to think about related to evaluation (and in fairness, I doubt this was ever the intent). Nor am I suggesting that following the model makes evaluation easy.

Some criticize the model because it seems to focus exclusively on a learning event, when learning is actually an ongoing process. Even if Kirkpatrick was thinking about learning “events” when introducing the model, I think the levels can apply to learning as an event or as an ongoing process. The model suggests the types of results to measure. It’s up to us to determine the subject we’re evaluating (an event vs. something ongoing) and how to collect and analyze the data.

Some criticize the model because it neglects confounding variables, such as post-training support from learners’ managers and ongoing accountability. These are just a couple of examples, and I agree that factors like these are critical to a training initiative’s success. So perhaps we still measure the things outlined by the four levels, while also investigating how those other key variables helped or hindered the effort (Brinkerhoff’s Success Case Method can help).

Many insist that we should flip the model upside-down, with organizational impact and on-the-job performance as the first levels. I can see how numbering the levels might suggest prioritization or sequence. So with that interpretation, presenting the levels in reverse order makes a lot of sense.

The bottom line...

My view of Kirkpatrick’s model is that it suggests what to measure to help determine how successful a training initiative was, but it doesn’t spell out how to scope and execute the evaluation effort.

So with that in mind, I believe the model is still relevant and useful today, even if we need other information sources to help us with the rest.

Do you agree? Disagree? I’m sure I haven’t thought of everything here, so please take a moment to share your two cents!

Monday, April 11, 2011

Text-to-Speech Functionality in Captivate

By Dean Hawkinson

There are a lot of arguments about using audio in eLearning – some in favor, some not.

Audio narration can be very time-consuming and, in many cases, requires hiring talent for a professional sound. Many eLearning development tools, including Adobe Presenter and Adobe Captivate, allow you to easily record narration for your content.

However, what if you don’t have the budget to hire talent to make it sound professional? What if you, as the designer, don’t feel you have the voice for the recordings? What if you simply don’t have the time to devote to recording audio in the first place?

Adobe Captivate includes a text-to-speech function that allows seamless narration without having to do any recording.

About the Tool

Captivate’s text-to-speech tool is very simple to use and allows you to type the text that you want narrated on each slide. You can select a female voice (Kate) or a male voice (Paul) for each individual slide, which allows you the freedom to mix it up a bit and use both narrators in one course. It also adds the feeling of having a trainer guide you through the process.

Captivate takes what you type and converts it to a voice-over for each slide. You can then use the timeline to position all of your effects to match the narration of the slide. If you play the slide and find that you made an error, fixing it is a simple matter of just re-typing the text.

Text-to-Speech Challenges

As with any software application, there are a few challenges to using a text-to-speech tool:

  • The voices can sound a bit robotic – Although not as bad as a monotone computer voice, you can tell there is a robotic edge to the narration. In my experience, though, it is not as bad as you might think (anyone ever have a “Speak and Spell” toy as a kid?). The voices sound professional, but you can always tell that a real human is not doing the narration.
  • There are some issues with pronunciation and voice inflections – I have noticed that the narration has some interesting pronunciations of certain words. For example, the word “detail” comes across as “dtail” (doesn’t pronounce the “e”) and “status” is pronounced “stay-tus.”
  • It shouldn’t be used for long narratives – The primary use for text-to-speech should be to walk learners through a system simulation, guiding them click by click. This calls for short narrations. If you need to explain something in detail, it might be better to put the text on screen and have the narration refer learners to it. The robotic sound is probably not suited for longer narratives.

Ways to “Trick” the System

No one will ever see the narration text you type. So you can purposely misspell some words to get the system to pronounce them correctly.

For example, I really don’t like the way the system pronounces the word “status.” To get around it, I typed “staatus,” and the narration pronounced it with the short A sound. The same trick works with “detail”: I typed “deetail,” and it was pronounced the way I wanted. You may find other ways to “trick” the narration into pronouncing certain words correctly.
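
If you accumulate more than a couple of these respellings, you can keep them in a simple substitution table and apply it before pasting text into the tool. The sketch below is plain Python and purely illustrative; it is not a Captivate feature, and the word list is just the two examples above.

```python
# An illustrative sketch of the respelling trick as a reusable substitution
# table. This is not part of Captivate; it just pre-processes narration text
# before you paste it into the text-to-speech tool.

PRONUNCIATION_FIXES = {
    "status": "staatus",   # forces the short-A sound
    "detail": "deetail",   # forces the long-E sound
}

def respell(narration: str) -> str:
    """Replace known problem words (case-sensitive) with their respellings."""
    for word, respelled in PRONUNCIATION_FIXES.items():
        narration = narration.replace(word, respelled)
    return narration

print(respell("Check the status of each detail before you submit."))
# -> Check the staatus of each deetail before you submit.
```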

Is Text-to-Speech for Everyone?

As always, before selecting media and technology for eLearning, you need to consider your audience. You need to match the technology to your instructional goals, not the other way around. Ask yourself:

  • Is audio necessary?
  • What are your budget and time constraints?
  • Will your audience look past the occasional robotic pronunciation?

It might be better to use professional narration if you have the resources. However, text-to-speech is a great alternative when you have a short time frame for your project and do not have the budget to hire voice talent.

What is your experience with using this type of technology? Do you have any additional suggestions for using text-to-speech in your eLearning courses?

Tuesday, April 5, 2011

Standing at the Crossroads – Providing Navigational Clues to Help Learners Find Their Way

By Donna Bryant

Imagine that you are out for a drive in the country. You’re cruising along, and you pass several little towns. Soon, you realize that you are in an unfamiliar area. You consult your maps, but you still aren’t sure where you are. You drive up to a crossroad, where the road veers off to the left on one side and to the right on the other. There is no sign to tell you which way to go.

Sometimes, learners feel this way during an eLearning lesson. What causes learners to lose their way in lessons? What can instructional designers do to help keep learners going in the right direction? This post offers advice for two scenarios where learners may lose their way in eLearning.

Variance in navigation

Most simple lessons are fairly straightforward in how their navigation works. You have forward and back buttons for page-to-page, linear navigation, and usually a home button to return to the beginning of the lesson.

But suppose you’re designing a lesson where learners need to open and use screens from a program to answer questions during the lesson. A variance in normal lesson navigation can confuse learners if it is not explained well, and if learner attention is not drawn to the variance.

What can a designer do to draw attention to a variance and explain it clearly for learners? Here are some ideas (examples shown in the picture below):
(1) State clearly, and with minimal words, what the learner should do. Don’t make learners “wade through” lots of words to find the instructions they need. Use a different font size to draw attention.
(2) State the reason for the variance. Remember, adult learners need to know “why.”
(3) Provide word clues tied to the action needed. In the example below, the word “five” reinforces to learners that there are a total of five screens to open.

Instructions that need images with callouts

When providing instructions within a lesson, words alone are sometimes inadequate to guide learners through a task successfully. For example, you might need more than words to explain how to use a delicate piece of equipment or to write the procedural steps for a complicated process. Instructions can be especially difficult to convey when learners already have an idea in their minds of how a product and its instructions should work; if the product works differently than expected, then the instructions will also differ from what is expected.

A case in point: removing a memory card from a digital camera.

Most of us have experience removing memory cards from digital cameras, and we have a good idea of how it should be done. Usually, memory cards sit in their own slot on the side of the camera. But how would you convey instructions if your camera’s memory card sits over the camera’s battery? You would need to remove the card carefully so you don’t also dislodge the battery.

In a case like this, a picture really helps show how to remove the memory card without dislodging or scratching the battery. But pictures by themselves are not enough. Callouts used with pictures provide additional guidance, especially when text is minimal. Here’s an example:

In an earlier post, Dean Hawkinson used a similar technique of using pictures to guide learners. Instead of callouts, Dean used color emphasis to draw attention to screen areas. Check out Dean’s post Publishing in Adobe Presenter for some ideas.

What are some examples where you provided clues and guidance to learners to help them stay on the right track? I would be interested in hearing about your experiences!