Thursday, June 24, 2010

Anatomy of an eLearning Lesson: Nine Events of Instruction

By Shelley A. Gable

You’re tasked with outlining an eLearning lesson. You’ve analyzed your content and audience, and you have a clear understanding of what learners need to be able to do by the end of the lesson.

But how do you avoid designing a lesson that’s little more than a basic info dump?

How do you truly engage learners?

A handful of instructional design models offer formulas for assembling training in a way that captures learners’ attention, conveys content, and provides learners with an opportunity to practice and receive feedback on new skills. One of the more popular models is Robert Gagne’s nine events of instruction.

Here are the events:
  1. Gain attention
    Spark learners’ interest and curiosity to motivate learning

  2. Inform learners of objectives
    State training objectives or goals to communicate expectations

  3. Stimulate recall
    Include questions or an activity to engage existing knowledge to which learners can relate new content

  4. Present content
    Present the new content learners must learn, preferably with a variety of media

  5. Provide learning guidance
    Elaborate on presented content by telling stories, explaining examples and non-examples, offering analogies, etc.

  6. Elicit performance (practice)
    Prompt learners to practice using newly learned skills and knowledge

  7. Provide feedback
    Provide immediate and specific feedback to learners while they practice, to help shape their behavior to improve performance

  8. Assess performance
    Test learners on newly learned skills and knowledge to confirm that they’ve met the originally stated training objectives or goals

  9. Enhance retention and transfer to the job
    Provide support to ensure learners apply newly learned knowledge and skills on the job (e.g., post-training follow-up plans, job aids, etc.)

Although you may encounter situations when it’s not practical to include all of these steps in training, and sometimes you might apply these steps in a different order, this formula provides the basic structure you need to begin designing training that goes beyond basic communication.

Let’s look at an example of how this formula can be applied to a short eLearning lesson. This lesson is part of a larger eLearning course designed to teach experienced support staff in a small lending firm how to conduct quality control checks on mortgage applications. The purpose of this particular lesson is to teach learners how to identify errors.

-1- Gain attention
Prompt learners to guess the percentage of mortgage applications that contain errors (this could be set up as a multiple-choice or free-response question). After learners attempt to guess, reveal the alarming statistic. Then briefly explain to learners that they can dramatically decrease that number, and outline some of the positive impacts of catching errors.

-2- Inform learners of objectives
State: After completing this lesson, you will be able to identify errors on Application 1487B.

Note that this is not the standard three-part objective (behavior, criterion, condition) that we should write when outlining the course. Although opinions on this vary, many believe that it is not necessary to present the entire objective to learners and that a simple purpose statement is sufficient.

-3- Stimulate recall
Prompt learners to identify the types of application errors they’ve heard about (could be set up as a multiple-response question). Ask learners to recall the consequences of those errors (could be set up as a free-response or matching question).

-4 & 5- Present content and provide learning guidance
Guide learners through the application, and explain how each section should be completed. Provide multiple examples of correct entries and common mistakes. When appropriate, ask questions to prompt learners to anticipate these examples based on their experience.

-6 & 7- Elicit performance (practice) and provide feedback
Present practice exercises in which learners identify errors (or the lack thereof) on sample applications. Provide immediate feedback to learners about the correctness of their responses, and provide hints as needed.

Practice exercises can be peppered throughout the presentation of content and learning guidance to break up the sections of the application. A final practice exercise could be handled as a game where the learner receives points for correct responses and is challenged to earn a certain number of points.

-8- Assess performance
Include a formal assessment at the end where the learner audits a few applications with varying types of errors. Provide learners with feedback after submitting the assessment and offer remediation as needed.

-9- Enhance retention and transfer to the job
Point learners to a job aid they can use on the job, and tell them where they can go with questions. Ensure that learners begin auditing applications shortly after they complete the training. If possible, assign learners to coaches who can check their early work and provide feedback.

In order to maximize training’s success, you must complement a model like this with instructional tactics that align with adult learning principles. Using this basic framework to begin designing an eLearning lesson can help ensure that you’ve included these critical components in your training.

Click here for another Anatomy of an eLearning Lesson: Merrill's First Principles.

Tuesday, June 22, 2010

Yes, your Captivate Sim can drive your Lectora Course

Auto-advance Lectora when the Learner Finishes a Captivate Simulation


By Jay Lambert


Frequently, we embed Adobe Captivate simulations within a Lectora eLearning course. The two authoring tools actually work together really well.

But note that if you simply insert the Captivate simulation as a Flash swf file into Lectora without doing anything to limit the course navigation, the learner can easily click right on through to the next page in the title. That's a bad thing if your simulation contains vital content (and if it doesn't, why is it there?). Since we typically use this technique for software training, learners who skip past the simulation miss the instruction entirely.

Luckily you can use JavaScript to prevent this from happening.

Our most common scenario for including a Captivate simulation is as a demonstration. In that case, we just want to ensure that the learners finish watching the simulation. When they do, they're free to advance in the course.

Setting this up is a two-step process: one simple step in Captivate and another in Lectora.

In Captivate


1. With your Captivate file open, go to the Edit menu and click Preferences.



2. Under the Project category, click Start and End.



3. Now locate the Project end options on the bottom right side of the Preferences box.

4. Click the Action drop-down and select Execute JavaScript.

5. Then click the three dots to enable the JavaScript entry box and type trivNextPage().



6. Click OK and Save the file. Now after you've published and inserted the file into your Lectora course, the Captivate sim will instruct Lectora to go to the next page when the learner reaches the end of the sim.

You could use this same approach for a guided practice.

Note that if your course includes a Captivate test, you can also have Captivate set a variable when the learner passes the test, and Lectora can then read that variable. Once you start using Captivate to execute JavaScript, there are quite a few possibilities.
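As a rough sketch, here is the kind of JavaScript you might type into Captivate's Execute JavaScript box for these two scenarios. The trivNextPage() call is the Lectora navigation function described above. The variable-setting lines are an illustration built on assumptions: "CaptivateTestPassed" is a hypothetical variable you would create in Lectora, and the Var<VariableName> object with a set() method reflects how Lectora has typically exposed variables in its published JavaScript, so verify both against your own published output.

    // Scenario 1: a demonstration sim. At the project end, tell the
    // Lectora-published page to advance to the next page in the title.
    trivNextPage();

    // Scenario 2: a Captivate test. Flag the result so Lectora can read it.
    // "CaptivateTestPassed" is a hypothetical Lectora variable; Lectora has
    // typically published variables as JavaScript objects named Var<Name>
    // with set()/getValue() methods. Confirm the exact object name in your
    // published HTML before relying on this.
    if (typeof VarCaptivateTestPassed !== "undefined") {
        VarCaptivateTestPassed.set("true");
    }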

In Lectora


Once you've inserted the Captivate swf file into your Lectora title, disable the forward navigation for that page. Publish the course and launch the resulting HTML pages. When the learner takes the course, it will automatically advance at the end of the simulation, but not before.

That's all there is to it.

Note: The JavaScript command will not work in Lectora's preview mode. You must publish the course.

Tuesday, June 15, 2010

Amazon would make a good Instructional Designer

By Jay Lambert

As you likely know, Amazon's software is amazing. Whenever I sign on to the site (which is fairly often), they suggest a variety of things for me to purchase that are typically right on target. I see books on instructional design and eLearning, books and music that my kids will like, travel guides for my wife, the list goes on and on. And I also get emails from them every week advertising new arrivals--things that I absolutely must have. (That's why Visa knows me well, too; but that's another story.)

In other words, Amazon knows me and knows me pretty well. And it's not just me; they know their audience. No wonder they're so successful.

Amazon's approach fits really well with instructional design. Don't just trot out your content and go home. Get to know your audience.
  • Observe the training audience in action
  • Do some surveying and analysis
  • Tailor your learning delivery to a format that will engage them
  • Offer up what they feel they need in order to achieve your learning objective

Knowing your audience impacts so many pieces of a learning initiative--the content itself, the format, the delivery mechanism, what might grab the learners' attention and what might not.

Does your audience need to know the content inside and out? Or do they need to know where to find it if they need it? (For more on this, see Pointing to the Five Moments of Learning Need.)

Does your audience have time for training? Company culture is an important consideration. For example, one of our clients has a very mobile and busy workforce; they respond best to short, highly targeted modules, typically 10 minutes or less (learning theory suggests most audiences do). Another client gets only a set amount of training time each quarter, so they opt for a much more immersive experience to gain the most they can out of each session.

Will your audience access the eLearning course from a desktop, a laptop, a mobile device, or maybe something else? What are the capabilities of their device? Be mindful of building eLearning that your target will be able to view. Not doing this would pretty much defeat the purpose right out of the gate.

And also be aware of the difference between attention-getting and potentially objectionable content. Amazon never pitches certain things to me, and I appreciate them for it. How does this relate to instructional design (besides avoiding certain shock tactics)? The answer might sometimes surprise you. A friend of mine recently designed a somewhat cutting-edge course with avatars, scenario-based learning, videos, and more--exactly what many companies wish for. And yet his audience, Baby Boomer engineers, hated it. In the post-training evaluation, they consistently asked for a less engaging, more straightforward delivery so that they could get back to work faster. Not what he expected at all. His audience was so irritated by the avatars that they missed the point of the content.

As you design your next course, think of Amazon. What clues of behavior have you observed or been told that will help you target the experience for your learners?

Be an Amazon, not a Webvan (which by the way, seems to be a part of Amazon these days).

Wednesday, June 9, 2010

Collecting Data from an eLearning Pilot

By Shelley A. Gable

At last! After weeks – perhaps months – of analysis, design, and development, you finally have a completed and fully functional eLearning course. Finishing that last step is a proud moment. And a relief.

So now what? Roll it out to the masses!

No, wait.

Before rolling it out to the masses (assuming that’s your eventual intent), you should probably pilot the course with a small group of learners from your target audience. Run your pilot for a predetermined amount of time, collect some data to identify what worked about the course and what didn’t, and make some adjustments. Then, you might be ready for a full rollout. Or perhaps another pilot.

So now that we’ve decided to run a pilot, what’s the next step? At this point, many people are tempted to identify pilot participants and start drafting survey questions. But there’s a more systematic way to plan your evaluation.

First, identify the questions your pilot needs to answer.

If you’ve already started writing survey questions, set that draft aside for a moment. Forget surveys. Forget interviews. Just think about questions. What questions should your pilot evaluation effort answer about your eLearning course?

Your best bet is to work with your project team to identify these questions. And you’ll probably find it helpful to refer to evaluation models for guidance, such as Kirkpatrick’s four-level evaluation model.

Examples of questions might include:
  • Were the eLearning course and its activities easy to use?
  • Which topics or tasks did learners struggle with?
  • Did learners perform as expected on the job?


Of course, these are very general questions. There may be questions worth asking that are specific to your course. For instance, if you experimented with a branching type of scenario, you might ask a question about the effectiveness and/or appeal of that activity, specifically.

Next, identify who can answer your questions.

If you pulled out that survey draft, put it away again. At this point, we need to identify which stakeholders can answer the questions identified for the pilot.

Which answers must come directly from the learners? If the course has a blend of eLearning and instructor-led training, perhaps there are certain questions that trainers can help answer. Maybe there are questions that the learners’ supervisors should answer. Or maybe there are performance reports you should obtain.

Now, select data collection methods.

This is the step that many people mistakenly jump to first. Until you know what questions you’re asking and who can answer what, you’re really not in a position to make informed decisions here.

After all, the nature of the data you collect should be a primary driver of how you collect it. For instance, if your organization already has a reliable survey tool for collecting learner satisfaction for a course, it might make sense to use that survey. If you want to collect specific examples and stories from learners about their successes or lack thereof, your best bet might be an interview or focus group. If you need to measure on-the-job behavior, you might opt for observation. Naturally, many evaluation efforts employ multiple data collection methods.

Another driver of data collection methodology is resources. How much time do you have to conduct the evaluation? And what is the availability of your pilot participants? If your turnaround time is short, you might not have time to conduct several one-on-one interviews. If your audience is geographically dispersed, observation might not be practical.

When (and how often) should you collect the data?

Suppose you’re evaluating a two-week blended pilot course, and you intend to survey learners to measure their perceptions of the training. You’ll need to decide whether you’ll measure just once at the end, or whether you should collect data at earlier points as well. If you’re collecting on-the-job performance data, you’ll need to identify the appropriate times to collect data based on the tasks you’re measuring.

What else?

While this should be enough to get the gears turning, naturally, there are several other factors to consider, too. For example:
  • Who (and how many) should participate in the pilot?
  • How do you plan to analyze pilot data?
  • How and to whom should you communicate the results of the pilot?
  • What are the potential risks and mitigating steps for the pilot?


If you’ve evaluated an eLearning pilot, please share your tips and lessons learned in the comments. Or if you have questions or suggestions for future posts related to evaluation, please share those thoughts as well!

Tuesday, June 1, 2010

Incorporating the Learner’s Name into your Lectora Course

Unlocking the Power of Lectora Variables, Part 2


By Jay Lambert

In an earlier post on this blog, we walked through using a Lectora variable to control page navigation in your eLearning course. In this post, we'll look at using the learner’s name in your course content.

Including the learner’s name in your Lectora course can take many forms, such as 'Welcome to computer skills training, John.'

If your eLearning course is set to AICC or SCORM (under Title Properties/Content), Lectora automatically adds a set of tracking variables to your course, so half your work is done for you.



In this case, the name variable is AICC_Student_Name. When a learner opens your course from within a learning management system (LMS), the course will be able to get the learner’s name from this AICC_Student_Name variable.
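Lectora and the LMS handle this exchange behind the scenes, but a simplified sketch of what a SCORM 1.2 course does may help clarify where the value comes from: the published content locates the API object the LMS provides and asks for cmi.core.student_name. The sketch below is a generic illustration under those assumptions, not Lectora's actual published code.

    // Simplified, generic SCORM 1.2 illustration (not Lectora's actual code):
    // find the LMS-provided API object and read the learner's name, which is
    // the value that ends up in Lectora's AICC_Student_Name variable.
    function findAPI(win) {
        var attempts = 0;
        // The LMS exposes an object named "API" on the course window or one
        // of its parent frames.
        while (!win.API && win.parent && win.parent !== win && attempts < 10) {
            win = win.parent;
            attempts++;
        }
        return win.API || null;
    }

    var api = findAPI(window);
    if (api) {
        api.LMSInitialize("");
        // SCORM 1.2 typically returns the name as "Last, First".
        var studentName = api.LMSGetValue("cmi.core.student_name");
    }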



If you’re not using an LMS to host the course, you’ll need to add a form or Entry Box to your title where the learner can enter his or her name. You’ll then be able to manipulate the name they entered (let’s hope it’s the right one) and incorporate it into your course content as well. (As a best practice, rename the variable to something that you’ll be able to find easily later. Lectora automatically assigns sequential numbers to new variable names; for example, the variable shown below was originally Entry_0002, but I renamed it Entry_StudentName to better reflect what it is – a text entry box holding the student name.)



Let’s assume you have the AICC_Student_Name variable.

Add a text box to the page where you want to display the learner’s name. Name your text box something that you’ll be able to find easily such as ‘StudentName’.



There shouldn’t be any additional text in the text box beyond what you need to figure out where to place it on the page; it’s just a placeholder and whatever is originally in there will be overwritten.




Add an action to the page itself. Set it to trigger On Show (meaning when the page is first displayed), with the action Change Contents. The Target is the ‘StudentName’ text box, and the New Contents is the AICC_Student_Name variable.



Now when the page displays, it will automatically get the learner’s name from the LMS and put it in the text box on the page.


   
If you aren’t using the AICC_Student_Name variable, but rather one you created with an Entry Box, then use that variable name instead.

You can do similar tricks to display anything you’ve previously captured in a variable – something else about the learner, a piece of text you had them enter on a previous screen, etc.

In a future post, we’ll look at other things you can do with Lectora variables to spruce up your eLearning.