Wednesday, August 31, 2011

Best Practices in the Next Generation of eLearning, Part 2

By Dean Hawkinson

This article continues one I wrote earlier this month about a webinar led by David Mallon of Bersin & Associates. In that webinar, Mr. Mallon outlined the top 10 best practices for the next generation of eLearning. Here, I would like to expand on three additional items from that list.

What makes your organization relevant?

Before getting into the items from the top 10, Mr. Mallon challenged us to think about what makes our organizations relevant. With all of the changes in eLearning trends over the years, what are we doing to keep up with those changes in our learning organizations?

Let’s take mLearning as an example. Is your organization beginning to think of development in small “nuggets” to prepare for this delivery medium? What steps are you taking to move in that direction? There are certainly other ways to stay relevant, but mLearning is a good example. Think about how you will keep up with these changing trends.

Here are three more items from the top 10 best practices.

Force Consequences – In describing this item, Mr. Mallon was referring to activities that include scoring or pass/fail outcomes. This does not mean a standard quiz-style assessment, but rather things like games, simulations, augmented reality exercises, and action learning. All of these activities provide feedback and scoring, which in turn motivates learners to do well. Including engaging activities like these goes a long way toward motivating learners, and it simulates environments closer to the “real world.” Think about how much more effective these activities would be in preparing someone for a job.

Use “We-Learning” – How do we create eLearning that includes social elements, where interactivity is key? Historically, eLearning consisted of a course that a user would view on the computer, complete independently, and have the completion recorded in a company’s Learning Management System (LMS). Web 2.0 provides tools such as wikis, blogs, and discussion threads that allow social interactivity. Why not develop the course and include these social tools so that participants who are completing the same course can interact? It would add to the learning by allowing participants to share knowledge and experiences with each other and reflect on the content together, reinforcing the acquired knowledge. Some companies are even using employee-created, YouTube-style videos to share knowledge of processes and procedures.

Drop the Prefix – In the learning industry, we love terms such as mLearning and eLearning to describe types of learning. But if we were to ask our audiences what these terms mean, chances are they would not know. So why not drop these terms when we communicate with our clients? Doing so can go a long way toward staying relevant with our audiences while we remain relevant within our industry.

What are some other suggestions you may have to keep our organizations relevant in the learning industry?

Wednesday, August 24, 2011

Specifying a Criterion in Performance Objectives

By Shelley A. Gable

If you’re in the Design phase of a training project, you’re likely formulating performance objectives and selecting instructional methods.

If you follow Robert Mager’s model for writing objectives, they likely contain three elements (not necessarily in this order):

  • Behavior: The observable behavior a learner must perform on the job
  • Condition: The circumstances in which the learner must perform the behavior (consider workplace conditions and available resources for the task)
  • Criterion: The standard the learner must meet in performing the behavior (think quality measures, speed, quantity, etc.)

Here is an example of these elements in play:

Given an image request and the archive system [conditions], recommend three images [behavior] that meet at least 75% of the criteria in the request [criterion].

This objective mirrors expected performance on the job. At this company, when the image librarian receives an image request (e.g., for a brochure or catalog), the librarian is expected to provide three options that meet at least 75% of the criteria indicated in the request.

But the goal of this post isn’t to provide a crash course on writing performance objectives. Instead, I’d like to take a closer look at objective criteria.

What is an acceptable criterion for a performance objective?

Many of the resources that explain how to write objectives suggest that a criterion should be specific and objectively measurable. Ideal candidates include just about any measure you can associate with a number – defect levels, speed/time, quantity quotas, etc. – as in the example above.

This makes perfect sense. But most of my projects include several objectives with behaviors that aren’t directly countable.

In the grand scheme of things, those behaviors impact key performance indicators, which we can assess as part of evaluating training’s effectiveness…but I still have some behaviors not clearly associated with a statistic.

This is often (though not always) the case with soft skills.

So do you drop the criterion?

Definitely not! (that heading was a trick)

Instead, attempt to briefly describe what the correctly performed behavior “looks like” or what it should accomplish.

For example: Given a scenario, ask questions that result in identifying the attendee’s reason for canceling the registration.

Though not associated with a particular statistic, one can observe whether the learner was able to identify the cancellation reason as a result of asking the right questions.

Here’s another example: Given a spec sheet and a scenario, create benefit statements that relate a computer to customer needs.

Admittedly, this requires someone to judge whether the benefit statement connected to customer needs, which leaves a bit of room for inconsistency; however, I’d argue that it still provides a reasonable standard for assessing success.

What kind of performance criteria do you use in objectives?

In instances where hard numbers are available for assessing a behavior, it makes sense to use that as the criterion. But how do you write an objective when the criterion is more qualitative than quantitative? As always, we’d love to see your opinions and examples!

Sunday, August 21, 2011

Best Practices in the Next Generation of eLearning

By Dean Hawkinson

I recently listened in on a webinar presented by David Mallon of Bersin & Associates called 10 Best Practices in the Next Generation of eLearning. He began by challenging us to think about what makes our learning organizations relevant in today’s environment, and he listed five types of learning that organizations typically use. He polled the audience to see which ones their organizations use to deliver training, and the percentages were as follows:

  • Self-Paced (70%)
  • Virtual Classrooms (52%)
  • Video Based (43%)
  • Sims/Games (17%)
  • Digital Content Libraries (30%)

I found this breakdown very interesting, but it did not surprise me that the majority use Self-Paced and Virtual Classrooms, with the lowest percentage using Sims/Games. He referenced these statistics to preface the discussion on the top 10 best practices. I am going to expand on three of my favorites.

Think Big Picture - Shift your mindset from a single training “event” to a learning environment that is much bigger than any one event. For example, using a portal to bring employees together online to share lessons and experiences before, during, and after the course takes the learning to another level. You can read more about this approach in this article about Social Learning. This might also encompass ongoing tools, such as social games or other collaboration tools, that take the focus off the course itself and shift it toward collaboration among students and even experienced employees who can share their knowledge with new hires. It is important for our learning organizations to be involved before, during, and after the “event” we have created.

Create small nuggets of content – Ever clicked through a long eLearning course, page by page, wondering if you would ever reach the end? How did that impact your learning? What if we use small nuggets of content, such as 5-7 minute videos, to help with learning? Some companies use video sites in true YouTube fashion to do this. In a lot of cases, these are videos shot by employees themselves to help each other out. Consider a retail store environment spread across the country with a library of these videos for employees to use for on-boarding and ongoing support. Need to learn how to repair something, for example? Watch a video created by your peers at other locations. This also lends itself to mLearning and using mobile devices.

Be There - Take the learning to the learner. Building on the mLearning concept, it is important that the learning go to the learners via whatever tools they have at their disposal (e.g., smart phone, tablet, laptop, etc.). In the diverse workforce of today, this is going to become more and more critical to the success of our learners and our organizations.

What is your organization doing to keep up with these best practices and move into the next generation of eLearning?

Wednesday, August 17, 2011

Using Custom Progress Bars in Lectora

By Joseph Suarez

“You are here.”

Don’t you love when a map tells you that? Knowing exactly where you are can help you decide where to go and what to do next. Lectora’s progress bars are like a “You are here” marker for eLearning courses, providing visual feedback to users about their progress. The three types of Progress Bars in Lectora are Timer, Table of Contents, and Custom.

Timer
This progress bar type can serve as a visual cue for time remaining or elapsed. The great thing about Timer progress bars is that events can be triggered once time has fully elapsed, using an “On Done” action.
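
Lectora wires all of this up through its interface, but if it helps to picture the mechanics, here is a rough TypeScript analog of a timer bar firing a callback when time runs out. The function and its names are purely illustrative, not Lectora’s actual code:

    // Rough analog of a Timer progress bar with an "On Done" action.
    // Illustrative only – Lectora configures this through its interface.
    function startTimerBar(durationMs: number, onDone: () => void): void {
      const start = Date.now();
      const tick = setInterval(() => {
        const elapsed = Date.now() - start;
        // (here the bar's fill would be redrawn as elapsed / durationMs)
        if (elapsed >= durationMs) {
          clearInterval(tick);
          onDone(); // this is the moment the "On Done" action fires
        }
      }, 100);
    }

    // Example: trigger something once a 30-second timer has elapsed.
    startTimerBar(30_000, () => console.log("Time's up – run the On Done action"));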

Table of Contents
When you want to visually indicate to users how far they have progressed in a course, a Table of Contents type progress bar is ideal. Even without a Table of Contents object in the course, this progress bar displays how far the user currently is relative to the total number of pages in a title, chapter, or section.

Custom
Custom progress bars offer the widest range of possibilities and greater control, but require a deeper understanding of how Lectora’s progress bars operate. Let’s look more closely at what is happening behind the scenes.

Two necessary components for any progress bar are “Range” and “Step Size.” The Range is the bar’s total number of progress increments, while the Step Size is the length of one progress increment. So, for example, a 12-inch ruler has a Range of 12 inches and a Step Size of 1 inch.

By default, every progress bar is set to 0 when a page is initially viewed, and a custom bar remains there unless acted upon by one of two actions: “Step Progress” or “Set Progress.” Step Progress moves the progress bar by one step (1 inch in our example), and Set Progress moves to any specified number within the range (anywhere from 0 to 12 inches on the ruler).
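
Lectora handles this behavior for you, but as a mental model, here is a minimal sketch in TypeScript of how a custom bar responds to those two actions. The class and method names simply mirror the action names; this is an illustration, not Lectora’s implementation:

    // Minimal model of a custom progress bar (not Lectora's actual code).
    class CustomProgressBar {
      private position = 0; // every bar starts at 0 when the page is viewed

      constructor(
        private range: number,    // total number of increments (12 for the ruler)
        private stepSize: number, // length of one increment (1 inch)
      ) {}

      // Mirrors the "Step Progress" action: advance by one step.
      stepProgress(): void {
        this.position = Math.min(this.position + this.stepSize, this.range);
      }

      // Mirrors the "Set Progress" action: jump anywhere within the range.
      setProgress(value: number): void {
        this.position = Math.max(0, Math.min(value, this.range));
      }
    }

    const ruler = new CustomProgressBar(12, 1);
    ruler.stepProgress(); // bar now shows 1 inch
    ruler.setProgress(7); // bar jumps to 7 inches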

For some extra customization, use a variable with Set Progress instead of a static number. By entering a variable in the “Position” field of a Set Progress action, you can adjust the progress bar dynamically. Just type the name of the variable inside the parentheses of this statement: VAR(). That way, any change to the chosen variable will automatically be reflected on the progress bar. This is useful when you can’t predict what the progress number will be after the course is published, such as an average, a Fahrenheit-to-Celsius conversion, or another calculation.

Back in the ruler example, a variable named “Length” could be used to change the progress bar’s position based on the measurement of objects, or even a user’s guess at the length of an object.
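
Concretely, assuming the variable is named Length as above, the Position field of the Set Progress action would simply contain:

    VAR(Length)

From then on, any action that changes Length (a calculation, or the user’s guess at an object’s length) moves the bar to match, with no further actions required.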

There's plenty of room for creativity with custom progress bars. For more ideas and examples, check out the How-to course available at Trivantis's Lectora University.

Wednesday, August 3, 2011

Being a Good Coach through eLearning Feedback

By Shelley A. Gable

I have a friend who plans to volunteer as an assistant coach for his son’s soccer team in the fall. He told me about some of the advice he found on the web about coaching, and I realized that much of what he learned can be applied to coaching in eLearning as well.

Most eLearning lessons contain knowledge checks of some sort, such as scenarios followed by multiple-choice questions, hotspot questions that prompt learners to recognize something in an image, and simulations in which learners work through a procedure.

What happens when a learner answers one of these knowledge check questions incorrectly?

In many eLearning lessons, a box appears that says something like, “Incorrect. Please try again.”

That’s a missed opportunity.

Instead, we should write feedback for incorrect responses in a way that coaches the learner. We should provide hints that help the learner figure out the correct answer and/or understand why the selected answer is incorrect. That way, the mistake results in reflective learning, rather than just another guess to get through the activity.

If we take that coaching approach to incorrectly answered questions, then we can also apply some of the principles my friend learned about coaching a soccer team.

Below are a few of the coaching nuggets he shared with me and how I apply them to eLearning...

Teach life skills with sports skills. This sounds a lot like connecting a guideline for a specific situation to broader principles that apply in many situations.

In an eLearning lesson, the corrective feedback for a particular scenario could help learners to recognize cues within the scenario that lead to a correct response and explain how those cues align with related organizational values.

Encourage players even when they aren’t playing well. When learners answer a knowledge check question incorrectly, you could tell them that they’ll have an opportunity to apply what they’ve just learned to an additional scenario.

This can create a sense of accountability to figure out how to complete the task correctly, since they know they’ll have to do it again shortly. Additionally, it lets learners know that even though their first attempt was unsuccessful, they’ll have an opportunity to be successful before the training ends (an important part of helping learners feel satisfied with the training experience, as outlined in the ARCS model of learner motivation).

This approach could also feed into a larger remediation strategy for struggling learners. For instance, learners who answer all questions correctly might only need to complete a couple of scenarios. Learners who answer several questions incorrectly could move into another segment of training with additional practice scenarios. In doing so, you might transition to the remediation portion with a slide that reminds learners of key content, urges them to apply lessons learned from earlier scenarios to the ones that follow, and offers an encouraging statement about their probable success.

Manage the players’ parents. Many of us have seen (or at least heard about) the parents who get a little too involved in their kids’ games. While the connection to eLearning for this example is a bit loose, we can liken it to manager engagement. Just as the right kind of involvement from parents can affect kids’ sports performance, the right support from managers can improve learner performance on the job.

While we should take steps to engage managers as part of the eLearning project initiative, we can also nudge learners to proactively approach their managers about aspects of the training. For instance, if a learner struggles to answer certain types of questions or complete certain tasks, corrective feedback might encourage discussing the task with a manager or requesting an opportunity to observe a peer performing the task.

How do you coach learners in eLearning feedback?

If you have other principles you follow when writing feedback for incorrectly answered knowledge check questions, please share!