Tuesday, August 6, 2013

Remember Recency?

By Shelley A. Gable

If you haven’t encountered it lately, it’s possible you’ve forgotten about the recency theory of learning.

Recency is the tendency to be more likely to remember information from the end of a sequence. Cognitive theorists believe that as new information enters the working memory, earlier information is pushed out. Since the information entering at the end doesn't get pushed out as quickly, the brain has more time to process and remember the later stuff.

Why does recency matter for eLearning?

I’ve seen many eLearning lessons end by reiterating the lesson’s objectives. This misses an opportunity to take advantage of the recency effect. Instead, we can end eLearning lessons in ways that prompt learners to recall important information or have a meaningful moment of insight.

How can we take advantage of the recency effect?

Consider these simple approaches to concluding lessons in a way that reinforces critical knowledge and/or prompts relevant reflection…

A fill-in-the-blank slide. One simple approach I’ve seen is to end an eLearning lesson with a slide that restates some of the critical information from the training, perhaps with blanks learners must fill in, prompting them to recall (and further process) that knowledge themselves. You could ask learners to fill in blanks in a bulleted list of text. Or, you could have them fill in blanks in a diagram, table, or comparative matrix.

Reflective questions to connect concepts. Another simple approach is to create a slide with a few reflective questions about the content. The questions might challenge participants to make connections between the lesson’s content and related content from earlier in training. Or, you might pose questions that ask learners how the lesson’s content supports the organization’s values (if there is a clear set of values the organization actively promotes). You could also ask learners to list specific situations in which they will apply the lesson’s content to their jobs, or how the content will help them become more successful in their jobs.

Confidence check. You might end an eLearning lesson with a slide that prompts learners to rate their level of confidence in applying newly learned knowledge to their jobs. With this approach, you might follow up with questions that prompt them to list aspects of the content that were especially easy and/or challenging. For lower confidence scores or challenging aspects of the content, you can ask learners to identify ways they can further develop those skills to improve their confidence.

Social accountability. You could take any of the approaches described above and create a sense of social accountability for learners by asking them to share their responses using some form of social media, such as internal wikis or discussion boards. Alternatively, the training might include an expectation to discuss summative learnings and reflections with a manager or trainer within a specified timeframe.

How do you take advantage of recency?


What do you typically put on the final slide of an eLearning lesson? Do you use it to take advantage of the recency effect? If so, please share examples in the comments!

Wednesday, July 31, 2013

Focus Time and Effort with the 80/20 Rule

By Jonathan Shoaf

The 80/20 rule, also known as the Pareto Principle, roughly states that 80% of the results are caused by 20% of the effort. This rule is commonly applied in business situations where, for example, 80% of your income comes from 20% of your clients. The principle is meant to be a rule of thumb to guide decision making.

As a software developer, I use this principle. In many cases, 80% of the user's desired outcomes can be accomplished with 20% of the application. I've always believed the development processes for software applications and e-learning have a lot in common. In particular, time and cost must be balanced with functionality and results.

The Pareto Principle can be used to help focus time and effort to get the outcomes most desired. Don't have time to sit in 100% of the meetings? Identify the 20% of the meetings that cover 80% of the results and spend the most time analyzing those meetings. The subject matter expert doesn't have a lot of time to give on the project? Ask them to identify the 20% that needs to be learned to cover 80% of the outcomes.

I'm not saying to ignore the other 80% that is needed to fully cover a topic. However, I am saying there are realities that may keep you from being able to spend the time you need on a topic. Identify and invest in the 20% and your learners will be prepared for 80% of the outcomes.
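To make this concrete, here's a rough sketch of how you might identify that critical 20% if you had task-frequency data from a needs analysis. The task names and counts below are invented for illustration; your own analysis would supply real ones.

```python
# Hypothetical task-analysis data: how often users perform each task
# in a software application (all names and counts are invented).
task_counts = {
    "create_record": 520,
    "search_records": 430,
    "edit_record": 310,
    "run_standard_report": 180,
    "export_data": 40,
    "configure_templates": 12,
    "bulk_import": 6,
    "manage_permissions": 2,
}

def core_tasks(counts, coverage=0.8):
    """Return the smallest set of tasks covering `coverage` of all activity."""
    total = sum(counts.values())
    selected, covered = [], 0
    # Take tasks from most to least frequent until we reach the target.
    for task, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        selected.append(task)
        covered += n
        if covered / total >= coverage:
            break
    return selected

print(core_tasks(task_counts))
```

With this invented data, just three of the eight tasks cover over 80% of everything users actually do, and those are the ones to train thoroughly.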

Here's an example of where training often fails the 80/20 rule. A new software application is implemented at your organization, and you are expected to provide training on it.

The vendor provides content, and you are expected to convert it into training. Do you know where that content comes from? Here's the process:

Functional specifications are created for a software product. These specifications cover everything the software is functionally able to do. What the software can do is not necessarily what the user needs to do. Following the Pareto Principle, the user may only need to use 20% of the software to accomplish 80% of the tasks.


The functional specifications are turned into help and documentation. Again, these cover nearly 100% of what the software can do. What the users actually need to do? That still isn't identified.


Next, the training is produced. This is where failure often occurs. Training is created based on the documentation from the vendor. The thinking is that everything needs to be covered. It's an easy trap to fall into. Considering the Pareto Principle, training poorly on 100% of the application is not as effective as training thoroughly on the most important 20% of the application.


Therefore, focus needs to be given to the 20% of the software application the learner will use to achieve 80% of the outcomes.

Do you apply the 80/20 rule during the instructional design process?

Tuesday, July 16, 2013

Two Simple Rules for Evaluating E-Learning Project Changes

By Jonathan Shoaf

Let's face it, most requests for e-learning are vague at best. The client wants e-learning about a particular topic, so they put some PowerPoint slides together with lots of words and bullets (and no graphics!) and say, "turn this into e-learning." Although the client will not admit it, they are thinking they'll figure it out as the project goes along. This is why it's important to have a development process:
  1. Background
  2. Project Description & Scope
  3. Storyboard
  4. Prototype
  5. E-learning
The earlier in this process you "figure it out," the less work the developer needs to do and the lower the cost to the client. The goal is to work out the big, hairy, important details early. Later in the process, you want to be tweaking details, not making major changes.

When change comes, you will need to manage it and keep it from sabotaging the project. Handle every change and the expense goes up, leaving the client unhappy. Handle too few changes and the client feels like they are losing control of the project to the developer.

I have found two rules for prioritizing change in a project. You can apply these when you see too much change coming and need to sort out which changes to implement first.

1. If the client says it is important, the change should be at the top of the list.

You're not the client. You may not know why it's important, but the client does. The client will not be adamant about something unless they have reason to be. If they are being a stickler about a change, ignore it at your own peril. The reasons for the change can include past mistakes, past feedback, company culture, or a better understanding of the learners. These are things the client knows but you don't.

If the client says it is important, then make the change. It can go a long way to building a relationship of trust between the developer and client.

2. If you think it is important, the change should be the next item on the list.

The client is relying on you to be the e-learning expert. They are not. You may know why a change is important, but the client does not. The reasons it is important to you may include your understanding of how learners interact with e-learning, your understanding of bandwidth issues, your understanding of how the change impacts the client's most important requirements, your experience with iPads versus desktop computers, and more. Trust yourself. The client will learn to trust you.

The rest of the changes are less important. Trust me. What seemed important at a review may seem less important over time if it doesn't fit these two criteria. I often purposely set aside changes that are not critical to the client or to me to see if opinions soften over time. It saves work and expense.

How do you prioritize change?

Wednesday, June 5, 2013

Adobe Captivate 7 - Now or Later?

By Jonathan Shoaf

I've always been a software junkie. I'm happy to spend some money on a software product when I know it will save me hours of effort over the course of the next year. So when new software comes out, I'm like a kid at Christmas opening up the gift to see if I got what I wanted.

These days, Adobe is the software vendor I'm using the most. I use the Adobe Master Suite and Adobe Captivate for many of my projects. So when Adobe Captivate 7 was released, I was eager to unwrap the gift. While I still need to use it for a few projects to give it a full review, I'd like to share some of my initial thoughts. This is not meant to be a comprehensive list of the new features...just enough to answer the question:

Do I upgrade now or later?

The new release is the same Adobe Captivate you already know. If you are familiar with Captivate 5 and 6, it will be an easy transition to Captivate 7. There are new features and improved functionality, but don't expect an overhaul on the user interface.

Adobe is continuing to strongly support Microsoft PowerPoint. Many of the instructional designers I work with love this feature. It allows them to use a tool they are familiar with to lay out content and simply import it into Captivate. Once in Captivate, they can provide the additional functionality they need or pass it to a developer for advanced interactivity.

I'm careful about adding pre-built interactions to my projects. That said, Adobe has added some new interactions to its library. While YouTube video streaming is not really an option for me (or my company), the new learning notes and in-course web browsing could be useful. There are also some new features for creating drag-and-drop interactions.

New with version 7 is support for Tin Can. While I'm excited about this, I imagine it will be a long while before I have an LMS that supports it. If I did, this would be a good reason to upgrade.
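For those curious what Tin Can data actually looks like, a statement follows a simple "actor - verb - object" structure. Here's a minimal sketch; the learner name, email, and course IDs are invented for illustration, and a real LMS or Learning Record Store would supply its own identifiers.

```python
import json

# Minimal Tin Can (xAPI) statement: who did what to which activity.
# All names and IDs below are invented for illustration.
statement = {
    "actor": {
        "mbox": "mailto:learner@example.com",
        "name": "Example Learner",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/courses/captivate-demo/lesson-1",
        "definition": {"name": {"en-US": "Lesson 1"}},
    },
}

# Statements are sent to a Learning Record Store as JSON.
print(json.dumps(statement, indent=2))
```

The appeal over SCORM is that statements like this can describe learning activity anywhere, not just inside a course launched from an LMS.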

The Adobe Captivate app packager is another reason I would consider upgrading...except that I mostly support Windows 7 computers using IE8 or IE9. (blah, I know!) That said, many folks will appreciate this if they need to support a variety of mobile platforms.

There is a new shared advanced actions feature that I'm looking forward to fully evaluating. I use advanced actions a lot. In fact, I keep wishing Adobe would update the user interface to advanced actions. In this release they've added the ability to reuse advanced actions more easily through templates.

There are some other new features that may be useful, such as additional question types for HTML5, support for the GIFT format for question banks, enhanced accessibility features, improved audio recording and editing, an equation editor, and a Twitter widget.

I've peeked under the wrapping paper...and, I'm glad to see something I know and love improved. So...do I upgrade now or later?

I don't have the urge to upgrade at this very moment. There are no major time savers for me in this release. However, this may not be true for you. For example, there are certainly time-saving features for those supporting mobile platforms and HTML5 users.

Are you an Adobe Captivate user? Will you upgrade to Captivate 7 now or later?

Wednesday, May 15, 2013

How to Let Learners Make Mistakes in eLearning


By Shelley A. Gable

A few years ago, I was a co-researcher on a study that investigated the factors that influence informal workplace learning. The literature on the subject frequently references learning from mistakes as a typical form of informal learning.

So how can we leverage this natural way of learning in eLearning lessons?

Nudge learners to assess their responses. I recently saw this in an eLearning lesson a colleague created. The lesson prompted learners to answer a scenario-based question. After submitting the answer, an initial round of feedback suggested a couple of factors learners should have considered when responding and asked them to assess whether their responses were on the right track. Learners then had an opportunity to modify their responses or continue. This seemed like a clever way to prompt learners to reflect on their learning and potentially recognize mistakes themselves.

Show the consequences of decisions. Suppose an eLearning lesson teaches sales skills, and a scenario-based question challenges learners to present a product’s benefits to a customer. Instead of simply telling learners whether they presented the benefits correctly or incorrectly, follow their response with how the customer replies (perhaps with a customer who expresses interest, or a reluctant no, or a stern no, for example). Then, you might ask learners to assess why the customer reacted the way he did, and/or challenge learners to use a better response to attempt to recover the situation (which is similar to what someone might think through in this type of situation in real life).

Activate incorrect paths in system simulations. I’ve encountered two main types of system simulations. One type is immersive, allowing learners to click around and explore in a simulated re-creation of a software application (or a portion of it). Another type consists of a linear path through a specific series of steps.

When creating the latter, consider easing up on the linear aspect of it. Instead, you might activate a limited number of incorrect paths that branch from the intended path. To control the cost and time required to create a branching simulation, you can opt to only allow learners to stray a few steps away from the correct path. If a learner doesn’t self-correct before reaching the end of what you opt to allow, you might display feedback that helps learners recognize what they’ve done incorrectly and/or identify the misunderstanding that may have led them astray.

With an approach like this, learners benefit from learning from their mistakes through branching, and you can still control the cost and time required to build the simulation by limiting the extent of the branching allowed.
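To show the depth-limiting idea in code, here's a rough sketch. The step names and the `Simulation` structure are invented for illustration; a real authoring tool would implement this with its own variables and actions, but the logic is the same: allow a few wrong clicks, then step in with feedback.

```python
# Sketch of a depth-limited branching simulation (names are invented).
# The learner follows a correct path of steps; wrong clicks are allowed,
# but only up to `max_stray` steps off the path before the lesson
# intervenes with corrective feedback.

CORRECT_PATH = ["open_menu", "select_customer", "enter_order", "submit"]

class Simulation:
    def __init__(self, max_stray=2):
        self.step = 0        # index of the next expected correct step
        self.stray = 0       # how many wrong clicks in a row
        self.max_stray = max_stray

    def click(self, action):
        if action == CORRECT_PATH[self.step]:
            self.step += 1
            self.stray = 0   # getting back on track resets the counter
            return "correct"
        self.stray += 1
        if self.stray >= self.max_stray:
            # Learner has wandered as far as we allow: give feedback.
            return f"feedback: expected '{CORRECT_PATH[self.step]}'"
        return "off_path"

sim = Simulation(max_stray=2)
print(sim.click("open_menu"))      # correct
print(sim.click("open_reports"))   # off_path (one stray step allowed)
print(sim.click("open_settings"))  # feedback: expected 'select_customer'
```

Raising `max_stray` buys a richer exploratory experience at the cost of building (and testing) more incorrect branches.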

Do you give learners opportunities to make mistakes?

If so, how did you identify what types of mistakes to allow? And how did you design those opportunities into the training? Please share!