Sunday, June 1, 2014

Making a Game Out of Software Simulations

By Shelley A. Gable

I recently worked on a lengthy course that included a lot of software training. The intent of the training was not only to introduce learners to the software, but also to build fluency with several key tasks.

Of course, building fluency requires practice. So, one of the challenges in designing the training was figuring out how to provide the repetition needed to build fluency without it becoming tedious for learners.

To make it fun, we combined a handful of simulations into a web-based game.

Here are some of the elements we designed into the game…

Backstory. The game starts with a playful, fictional backstory, which provides a reason for needing to complete the selected tasks in the software quickly and accurately.

Missions (i.e., scenarios). For each task we needed to test learners on, we created a mission. The “mission” is essentially the scenario for completing the task, framed to align with the game’s backstory. After getting through the backstory, learners encounter a menu of missions, which they can tackle in any order. To conquer the game, learners must conquer each mission.

In order to conquer a mission, learners complete the corresponding task within a specified amount of time and without exceeding an allowed number of mistakes (i.e., misclicks). We established the time limits by testing the missions with experienced users – we captured their times to complete the missions, and then we padded the times a bit to identify challenging yet attainable standards for learners who are new to the software.
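The pass/fail logic described above can be sketched in a few lines. This is a hypothetical illustration, not the game's actual code; the function names and the 25% padding factor are assumptions.

```python
# Hypothetical sketch of the mission pass/fail logic described above.
# The 25% padding factor and all names are assumptions.

def time_limit(expert_times_sec, padding=0.25):
    """Derive a mission time limit from experienced users' completion times,
    padded a bit to set a challenging yet attainable standard for novices."""
    slowest = max(expert_times_sec)
    return slowest * (1 + padding)

def mission_passed(attempt_sec, misclicks, limit_sec, max_misclicks):
    """A mission is conquered only if the task is completed within the
    time limit AND without exceeding the allowed number of misclicks."""
    return attempt_sec <= limit_sec and misclicks <= max_misclicks

limit = time_limit([42.0, 38.5, 45.2])          # expert times in seconds
print(mission_passed(50.0, 2, limit, max_misclicks=3))  # → True
```

A padded maximum like this is only one way to set the bar; a percentile of expert times would work just as well if you have many testers.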

Additionally, after completing the main part of a mission, the game invites learners to complete a “bonus” version of the mission. The bonus mission tests the same task, but with some added twist that makes it more advanced.

Feedback. Learners receive feedback after each mission attempt, based on their performance in that mission. The game presents the feedback in a way that fits the theme of the backstory.

Progress bar. The game includes a progress bar, which advances as learners complete missions successfully. The bar becomes progressively fancier as learners approach the end of the game.

Here’s how it’s working out so far…

Right now, learners complete the game independently as an activity in classroom-based, instructor-led training. Early feedback has been mostly positive. Learners enjoy the game, feel motivated to complete the missions, and engage in friendly competition with their classmates by comparing completion times for each mission.

We anticipate working on another iteration of the game down the road, and we hope to find a way to incorporate a leaderboard to further foster friendly competition, especially if the game is eventually used by remote learners.

Have you enhanced a learning experience with games?

If so, how? What were the performance and instructional needs? How was a game able to help you meet those needs? And what were some of the components of your game?

Tuesday, August 6, 2013

Remember Recency?

By Shelley A. Gable

If you haven’t encountered it lately, it’s possible you’ve forgotten about the recency theory of learning.

Recency is the tendency to better remember information from the end of a sequence. Cognitive theorists believe that as new information enters working memory, earlier information is pushed out. Since information entering at the end doesn't get pushed out as quickly, the brain has more time to process and remember it.
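The displacement idea can be illustrated with a toy model: treat working memory as a fixed-capacity buffer where new items push out the oldest ones. The capacity of four is an arbitrary assumption for the sake of the example.

```python
# Toy model of the recency effect: working memory as a fixed-capacity
# buffer, where earlier items are displaced as new ones arrive.
from collections import deque

working_memory = deque(maxlen=4)  # capacity of 4 is an arbitrary assumption
for item in ["intro", "concept A", "concept B", "concept C",
             "concept D", "summary"]:
    working_memory.append(item)

# Only the most recent items survive; the earliest were pushed out.
print(list(working_memory))
# → ['concept B', 'concept C', 'concept D', 'summary']
```

The earliest items ("intro", "concept A") are gone, which is exactly why the material you place at the end of a lesson has the best chance of sticking.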

Why does recency matter for eLearning?

I’ve seen many eLearning lessons end with reiterating a lesson’s objectives. This seems to miss the opportunity to take advantage of the recency effect. Instead, we can end eLearning lessons in ways that prompt learners to recall important information or have a meaningful moment of insight.

How can we take advantage of the recency effect?

Consider these simple approaches to concluding lessons in a way that reinforces critical knowledge and/or prompts relevant reflection…

A fill-in-the-blank slide. A really simple approach I’ve seen is to end an eLearning lesson with a slide that restates some of the critical information from the training, perhaps with blanks learners must fill in to prompt them to recall (and further process) that knowledge themselves. You could ask learners to fill in blanks in a bulleted list of text. Or, you could have them fill in blanks in a diagram, table, or comparative matrix.

Reflective questions to connect concepts. Another simple approach is to create a slide with a few reflective questions about the content. The questions might challenge participants to make connections between the lesson’s content and related content from earlier in training. Or, you might pose questions that ask learners how the lesson’s content supports the organization’s values (if there is a clear set of values the organization actively promotes). You could also ask learners to list specific situations in which they will apply the lesson’s content to their jobs, or how the content will help them become more successful in their jobs.

Confidence check. You might end an eLearning lesson with a slide that prompts learners to rate their level of confidence in applying newly learned knowledge to their jobs. With this approach, you might follow up with questions that prompt them to list aspects of the content that were especially easy and/or challenging. For lower confidence scores or challenging aspects of the content, you can ask learners to identify ways they can further develop those skills to improve their confidence.

Social accountability. You could take any of the approaches described above and create a sense of social accountability for learners by asking them to share their responses using some form of social media, such as internal wikis or discussion boards. Alternatively, the training might include an expectation to discuss summative learnings and reflections with a manager or trainer within a specified timeframe.

How do you take advantage of recency?

What do you typically put on the final slide of an eLearning lesson? Do you use it to take advantage of the recency effect? If so, please share examples in the comments!

Wednesday, July 31, 2013

Focus Time and Effort with the 80/20 Rule

By Jonathan Shoaf

The 80/20 rule, also known as the Pareto Principle, roughly states that 80% of the results come from 20% of the effort. This rule is commonly applied in business situations where, for example, 80% of your income comes from 20% of your clients. The principle is meant to be a rule of thumb to guide decision making.

As a software developer, I use this principle. In many cases, 80% of the user's desired outcomes can be accomplished with 20% of the application. I've always believed the development processes for software applications and e-learning have a lot in common. In particular, time and cost must be balanced with functionality and results.

The Pareto Principle can be used to help focus time and effort on the outcomes you most desire. Don't have time to sit in 100% of the meetings? Identify the 20% of meetings that cover 80% of the results and spend your time on those. The subject matter expert doesn't have much time to give to the project? Ask them to identify the 20% of the content that needs to be learned to cover 80% of the outcomes.

I'm not saying to ignore the other 80% that is needed to fully cover a topic. However, I am saying there are realities that may keep you from being able to spend the time you need on a topic. Identify and invest in the 20% and your learners will be prepared for 80% of the outcomes.

Here's an example of where training often fails the 80/20 rule. A new software application is implemented at your organization, and you are expected to train people on it.

The vendor provides training content, and you are to convert it into training. Do you know where that content comes from? Here's the process:

Functional specifications are created for a software product. These specifications cover everything the software is functionally able to do. What the software can do is not necessarily what the user needs to do. Following the Pareto Principle, the user may only need 20% of the software to accomplish 80% of their tasks.

The functional specifications are turned into help and documentation. Again, this covers nearly 100% of what the software can do. What the users actually need to do? That's still not identified.

Next, the training is produced. This is where failure often occurs. Training is created based on the documentation from the vendor, with the thinking that everything needs to be covered. It's an easy trap to fall into. Considering the Pareto Principle, training poorly on 100% of the application is not as effective as training thoroughly on the most important 20% of it.

Therefore, focus needs to be given to the 20% of the software application the learner will use to produce 80% of the outcomes.

Do you apply the 80/20 rule during the instructional design process?

Tuesday, July 16, 2013

Two Simple Rules for Evaluating E-Learning Project Changes

By Jonathan Shoaf

Let's face it, most requests for e-learning are vague at best. The client wants an e-learning course about a particular topic, so they put some PowerPoint slides together with lots of words and bullets (and no graphics!) and say, "turn this into e-learning." Although the client will not admit it, they are thinking they'll figure it out as the project goes along. This is why it's important to have a development process:
  1. Background
  2. Project Description & Scope
  3. Storyboard
  4. Prototype
  5. E-learning
The earlier in this process you "figure it out," the less work the developer needs to do and the lower the cost to the client. The goal is to work out the big, hairy, important details early. Later in the process, you want to be tweaking details, not making major changes.

When change comes, you will need to manage it and keep it from sabotaging the project. Handle all of the changes, and the expense goes up, leaving the client unhappy. Handle too few of them, and the client feels like they are losing control of the project to the developer.

I have found two rules to follow for prioritizing change in a project. You can apply these when you see too much change coming and need to sort out which changes to implement first.

1. If the client says it is important, the change should be at the top of the list.

You're not the client. You don't know why it's important, but the client does. The client will not be adamant about something unless they have reason to be. If they are being a stickler about a change, ignore it at your own peril. The reasons for the change can include past mistakes, past feedback, company culture, or a better understanding of the learners. These are things the client knows but you don't.

If the client says it is important, then make the change. It can go a long way toward building a relationship of trust between the developer and client.

2. If you think it is important, the change should be the next item on the list.

The client is relying on you to be the e-learning expert. They are not. You may know why a change is important, but the client does not. The reasons it is important to you may include your understanding of how learners interact with e-learning, your understanding of bandwidth issues, your understanding of how the change impacts the client's most important requirements, your experience with iPads versus desktop computers, and more. Trust yourself. The client will learn to trust you.

The rest of the changes are less important. Trust me. What seemed important at a review may seem less important over time if it doesn't fit these two criteria. I often purposely ignore changes that are not critical to the client and not critical to me, to see if opinions soften over time. It saves work and expense.

How do you prioritize change?

Wednesday, June 5, 2013

Adobe Captivate 7 - Now or Later?

By Jonathan Shoaf

I've always been a software junkie. I'm happy to spend some money on a software product when I know it will save me hours of effort over the course of the next year. So when new software comes out, I'm like a kid at Christmas opening up the gift to see if I got what I wanted.

These days, Adobe is the software vendor I'm using the most. I use the Adobe Master Suite and Adobe Captivate for many of my projects. So when Adobe Captivate 7 was released, I was eager to unwrap the gift. While I still need to use it for a few projects to give it a full review, I'd like to share some of my initial thoughts. This is not meant to be a comprehensive list of the new features...just enough to answer the question:

Do I upgrade now or later?

The new release is the same Adobe Captivate you already know. If you are familiar with Captivate 5 and 6, it will be an easy transition to Captivate 7. There are new features and improved functionality, but don't expect an overhaul on the user interface.

Adobe is continuing to strongly support Microsoft PowerPoint. Many of the instructional designers I work with love this feature. It allows them to use a tool they are familiar with to lay out content and simply import it into Captivate. Once in Captivate, they can provide the additional functionality they need or pass it to a developer for advanced interactivity.

I'm careful about adding pre-built interactions to my projects. That said, Adobe has added some new interactions to its library. While YouTube video streaming is not really an option for me (and my company), the new learning notes and in-course web browsing could be useful. There are also some new features for creating drag-and-drop interactions.

New with version 7 is support for the Tin Can API. While I'm excited about this, I imagine it will be a long while before I have an LMS that supports it. If I did, this would be a good reason to upgrade.

The Adobe Captivate app packager is another reason I would consider upgrading...except that I mostly support Windows 7 computers using IE8 or IE9. (blah, I know!) That said, many folks will appreciate this if they need to support a variety of mobile platforms.

There is a new shared advanced actions feature that I'm looking forward to fully evaluating. I use advanced actions a lot. In fact, I keep wishing Adobe would update the advanced actions user interface. In this release, they've added the ability to reuse advanced actions more easily through templates.

There are some other new features that may be useful such as additional question types for HTML5, support of GIFT format for question banks, enhanced accessibility features, improved audio recording and editing, an equation editor, and a Twitter widget.

I've peeked under the wrapping paper... and I'm glad to see something I know and love improved. Do I upgrade now or later?

I don't feel the urge to upgrade this very moment. There are no major time savers for me in this release. However, that may not be true for you. For example, there are certainly time-saving features for those supporting mobile platforms and HTML5 users.

Are you an Adobe Captivate user? Will you upgrade to Captivate 7 now or later?