By Shelley A. Gable
Think back to the last time you were in training, whether it was online or in the classroom. What pieces stand out most? While hopefully there are memorable moments from throughout the course, you can probably also recall how the course started and ended.
Our tendency to remember especially well how sequences of information or events start and end is explained by primacy and recency effects. Below is a simple explanation of each.
The primacy effect is the tendency to remember information from the beginning of a sequence (e.g., the beginning of a course) better than information that comes later in the sequence. Cognitive theorists suggest that at the start of a course, working memory is not yet processing much information, which allows the brain to process and remember that early information more easily.
The recency effect is the tendency to remember information from the end of a sequence better. Cognitive theorists believe that as new information enters working memory, earlier information is pushed out. Since information entering at the end isn't pushed out as quickly, the brain has more time to process and remember that later information.
The image below, borrowed from Wikipedia, illustrates the classic serial position curve: how much we tend to remember at various points while taking in a sequence of information.
What does this mean for eLearning?
To put it simply, we need to make the bookends of a course as meaningful and content-oriented as possible.
We know that we should start with something that grabs the learner's attention, but many eLearning courses I've seen begin with a list of learning objectives or a very general introductory paragraph (which isn't very exciting to most learners). Instead, the course should start with a clever attention-getter that previews some of the key content.
For example, a customer service lesson might start by challenging learners to answer a customer question that they'll learn more about during the lesson. Learners might then be provided with some of the content needed to answer the question, along with an indication that they'll learn more details soon. An introductory exercise like this can grab learners' attention in a relevant way, serve as an advance organizer for the content, and may make a list of learning objectives that follow seem more interesting. And, this type of opener allows you to leverage the primacy effect by creating a clear link from the first moments in the course to key content that's likely buried somewhere in the middle.
The same principle can be applied to the end of a course. Avoid ending it with a list of learning objectives, often introduced with a stem sentence along the lines of "Now that you've completed this training, you should be able to..." This isn't to say that a list like this shouldn't be included, but it may be worthwhile to avoid making this the very last thing the learner sees in the course. Instead, you might consider ending with a demonstration of how the content should be applied. Or better yet, a final activity in which learners apply the content themselves.
Of course, there are many ways to make the very start and end of an eLearning course meaningful and relevant, so that the concepts of primacy and recency can be leveraged to help learners recall content from the middle of the course. If you happen to have a specific example you've employed, please share!
Tuesday, January 26, 2010
Thursday, January 14, 2010
Is eLearning As Credible As Classroom Training?
By Shelley A. Gable
I recently came across a discussion on LinkedIn that debated whether classroom (instructor-led) training is more credible than eLearning. After comparing the effectiveness of the two methods in a variety of studies, many researchers have concluded that they can be equally effective and that the instructional strategies drive effectiveness, not the medium.
As training professionals, we have the knowledge to make informed decisions about whether to use classroom training, eLearning, or another approach to meet an instructional need. However, if a project's stakeholders perceive eLearning as less credible than classroom training, then it's to our advantage to anticipate why these perceptions may exist and prepare ourselves to address them.
In the LinkedIn discussion thread, training professionals suggested a variety of reasons that classroom training may be perceived as more credible than eLearning. Below are some of those reasons, along with suggested discussion points for addressing them with project stakeholders.
Classroom training is more familiar.
This makes sense. Nearly all of us learned in a classroom setting in school, and most of us continued to do so in college. In contrast, the majority of us have probably had a lot less exposure to eLearning. Therefore, the key to overcoming this obstacle may be to help stakeholders visualize how the proposed eLearning course would work. What resources will be available to learners? How can they get their questions answered? How will learners practice newly learned skills and receive feedback? How will performance be assessed?
Classroom training gives learners hands-on practice and feedback from the trainer.
One participant in the LinkedIn discussion thread suggested that many people think that eLearning is just reading. As with the item above, it may help to provide stakeholders with specific examples of how learners will apply newly learned skills and receive feedback on their performance in the eLearning environment. Will the training include scenario-based knowledge checks? And if so, what kind of guiding feedback will learners receive if they answer incorrectly? Will the training challenge learners to complete a particular task (perhaps in a simulation)? And again, what types of feedback and guidance will be provided? If possible, it might be especially helpful to share completed examples from other projects with reluctant stakeholders.
Classroom training allows peers to learn from one another.
eLearning can do this too. Many organizations use web 2.0 technologies to accomplish this. A previous post on this blog, Understanding Web 2.0, offers a crash course on this topic. For organizations that haven't embraced that technology yet, there are also low-tech ways to do this - click here to read a previous post on this blog that offers a few low-tech suggestions.
Classroom training is more interesting and keeps learners more attentive.
What makes classroom training interesting? An energetic presenter? Perhaps this effect can be mimicked by incorporating audio narration or an occasional video. Slides that are visually appealing and content that is written with some character can help too. The ability to interact with peers? This was addressed above. The variety of activities? There are a lot of options available in our eLearning toolkits too. If a stakeholder suggests certain classroom activities that should be included in training, brainstorm equivalent activities for the eLearning environment.
A few final thoughts...
I'm guessing that all of us have encountered eLearning courses that have been little more than text-filled page-turners. Unless the topic is intrinsically interesting, this approach tends to be boring and ineffective. I suspect that it's experience with courses like this that makes some folks reluctant to consider eLearning. But I've also attended classroom training that consisted of little more than a facilitator reading from a seemingly endless series of slides, which has a similar effect as a page-turner on the computer.
eLearning isn't optimal for every instructional need. But when an eLearning solution does make sense, be prepared to explain to reluctant stakeholders how the instructional strategies that make classroom training effective can also be applied in an eLearning environment.
Wednesday, January 6, 2010
Aligning Training to Performance Objectives...We All Do This...Right?
By Shelley A. Gable
You've analyzed the client's needs. The client's goals are clearly defined. You've assessed where the client is currently, relative to those goals. In other words, you've identified the performance gap that training is supposed to close. Oh, and of course, you've determined that training is in fact an appropriate way to close that gap.
Once the scope of your project is clearly defined, you march ahead to identify the performance objectives for the training. This isn't the same as drafting up an outline of topics to include in training. Performance objectives are more specific. These objectives are a list of observable behaviors that learners should be able to perform upon the completion of training. Unlike a basic topical outline, these objectives create specific expectations for which everyone involved in the project is accountable.
If you have a formal background in training, the concept of performance objectives is nothing new. However, many of the training folks I know moved into training-related positions without a formal background in the field. Not a bad thing by any means...these folks bring a fresh perspective and other varied talents to the table. But a crash course in performance objectives could be helpful (for all of us really, even as a review). So here we go!
How do performance objectives enhance accountability?
As explained earlier, a training outline of performance objectives allows for more specific accountability than a training outline of topics. This accountability applies to the instructional designer, the learners, the managers...and well, just about everyone involved in the project.
Consider the comparison below. This example is taken from a segment of an e-learning lesson about handling customer complaints. The audience consists of customer care representatives in a call center environment for a beauty product company.
| # | Topical Outline | Performance Objectives |
|---|-----------------|------------------------|
| 1 | Probing questions | Given a customer complaint and a job aid, correctly identify three probing questions to ask to learn more about the nature of the complaint. |
| 2 | Resolving complaints | Given a customer complaint and a job aid, recommend an appropriate resolution for the complaint. |
| 3 | Documenting complaints | Given a customer complaint and the processing system, complete all required fields to document the complaint within 60 seconds. |
While the topics on the left give you a general sense of what will be covered in training, the objectives on the right give you specific targets for developing focused presentations, meaningful practice activities, relevant assessments, and straightforward on-the-job evaluations.
How does this enhance accountability?
- An instructional designer is accountable for ensuring that those specific behaviors are practiced and assessed during training. If learners cannot perform these objectives on the job, it can be relatively easy to determine whether specific behaviors were thoroughly covered during training.
- A learner who performs a specific behavior in a specific way in training can generally be expected to repeat that behavior in that way on the job.
- A manager who is aligned with the performance objectives should know exactly what to observe in employees in order to provide the on-the-job coaching needed to reinforce training.
Now let's dissect a performance objective.
The work of Robert Mager is often referenced when discussing performance objectives. In following Mager's model, an objective should consist of three elements: conditions, behavior, and criterion.
Conditions
- Definition: The circumstances under which the behavior will be performed on the job.
- Example: What resources are available to the learner? Objective #1 suggests that the learner will have access to a job aid to help identify probing questions based on the complaint. Objective #3 suggests that the learner will have to use the processing system to document the complaint.
- So What?: You need to identify on-the-job conditions in order to simulate those conditions during training. If a learner is expected to access a job aid on the job, then the learner should be prompted to access and interpret that job aid repeatedly in training (rather than duplicating that information on the slides of the e-learning lesson). If a learner is expected to complete a procedure in a computer system, then a scenario-based simulation of that procedure is an ideal instructional activity.
Behavior
- Definition: The observable behavior that must be performed on the job.
- Example: Objective #1 involves correctly identifying probing questions. Note that this objective does not address the soft skill of how to ask the question (e.g., tone of voice, transitioning to the question, etc.). It simply addresses how to identify the questions to be asked. Choose your words with precision; the verb chosen to represent the behavior (in this case, "identify") should accurately represent the behavior that must be performed.
- So What?: Defining the behavior with precision is critical, as this will drive how assessments and on-the-job evaluations are designed. While how probing questions are asked is important, the intent of this objective is to ensure that the learner can correctly identify what questions to ask. Therefore, there should be practice activities and assessment questions to specifically target this during training.
Criterion
- Definition: The standards that must be met when performing the behavior.
- Example: Should the behavior be completed within a certain amount of time (as in Objective #3)? Should it be completed in compliance with a particular law? Or with a specified level of accuracy? In all three objectives above, a 100% accuracy level is implied. When this is the case, it may not be necessary to explicitly state "with 100% accuracy" in the objective.
- So What?: Performance criteria should guide the type of feedback a learner receives during practice activities and how assessments are scored. For instance, a simulation intended to assess Objective #3 should be timed. Feedback should not only comment on the learner's ability to perform the behavior accurately, but also on the amount of time taken to complete the behavior.
Let's take a look at how these pieces are combined in Objective #3.
Given a customer complaint and the processing system [conditions], complete all required fields to document the complaint [behavior] within 60 seconds [criterion].
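For readers who like seeing structure made explicit, Mager's three-part pattern can also be sketched as a simple template. The Python below is purely illustrative (the class and field names are my own, not anything from Mager's work); it just shows how conditions, behavior, and criterion slot into the "Given X, do Y, to standard Z" sentence:

```python
from dataclasses import dataclass

@dataclass
class PerformanceObjective:
    conditions: str  # circumstances/resources available on the job
    behavior: str    # observable action the learner must perform
    criterion: str   # standard to meet (time, accuracy, compliance)

    def sentence(self) -> str:
        # Assemble the objective in Mager's "Given X, do Y, to standard Z" pattern.
        return f"Given {self.conditions}, {self.behavior} {self.criterion}."

# Objective #3 from the table above:
obj3 = PerformanceObjective(
    conditions="a customer complaint and the processing system",
    behavior="complete all required fields to document the complaint",
    criterion="within 60 seconds",
)
print(obj3.sentence())
```

Writing each element as a separate field makes it easy to spot an objective that is missing one of the three pieces before it drives design work downstream.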
Writing precise performance objectives may be more time-consuming than drafting a topical outline; however, this exercise is well worth the time investment. Since the elements of these objectives drive the design of instructional activities and assessments, the time spent on this step should easily be made up by the time saved later when developing instructional activities and assessments.
Labels: Evaluation, Instructional Design, Objectives
Sunday, January 3, 2010
Does Your eLearning Stick?
Someone recently gave me a copy of Dan and Chip Heath’s book 'Made to Stick: Why Some Ideas Survive and Others Die.' I’ve been impressed with how much learning theory is woven through its explanation of why certain ideas stick in a person’s head while others don’t.
The book is described as containing great lessons for marketers who want to devise the perfect jingle or ad pitch. But it also delivers many useful reminders and insights to those of us in the learning field. At root, we share similar objectives with marketers anyway: we both want a person to remember the key message and subsequently change his or her behavior.
So how do we create sticky learning? The Heath brothers say to tell a good story. This message and their subsequent tips connect really well with scenario-based learning theory and other related concepts. As I read, I found myself nodding in agreement or thinking after some examples, 'Nice suggestion, we’ll try that.'
If you’re looking for good ideas on how to improve your eLearning designs and make learning stick, the Heath book might serve you well. Their monthly column in Fast Company magazine also has good tidbits.
As an exercise to improve your own designs, look back through the eLearning courses you created this past year. Which ones are most memorable? You’ll probably find that these involved a storyline and were 'Made to Stick.'