If you’re in the Design phase of a training project, you’re likely formulating performance objectives and selecting instructional methods.
If you follow Robert Mager’s model for writing objectives, they likely contain three elements (not necessarily in this order):
- Behavior: The observable behavior a learner must perform on the job
- Condition: The circumstances in which the learner must perform the behavior (consider workplace conditions and available resources for the task)
- Criterion: The standard the learner must meet in performing the behavior (think quality measures, speed, quantity, etc.)
Here is an example of these elements in play:
Given an image request and the archive system [conditions], recommend three images [behavior] that meet at least 75% of the criteria in the request [criterion].
This objective mirrors expected performance on the job. At this company, when the image librarian receives an image request (e.g., for a brochure or catalog), the librarian is expected to provide three options that meet at least 75% of the criteria indicated in the request.
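For readers who like to see the arithmetic, here's a minimal sketch of how that 75% standard might be checked, assuming the request criteria and the image metadata are represented as simple tag sets — a hypothetical simplification of whatever the real archive system actually stores.

```python
# Minimal sketch of the "meets at least 75% of the criteria" standard.
# The tag-set representation below is a hypothetical stand-in for the
# real archive system's metadata.

def meets_criterion(request_criteria: set[str], image_tags: set[str],
                    threshold: float = 0.75) -> bool:
    """Return True if the image satisfies at least `threshold` of the request criteria."""
    if not request_criteria:
        return True  # nothing to satisfy
    matched = len(request_criteria & image_tags)
    return matched / len(request_criteria) >= threshold


# Example: a brochure request with four criteria; the image matches three (75%).
request = {"outdoor", "landscape", "autumn", "high-resolution"}
image = {"outdoor", "landscape", "autumn", "portrait-orientation"}
print(meets_criterion(request, image))  # True
```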
But the goal of this post isn’t to provide a crash course on writing performance objectives. Instead, I’d like to take a closer look at objective criteria.
What is an acceptable criterion for a performance objective?
Many of the resources that explain how to write objectives suggest that a criterion should be specific and objectively measurable. Ideal candidates include just about any measure you can associate with a number: defect levels, speed/time, quantity quotas, and so on, as in the example above.
This makes perfect sense. But most of my projects include several objectives with behaviors that aren’t directly countable.
In the grand scheme of things, those behaviors affect key performance indicators, which we can assess when we evaluate the training's effectiveness. But I'm still left with behaviors that aren't clearly tied to a statistic.
This is often (though not always) the case with soft skills.
So do you drop the criterion?
Definitely not! (That heading was a trick.)
Instead, attempt to briefly describe what the correctly performed behavior “looks like” or what it should accomplish.
For example: Given a scenario, ask questions that result in identifying the attendee’s reason for canceling the registration.
Though this criterion isn't tied to a particular statistic, an observer can still determine whether the learner identified the cancellation reason by asking the right questions.
Here’s another example: Given a spec sheet and a scenario, create benefit statements that relate a computer to customer needs.
Admittedly, this requires someone to judge whether the benefit statement connects to customer needs, which leaves a bit of room for inconsistency; still, I'd argue it provides a reasonable standard for assessing success.
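If consistency among reviewers is a concern, one option is to spell out the "looks like" description as a short observable checklist. Here's a minimal sketch, with hypothetical checklist items for the benefit-statement objective; it isn't a validated rubric, just an illustration of how a qualitative criterion can still be judged against explicit observations.

```python
# Minimal sketch: a qualitative criterion expressed as an observable checklist.
# The checklist items are hypothetical examples, not a validated rubric.

from dataclasses import dataclass

@dataclass
class ChecklistItem:
    description: str
    observed: bool = False

def assess(items: list[ChecklistItem]) -> bool:
    """The behavior 'passes' only if every checklist item was observed."""
    return all(item.observed for item in items)

benefit_statement_checklist = [
    ChecklistItem("Names a specific customer need from the scenario", observed=True),
    ChecklistItem("Links a spec-sheet feature to that need", observed=True),
    ChecklistItem("States the benefit in the customer's terms, not jargon", observed=False),
]

print(assess(benefit_statement_checklist))  # False: one item wasn't observed
```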
What kind of performance criteria do you use in objectives?
When hard numbers are available for assessing a behavior, it makes sense to use them as the criterion. But how do you write an objective when the criterion is more qualitative than quantitative? As always, we'd love to see your opinions and examples!