Practical Learning Surveys: Actionable Outcome

Summary: How could we evolve surveys to provide data insights that are more performance-focused, behavior-predicting, and proactively actionable?

What You Ask And How You Ask It Both Matter

My previous article examined common pitfalls when designing learning surveys. This article builds on those tips to make survey data more actionable. Actionable data leads to actions. If you're not planning to do anything with the data you're collecting, then why collect it in the first place?

Look For Trends And Patterns, Not Individual Responses

Acting hastily on isolated comments may backfire! Including an open-text question at the end of a survey is a common catch-all that lets participants share anything about their learning experience. The potential mistake is acting on individual feedback instead of looking for patterns and trends.

Learners who are more opinionated or louder can cause expensive changes to a program without representing the majority of responses. For example, someone might say, "The learning experience should have been much shorter." Acting on this opinion alone may cause designers to remove the activities that made the learning effective in the first place, just to make it shorter.

We shouldn't rely on opinions; we should rely on data. What individual respondents believe to be an effective way of learning (such as watching animations about doing something instead of actually doing it, highlighting text, etc.) may not actually be effective. Learners may think they know more than they do, or believe their skills are higher than they are:

While people believe that they can identify effective teaching, they actually have limited knowledge of effective teaching practice. In the study, nearly all respondents believed that they were relatively skilled at identifying great teaching strategies, and more than 75 percent considered themselves above average in evaluating instructional practice. [1]

Look for patterns in the feedback (Artificial Intelligence can help) rather than reacting to individual responses. Remember, no matter how you phrase your questions about perception, the survey still measures only perception, not actual skills gained or proficiency observed on the job. Don't make decisions solely based on Level 1 surveys. Use the insight as one piece of a puzzle that also includes actual assessment results and on-the-job observations.
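As a starting point for pattern-finding, here is a minimal sketch (assuming Python with scikit-learn; the comments and cluster count are made up for illustration) that groups open-text responses into rough themes for human review:

```python
# A minimal sketch: cluster open-text comments into rough themes so you
# review patterns rather than react to individual outliers.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

comments = [
    "Too long, please shorten the course",
    "The practice scenarios were the most useful part",
    "Loved the scenarios, more practice please",
    "Way too long and repetitive",
    "Audio quality was poor in module 3",
]

# Turn comments into TF-IDF vectors, ignoring common English stop words.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(comments)

# Group comments into a handful of candidate themes (3 is arbitrary here).
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Print the top terms per theme as a starting point for human review.
terms = vectorizer.get_feature_names_out()
for i, center in enumerate(km.cluster_centers_):
    top = [terms[j] for j in center.argsort()[::-1][:3]]
    print(f"Theme {i}: {', '.join(top)}")
```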

Designing More Actionable Surveys

To design surveys that go beyond participants' reactions and provide actionable insights, consider adopting an experiential approach, such as the one detailed in Will Thalheimer's Performance-Focused Smile Sheets.

Do traditional Level 1 surveys predict learning outcomes? Apparently not; they are unreliable indicators.

[...] four meta-analyses, covering over 200 scientific studies, find correlations between smile-sheet ratings and learning to average about 10%, which is virtually no correlation at all. [2]

Is there value in smile-sheet ratings, then? Yes, but again, they should not be the only measure. In my experience, smile-sheet ratings have detected disasters and large-scale inconsistencies. This has been especially true for Instructor-Led or Virtual Instructor-Led Training courses, where delivery was less consistent than in web-based training.

For example, when a course received lower-than-normal satisfaction scores, program managers combed the open-text responses for clues, since the anonymous evaluation left no way to follow up with individual respondents. So, while traditional smile-sheet ratings may detect significant issues, Learning and Development (L&D) still needs a diagnostic investigation to understand why.
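Before treating smile-sheet scores as a proxy for learning, you can run the same correlation check on your own program's data. Below is a minimal sketch assuming Python with SciPy and hypothetical paired, anonymized ratings and assessment scores (all numbers are made up):

```python
# A minimal sketch: check whether Level 1 ratings track assessment results.
# Spearman correlation suits ordinal ratings better than Pearson.
from scipy.stats import spearmanr

# Hypothetical paired data: smile-sheet ratings (1-5) and assessment scores (%).
ratings = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4]
scores = [72, 65, 80, 75, 68, 90, 60, 77, 70, 82]

rho, p = spearmanr(ratings, scores)
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")  # near 0: ratings predict little
```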

How Could You Improve The Survey To Make It More Actionable?

By changing two things: what you ask (shifting the focus to forward-looking performance) and how you ask it (actionable answer options rather than generic Likert-scale agreement statements).

Ultimately, the value of L&D is not in the visible content of learning experiences but in the invisible changes that happen to people. Unfortunately, we can't measure that invisible change. At least, not until we have chips in our brains. We can measure observable behavior, which is the application of learning. That's why our focus needs to be forward-looking and performance-driven.

Implementing Performance-Focused Survey Design

Will Thalheimer's approach emphasizes aligning survey questions with the training's performance goals. Generic Likert-scale questions are vague (not to mention that treating Likert-scale answers, which are ordinal, as continuous data between 1 and 5 can lead to incorrect insights). The goal is to design specific questions that let you both gauge a meaningful metric and act on the results:

  • How able are you to put what you’ve learned into practice in your work? Choose the one option that best describes your current readiness.

a) I didn't learn anything new.
b) My situation is too different from the ones covered in the course.
c) I need more guidance before I know how to use what I learned.
d) I need more practice to use what I learned well.
e) I can be successful now in using what I learned (even without more guidance or experience).
f) I can perform at an expert level in using what I learned.

For example, if you learn that 45% of a cohort believes they still need more practice to apply the skills successfully on the job, you can work with your stakeholders on a support plan for the skills transfer. Otherwise, it is likely that some (or all) of that 45% will not apply what they learned on the job.
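Because options like these are categorical (not a 1-to-6 scale to be averaged), report them as a distribution. Here is a minimal sketch, assuming Python and hypothetical responses keyed to the options above (the 40% threshold is an arbitrary example):

```python
# A minimal sketch: report performance-focused answers as a distribution,
# never as an average, since options a-f are categories, not a numeric scale.
from collections import Counter

# Hypothetical responses keyed to the options above.
responses = ["d", "e", "c", "d", "d", "e", "f", "d", "c", "d"]

counts = Counter(responses)
total = len(responses)
for option in "abcdef":
    pct = 100 * counts.get(option, 0) / total
    print(f"Option {option}: {pct:.0f}%")

# Flag when a large share still needs guidance or practice (options c + d).
needs_support = (counts.get("c", 0) + counts.get("d", 0)) / total
if needs_support >= 0.40:  # arbitrary example threshold
    print("Consider a post-training support plan for skills transfer.")
```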

Predicting Behavior Change With The Right Survey Questions

If the goal is behavior change, could we measure how well-prepared learners are for the behavior as a result of the learning experience?

Remember, a survey taken right after the learning experience still captures perception; we're not measuring actual skills, decision-making, or task competence. So, we may not be able to tell how well-prepared learners are, but we can learn how well-prepared they feel.

For example, crafting the right survey questions may help predict the perceived behavior change by focusing on four key components: motivation (and intent to apply), opportunity to apply (physical and social), job capabilities (knowledge, skills, and abilities), and outcome (SMART-goal attainment). The beauty of data is that you can (and you should) test it for reliability, validity, and practicality.
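For instance, if several survey items are meant to measure the same component (say, motivation), you can check their internal consistency. Here is a hedged sketch of Cronbach's alpha, assuming Python with NumPy and made-up answers:

```python
# A minimal sketch: Cronbach's alpha as an internal-consistency check for a
# block of related survey items (e.g., several motivation questions).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x questions matrix of numeric answers."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical answers to three motivation items from five respondents.
motivation = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
])
print(f"alpha = {cronbach_alpha(motivation):.2f}")  # ~0.7+ often deemed acceptable
```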

The four elements (motivation, opportunity, job capabilities, and outcome) are just examples of a potential approach based on the COM-B model and the effect of goal-setting [3]. Another study experimented with combining the Integrated-Change model with Self-Determination theory for physical activity [4]. The point is that knowledge and skills are not enough for behavior change!

Understanding The Complexity Of Behavior Change

It's important to note that behavior change is a very complex challenge, and Learning and Development professionals shouldn't expect to drive it through training alone.

What Are The Most Effective Drivers For Behavior Change?

A recent study by the University of Pennsylvania ordered behavior-change intervention targets by increasing impact [5]:

  • Individual determinants: knowledge, general skills, general attitudes, beliefs, emotions, behavioral skills, behavioral attitudes, and habits.
  • Social-structural determinants: legal and administrative sanctions, programs that increase institutional trustworthiness, interventions to change injunctive norms, monitors and reminders, descriptive norm interventions, material incentives, social support provision, and policies that increase access to a particular behavior.

Conclusion

Crafting reliable and practical surveys for Level 1 evaluation requires careful consideration of question design and an experiential approach that focuses on performance outcomes. By avoiding common pitfalls and implementing principles from Will Thalheimer's Performance-Focused Smile Sheets, you can create surveys that gauge participant reactions accurately while also providing actionable insights. Exploring the intersection of behavior science and learning science with surveys and follow-up observations can teach us much more about human behavior's mysterious ways.

References:

[1] What Do People Know About Excellent Teaching and Learning?

[2] Learner Survey Results are NOT Correlated with Learning Results

[3] The COM-B Model for Behavior Change

[4] Combining the Integrated-Change Model with Self-Determination Theory: Application in Physical Activity

[5] Determinants of behaviour and their efficacy as targets of behavioural change interventions