How do we know we are making a difference?

by Nancy Bacon, Director of Learning & Engagement

Washington Nonprofits launched a new evaluation form. Is it:

  • Excellent
  • Good
  • Fair
  • Poor?

Washington Nonprofits runs workshops all over the state and online. People turn to us to learn, and we spend a lot of time training others to think about how they make a difference.

But how do we know if we are making a difference? And how do we stay on top of the latest trends so that we are at the top of our game when it comes to learning and program delivery?

This September, Washington Nonprofits launched a new workshop evaluation form. We have changed the form before with little fanfare, and past versions contained the usual Likert scale and open-ended questions about what you liked and didn’t like. Our new evaluation form is different. We are sharing the story to offer an approach that might give you food for thought on how you measure whether your own programs are making a difference.

It all began during the Learning, Technology & Design virtual conference in March 2017. Learning expert Dr. Will Thalheimer, author of Smile Sheets, challenged us to consider whether our current evaluations really told us anything about what people learned. He demonstrated how Likert and other numeric rating scales fall short: your “strongly agree” is different from mine, and we don’t know enough about who is taking the survey to judge whether “I learned a lot” means anything. Converting words like “excellent” into 5 points and then averaging across all participants yields nonsensical data like “You were on average a 4.2 speaker.” We at Washington Nonprofits are deeply committed to learning from our programs and modeling best practice. The time had come to change up our own evaluation.

What we like about our new evaluation:

  1. The first question gives us a baseline understanding of what participants already know when they arrive at the workshop. We had no systematic way of knowing this in the past.
  2. The second question, when paired with the first, shows us the change that resulted. If someone arrives already knowing a good amount and learns something, that is great; we wouldn’t expect them to learn a lot. If someone arrives knowing nothing, we would be disappointed if they learned only something rather than a lot.
  3. Ultimately, we care most about action. Of course we want people to learn something, but we really want them to leave with the connections, tools, knowledge, and confidence to take action. Question 3 lets us know whether we got them to a place of action and what else we might need to think about after the workshop to get them there.
  4. Learning alone does not move people to action. Change happens when people apply what they learned, when others hold them accountable, and when they have access to further tools and information in real time. We want to know both whether people have access to these supports and whether they perceive that they do. The survey both gathers information for us and serves as a final reminder to participants that they are not alone.

As you think about your own program evaluations, you might download Dr. Thalheimer’s free Smile-Sheet Diagnostic. You can read more about implementing these ideas in a blog post by Washington-based learning trainer Brian Washburn.

And let us know what you think about our new evaluation!