Using Story in Evaluation

Evaluating programs through stories is a powerful way to personalize, for your funders and other stakeholders, the experiences of those you serve. And although it’s often more work, gathering stories and reflecting on them allows you to truly understand a service up close, from your clients’ perspective.

My colleague Kim and I were recently involved in an evaluation project that involved gathering ‘Most Significant Change’ (MSC) stories from individuals and service providers who had been affected by a series of health care initiatives. The stories were incredibly revealing. Some were very raw. They pointed to the harsh realities of dealing with persistent substance misuse and mental health issues, often rooted in past traumas. They also highlighted the daunting and overwhelming challenges of trying to navigate systems that aren’t designed to be easy or accessible to anyone other than those with a lot of social or financial capital.

The experience of gathering, writing, and presenting the stories brought me back to my dissertation work. I used a qualitative research methodology called Grounded Theory to try to understand how leadership in social service programs affects the outcomes that individuals achieve through those programs. As I wrote about in another blog post, it ended up being about trust and autonomy. What I realized in reflecting on those experiences is that I have often gravitated towards numbers rather than stories in my work evaluating programs. For the sake of convenience and to meet funder requirements, it’s been easy to just gather up survey data or data on service utilization patterns and then tell a narrow story of what happened. Using qualitative methodologies to gather stories takes more time, but I believe it has amazing payoffs in terms of gaining a more in-depth understanding of patterns.

In the end, I think the best evaluations actually use both quantitative data and stories. Some practical tips that will make it easier for you to add stories to your evaluation process include:

  • Use an outsider to gather the stories.  Program staff or management will bring their own lens to the process that may not be helpful, and those being interviewed may feel some social pressure to answer in certain ways.  If having an outside evaluator isn’t in your budget, you can use staff from other programs or students from a local social service program.
  • Consider what questions might help round out or more fully tell a story about the different aspects of a program or service.  For example, you can ask questions about what people felt was the most powerful or beneficial aspect of a program and use that to bring life to quantitative outcomes data from surveys.
  • Keep the sample small but diverse.  You likely won’t need a massive number of stories to gain a more in-depth understanding, as long as you sample for diversity in terms of people’s characteristics (e.g., age, gender identity, cultural background) or their experiences with services (e.g., those who completed the program vs. those who didn’t).
  • Choose your method for gathering the data carefully.  Interviews typically provide richer data but are more work than a focus group methodology.  If you wish to use focus groups, I would consider adding a few interviews as well.  And while phone interviews will suffice, face-to-face is best if you can manage it.
  • Ask permission to record the interview or focus group.  By recording, you free up your mental energy to really focus on what people are saying and ask good follow-up questions.  Promise people that you are only using it to make sure you get the story right and that you will both maintain it securely and destroy it once you’re done.  The recording will allow you to capture verbatim quotes, which can be unbelievably powerful in storytelling.
  • Open up and lean in to the experience!  I’ve found that listening and really engaging with other people’s stories can be profound and enriching.  It’s a real honour to bear witness to people’s stories.  They often feel incredibly grateful for being listened to – particularly people who have been made voiceless through marginalizing experiences.  And those stories can help propel your program forward in its growth and evolution towards better outcomes for those very individuals.


Goals, Goals, Goals!!!

I’m headed to Vancouver in two weeks to present at the revamped CARF Canada Advanced Outcomes Training.  That training event includes a discussion about the use of client goals to measure program outcomes, so I thought I would get my rant about the limits of that approach out of the way ahead of time!

The overwhelming majority of programs and services I’ve been involved in evaluating over the past several years use some form of client goal achievement to measure their success. I get the attraction. It’s a ‘two-fer’ for many programs! Staff have to define goals for the work they do with clients as part of case management and program accountability expectations (e.g., accreditation), so why not get some extra mileage by using them for outcomes measurement? But the devil is in the details. Most of the programs I’ve worked with have taken advantage of software that has some form of goal scaling built in. Many software programs (and most in-house solutions) simply require users to indicate whether a goal has been fully achieved, partly achieved, or not achieved at some point in time after the goal is set. Some provide an opportunity to indicate why it was achieved or not achieved. There are few (if any) parameters around what achievement means or what a reasonable timeframe for full achievement might be. The system then produces a report counting how many goals are achieved (or not) and links that to program-level outcome statements based on categories of goal type that the worker chooses when entering the goal.
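To make the limitation concrete, the counting logic these systems run is typically no more sophisticated than the following sketch. This is a hypothetical illustration, not any specific vendor’s product; the field names, categories, and statuses are my own invented examples. Notice that nothing in the tally captures what “achieved” actually meant for each client, or over what timeframe.

```python
from collections import Counter

def tally_by_category(goals):
    """Count goal statuses within each worker-chosen category.

    This mirrors the kind of report described above: a raw count of
    achieved / partly achieved / not achieved, grouped by goal type,
    with no definition of what achievement means.
    """
    report = {}
    for goal in goals:
        report.setdefault(goal["category"], Counter())[goal["status"]] += 1
    return report

# Illustrative records only -- statuses are whatever the worker clicked.
goals = [
    {"category": "housing", "status": "achieved"},
    {"category": "housing", "status": "partly achieved"},
    {"category": "employment", "status": "achieved"},
    {"category": "employment", "status": "not achieved"},
]

report = tally_by_category(goals)
```

A program-level outcome statement is then built on counts like `report["housing"]["achieved"]`, even though two workers may have meant entirely different things by clicking “achieved.”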

© 2023 WRH Consulting - Website by Working Design