Goals, Goals, Goals!!!

I’m headed to Vancouver in two weeks to present at the revamped CARF Canada Advanced Outcomes Training.  That training event includes a discussion about the use of client goals to measure program outcomes, so I thought I would get my rant about the limits of that approach out of the way ahead of time!

The overwhelming majority of programs and services I’ve been involved in evaluating over the past several years use some form of client goal achievement to measure their success. I get the attraction. It’s a ‘two-fer’ for many programs! Staff have to define goals for the work they do with clients as part of case management and program accountability expectations (e.g., accreditation), so why not get some extra mileage by using them for outcomes measurement? But the devil is in the details. Most of the programs I’ve worked with have taken advantage of software that has some form of goal scaling built in. Many software programs (and most in-house solutions) simply require users to indicate whether a goal has been fully achieved, partly achieved, or not achieved at some point in time after the goal is set. Some provide an opportunity to indicate why it was or wasn’t achieved. There are few (if any) parameters around what achievement means or what a reasonable timeframe for full achievement might be. The system then produces a report counting how many goals are achieved (or not) and links that to program-level outcome statements based on categories of goal type that the worker chooses when entering the goal.

So what’s wrong with all of that? To begin, there are a lot of untested assumptions built into that approach. For example, it assumes that all goals are roughly equal in terms of their importance and the amount of time or effort required to achieve them. My experience in working with clients to set goals is that they often aren’t equal. This approach also assumes that all goals have a direct and meaningful link to the program’s goals. The problem here is that goals can often be small stepping stones towards some larger end. So, even if we trust that these individual goals bear some connection to the program’s goals, we end up counting several ‘successes’ (or failures) rather than simply counting the achievement of the real change or benefit we’re hoping for. And in the end, are those successes or failures a true reflection of our efforts and the efforts of our clients?

Using client goals to measure program success could also have unintended consequences for how our staff practice. By counting up the number of goals that are achieved or not achieved, we send the message to staff that this highly personal process has meaning at another level – evaluation of whether the program is working or not. The unintended consequence could be that staff focus their efforts on what is easily achievable (i.e., the low-hanging fruit).

A good friend and colleague of mine often reminds me of an important principle in measuring program success: ‘measure me’. In other words, measure whether I, as a whole person, benefited from the program. Reporting on the percentage of goals that are achieved or not achieved is different from reporting on the percentage of clients who experienced a positive change in their life. Somewhere in that mess of goals are numerous clients with one or more goals of differing importance or significance, usually reflecting many steps towards some desired end. A good evaluation system should be able to measure and report on the changes that each unique client experiences.

The good news is that there are university-tested and validated approaches to measuring program-level outcomes through client goal achievement. These approaches, usually referred to as Goal Attainment Scaling (GAS), are more rigorous and require staff training. They are able to produce a standardized score for the individual that accounts for variation in the number of goals that clients have chosen to work on. They also define clear time limits and parameters for goal achievement. It is unfortunate that the versions I frequently see used are not based on the Goal Attainment Scaling model.
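To make the contrast with simple achieved/not-achieved counting concrete, here is a minimal sketch of the standardized score the GAS literature describes – the Kiresuk–Sherman T-score. Each goal is rated on a five-point scale around the expected outcome, and the formula yields a score centred on 50 regardless of how many goals a client set. The weights and the inter-goal correlation value are parameters the evaluator chooses; 0.3 is the conventional default, and the example client below is hypothetical.

```python
import math

def gas_t_score(ratings, weights=None, rho=0.3):
    """Kiresuk-Sherman Goal Attainment Scaling T-score.

    ratings: attainment levels on the standard -2..+2 scale
             (-2 much less than expected, 0 as expected,
              +2 much more than expected).
    weights: optional per-goal importance weights (default: all 1).
    rho: assumed average inter-goal correlation; 0.3 is the
         conventional default in the GAS literature.
    """
    if weights is None:
        weights = [1.0] * len(ratings)
    wx = sum(w * x for w, x in zip(weights, ratings))
    sum_w2 = sum(w * w for w in weights)
    sum_w = sum(weights)
    denom = math.sqrt((1 - rho) * sum_w2 + rho * sum_w ** 2)
    return 50 + 10 * wx / denom

# Hypothetical client with three equally weighted goals: one exceeded
# expectations, one met them, one fell short. The score lands at the
# expected mean of 50 -- a very different picture than "2 of 3
# goals achieved".
print(round(gas_t_score([1, 0, -1]), 1))
```

Because the score is standardized, clients with two goals and clients with ten goals end up on the same scale, which is exactly what naive goal counting fails to do.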

The bottom line? Goal planning is, and should be, a highly personal affair. Done correctly and with thoughtfulness, it is a fluid and reflexive process that grounds our day-to-day work. The fact that a goal isn’t achieved may be a good thing – perhaps a turning point in the client coming to terms with what their capacity is, or our staff realizing that they’re barking up the wrong tree. Likewise, the achievement of a goal may have had little to do with our efforts. Some things simply improve with time and sometimes people get better or solve their problems despite us! Adding up the results of this highly personal and reflexive process in the belief that it tells us something about program outcome achievement is problematic unless you take the time to build a very rigorous process. In the end, no system of outcomes measurement is perfect. All approaches have their pitfalls. But if you choose to use goal achievement, make sure you use a reliable and valid approach and provide training and support to staff so that they use it correctly and they understand that not achieving a goal can be a good thing!

Comments

  1. Susan Stanfield says

    Thanks, Warren! I couldn’t agree more. We are no longer using individual goal achievement as an indicator of agency effectiveness, for all the reasons you mention…issues of reliability, validity, distinguishing between outputs and outcomes (numbers of goals completed vs impact on the person’s life); and also I think it discourages dreaming bigger and setting ambitious goals, and promotes smaller, less ambitious, more attainable goal-setting. Do we want people to be setting and achieving unremarkable goals or setting their sights higher, imagining a great life rather than a satisfactory one?

    I would be interested to hear about other indicators people are using to measure agency performance.

    • Warren Helfrich says

      Thanks Susan. I’ve seen some organizations experimenting with tools that look more like survey instruments – getting baseline data in a number of domains (typically linked to Quality of Life in the disabilities sector) and then re-administering over time. This can also be linked to supports provided, so there is a strong link between effort and outcomes. So… if we have a good sense of what supports we provided and their linkage to a specific outcome domain, we can use repeated measures to get a sense of the direction and magnitude of change over time.

© 2021 WRH Consulting - Website by Working Design -