Goals, Goals, Goals!!!

I’m headed to Vancouver in two weeks to present at the revamped CARF Canada Advanced Outcomes Training.  That training event includes a discussion about the use of client goals to measure program outcomes, so I thought I would get my rant about the limits of that approach out of the way ahead of time!

The overwhelming majority of programs and services I’ve been involved in evaluating over the past several years use some form of client goal achievement to measure their success. I get the attraction. It’s a ‘two-fer’ for many programs! Staff have to define goals for the work they do with clients as part of case management and program accountability expectations (e.g., accreditation), so why not get some extra mileage by using them for outcomes measurement? But the devil is in the details. Most of the programs I’ve worked with have taken advantage of software that has some form of goal scaling built in. Many software programs (and most in-house solutions) simply require users to indicate whether a goal has been fully achieved, partly achieved, or not achieved at some point in time after the goal is set. Some provide an opportunity to indicate why it was achieved or not achieved. There are few (if any) parameters around what achievement means or what a reasonable timeframe for full achievement might be. The system then produces a report counting how many goals are achieved (or not) and links that to program-level outcome statements based on categories of goal type that the worker chooses when entering the goal. [Read more…]
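To make the critique concrete, here is a minimal sketch of the kind of tally these systems produce. Every name in it (`Goal`, `tally_by_category`, the status labels) is hypothetical and illustrative, not any vendor’s actual software:

```python
# Hypothetical sketch of the goal-scaling report logic described above.
# No vendor's actual API is implied; names and labels are made up.
from collections import Counter
from dataclasses import dataclass

STATUSES = ("achieved", "partly achieved", "not achieved")

@dataclass
class Goal:
    category: str  # worker-chosen goal type, mapped to a program outcome
    status: str    # one of STATUSES -- with no shared definition of "achieved"

def tally_by_category(goals):
    """Count goal statuses per category -- the entire 'outcome' report."""
    report = {}
    for g in goals:
        report.setdefault(g.category, Counter())[g.status] += 1
    return report

goals = [
    Goal("housing", "achieved"),
    Goal("housing", "not achieved"),
    Goal("employment", "partly achieved"),
]
print(tally_by_category(goals))
# Note what's absent from the report: no timeframe, no definition of
# achievement, no baseline -- exactly the limits discussed above.
```

The point of the sketch is what isn’t in it: the ‘outcome’ is nothing more than a count of worker-entered statuses, grouped by a worker-chosen category.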

Building Innovation Muscle

I’ve written before about the role of best practices and my skepticism that an overly narrow focus on identifying and replicating them is the best and most effective way to address client needs. I recently read a book titled “High Performance Nonprofit Organizations” by Letts, Ryan and Grossman that points out one of the unintended consequences of a narrow focus on replicating practices established elsewhere: the decoupling of idea generation and program development (i.e., the “creating and innovating” part of the business) from service delivery systems and capacity (i.e., the “doing” part of the business). The authors argue that adaptive capacity – the capacity to innovate based on identifying and responding to needs – is a critical element in high performing non-profits, just as it is in successful for-profit companies around the world. By outsourcing idea generation and program development, non-profits lose out on the powerful impact that a strong culture of innovation can have. [Read more…]

The Critical Importance of Human Connections

I recently read a Huffington Post article where the author argued that addiction is essentially an adaptation to a lack of human (social) connections. Although I’m not typically a fan of reducing complex social problems to simple explanations, my light bulb went on in a BIG way. And here’s why.

For many years, I’ve been involved in developing models for social service programs. Those programs frequently identified “connection to community” or “connection to personal networks” as a stated outcome. It just became a kind of given; I mean, of course we should try to make sure people are connected with the resources and supports that could help them! It was an acknowledgement that no human service program could do it all or be there forever. It was the ‘after-care plan’. Perhaps we thought we could work ourselves out of a job. The Huffington Post article made me start to think that I’ve had it all backwards. Maybe connections – creating and maintaining meaningful relationships with people that we care about and that care about us – should be the first order of business in all human service work. Or in life, for that matter. Maybe instead of starting out by asking “what’s your diagnosis?” or “what medications are you currently taking?”, we should ask “who loves you?” or “who will be there for you when you get home tonight?”. [Read more…]

Is Organizational Alignment Necessary for Success?

Creating organizational alignment. It’s the stuff of awesome posters with images of rowers all pulling with perfect precision towards the finish line. It’s the holy grail of organizational strategic development work. I’ve been among the apostles believing that a key to success for organizations delivering human services is creating a high degree of alignment around a mission or a set of common goals. In fact, I’m a fan (and user) of the Balanced Scorecard approach for organizational planning which is all about alignment. But it’s much easier said than done. And to what degree is it necessary? There is some evidence to suggest that, although some degree of alignment is necessary, a high degree of alignment doesn’t actually distinguish the good from the great when it comes to human services agencies. [Read more…]

Happy Holidays to All Our Friends!!!

My colleague Kim and I would like to wish all of you a happy holiday season.  Instead of the usual card thing, we decided to sing you a little song.   Check out our amazing video here!!!

Take care and looking forward to seeing all of you in the new year!

Five Tips for a Successful Accreditation

Going through the CARF Accreditation Survey process can be challenging. And as if you didn’t already have enough on your plate! It sometimes seems like surveys end up coming at the worst possible time.  While there is no magic bullet, here are some practical tips that I think will make your life a LOT easier!

Have Your Ducks All Lined Up
If you’re smart about it, getting ready shouldn’t cause too many grey hairs. As Louis Pasteur said, chance favors the prepared mind. I strongly suggest that agencies start out with a gaps analysis: a standard-by-standard review to determine where (and how) they meet the standards as well as where the gaps are. Although tedious, it makes your life much easier down the line. The challenge is that many standards have a logical connection to other standards. Organizations that start at the beginning of the CARF manual and make changes as they work through it, or assign responsibility for different sections of the manual to different people, will run into problems. They fail to see the interconnections and overlap until it’s too late. Seeing the bigger picture up front is critical. I also strongly recommend embedding standards requirements in existing agency systems or processes. Ask yourself “Can this requirement be met by adjusting an existing form or adding an element to an existing client or team meeting process?”. Think ‘Two-Fers’! Minimize the impact on the front line by using what you’ve already got!

Know Thy Survey Team
About two months prior to the survey, you’ll get an email from CARF letting you know who is on your Survey Team. Although the members of the team can change right up until the survey start date, it’s worth checking out who they are. We would like to believe that all surveyors are created equal, but that simply isn’t the case. They are professionals in the field who bring their own perspectives, experiences, and biases. So check them out. Google them. Find out where they work and what they do there. Ask other accredited agencies whether they know them. When the administrative surveyor calls you to discuss the survey (roughly a month prior to your survey start date), ask lots of questions about their background and their approach to surveying. Although a prepared organization should do well regardless of who the surveyors are, having a sense of what to expect from the team can make a world of difference to how smoothly things go.

The Secret to All Good Events: Planning!
Remember that a survey, in essence, is an event. It follows a schedule, has specific elements, and involves different groups of people with different roles. Surveys go best when they are well planned. Work with the Survey Team to develop a detailed survey schedule. Do your best to make sure each part of the survey happens as scheduled. Have point people who act as liaisons to the different survey team members. Although there are bound to be some small glitches, you want it to go as smoothly as possible.

Make It Easy
Surveying can be grueling! You fly to a place you’ve never been before to meet up with people you’ve likely never worked with before to spend several intense days at an organization you know little about. While surveyors are paid by CARF, it’s usually a meager pittance compared to what they make at their day jobs. They do it because it’s an opportunity to give back, to learn from others, and to see how things are done in other places. So make their life easy! Part of that is being prepared and planning for the survey, which I’ve already discussed above. In addition, make sure that the materials you provide them are clearly marked and ideally referenced to the standard to which they apply. Provide a nice space to work in. Make sure they have the necessities of life: coffee and a clean washroom. Help them to figure out arrangements for lunch and give them some recommendations for dinner. Make sure you recommend a decent hotel that isn’t too far away. Although leaving a welcome basket at their hotel isn’t required, I always appreciate it when an agency leaves something to welcome me – a note, or some information about the local community. The little things can truly make a difference.  Most surveyors ask for some agency materials to be left at the hotel the night before the survey starts anyways, so that’s your opportunity to welcome them!

Remember, It’s Your Survey!
I can’t stress this one enough: this is YOUR survey! You are paying for it (directly or indirectly, depending on the jurisdiction). You should expect good service, both from CARF’s staff at headquarters in Tucson and from the Survey Team. They should respond to your questions and be open about the process. They should be professional and courteous. You should expect them to be fair and balanced in giving feedback. They should strive to add value to your organization by offering good advice and pointing you in the direction of additional resources wherever possible. They should also acknowledge your strengths and give you the opportunity to show off what makes you proud about your agency. While CARF does its best to match surveyor skill sets to agency needs, the process isn’t perfect. You may simply end up with someone who isn’t a good match for your organization. Or if you happen to live in a city that is a sought-after tourist destination, you can end up with team members who are interested in a ‘Survey-cation’ (thankfully, that’s rare). Bottom line – if you’re not happy, let the survey team know about it! And if that doesn’t work, let CARF know about it.

I hope these tips help.  And if you have others that you’d like to add, please add a comment below!

Why Valuing Our Mistakes Matters

I’m a fan of TED Talks. And two of my favorites are Brené Brown’s talks from 2010 and 2012. In the 2012 talk, she spoke about the importance of valuing our failures. I couldn’t agree more.

Brené is among numerous authors and speakers that have talked about how, as we mature, we are socialized to become averse to taking risks that could result in failure. None of us would have learned to walk had we been as averse to failure in childhood as we become in adulthood. We would have hoisted ourselves up, fallen right back down, looked around self-consciously, and promptly decided that the prize wasn’t worth the potential damage to our self-esteem. We would have then rationalized our choice by saying that walking was really overrated anyways! A humorous example, but how often does it describe our own actions when it comes to taking a chance and putting ourselves out there?

While I think that the topic has importance for how we live our lives as individuals, I also think it has relevance in the context of how we manage human service programs. Do we allow staff to take some measured risks? Do we encourage thoughtful experimentation knowing that some experiments might fail? To be clear, I’m not suggesting that we intentionally put people’s lives at risk or that we set people up for failure. I just believe that always doing the same things means always getting the same results, and sometimes that isn’t good enough. The issues our clients and communities face are getting more complex and the resources to address them are becoming increasingly scarce.

So how do we go about encouraging mistakes or failures to address these complex issues and problems? Complex Systems Theory is a place to start. Dave Snowden, a well-known author and speaker on the subject of complex systems, talks about moving from a ‘fail-safe’ mindset to a ‘safe-to-fail’ mindset when addressing complex problems. Snowden and others suggest that successful solutions to complex problems are often novel and emerge out of the context in which they are occurring rather than being imposed from above or outside. By allowing people to experiment in a ‘safe-to-fail’ environment (i.e., no one is going to get hurt or be punished for failure), you encourage the kind of innovation that is needed. The experiments that don’t work are dampened and the ones that do are amplified.

So why aren’t we taking this approach to solving problems? Why aren’t we actively trying new things knowing that we will likely be met with at least some measure of failure? Although part of it is the potential to waste time and resources (which are scarce) on things that may not work, I think the real issue is that mistakes and failures expose us. They require us to be vulnerable as managers and leaders. But as Brené says in her TED Talk, “Vulnerability is the birthplace of innovation, creativity, and change.”

The Role of ‘Best Practices’

Over the past decade, the human services sector has become more and more focused on identifying and utilizing ‘best’ or evidence-based practices to achieve better outcomes or improve the cost-benefit ratio.  At face value, this seems like common sense.  That’s part of the attraction of it.  But scratch below the surface a little and you soon realize that it isn’t as straightforward as it seems.

In order to use a ‘best’ or evidence-based practice, you first have to figure out what that is.  Although there are organizations that research and publish information on practices that they believe meet the criteria of ‘evidence-based’, practices haven’t been identified in all areas.  For example, the Campbell Collaboration (one of the organizations that researches evidence-based practices) was not able to affirm a best practice for developing independent living skills in youth.  In other areas, there are a number of practices that could be considered ‘best’ depending on who you talk to or what you read.  To complicate the matter further, not all practices that show promise get subjected to rigorous research that could validate them, even though they may be just as effective as those that have been researched.  Research is driven in large part by the interests of those involved, which is why a significant portion of medical research is funded by drug companies and focuses on how well their drugs work.  Even if you are able to identify a practice that you think fits the need, there’s still the practical reality of trying to implement it in a setting that may be very different from the one in which it was researched.  A drug designed to treat a specific condition under specific circumstances won’t work for other conditions or in other circumstances.  Why would we expect human services practice to be different? All of this should lead us to question whether the starting place should be a search for a ‘best’ or evidence-based practice as a means to improve outcomes.

That leads me back to a subject I discussed on another blog: Complexity Theory.  As Dave Snowden from Cognitive Edge states in his Harvard Business Review article, “Best practice is, by definition, past practice”.  As such, it works ‘best’ on relatively straightforward, simple problems.  The challenge is that most of what we face in human services is highly complex – complex families, complex personal problems and histories, and complex community circumstances.  These are the kinds of problems that require lots of heads to come together and allow for the appropriate solutions to emerge and be tested.  The solutions that work get amplified.  The ones that don’t are set aside.  That means having a certain tolerance for risk, something that was pointed out in an article by Stevens and Cox (2008) that focused on how Complexity Theory could inform Child Welfare and Residential Child Care practice.  But the payoff is having highly customized solutions that are built from the ground up and have much greater buy-in from those involved.

To be clear, I’m not suggesting that we should do away with researching and using ‘best’ or evidence-based practices.  The point is that we should understand them as tools in our toolkit to be used when and where it makes sense.  As Dave Snowden points out, part of addressing complicated and complex problems can involve breaking them down into constituent parts.  These parts of a problem may be perfect candidates for applying a ‘best practice’, but only after we’ve endeavored to understand and grapple with the bigger picture and feel confident that the practice we have in mind is a good fit.  Using a best practice should never be cookie-cutter, because our clients and their circumstances aren’t cookie-cutter.  That means developing solid assessment and critical analysis skills in our front line staff; supporting them to be okay with ambiguity and the unknown; trusting that solutions may already exist if we take the time to bring people together and allow them to emerge; and being willing to take some measured risks to get to real and lasting change.  In this context, I believe that ‘best’ or evidence-based practices play a supporting rather than central role.

Does Accreditation Matter?

As a facilitator of workshops for the accrediting body CARF, I often get asked about the value of accreditation.  As anyone who has been through it will tell you, it’s a lot of work to prepare for accreditation!  And the reality is that many agencies don’t have a choice.  Their funding body mandates them to be accredited.  Regardless, given the scarcity of resources to provide much needed services, it’s fair to ask if it provides a meaningful return on the investment.

The reality is that accreditation is not a silver bullet that automatically turns mediocre (or bad) agencies into good ones.  It’s a review that focuses almost entirely on systems and processes.  There aren’t standards that can force agencies to be innovative or entrepreneurial in their approach to problems.  While there are standards that require the implementation of outcomes measurement and quality improvement processes, the accrediting body isn’t there to warrant that you’ve achieved all of the outcomes or that your improvement process has yielded significant results.  More than a few organizations have gotten through accreditation with a ‘good enough’ approach to conformance.  And at the end of the day, having a couple of people show up once every three or four years is hardly an intensive audit that could provide 100% assurance of quality services.  But that isn’t why accreditation matters.  I think it matters because it provides a time-tested tool for helping us to get better at what we do.  It supports the implementation and continuous improvement of solid and consistent service delivery systems.  While that might not be sufficient to spur innovation in service delivery, it is a necessary and stable foundation from which improvement and innovation can occur.  Through accreditation, we get a chance to take a hard look at how we do things and to learn from (and share with) others in the field.  And I think that getting accredited sends an important message about accountability to our clients and other stakeholders.  It says that we are willing to allow ourselves to be judged against international standards and that we are interested in being the best we can be.  While there are organizations that get through it without truly buying in to the opportunity it presents, that isn’t an indictment of accreditation.  It simply reflects poorly on them.

Trust, Autonomy & Leadership

In preparation for a workshop I recently did for Leadership Victoria titled “Leading in Uncertain Times”, I’ve been thinking a lot about the role of leadership in solving complex problems.  I’ve believed for a long time that leadership matters.  My PhD dissertation focused on the relationship between program leadership and client outcomes.  I discovered that helping clients deal with complex issues in their lives was as much about how workers related and attended to them as it was about what particular intervention model or approach they used.  The characteristics of relating and attending to clients were largely mirrored in leader-worker relationships.  In other words, workers valued the same kind of relationship with their leader as clients valued with them.  I discovered that trust and autonomy were at the core of those relationships.

As I closed the loop on my dissertation and looked for links between what I had learned and the existing leadership literature, I became drawn to Complexity Science.  Trust and autonomy are key aspects of emerging models of leadership and organizational decision making based on Complexity Science.  Several authors have contributed greatly to my understanding of this area, most notably Dave Snowden and Margaret Wheatley.  The Cynefin model developed by Dave Snowden and his associates at Cognitive Edge has powerful potential as a decision making model for leaders seeking to solve highly complex problems.  Their model cautions against a ‘command and control’ approach to solving these kinds of problems, focusing instead on probing and sensing before moving to action.  In other words, we have to trust that those in the system or network have the answers and provide enough space (i.e., freedom and autonomy) for those answers to emerge.  Margaret Wheatley’s writings, some of which are available on her website, focus heavily on the use of small and large group processes to solve complex problems.  She too places a high value on trusting that solutions exist and that humans’ underlying drive for freedom and creativity is key to solving complex problems.

Although I still consider myself a relative beginner in understanding how Complexity Science can be used by leaders to solve complex problems, I’m excited by the possibilities.  This approach challenges us to resist the urge to rush in and take control, to rely heavily on our relationships and networks of individuals with a stake in the problem, and to trust that solutions exist and will emerge if we provide creative space for that to occur.

© 2023 WRH Consulting - Website by Working Design -