Mental health is key to healthy living
The primary goal of addiction treatment is to improve an addict's health and quality of life. The World Health Organization (WHO) defines health as "a state of complete physical, mental, and social well-being and not merely the absence of disease." This definition underscores why quality matters in health service organizations. But how do we know whether an addiction treatment center is providing quality services?
The most common mistake addiction treatment program planners and decision-makers make is relying more on personal experience and opinion than on evaluation data that has been systematically gathered and analyzed. So, how can you be certain that your own practice is reaching people effectively?
Is your addiction practice effective?
In this interview, we speak with analyst and consultant in human services, Ashley Hall, about the importance of effective evaluation in addiction practices. A data nerd, Ashley specializes in customized analysis that can fit your program needs.
Continue reading to learn more about how you can objectively look at your own practice. Then, we invite you to post your questions, opinions, and/or personal experiences regarding evaluation of addiction treatment programs in the comments section at the end. In fact, we try to respond to all questions personally and promptly!
ADDICTION BLOG: Hello Ashley. Thank you so much for joining us! To begin…why are treatment programs NOT evaluating their own effectiveness?
ASHLEY HALL: Evaluation can be a complicated process, and with limited funding and time, it can often take a back seat to other, more immediate practice concerns. The irony is that evaluation is exactly what helps us figure out how our practice is doing, whether we are meeting our goals, and whether we are actually helping our clients. A good strategic plan with evaluation built into it can help to alleviate some of the common evaluation pitfalls.
ADDICTION BLOG: Can you explain the phases of addiction treatment evaluation?
ASHLEY HALL: I work with all kinds of human services organizations, and what I love about evaluation principles is that they can be adapted to all types of practice.
A lot of what we do as evaluators depends on the organization itself. If we have an addiction treatment center that focuses primarily on addiction in youth, we need to evaluate how well we are accomplishing the goal of preventing/treating addiction in youth. This can be done with:
- individual assessments
- pre-post interviews
- longitudinal analysis
- even experimental design (if multiple treatment programs are being offered)
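To make the pre-post idea concrete, here is a rough sketch (not Ashley's actual method, and the scores are hypothetical): a paired t-statistic comparing participants' scores before and after a program, using only Python's standard library.

```python
from statistics import mean, stdev
from math import sqrt

def paired_t(pre, post):
    """Paired t-statistic for pre/post scores (post - pre)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    # mean difference divided by its standard error
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical knowledge-test scores for six program participants
pre  = [52, 60, 48, 70, 55, 63]
post = [68, 71, 55, 80, 66, 70]

print(round(paired_t(pre, post), 2))  # ≈ 7.61 on this toy data
```

A large positive t suggests scores rose after the program; in practice you would compare it against a t-distribution with n-1 degrees of freedom (e.g. via `scipy.stats.ttest_rel`) to get a p-value.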
Beyond program effectiveness, we may also want to look at satisfaction with the services, which is typically a survey analysis, or at broader community-level impact, which is challenging to measure but doable with enough resources. The bottom line is, evaluation methods are often as varied as the organizations themselves.
ADDICTION BLOG: How can the evaluation results be used to improve an addiction approach?
ASHLEY HALL: I am a huge fan of data-driven decision making. Evaluation results should be factored into any program decisions, large or small.
For example, if a survey is sent out to clients of a treatment facility and the results show that clients are rarely available before 10am, does it make sense to have office hours from 7am-1pm?
A little more disturbing, if evaluation data is showing that a particular program is not helping clients, we have a responsibility to discover why that is and make appropriate changes. Sometimes organizations are trapped in a cycle of assuming that their programs are performing at peak efficiency, but the data might show something different. We won’t know that until we evaluate.
While vision and intuition are not to be discounted, they are no substitute for solid data that comes from well-planned evaluations.
ADDICTION BLOG: How much time and money is usually required to effectively audit an addiction treatment program's efficacy?
ASHLEY HALL: This is an impossible question to answer.
Evaluation may be a simple endeavor requiring only 40 consultant hours per fiscal year, or it may require 40 hours per week for three full weeks just to lay the groundwork. It all depends on the organization.
I always suggest that an evaluation process be built into the strategic plan of any organization, large or small. A solid strategic plan with specific and measurable goals, each of which includes a plan to evaluate progress toward reaching that goal, will go a long way in simplifying the evaluation process.
In addition, having a good method of collecting and storing data can drastically reduce the amount of legwork needed to complete a thorough evaluation. If an analyst or consultant has to spend the first 20 of 80 hours just cleaning data, that drives up the evaluation bill quite a bit.
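As a small illustration of the kind of cleaning that eats those consultant hours (a generic sketch, not a tool Ashley describes), intake records often arrive with stray whitespace, inconsistent field names, and empty values that must be normalized before any analysis:

```python
def clean_record(raw):
    """Normalize a raw intake record: trim whitespace, lowercase
    field names, and drop fields that are empty after trimming."""
    return {k.strip().lower(): v.strip()
            for k, v in raw.items()
            if v and v.strip()}

# Hypothetical messy record exported from a case-management system
raw = {" Name ": " Jane Doe ", "AGE": "34", "notes": "   "}
print(clean_record(raw))  # {'name': 'Jane Doe', 'age': '34'}
```

Storing data consistently from day one makes this step nearly free; retrofitting it across years of records is what turns into 20 billable hours.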
It is best to keep in mind that evaluation is not something you do at the end of a fiscal cycle or at the culmination of a time-limited program. Evaluation is something you plan for in the pre-deployment stages, something you do on a regular basis during program deployment, and something you do at the end of a cycle/program.
As is the case with most things in life, the more organized you are at the beginning, the easier and less costly things will be along the way.
ADDICTION BLOG: How often do addiction practices need to be evaluated (3-6 months, a year…)?
ASHLEY HALL: I hate to be repetitive, but it really depends.
Some programs have specific evaluation requirements laid out by funders; some don't have any outside influence telling them to evaluate. For programs that are ongoing, such as treatment facilities, yearly evaluations are pretty typical, and they usually look at a variety of things, such as fiscal efficiency.
However, you can't cripple an organization with cumbersome evaluation practices. For time-limited programs, such as a 6-week prevention program in a high school, an evaluation schedule is going to look very different.
It also depends on the program goals and objectives. If an educational program has a goal of increasing knowledge about addiction, evaluating each participant with pre-post tests/interviews will be needed in addition to high-level program effectiveness, process, or impact evaluations. Outcome evaluation is pretty common in the human services, and this should be done at least yearly, but can be done at other logical times in the program as well.
ADDICTION BLOG: Should/Can effectiveness and tracking be state or federally mandated? Why or why not?
ASHLEY HALL: This is a tough one. To be honest, and this is just my opinion, I don’t think that any additional regulations are really necessary. In a lot of cases, federal, state, or private funds have evaluation standards already attached to them.
The fact is, clients and donors are already looking at data (think Yelp reviews!) and organizations are naturally evaluating and improving practice or are losing their revenue streams. I think an emphasis on training and support for evaluation would be a better use of time and energy than cumbersome regulations.
ADDICTION BLOG: Can/Should “best practices” or “evidence-based” treatment programs be standardized in the U.S.? Why or why not?
ASHLEY HALL: Evidence-based practices are important, and we are already seeing the beginnings of standardization. Health insurance organizations, including government-provided health insurance, already require treatment to be evidence-based in many cases. I think regulating in that way, with stipulations attached to funding, allows for safe practices for patients/clients, but also allows for alternative options to be available in the open market.
I would hate to see a meditation treatment that clients find to be very helpful outlawed simply because the appropriate amount of evidence isn't there yet. It is also a challenge to take a hard stance on regulating evaluation of practice when funding is already so limited. Many organizations don't have anyone on staff who knows how to evaluate, and cumbersome regulations might put a lot of our smaller organizations out of business.
Again, I am all for training for these organizations – let the funders regulate via grant requirements and insurance payments, and let the licensing boards ensure practitioners are trained and practicing ethically.
ADDICTION BLOG: Can you share with our readers your evaluation methods and how you address the findings?
ASHLEY HALL: I use a variety of methods to evaluate. My strengths are in survey design and implementation, but I am a strong quantitative evaluator as well. Many organizations have terabytes of existing data that can be used to evaluate practice, if they know how to do it or if they hire a consultant who is skilled in data mining and analysis.
I am fond of mixing evaluation methods to meet the needs and limited resources of my clients. If an organization is small and has only three staff members who are already overworked, the last thing I want to do is burden them with additional paperwork. We can work with:
- client files
- case notes
- financial data
- appointment data
… and many other pre-existing data sources to get a lot done. When we really need some client survey information, there are plenty of ways to do this cheaply that don't bog down the workers. For example, I really enjoy guerilla evaluation – evaluators have to be able to adapt to the specific culture and terrain of the organization in which they are working. If staff have no time to run a survey, how about scouring social media data or putting up a whiteboard in the waiting room asking some basic satisfaction questions?
Addressing the findings is a whole different ballgame. I am in love with data visuals as they can really get to the heart of the data in a simple and compelling way. So many times I see quality evaluations ignored because the results are communicated in a difficult-to-read narrative report. I like to give my clients a little more credit and let them explore the visualized data to come to their own conclusions.
An impactful presentation of the evaluation results is always my goal. If an executive director can look at data and draw a conclusion in under 15 seconds, they can quickly move on to how to use that data to improve their organization. Naturally, interpretation, executive summaries, and annual reports are an important part of communicating evaluation data, but nothing beats communicating complicated data via a simple and beautiful dashboard.
ADDICTION BLOG: Is there anything else you’d like to add for our readers?
ASHLEY HALL: I started my own consulting firm because I saw a need in the human services field. Our front-line workers don’t have the time to learn advanced quantitative or qualitative methods, survey design, or the four basic levels of evaluation (see Kirkpatrick for more on this). Our front-line workers want to work with clients, that’s why they got into this field.
In addition, organizations don’t have the money to spend on complicated evaluation processes that require sophisticated software and hours of data collection. I work with each client to come up with an evaluation method that works best for their budget, their skill level, their existing technology, and their time. Sometimes that means I am doing crosstabs and longitudinal studies, sometimes that means I am doing a qualitative analysis of case notes. And sometimes I am simply providing some guidance or training to evaluators or leaders.
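A crosstab of the kind Ashley mentions needs nothing fancier than counting category pairs. This is a generic sketch with made-up survey responses, not her actual workflow:

```python
from collections import Counter

def crosstab(rows, cols):
    """Count co-occurrences of (row, col) category pairs,
    e.g. program site versus satisfaction rating."""
    return Counter(zip(rows, cols))

# Hypothetical survey responses: program site vs. satisfaction rating
site   = ["A", "A", "B", "B", "A", "B"]
rating = ["high", "low", "high", "high", "high", "low"]

table = crosstab(site, rating)
print(table[("A", "high")], table[("B", "low")])  # 2 1
```

For anything beyond a quick tally, `pandas.crosstab` produces the same table with row/column margins in one call.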
The most important thing I want everyone to know is that evaluation is important and necessary. It doesn't have to be fancy, it doesn't have to be complex, and it doesn't have to take months to do. It just has to answer your questions, prove you are doing right by your clients, and be done regularly.
Everything else we can figure out along the way.