Deborah Garnick: Dr. McCorry provides a concise overview of several large, national efforts in performance measurement and quality improvement. He has done an excellent job of bringing together the work of the Washington Circle, Network for the Improvement of Addiction Treatment (NIATx), National Outcome Measures, and Clinical Trials Network. The article is a fine starting point for someone to get a sense of the landscape and to jump off, using the links and references he provides, to more detail about each of the projects.

Daniel Kivlahan: I particularly like the image of the three-legged stool, emphasizing how interrelated these three major themes are: the content, the data and measurement features, and the quality improvement efforts. That’s the broad context that makes a huge difference in how far a particular agency is likely to get with implementation.

Each of the projects discussed in the paper covers a different part of the spectrum of options for instituting quality and performance measurement and improvement. The NIATx system starts at the front door of the organization, so it can give a lot of clues about patient-level experiences and barriers to better outcomes that programs might overlook. The National Registry of Evidence-Based Programs and Practices becomes useful when clients have gotten through those early treatment hoops and are waiting for at least some initial intervention.

Linda Bradshaw: Of Dr. McCorry’s tips on how to get started, I was most impressed by the create-a-crisis concept: challenging your local boards and the people in your agency to take a hard look at the wave that is coming in the very near future and to start getting ready for it. That seems a very practical way of getting someone’s attention.

Kivlahan: Another approach might be to ask the line staff what kind of information was on the last list or spreadsheet they saw. For example, staff members frequently get lists of chart deficiencies, things they haven’t documented appropriately. Reviewing these together would reinforce the commitment to measurement by reiterating the importance of the items on the list. The discussion might produce a consensus that you are tracking the right things, or it might lead to a shift to other, more productive measures.

Selecting practices

Garnick: The National Quality Forum report, Evidence-Based Practices to Treat Substance Use Conditions, is currently available on the Web for public comment. I think people will be pleasantly surprised to see that it talks about general practices and approaches, not specific applications. For example, it calls for more efforts at screening people for substance abuse or alcohol problems, but does not specify whether you should use instrument A, B, or C. The goal is to give providers a sense of which approaches have good evidence behind them without binding them to a cookbook-style approach.

Kivlahan: The Forum’s perspective on psychosocial approaches, similarly, will be that we don’t have compelling evidence that there is one treatment of choice. I think providers will appreciate this. When they are asked to adopt evidence-based practices, providers often want to know: What about this approach is fundamentally different from what I was already doing, and why is it going to work better? The Forum’s perspective is: We can’t yet clearly identify the precise elements of our evidence-based practices that make them effective; if we could, we might very well find that many of our excellent clinicians are already supplying those elements in the care they give. Therefore, as long as what clinicians are doing fits in with some evidence-based rationale, it makes no sense to ask them to change to a different model just because it appears on a list.

While evidence-based practices are indispensable starting points for quality assurance and improvement, they do not automatically resolve all issues. One important concern is that they don’t yet guide care over the course of treatment. A lot of the tough calls that are made over the course of care aren’t guided by the kinds of things that appear in the National Registry. They are process-of-care decisions. This isn’t unique to addiction; it is the case in other medical areas as well.

As important as implementing proven models is discontinuing approaches that don’t work well. Kicking people out of programs for relapse is a good example: the consequences are negative, and they pervade a whole clinical culture. Lists of evidence-based practices don’t specify what should be de-implemented, but these decisions are hugely important.

The choice of indicators

Kivlahan: Much rides on measuring the right indicators. There are often unintended consequences if you pick something inappropriate or don’t recognize how what you choose is linked to other important pieces of the service delivery system. In the VA, we tried for years to get providers to do a systematic assessment with the Addiction Severity Index at intake and then follow up with patients 6 months later, whether the patient was still in treatment or not. It was a frustrating experience for everyone, because most patients were long gone after 6 months. Finally we switched to tracking the percentage of patients who are still actively involved in treatment after 3 months. That provides an adequate and much more practical indicator of how well programs are retaining patients, benchmarked nationwide.
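
As a concrete illustration of this kind of indicator, the sketch below computes a 3-month retention rate from intake and visit dates. It is a simplified, hypothetical rendering rather than the VA’s actual specification: the field names, the 90-day window, and the rule that “actively involved” means at least one visit in the last 30 days of the window are all assumptions made for the example.

```python
from datetime import date, timedelta

def three_month_retention(intakes, visits, window_days=90, active_days=30):
    """Percentage of patients 'actively involved' after roughly 3 months.

    intakes: {patient_id: intake_date}; visits: {patient_id: [visit dates]}.
    Hypothetical rule: a patient counts as retained if he or she has at
    least one visit in the last `active_days` of the `window_days` window.
    """
    if not intakes:
        return 0.0
    retained = 0
    for patient_id, intake_date in intakes.items():
        window_end = intake_date + timedelta(days=window_days)
        active_start = window_end - timedelta(days=active_days)
        if any(active_start <= v <= window_end
               for v in visits.get(patient_id, [])):
            retained += 1
    return 100.0 * retained / len(intakes)

# Patient A is still visiting near the 90-day mark; patient B dropped out.
intakes = {"A": date(2007, 1, 2), "B": date(2007, 1, 9)}
visits = {"A": [date(2007, 1, 5), date(2007, 3, 20)],
          "B": [date(2007, 1, 12)]}
print(three_month_retention(intakes, visits))  # -> 50.0
```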

Retention is the best proxy we have for outcomes, and the new indicator works well overall. But even this one works at cross-purposes with some goals. As we push to identify needs and manage care outside of specialty settings, we are finding that nonspecialty providers may avoid offering care that will trigger responsibility for tracking the indicator.

Garnick: That’s why the Washington Circle has worked hard to have the National Committee for Quality Assurance adopt all three of our measures: patient identification, treatment initiation, and engagement. We are concerned that if NCQA looks only at identification, health plans will have an incentive to do all sorts of screening and outreach but will not follow through with services for the people they identify. Alternatively, if NCQA omits the identification measure, the plans will have no incentive to reach out and try to find the people in their health plans who need services. Instead, they would be rewarded for making sure a small number of clients stick with their treatment for the initiation (14 days) and engagement (another 30 days) periods.
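
For readers who want to see how these windows fit together, here is a minimal sketch of the initiation and engagement logic as described above. The 14-day and 30-day windows come from the discussion; the requirement of two additional visits for engagement, and everything else about the data layout, are illustrative assumptions rather than the official Washington Circle or NCQA specification.

```python
from datetime import date, timedelta

def wc_flags(identified_on, service_dates,
             init_window=14, engage_window=30, engage_visits=2):
    """Initiation/engagement flags for one client (illustrative only).

    Initiation: any service within `init_window` days of identification.
    Engagement: at least `engage_visits` further services within
    `engage_window` days of the initiating visit. The two-visit rule and
    the data layout are assumptions, not the official specification.
    """
    services = sorted(d for d in service_dates if d >= identified_on)
    init_deadline = identified_on + timedelta(days=init_window)
    initiating = next((d for d in services if d <= init_deadline), None)
    if initiating is None:
        return {"initiated": False, "engaged": False}
    engage_deadline = initiating + timedelta(days=engage_window)
    followups = [d for d in services if initiating < d <= engage_deadline]
    return {"initiated": True, "engaged": len(followups) >= engage_visits}

print(wc_flags(date(2007, 3, 1),
               [date(2007, 3, 8), date(2007, 3, 15), date(2007, 3, 29)]))
# -> {'initiated': True, 'engaged': True}
```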

Bradshaw: Several States, such as Oklahoma, North Carolina, and Connecticut, are taking a top-down approach. They use data that programs are already submitting in administrative filings to create reports, based on the Washington Circle and other measures, which they feed back to providers. This happens on a quarterly basis, so turnaround is pretty fast. The idea is to try to get providers across the State on the same page with regard to a relatively parsimonious set of items, looking at what the rates are, how they vary, and how they can be influenced.

To date, the States using this approach have not been very successful in talking with providers about how to interpret and use the measures. Still, I see some promise in the effort. For example, one of their measures is how many clients had a follow-up service within 14 days after being discharged from detox, and at first they found a very low rate. This led to the revelation that their stand-alone detox provider did not understand that it was responsible for making sure clients got to treatment afterward. When the provider grasped that this was something that mattered to the State, it brought in a case manager.

Pay for performance

Bradshaw: Delaware’s pay-for-performance system uses standards that are closely related to those of the Washington Circle. They specify the frequency of treatment at each stage—I think twice a week for the first 4 weeks and a little less than that for the next 4 weeks. For the top level of funding, programs have to document that 90 percent of clients reach those goals. Dr. Jack Kemp, the State director of alcohol and drug services, and Dr. Thomas McLellan, the administrator, feel they have had great success with the program. They will tell you, though, that they’ve been greatly helped by the small size of the State, which has permitted a very communicative, hands-on approach that might be more difficult in more spread-out and diverse States.
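
A rough sketch of how such a threshold check might look, under stated assumptions: twice-weekly visits in weeks 1 through 4, once-weekly visits standing in for “a little less than that” in weeks 5 through 8, and the 90-percent documentation requirement for the top level of funding. None of this is Delaware’s actual rule set; it only makes the arithmetic concrete.

```python
def meets_frequency(weekly_visits):
    """weekly_visits: a client's visit counts for the first 8 weeks.

    Twice a week in weeks 1-4, per the standard described above; once a
    week in weeks 5-8 is a stand-in for 'a little less than that'.
    """
    return (all(v >= 2 for v in weekly_visits[:4])
            and all(v >= 1 for v in weekly_visits[4:8]))

def top_funding_tier(clients, threshold=0.90):
    """True if at least 90 percent of clients meet the frequency goals."""
    if not clients:
        return False
    met = sum(meets_frequency(c) for c in clients)
    return met / len(clients) >= threshold

ok = [2, 2, 2, 2, 1, 1, 1, 1]
short = [2, 2, 1, 2, 1, 1, 1, 1]   # missed the twice-weekly goal in week 3
print(top_funding_tier([ok] * 9 + [short]))  # -> True (9 of 10 = 90%)
```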

Kivlahan: One principle we can generalize from Delaware’s approach is that standards need to be achievable. Otherwise, providers and managers feel like meeting them is just another thing that would be good to do in an ideal world.

Bradshaw: One of the tensions in the move toward pay for performance in substance abuse treatment is that withholding payment from underperforming organizations leaves them without the resources to pursue quality improvement. It’s a bit of a catch-22. There are ways to design around the problem, for example by paying programs on the basis of improvements they make over their own baselines.

Kivlahan: Agencies might be supported based partly on the extent to which they are willing to engage in the challenge of measurement. Some agencies see measurement as a big challenge that’s only going to set them up for trouble, but, in my view, it is essential. There has to be measurement, there’s got to be feedback to people about how they are doing on the measurement, and then there has to be coaching to help programs that fall short come up to the standard. If places refuse to engage in that process, I don’t see how they can improve.

The mother and the secretary

Bradshaw: Dr. McCorry’s hypothetical mother’s situation points up the current lack of guidance for patients and families who need to choose a treatment program. The mother’s best bet would be to call programs and interview them. Honestly, though, I think few people even know what questions to ask to find out about a program’s performance or how well it is likely to fit an individual patient.

Garnick: For this purpose, we might want to think about the analogy to the general medical sector. Many States post hospital report cards on the Web that are based on generally accepted performance measures. There are a lot of challenges in coming up with accurate data and statistical methods for these kinds of report cards, and there is a large literature on consumers’ ability to understand such information. Nevertheless, it is being done, and making substance abuse providers’ performance data similarly available to the public may be a logical next step once the measures now under development are tested and implemented.

Kivlahan: I’m not yet convinced that performance measures have immediate implications for choosing programs at the level of the individual patient. I think they have their greatest potential for helping programs improve their own performance. A consumer’s natural inclination is to try to find out if other people have been satisfied with that service. The evidence I’m familiar with indicates that there isn’t a close relationship, either in addiction treatment or in other health care areas, between satisfaction and outcome.

Garnick: That’s true, but some of the report cards being put together for medical provider groups focus on whether or not patients in a practice receive the preventive services they should have, that kind of thing. If I were looking for a provider, I’d look for one that was organized enough to offer me the annual preventive services I need. If I had a chronic condition like diabetes, I’d want them to be checking off the six or eight things they ought to be doing for me each year.

Kivlahan: With a chronic disease like drug abuse, I think it would be difficult to get at anything equivalent to success rates in heart surgery. For patients in an acute state, or for their family members, there are some structural elements that might be considered. Does a program systematically monitor abstinence, usually with urinalysis or breathalyzers? Does it have the staffing depth necessary to address all of a patient’s co-morbid conditions? Does it have a capable prescriber on staff, or a close linkage with resources that can prescribe?

Garnick: I hear us disagreeing about which measures would be reasonable and useful to families or potential clients, but agreeing that there should be a systematic way for people to access this kind of information.

Kivlahan: The assistant secretary Dr. McCorry describes in his opening scenarios will have to determine what counts for his State and choose performance measures accordingly. Do they want to spread minimal services very broadly, which would mean maximizing the number of individuals seen by an agency? Shouldn’t they also want to guarantee that all patients can receive at least a minimally sufficient dose of treatment, which would require standards for retention and whatever other services promote retention? These are difficult tradeoffs, and more so in an environment of funding cuts.

Performance and quality measurements don’t remove all of the hard decisions. Programs, administrators, and clients still have to decide what they value.
