J Subst Abuse Treat. Author manuscript; available in PMC 2010 July 1.
PMCID: PMC2755538; NIHMSID: NIHMS123783

Management Practices in Substance Abuse Treatment Programs

Abstract

Efforts to understand how to improve the delivery of substance abuse treatment have led to a recent call for studies on the “business of addiction treatment.” This study adapts an innovative survey tool to collect baseline management practice data from 147 addiction treatment programs enrolled in the Network for the Improvement of Addiction Treatment (NIATx) 200 project. Measures of “good” management practice were strongly associated with days to treatment admission. Management practice scores were weakly associated with revenues-per-employee, but were not correlated with operating margins. Better management practices were more prevalent among programs with a higher number of competitors in their catchment area.

1. INTRODUCTION

The last decade has seen the development of more successful therapies for drug addiction, but many individuals with addiction remain untreated (Substance Abuse and Mental Health Services Administration 2007) and expectations for better quality treatment continue to escalate (Institute of Medicine 2006). Recent attention has focused on the effectiveness of treatment programs, noting that effective treatment of the targeted population could be hindered by insufficient diffusion of good therapies (Institute of Medicine 1997, 1998, 2005) or by barriers to access that are under the control of the treatment facility (Ebener and Kilmer 2001). Others have observed that outpatient drug treatment programs struggle with weak organizational infrastructures and limited financial resources (McLellan, Carise, and Kleber 2003). Highlighting the importance of addiction treatment institutions, Kimberly and McLellan (2006) recently called for research on the “business” of addiction treatment, aiming to improve the financial robustness and clinical effectiveness of the organizations that are the doorway to treatment and recovery.

Most research on organizational performance has focused on measures of labor, capital, and human skills; relatively little has been said about the role of management practices within an organization. Recent research in economics, however, has suggested a new way of measuring and understanding management practices within an organization. Bloom and Van Reenen (2007) surveyed over 700 manufacturing firms on 18 management practices and found that these practices could be measured and quantified, and that better management practices were strongly correlated with firm performance. An important aspect of their work was the use of a telephone survey designed to elicit accurate information on organizational practices and minimize the gaming of responses toward favorable scores. Application of this assessment tool to addiction treatment centers may shed light on variability in management practices and on the relationship between stronger or weaker management practices and the delivery of drug and alcohol treatment services.

Research within addiction treatment agencies has begun to articulate the organizational characteristics associated with program performance and treatment effectiveness. Papers based on a 15-year longitudinal study, for example, concluded that organizational factors such as program ownership, affiliation, director qualifications, and quality practices were related to the delivery of accepted standards of care (D’Aunno 1995, 2002). Paul Roman and colleagues described the management practices of addiction treatment programs (Roman, Ducharme, and Knudsen 2006) and reported that high-performing organizations were embedded within larger organizations but maintained decentralization with regard to their own employees (Richardson et al. 2002), provided job autonomy and adequate monetary and nonmonetary rewards for job performance (Knudsen, Johnson, and Roman 2003), and systematically monitored program performance through the use of information technology (Ducharme, Knudsen, and Roman 2006).

The notion that organizations and institutions matter is also reflected in research that modifies the delivery of substance abuse treatment services. The Network for the Improvement of Addiction Treatment (NIATx), for example, coached treatment centers to make process improvements through organizational changes and thereby improve the quality of care (Capoccia et al. 2007; Hoffman et al. 2008; McCarty et al. 2007). NIATx is a community of drug and alcohol treatment centers participating in collaborative efforts to apply process improvement technology and enhance the quality of care for addiction treatment. The Substance Abuse and Mental Health Services Administration and the Robert Wood Johnson Foundation supported the initial grantees. Learning sessions, coaching, and interest circle telephone calls supported agency efforts to develop change teams and alter the delivery of treatment services in order to reduce days to admission, minimize appointment no-show rates, increase treatment admissions, and enhance retention in care. Analysis of two cohorts of participants found 40% reductions in days to treatment and 10% to 20% improvements in retention in care (Hoffman et al. 2008; McCarty et al. 2007).

Based on the initial demonstrations of NIATx impact, 200 treatment centers from five states (Massachusetts, Michigan, New York, Oregon, and Washington) were recruited to participate in a randomized trial (NIATx 200). The sample of recruited sites consisted of outpatient treatment programs but did not include methadone clinics. Participating treatment centers were randomized to four levels of support for implementing process improvement: (1) learning sessions, coaching, interest circle calls, and an interactive website; (2) learning sessions and website; (3) coaching and website; and (4) interest circle calls and website.

This paper uses data gathered prior to randomization to investigate management practices in NIATx 200 programs. Adapting the Bloom and Van Reenen (2007) approach from economic survey work, we score specific management practices (such as the use of data, goal setting, and employee incentives) and assess their association with program performance. The goal is to use baseline data from this larger experiment to describe in more detail how addiction treatment programs are managed, and to determine whether good management practices translate to improved organizational performance and client treatment.

2. MATERIALS AND METHODS

Survey design

Our telephone survey included 14 management practices grouped into 4 areas: intake and retention (2 practices), quality monitoring and improvement (5 practices), targets (3 practices), and employee incentives (4 practices). Table 1 provides a brief description of these 4 groupings and 14 practices. The first section probed strategies used to improve access and retention. The quality monitoring section focused on tracking key performance indicators in the organization, including how the data are collected and disseminated to employees. The targets section examined corporate targets (whether goals are simply financial or operational or more holistic), the realism of the targets (stretching, unrealistic, or nonbinding), and the transparency of targets (simple or complex). The incentives section examined promotion criteria (e.g., purely tenure-based or including an element linked to individual performance), pay and bonuses, and coping with underperforming employees. Based on these 14 questions on different management practices, programs were scored between 1 and 5 for each question, with a higher score indicating a better performance.

Table 1
Dimensions of management practice

This evaluation tool could, in principle, provide some quantification of addiction treatment programs’ management practices. As is well known in the survey literature, however, respondents’ answers are typically biased by the scoring grid and anchored toward the answers they expect the interviewer thinks are “correct.” To reduce the potential for this bias, we used the Bloom and Van Reenen (2007) “blind” scoring method, in which respondents are not told that they are being scored. The interview is instead based on a series of open questions (e.g., “Can you tell me how you promote your employees?”) rather than closed questions (e.g., “Do you promote your employees on tenure [yes/no]?”). For each practice, the first question is broad, with detailed follow-up questions continuing until the interviewer can make an accurate assessment of the program’s typical practices, assigning a score from 1 to 5. Each telephone interview took approximately 60 minutes to complete.

Another potential bias might arise if interviewers knew about the program’s performance before interviewing. For example, prior knowledge that a program had short times-to-treatment might lead the interviewer to evaluate the program more generously, and record higher scores for each question. Data on client times-to-treatment, however, were collected independently; the interview did not assess days to treatment.

Because scaling may vary across the 14 measured practices (for example, interviewers might consistently give programs a higher score on question #1 than on question #2), we converted the scores (on the one-to-five scale) to z-scores by normalizing each practice to mean zero and standard deviation one. In the analyses, the unweighted average across all 14 z-scores is used as the primary measure of overall managerial practice.
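
To make the construction concrete, the following is a minimal sketch of this normalization in Python. The data frame, column names, and values are synthetic stand-ins, not the study's data:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)

    # Synthetic stand-in for the survey data: one row per program, one column
    # per practice, raw interview scores on the 1-to-5 scale.
    raw = pd.DataFrame(rng.integers(1, 6, size=(147, 14)),
                       columns=[f"q{i}" for i in range(1, 15)])

    # Normalize each practice (column) to mean zero and standard deviation one,
    # then take the unweighted average of the 14 z-scores for each program.
    z = (raw - raw.mean()) / raw.std()
    mgmt_z = z.mean(axis=1)
    print(mgmt_z.describe())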

Appendix A details the practices and the type of questions that were asked in the same order as they appeared in the survey. Appendix B gives 4 example practices, the associated questions, examples, and scoring system.

Selection of programs and obtaining interviews

NIATx 200 recruited 174 agencies with 201 outpatient treatment centers (some agencies had multiple programs spread across their states). The survey sample for this study consisted of 172 addiction treatment agencies; two agencies did not respond to requests for interviews.

As part of their enrollment in NIATx 200, executive sponsors from each agency agreed to complete a 60-minute management interview. Each agency was paid $200 as an incentive to provide interview time, survey data, and other information. Surveying began on June 18, 2007; the majority of interviews (73%) were completed by August 31, 2007, and the final interview was completed on February 29, 2008. Some interviews were conducted at a later date because of delayed enrollment. In particular, Massachusetts was a late addition to the NIATx 200 study, and interviews for programs in this state were conducted after November 1, 2007.

The survey team consisted of 3 interviewers with master’s degrees. Prior to the survey, interviewers spent 2 days in training to develop consistency in interview and scoring techniques. Nicholas Bloom, developer of the management practice survey of manufacturing firms, reviewed our instrument, made a site visit, and participated in the training to ensure that data collection and scoring were consistent with the original survey. Our survey methodology was designed to parallel his as closely as possible.

Of the original 172 surveys, 5 were excluded because interviewers marked the survey as “respondent unwilling to provide information.” An additional 20 programs were excluded because they lacked complete information on important variables, such as days-to-treatment or employee FTE. The analytic sample thus included 147 surveys: 109 (74%) with executive sponsors or Chief Executive Officers (CEOs), 15 (10%) with Chief Financial Officers (CFOs) or clinical managers, and 16 (11%) with treatment unit directors or counselors. Seven job titles (5% of interviewees) were not ascertained.

To assess consistency in scoring, we double-scored a subset of interviews, in which one interviewer conducted and scored the interview, and the second listened and scored remotely. We describe the correlation between these scores in our Results.

Outcome Measures

Our primary outcome measure is days-to-treatment. Days-to-treatment is important because we assume that improving the intake process will increase the likelihood that patients enter and remain in care. Reducing time to treatment is a focus of current NIATx efforts (McCarty et al. 2008; McCarty et al. 2007), and a similar time-to-treatment measure is used as part of the Health Plan Employer Data and Information Set (HEDIS) to monitor health plan performance (National Committee for Quality Assurance 2007).

As part of its baseline data collection, the NIATx 200 research team collected information on days-to-treatment through monthly telephone calls to each agency. In these calls, the caller identified herself as part of the NIATx 200 team and asked the receptionist to provide the date of the next available appointment. This study analyzes data from 1,081 of these pseudo-patient phone calls to 147 agencies. On average, each agency received 7.3 phone calls (range 1 to 9, standard deviation 2.2).

Additional Measures

In addition to the information on management practices, interviewers collected 2 years of information about each organization’s revenues, operating margins ((revenues - operating expenses)/operating expenses), and number of employee full-time equivalents (FTE). Prior to the phone interview, respondents were emailed and informed of the financial data that they would be expected to provide. The survey also asked respondents to report the number of competing programs in their catchment area.
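
Note that the operating margin as defined above uses operating expenses, not revenues, as the denominator. A one-line sketch with made-up figures:

    # Operating margin as defined in the text: (revenues - expenses) / expenses.
    # With hypothetical figures of $1.10M in revenues and $1.00M in operating
    # expenses, the margin is 0.10, i.e., 10%.
    revenues, expenses = 1_100_000, 1_000_000
    operating_margin = (revenues - expenses) / expenses
    print(operating_margin)  # 0.1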

Analyses

The primary outcome variable was days-to-treatment. We also analyzed measures of financial performance, including a 2-year measure of productivity (the log of revenues per employee) and operating margins.

In the analysis of days-to-treatment, we used negative binomial regressions with standard errors clustered by program. The explanatory variable of interest is the management practice z-score, averaged across the 14 practices. Other covariates included employee FTE, fixed effects for state and interviewer (each interviewer conducted at least 30 surveys), dummy variables for the day of week on which each phone call was placed, and dummy variables indicating the month in which the survey was conducted.
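
A minimal sketch of this specification using statsmodels follows. The data are synthetic stand-ins with hypothetical column names, and the interviewer and survey-month dummies are omitted for brevity:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)

    # Synthetic stand-in for the pseudo-patient call data: one row per call,
    # with calls clustered within programs (all names and values are made up).
    n_prog, calls_per = 147, 7
    calls = pd.DataFrame({
        "program": np.repeat(np.arange(n_prog), calls_per),
        "mgmt_z": np.repeat(rng.normal(0, 0.6, n_prog), calls_per),
        "fte": np.repeat(rng.integers(5, 80, n_prog), calls_per).astype(float),
        "state": np.repeat(rng.choice(list("ABCDE"), n_prog), calls_per),
        "dow": rng.choice(["Mon", "Tue", "Wed", "Thu", "Fri"], n_prog * calls_per),
    })
    # Simulated outcome: waiting days that fall as the management score rises.
    calls["days"] = rng.poisson(np.exp(2.5 - 0.3 * calls["mgmt_z"]))

    # Negative binomial regression of days-to-treatment on the management
    # z-score, with standard errors clustered by program.
    model = smf.negativebinomial("days ~ mgmt_z + fte + C(state) + C(dow)",
                                 data=calls)
    result = model.fit(cov_type="cluster",
                       cov_kwds={"groups": calls["program"]}, disp=0)
    print(result.summary())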

The analyses of financial outcomes were based on linear regressions on the management practice z-score, the log of employee FTE, and fixed effects for state, interviewer, and the month in which the survey was conducted. French and colleagues (1997) note that it is often difficult for substance abuse programs to provide accurate information on revenues and costs without careful instrumentation. We therefore limited our analyses of financial outcomes to programs that could provide this information and excluded programs that provided information deemed unreliable (e.g., nonpositive revenues or revenues exactly equal to expenses).
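
A sketch of the financial-outcome regression and the exclusion rule, again with synthetic stand-in data and with the interviewer and month dummies omitted:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)

    # Synthetic program-level financials (hypothetical names and values).
    programs = pd.DataFrame({
        "mgmt_z": rng.normal(0, 0.6, 147),
        "fte": rng.integers(5, 80, 147).astype(float),
        "state": rng.choice(list("ABCDE"), 147),
        "revenues": rng.normal(2.0e6, 6.0e5, 147),
        "expenses": rng.normal(1.9e6, 6.0e5, 147),
    })

    # Exclusion rule from the text: drop nonpositive revenues and revenues
    # exactly equal to expenses, both treated as unreliable reports.
    fin = programs[(programs["revenues"] > 0)
                   & (programs["revenues"] != programs["expenses"])].copy()
    fin["log_rev_per_fte"] = np.log(fin["revenues"] / fin["fte"])

    # Linear regression of log revenues-per-employee on the management
    # z-score, log FTE, and state fixed effects.
    ols = smf.ols("log_rev_per_fte ~ mgmt_z + np.log(fte) + C(state)",
                  data=fin).fit()
    print(ols.params["mgmt_z"])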

To assess the effects of competition, we regressed the management practice z-score on the number of competitors in the catchment area (as reported by respondents) and fixed effects for state, interviewer, and the month in which the survey was conducted.

Finally, we conducted an empirical simulation to assess potential policy implications of improved management practices. Specifically, we used our model to estimate the change in average waiting time from first phone call to first appointment that would occur under a hypothetical intervention targeting programs with management practice scores below the 50th percentile (i.e., relatively low performing), under the assumption that the intervention would raise their scores to the 75th percentile (i.e., relatively high performing).

We develop these results in three steps. First, we run our model on all programs and save the coefficients. In the second step, we use those coefficients to generate predicted waiting times among the programs with management practice scores below the 50th percentile. In the third step, using these same coefficients, we generate predicted waiting times among the same group of programs, under the assumption that their management practice score was at the 75th percentile. We use the difference of the means in steps 2 and 3 to determine the change in waiting times that would be associated with improved management practices. We derive 95% bias-corrected confidence intervals (CIs) using bootstrapping with 1,000 repetitions.
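
As a rough illustration of these three steps and the bootstrap, the sketch below continues the synthetic negative binomial example from above (reusing the `calls` data frame). A simple percentile bootstrap is shown for brevity, whereas the study reports bias-corrected intervals:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    def waiting_time_reduction(df):
        """Steps 1-3: fit the model, predict waits for programs below the
        median management z-score, then re-predict with those scores raised
        to the 75th percentile; return the drop in mean predicted waits."""
        res = smf.negativebinomial("days ~ mgmt_z + fte + C(state) + C(dow)",
                                   data=df).fit(disp=0)
        low = df[df["mgmt_z"] < df["mgmt_z"].median()].copy()
        baseline = res.predict(low)                  # step 2: as scored
        low["mgmt_z"] = df["mgmt_z"].quantile(0.75)
        improved = res.predict(low)                  # step 3: counterfactual
        return baseline.mean() - improved.mean()

    point = waiting_time_reduction(calls)  # `calls` from the sketch above

    # Cluster bootstrap: resample whole programs with replacement, 1,000 reps
    # (slow but illustrative; predictions do not need the robust covariance).
    rng = np.random.default_rng(3)
    ids = calls["program"].unique()
    draws = []
    for _ in range(1000):
        pick = rng.choice(ids, size=len(ids), replace=True)
        boot = pd.concat([calls[calls["program"] == i] for i in pick],
                         ignore_index=True)
        draws.append(waiting_time_reduction(boot))
    lo, hi = np.percentile(draws, [2.5, 97.5])
    print(f"reduction: {point:.1f} days (95% CI {lo:.1f} to {hi:.1f})")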

The Institutional Review Board at Oregon Health & Science University reviewed and approved study procedures.

3. RESULTS

Summary statistics for the sample of 147 outpatient programs are provided in Table 2.

Table 2
Descriptive characteristics

To test for the presence of measurement error in the management practice scores, interviews were double-scored for 14 programs, with one interviewer conducting the interview and scoring the survey tool, and a second listening remotely and scoring independently. The correlation was strongly positive (a correlation coefficient of 0.80; p < 0.001).

Figure 1 shows the distribution of each program’s average management score across all 14 practices, in raw form (not in z-score form). The distribution is skewed to the right, with a relatively small group of programs exhibiting high scores and a larger group clumped toward the lower end.

Figure 1
Distribution of Management Practice Scores across 147 Substance Abuse Treatment Programs

Table 3 displays the results of the negative binomial regression on days-to-treatment (time from first phone call to first appointment), showing the coefficients and standard errors for key variables (the management z-score and employee FTE) while suppressing the coefficients on the dummy variables. Higher management practice scores were significantly associated with shorter waiting times (p < 0.001).

Table 3
Negative binomial model results for days-to-treatment

These results are based on an average overall management score constructed from scores on 14 separate questions. We also investigated the role of individual practices and found that 7 of the practice z-scores were individually significant at the 5% level or better, while 7 were not. The 7 significant practices were (Q1) intake, (Q2) retention, (Q3) continuous improvement, (Q4) performance tracking, (Q6) reviewing agency performance, (Q8) target balance, and (Q10) performance clarity. The results of these 14 individual regressions are provided in Appendix C.

We also calculated the average score separately for the four groups of management practices and re-ran the model using these scores. Management scores for 3 of the groups were significantly associated with days-to-treatment: intake/retention (−0.181, p < 0.01), quality monitoring and improvement (−0.245, p < 0.01), and targets (−0.229, p < 0.01). The management score for employee incentives was not significant (−0.071, p = 0.51).

Table 4 investigates the association between financial measures and management practices. Management practices were weakly associated with revenues per employee (p < 0.10). There was no statistically significant association with operating margins.

Table 4
Linear regression results for financial outcomes

Table 5 investigates the association between management practice scores and the number of competing programs in the catchment area. Higher scores were associated with programs in more competitive areas (p < 0.05). We were concerned that this result might be driven partly by a small group of programs located in isolated areas with no or only one other competitor. Eliminating these 39 programs increased the coefficient (0.09) as well as the standard error (0.04), but the association remained significant at the 5% level.

Table 5
Linear regression results for Management Practice z-score, number of competitors and controls

Finally, we conducted an empirical simulation to estimate the effects of improving management practice scores among the subset of programs with scores below the 50th percentile. In this simulation, we used the coefficients from the model displayed in Table 3 to generate 2 estimands: average waiting days, and the percentage of patients waiting more than 7 days for a first appointment. We generated these estimates for this subset of programs under 2 scenarios: with management scores as originally scored, and with management scores set at the 75th percentile. The results of this simulation are displayed in Table 6. We found that this hypothetical intervention would reduce waiting times by an average of 2 days (95% CI 0.3, 3.7) and would reduce the percentage of patients waiting more than 7 days by 9.1% (95% CI 0.4%, 15.2%). Although we cannot attribute causality to the management scores, these simulation results are more directly interpretable than the coefficient estimates presented in Table 3.

Table 6
Estimated change in waiting days associated with improving management scores in low scoring programs

4. DISCUSSION

Management practices in substance abuse programs were strongly associated with client days-to-treatment. The association between management practices and revenues was weaker but suggestive, with management scores positively associated with revenues-per-employee at the 10% significance level. We found no statistically significant association between management practice and operating margins. Better management practices were associated with a higher number of competitors in a program’s catchment area. Overall, these results support the importance of management practice for client treatment and, potentially, for the long-term performance of addiction treatment programs.

While other research on substance abuse treatment programs has begun to describe the importance of management practice and organizational factors in client outcomes, our approach differs substantially. Most studies of substance abuse treatment programs are qualitative, or are quantitative analyses based primarily on administrative data or relatively simple survey questions. This study used a scoring method based on telephone interviews with open-ended questions, which may provide a more detailed measure of the internal organization of treatment programs.

The results of this study are similar to the findings of Bloom and Van Reenen (2007), who developed the management practice survey tool and administered it to over 700 manufacturing firms. They found the management practice score to be positively associated with revenues and profitability, and found that good management practices were more prevalent where product market competition was strong.

In contrast to Bloom and Van Reenen, we do not find a strong association between management scores and financial outcomes. The lack of a strong statistical association may be partly attributable to our smaller sample size (approximately 200 observations on 100 programs, compared to their 5,000 observations on 700 firms). It may also reflect the poor quality of the financial data gathered in our interviews (discussed under Limitations below). Furthermore, it may be difficult to compare outcomes in this dimension, since manufacturing firms exist primarily to make profits, while substance abuse treatment programs are largely nonprofit and government programs whose primary goal is to serve their clients.

Our management score was based on an average of 14 questions grouped into 4 areas: intake and retention; quality monitoring and improvement; targets; and incentives. In separate regressions using average scores in these 4 areas, we found that the average scores for the first three groups (intake/retention, quality monitoring, and targets) were significantly associated with days-to-treatment. The first two groups of management practices reflect careful attention to how the client is received and welcomed, as well as how the agency monitors the client’s care. These are the types of practices that have been recommended by NIATx and similar programs. The third group, targets, addresses the types of goals that the agency sets for itself, the difficulty of those goals, and the clarity with which goals are communicated to employees within the agency. These management practices do not appear to have been explicitly recommended by any of the national improvement initiatives. However, their association with lower waiting times is consistent with the notion of an organization that consciously sets and pursues ambitious goals for improved client treatment.

One area, incentives, did not appear to be associated with days-to-treatment. The incentive group of questions asks about mechanisms for hiring and firing employees, promoting good performers, and retaining talent. Individually, none of these questions was associated with waiting times or financial outcomes. There are several possible explanations for this finding. First, it may represent a failure of our instrument to translate these specific questions from manufacturing to the domain of substance abuse treatment programs. Alternatively, the finding may reflect general labor-market difficulties in substance abuse treatment. For example, our interviewers noted that programs often complained that firing employees was extremely difficult: turnover was already high, and they could never be sure of finding a replacement in a timely manner.

Limitations

This study had several limitations. Although management scores were strongly associated with days-to-treatment, the relationship may not be causal. For example, programs that have short times-to-treatment and higher revenues per FTE (perhaps due to patient payer mix) may have resources that enable them to improve management practices. This would bias the coefficient on the management score upward. In contrast, programs that are high performing (perhaps attributable to hard working clinicians and staff) may feel less pressure to improve their performance and may invest very little in their management practices. This would bias the coefficient on the management score down.

The primary outcome of interest, days to first appointment, was collected by a phone call from a NIATx 200 investigator who identified herself as part of the research project. Thus, these pseudo-patient calls were not blinded. Knowledge that the caller was not a patient and that the information was being reported may have led to measurement error in these variables.

This study is also limited by the lack of good quality financial information available from the treatment programs in our sample. Interviewers noted that respondents often had difficulty reporting financial data, even when the data should be readily available (e.g., annual revenues). Expense or cost data may be even less reliable, as evidenced in part by the extensive instrumentation and efforts of Mike French and his Drug Abuse Treatment Cost Analysis Program (DATCAP) (French et al. 1997; French, Salome, and Carney 2002; Roebuck, French, and McLellan 2003).

The majority of our interviews were with executive sponsors, whose perceptions of management practices may differ from those of lower-level managers. The average z-score for the 109 executive sponsor interviews was 0.04 (standard deviation = 0.59); the average z-score for the 38 interviews conducted with other (non-executive sponsor) managers was −0.13 (standard deviation = 0.70). Thus, there is some evidence of a disconnect between higher-level managers and those on the front line of care, although the difference was not statistically significant (p = 0.15).

The generalizability of the results may be limited by our sample of addiction treatment programs. This study focused on programs that had volunteered to be part of the NIATx 200 project, and these programs may not be representative of treatment programs across their states. On the one hand, volunteering for NIATx 200 may signal that programs have already begun to think carefully about organizational processes and quality improvement. On the other, poorly managed programs may have been more motivated to participate, recognizing that they might make substantial gains through their participation.

Although we found a strong correlation in scores among different scorers of the same interview, we did not perform a “test-retest” evaluation, in which a subset of programs would have been interviewed twice, each time with a different interviewer and a different respondent. We did not perform this evaluation because we were concerned that additional requests for information might lead programs to drop out of the NIATx 200 project. In their study, however, Bloom and Van Reenen performed repeat interviews with 64 firms, contacting different managers and using different interviewers, and found the score from the first interview to be strongly and positively correlated with the score from the second (a correlation coefficient of 0.734, p < 0.01). Their results suggest that scores from the survey tool are not strongly dependent on the interviewer or respondent.

Implications

Our research represents an attempt to develop specific indicators of best management practice in substance abuse treatment programs. It responds to the call by Kimberly and McLellan (2006) for research on the “business” of addiction treatment, and to calls by other prominent researchers for greater attention to managerial issues in health care generally (Clancy and Cronin 2004; Shortell, Rundall, and Hsu 2007; Walshe and Rundall 2001). Our results show that better management practices, particularly in the areas of client intake and retention, data monitoring and quality improvement, and program target setting, are associated with improved client times-to-treatment. Policy-makers should emphasize that improving patient care is not just a matter of improving treatments; it also requires improving the delivery of care, and management practices are an important component of that delivery.

An important message for program managers is that there are managerial techniques that have been adopted successfully in other settings but have probably not been widely adopted or disseminated within addiction treatment organizations. Managers can self-score their programs against the fourteen practices listed in the appendix; diligent self-scoring against the practice grid may yield a reasonable evaluation of an organization’s management practices.

Our findings on management practices and financial outcomes are weaker but suggestive. Since treatment programs often struggle financially, a useful avenue for future research is to identify more specifically the extent to which better management practices may aid in programs’ financial robustness.

Our survey tool may be useful to researchers for at least 2 reasons. First, the management score may be assessed and used to explore the association between management practice and other outcomes of interest (such as client retention or successful discharge). In addition, the tool may be administered to programs by investigators interested in identifying high (or low) performing programs. This information can be used to stratify programs for analysis or to identify a subset of programs for intervention.

Finally, a mature understanding of the types of practices that are closely associated with programmatic success – broadly defined – may lead to the creation of a roadmap or public description of recommended practices. Dissemination of a set of identified process improvement practices is already a core component of the NIATx program. Extending these promising practices to include areas like data monitoring, programmatic targets, and employee incentives could be an effective way to achieve improved client treatment and outcomes while creating a more financially robust infrastructure of treatment programs.

ACKNOWLEDGMENTS

This research was supported by a grant from the National Institute on Drug Abuse (R01 DA020832). We thank Nick Bloom for suggestions on adapting and implementing the management practice survey; Gretchen Luhr, Marie Shea, and Susan Rosenkranz for their efforts in conducting the survey; Anna Wheelock for the pseudo-patient phone calls; and Traci Rieckmann, David Gustafson, Alice Pulvermacher, Jay Ford, and Renee Hill for their help and suggestions. Preliminary versions of this work were presented at the AcademyHealth Annual Research Meeting in Washington, DC, in June 2008, and at the American Society of Health Economists meeting at Duke University, Durham, NC, in June 2008. We are especially grateful for the cooperation and collaboration of the participating members of the NIATx 200 project.

APPENDIX A

Management Practices and the Types of Questions Asked

Intake and Retention

1. Client flow process (intake)
   Briefly describe the intake process for clients, from first call to enrollment in treatment.
   What have you done to improve the intake process? Please provide specific examples.

2. Client retention
   Briefly describe your strategies for helping clients remain in treatment. (E.g., appointment reminder calls, linking with a sponsor, participation incentives.)
   Are there quality improvement processes aimed at retention or treatment completion that have been introduced? Can you give me specific examples?

Quality monitoring and improvement

3. Continuous improvement
   Do you have quality improvement systems?
   How are your quality improvement processes structured? (E.g., meetings? QI staff?)
   Describe some of the specific problems that have been addressed.
   What is the role of the staff in the process?

4. Performance tracking
   What kind of performance indicators (e.g., no-shows, successful discharges) do you track?
   How are the data collected?
   How frequently are these measured? Who gets to see these data?

5. Performance review
   How do you review your performance indicators?
   Tell me about a recent meeting.
   Who is involved in these meetings? Who gets to see the results of this review?

6. Reviewing agency performance
   When you review your organization’s performance, do you find that you generally have enough data?
   What type of feedback occurs in these meetings?

7. Consequence management
   Let’s say you’ve agreed to a plan at one of your meetings. What would happen if the plan weren’t enacted?
   How long is it between when a problem is identified and when it is solved? Can you give me a recent example?
   How do you respond when a team or individual repeatedly fails to carry out agreed-upon actions?

Targets

8. Target balance
   What types of goals are set for the program?
   What does the board of directors or governing entity emphasize? Financial? Non-financial?
   Tell me about goals that are not set externally (e.g., by the state or federal government).

9. Targets stretch
   How tough are your targets? Do you feel pushed by them?
   On average, how often would you say that you meet your targets?
   Are there any targets which are obviously too easy (will always be met) or too hard (will never be met)?
   Do you feel that all groups receive targets of the same degree of difficulty? Do some groups get easy targets?

10. Performance clarity
    If I asked your staff directly about performance goals or expectations set for individuals, what would they tell me?
    Does anyone complain that the targets are too complex or confusing?
    How do people know about their own performance compared to other people’s performance?

Employee incentives

11. Rewarding high performance
    Are there any non-financial or financial rewards (bonuses) for top performers?
    If you have a bonus system, how does it work?
    How does your reward system compare to other addiction treatment programs?

12. Removing poor performers
    If you had a worker who could not do his job, what would you do? Could you give me a recent example?
    How long would underperformance be tolerated?
    Do you find any employees who just manage to avoid being fixed/fired?

13. Promoting high performers
    Tell me about your promotion system.
    What about poor performers? Do they get promoted more slowly? Are there any examples you can think of?
    How would you identify and develop (i.e., train) your star performers?
    If two people both joined the agency 5 years ago and one was much better than the other, would he/she be promoted faster?

14. Retaining talent
    If you had a star performer who wanted to leave, what would your organization do?
    Could you give me an example of a star performer being persuaded to stay after wanting to leave?
    Could you give me an example of a star performer who left the organization without anyone trying to keep them?

Appendix B

Management Practice Interview Guide and Example Responses for Addiction Treatment Programs

Any score from 1 to 5 can be given, but the scoring guide and examples are only provided for scores of 1, 3 and 5.

(1) Client flow process (intake)
    a. Briefly describe the intake process for clients, from first call to enrollment in treatment.
    b. What have you done to improve the intake process? Please provide specific examples.

Scoring grid:
    Score 1: Program does not use any policies or process measures that would improve intake.
    Score 3: Some effort is made to improve intake, but these efforts are not program-wide, and their effectiveness has not been evaluated.
    Score 5: Continual efforts to improve intake are undertaken. The Plan-Do-Study-Act (PDSA) cycle is a core component of the organization.

Examples:
    Score 1: A program enrolls clients without any specific methods for making the intake process as quick, efficient, and friendly as possible.
    Score 3: An organization reduced its paperwork last year but has set no further goals. No apparent emphasis is placed on making clients feel welcome or recognized at the point of initial contact.
    Score 5: In the last year, an agency has reduced paperwork, moved to access-on-demand models, and is working on a plan to improve retention. The layout of the client flow process has been changed to ensure that clients are not left waiting for long periods of time. PDSA cycles occur on a monthly basis.

(4) Performance tracking
    a. What kind of performance indicators (e.g., no-shows, successful discharges) do you track?
    b. How are the data collected?
    c. How frequently are these measured? Who gets to see these data?

Scoring grid:
    Score 1: Measures tracked do not indicate directly if overall agency objectives are being met. Tracking is an ad hoc process (certain processes aren’t tracked at all).
    Score 3: Most key performance indicators are tracked formally. Tracking is overseen by senior management.
    Score 5: Performance is continuously tracked and communicated, both formally and informally, to all staff using a range of visual management tools.

Examples:
    Score 1: One program tracks a range of measures only when the manager does not think that case load is sufficient. He last requested these reports about 8 months ago and had them printed for a week until revenues increased again.
    Score 3: Several key performance indicators are tracked throughout the treatment process; however, this information is not communicated to clinicians and other employees.
    Score 5: Key performance indicators are tracked throughout the treatment process. These markers are related to weekly targets and other performance indicators. The manager meets with the staff every week to discuss the week past and the one ahead, and uses monthly company meetings to present to employees a larger view of the goals to date and the strategic direction of the agency.

(8) Target balance
    a. What types of goals are set for the program?
    b. What does the board of directors or governing entity emphasize? Financial? Non-financial?
    c. Tell me about goals that are not set externally (e.g. by the state or federal government).

Scoring grid:
    Score 1: Goals are exclusively financial or operational.
    Score 3: Goals include non-financial targets (such as client treatment/outcomes), though they are not reinforced throughout the rest of the organization.
    Score 5: Goals are a balance of financial and non-financial targets. Non-financial targets are considered more inspiring and challenging than financials alone.

Examples:
    Score 1: Targets are exclusively operational. Specifically, volume is the only meaningful objective, with no targeting of quality measures such as access, no-shows, or retention.
    Score 3: Strategic goals are very important. They focus on market share and on maintaining a reputation for a high quality of care. However, clinicians and administrative staff are not aware of those targets.
    Score 5: Everyone in the agency is given a mix of operational and financial targets. Agency directors communicate financial and client outcomes to employees in ways they find effective, for example, telling workers they are having a good week through program-wide announcements that are acknowledged and celebrated by all employees.

(11) Rewarding high performance
    a. Are there any non-financial or financial rewards (bonuses) for top performers?
    b. If you have a bonus system, how does it work?
    c. How does your reward system compare to other addiction treatment programs?

Scoring grid:
    Score 1: People within our organization are rewarded equally irrespective of performance level.
    Score 3: Our organization has an evaluation system for the awarding of performance-related rewards.
    Score 5: We strive to outperform other organizations by providing ambitious stretch targets with clear performance-related accountability and rewards.

Examples:
    Score 1: A program pays its people equally, regardless of performance. The management said to us, “there are no incentives to perform well in our program.” Even the management is paid an hourly wage, with no bonus pay.
    Score 3: A program has an awards system based on two components: the individual’s performance and overall company performance.
    Score 5: A program sets ambitious targets, rewarded through a combination of bonuses linked to performance, team lunches cooked by management, family picnics, movie passes, and dinner vouchers at nice local restaurants. They also motivate staff by giving awards for perfect attendance, best suggestion, etc.

Appendix C

Regression Results: Dependent Variable is Days-to-Treatment; Independent Variables Include Controls and Each Individual Management Practice Score (1–14).

Practice type                         Practice                              Coefficient (SE)
Intake and Retention                  (1) Client flow process (intake)      −0.156 (0.061)***
                                      (2) Client retention                  −0.131 (0.064)**
Quality monitoring and improvement    (3) Continuous improvement            −0.167 (0.069)**
                                      (4) Performance tracking              −0.191 (0.069)***
                                      (5) Performance review                −0.100 (0.072)
                                      (6) Reviewing agency performance      −0.235 (0.075)***
                                      (7) Consequence management            −0.085 (0.060)
Targets                               (8) Target balance                    −0.217 (0.060)***
                                      (9) Targets stretch                   −0.084 (0.066)
                                      (10) Performance clarity              −0.148 (0.066)**
Employee incentives                   (11) Rewarding high performance       −0.100 (0.076)
                                      (12) Removing poor performers          0.000 (0.069)
                                      (13) Promoting high performers        −0.080 (0.069)
                                      (14) Retaining talent                  0.053 (0.089)

The table displays coefficients, with program-clustered standard errors in parentheses.

*significant at 10%;
**significant at 5%;
***significant at 1%

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

REFERENCES

  • Bloom N, Van Reenen J. Measuring and explaining management practices across firms and countries. Quarterly Journal of Economics. 2007;122:1351–1408.
  • Capoccia VA, Cotter F, Gustafson DH, Cassidy EF, Ford JH, Madden L, Owens B, Farnum S, McCarty D, Molfenter T. Making "stone soup": How process improvement is changing the addiction treatment field. Joint Commission Journal on Quality and Patient Safety. 2007;33:95–103.
  • Clancy C, Cronin K. Evidence-based decision making: global evidence, local decisions. Health Aff (Millwood). 2004;24:151–162.
  • D’Aunno T. Treating drug abuse in America: Results from a study of the outpatient substance abuse treatment system, 1988–1995. Ann Arbor, MI: Institute for Social Research, University of Michigan; 1995.
  • D’Aunno T. Treating drug abuse in America: Results from a study of the outpatient substance abuse treatment system, 1988–2000. Ann Arbor, MI: Institute for Social Research, University of Michigan; 2002.
  • Ducharme LJ, Knudsen HK, Roman PM. Evidence-based treatment for opiate-dependent clients: availability, variation, and organizational correlates. Am J Drug Alcohol Abuse. 2006;32:569–576.
  • Ebener P, Kilmer B. Barriers to treatment entry: Case studies of applicants approved for admission. Santa Monica, CA: RAND, Phoenix House/RAND Research Partnership; 2001.
  • French MT, Dunlap LJ, Zarkin GA, McGeary KA, McLellan AT. A structured instrument for estimating the economic cost of drug abuse treatment: The Drug Abuse Treatment Cost Analysis Program (DATCAP). J Subst Abuse Treat. 1997;14:445–455.
  • French MT, Salome HJ, Carney M. Using the DATCAP and ASI to estimate the costs and benefits of residential addiction treatment in the State of Washington. Soc Sci Med. 2002;55:2267–2282.
  • Hoffman K, Ford JH, Choi D, Gustafson DH, McCarty D. Replication and sustainability of improved access and retention within the Network for the Improvement of Addiction Treatment. Drug and Alcohol Dependence. 2008;98:63–69.
  • Institute of Medicine. Managing managed care: Quality improvement in behavioral health. Washington, DC; 1997.
  • Institute of Medicine. Bridging the gap between practice and research: Forging partnerships with community-based drug and alcohol treatment. Washington, DC; 1998.
  • Institute of Medicine. Crossing the quality chasm in mental and substance use treatment. Washington, DC; 2005.
  • Institute of Medicine. Improving the quality of health care for mental and substance-use disorders: Quality Chasm Series. Washington, DC; 2006.
  • Kimberly JR, McLellan AT. The business of addiction treatment: A research agenda. J Subst Abuse Treat. 2006;31:213–219.
  • Knudsen HK, Johnson JA, Roman PM. Retaining counseling staff at substance abuse treatment centers: effects of management practices. J Subst Abuse Treat. 2003;24:129–135.
  • McCarty D, Gustafson D, Capoccia VA, Cotter F. Improving care for the treatment of alcohol and drug disorders. J Behav Health Serv Res. 2008 [Epub ahead of print].
  • McCarty D, Gustafson DH, Wisdom JP, Ford J, Choi D, Molfenter T, Capoccia V, Cotter F. The Network for the Improvement of Addiction Treatment (NIATx): enhancing access and retention. Drug Alcohol Depend. 2007;88:138–145.
  • McLellan AT, Carise D, Kleber HD. Can the national addiction treatment infrastructure support the public's demand for quality care? J Subst Abuse Treat. 2003;25:117–121.
  • National Committee for Quality Assurance. The state of health care quality: Industry trends and analysis. Washington, DC: National Committee for Quality Assurance; 2007.
  • Richardson HA, Vandenberg RJ, Blum TC, Roman PM. Does decentralization make a difference for the organization? An examination of the boundary conditions circumscribing decentralized decision-making and organizational financial performance. Journal of Management. 2002;28:217–244.
  • Roebuck MC, French MT, McLellan AT. DATStats: results from 85 studies using the Drug Abuse Treatment Cost Analysis Program. J Subst Abuse Treat. 2003;25:51–57.
  • Roman PM, Ducharme LJ, Knudsen HK. Patterns of organization and management in private and public substance abuse treatment programs. J Subst Abuse Treat. 2006;31:235–243.
  • Shortell SM, Rundall TG, Hsu J. Improving patient care by linking evidence-based medicine and evidence-based management. JAMA. 2007;298:673–676.
  • Substance Abuse and Mental Health Services Administration. National Survey of Substance Abuse Treatment Services (N-SSATS): 2006. Data on substance abuse treatment facilities. Rockville, MD; 2007. DHHS Publication No. (SMA) 07-4296.
  • Walshe K, Rundall TG. Evidence-based management: from theory to practice in health care. Milbank Q. 2001;79:429–457, IV–V.