Study objectives and hypotheses
The primary research question of NIATx 200 is to determine which of four collaborative service combinations produces the greatest improvement in waiting time (days from first contact to treatment), rates of admission to treatment, and continuation in treatment. The secondary research question is: What is the impact of the study interventions on treatment completion rates, the level of adoption and sustainability of the recommended practices, organizational readiness to adopt and sustain the new practices, voluntary employee turnover, and program margin? Interventions were delivered, and outcome analyses are performed, at the program level.
We are also examining the cost of delivering each combination of services. The intent of the cost analysis is to assess the cost of disseminating different QI methods to state agencies that regulate substance abuse treatment. Our budget did not allow us to assess the cost to the treatment programs of implementing the services.
NIATx 200 is a cluster-randomized trial with four interventions: interest circle calls, coaching, learning sessions, and the combination of interest circle calls, coaching, and learning sessions. Figure shows the study's organization and design.
Figure. The organization and design of the NIATx 200 cluster-randomized trial.
Interest circle calls are regularly scheduled (monthly in our case) teleconferences in which change team members from different programs discuss issues and progress and get advice from experts and one another. Interest circle calls are inexpensive and provide a regular meeting time for members to continue collaborating. If interest circle calls produce QIs, then change can be made widely at relatively low cost. But the quality of interest circle calls may vary by facilitator, and because the teleconferences are scheduled at particular times, they sometimes compete with other priorities, limiting participation. Although participants can listen to recordings of calls they miss, they lose the chance to interact.
Coaching assigns a process improvement expert to work with program leaders and change teams to help them make and sustain process improvements. In our study, coaching involved one site visit, monthly phone conferences, and e-mail correspondence. Coaches give a program ongoing access to an expert who tailors advice to the program and makes contacts with other experts and programs that have already addressed the same issue. However, coaching is expensive, and the match between program and coach may not be ideal. Program staff may also miss the camaraderie that comes from learning sessions and interest circle calls. Just as facilitator quality in interest circle calls may vary, so may the quality of coaching.
Learning sessions occur periodically (in our case, twice a year). These face-to-face, multi-day conferences brought together program change teams to learn and gather support from outside experts and one another. Participants learn what changes to make and develop skills in QI (e.g., creating business cases for improvements). Learning sessions raise interest in making changes and provide opportunities for program staff to share plans and progress. The sessions promote peer learning, increase accountability and competition, and give program staff the time to focus and plan as a team without distraction. Learning sessions are also costly, and the knowledge and excitement they produce can fade.
A common form of learning collaborative involves a combination of these services and is the final study intervention. A combination of interest circle calls, coaching, and learning sessions provides continuity and reinforcement over time and offers options for the way program staff can give and receive help. One would assume that this intervention would have the greatest effectiveness. But the combination is expensive and risks delivering inconsistent messages from different leaders and facilitators. The combination also risks exerting too much external pressure, thus reducing intrinsic motivation.
The protocol called for all four interventions to have the same goals during the three six-month intervention periods. For example, each program concentrated on reducing waiting time during months 1-6 of their involvement. During each intervention period, programs chose which practices to implement from those shown in Table . All interventions had access to the same web-based learning kit, which contained specific steps to follow, tools (e.g., the walk-through, flow charting), ways to measure change, case studies, and other features. What varied were the methods of instruction and support that were wrapped around the learning kit: interest circle calls, coaching, learning sessions, or the combination of all three.
Table. NIATx 200 aims and promising practices
The study received approval from the institutional review boards at the University of Wisconsin-Madison and Oregon Health and Science University and is registered at ClinicalTrials.gov (NCT00996645).
Research team and study sites
The Center for Health Enhancement Systems Studies (CHESS), located at the University of Wisconsin-Madison, is a multidisciplinary team addressing organizational change to improve healthcare. NIATx is part of CHESS. The Center for Substance Abuse Research and Policy (at Oregon Health and Science University) works at the nexus of policy, practice, and research for the treatment of alcohol and drug dependence to improve evidence-based addiction treatment. The Health Economics Research Group (HERG) at the University of Miami conducts research on the economics of substance abuse treatment and prevention, HIV/AIDS, criminal justice programs, and health system changes. This core research team worked with five states through each state's single state agency (SSA) (the authority for substance-abuse treatment at the state level).
SSA administrative systems identified eligible programs. To be eligible to participate in NIATx 200, programs had to have at least 60 admissions per year to outpatient or intensive outpatient levels of care as defined by the American Society of Addiction Medicine (ASAM) and have received some public funding in the past year. Programs that had worked with NIATx in the past were excluded from participating. (Before the start of this study, fewer than 100 programs nationwide had worked with NIATx.) All clients treated within an eligible program were deemed eligible and included in the analysis. SSAs and state provider associations promoted the study and helped recruit participants. Before randomization, NIATx assessed programs' readiness for change and management strength, and asked which other treatment programs most influenced their own operations. Then programs were randomized to study interventions.
NIATx 200 has three primary outcomes: waiting time (days from first contact to first treatment), annual program admissions, and continuation in treatment through the first four treatment sessions. Data for the waiting time and continuation outcomes come from patient information collected, aggregated, and sent in by the SSAs at approximately 9, 18, and 27 months after the start of the intervention. Annual program admissions and other secondary outcomes (including voluntary employee turnover, treatment completion, and operating margin) were collected through surveys of executive directors conducted at baseline, mid-intervention, and project completion. The research team also surveyed staff members at the treatment programs and is using other measures, described below, to understand the role of mediators, moderators, and other factors that contribute to an organization's success in making changes. As others have demonstrated, QI efforts are much more likely to improve quality if they take place in a supportive context [40].
The Organizational Change Manager (OCM) measures an organization's readiness for change. Staff members at treatment programs completed the OCM. The OCM had good inter-rater reliability among respondents in field tests [42].
Organizations adopt many changes, but sustain few [43]. The 10-factor multi-attribute British National Health Service Sustainability Index is being used to predict and explain the sustainability of the promising practices that programs implemented. A study involving 250 experts in healthcare policy and delivery, organizational change, and evaluation validated the model, which explained 61% of the variance in the sustainability of improvement projects [43].
The management survey measured 14 management practices at the beginning of the study. The survey was based on the instrument developed by Bloom and Van Reenen [44]. The published results indicate that good management practice is associated with shorter waiting times, weakly associated with revenue per employee, and not correlated with operating margins. Better management practices were more prevalent in programs with more competitors in the catchment area [45].
The Drug Abuse Treatment Cost Analysis Program (DATCAP) is a data-collection instrument and interview guide that measures both direct expenses and opportunity costs. Although DATCAP was initially used in the field of drug abuse treatment, the instrument is now used in treatment programs in many social-service settings. DATCAP was modified for this study to capture the economic costs to an SSA of developing and providing services [46].
Power calculations were predicated on the program, rather than the client, being the unit of analysis. Power was calculated for various sample sizes, with consideration given to anticipated recruitment levels in each state. The sample-size calculations assumed a 20% attrition rate. A sample of 200 programs would provide 80% power, at alpha = 0.05, to detect a 3.2-day difference in waiting time, a 7.5% difference in continuation, and a 14.2% difference in the log of admissions. These levels of improvement were deemed clinically or organizationally meaningful for each outcome.
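For illustration only, the detectable waiting-time difference can be approximated with a standard two-sample normal calculation. The 5-day standard deviation, the equal 50-program arms, and the simple pairwise comparison are assumptions made for this sketch, not figures taken from the protocol (the investigators' actual calculation may have accounted for clustering and other design features):

```python
from math import sqrt
from scipy.stats import norm

def min_detectable_diff(n_per_arm, sd, alpha=0.05, power=0.80):
    """Normal-approximation minimal detectable difference for a
    two-sample comparison of program-level means."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value, two-sided test
    z_power = norm.ppf(power)           # quantile for the target power
    d = (z_alpha + z_power) * sqrt(2 / n_per_arm)  # Cohen's d
    return d * sd

# 200 programs / 4 arms = 50 per arm; 20% attrition leaves 40 per arm.
n = int(50 * 0.8)
mdd = min_detectable_diff(n, sd=5.0)  # sd = 5 days is a hypothetical value
print(round(mdd, 2))
```

Under these assumed inputs the sketch yields a minimal detectable difference of roughly 3.1 days, in the neighborhood of the 3.2-day figure reported above.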
The study design calls for nesting of agencies within states; as such, randomization of programs took place state by state. Though recruitment occurred over several months, all programs in a state were randomized at a single point in time at the end of that state's recruitment period. Randomization was stratified by program size and by a quality-of-management score generated during a baseline interview with program leaders [45]. The project statistician generated the allocation sequence. The University of Wisconsin research team enrolled programs and assigned them to interventions using a computerized random number generator. Multiple programs within the same organization were assigned to the same intervention to avoid contamination. Neither the participants nor the study team were blind to the assignments.
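A minimal sketch of this kind of stratified, organization-level assignment, with hypothetical field names and arm labels (not the study's actual allocation code):

```python
# Hypothetical sketch: assign whole organizations to arms, balanced
# within strata defined by program size and management score.
import random

ARMS = ("interest circles", "coaching", "learning sessions", "combination")

def assign_interventions(programs, arms=ARMS, seed=1):
    """programs: dicts with 'id', 'org', 'size_stratum', 'mgmt_stratum'.
    Every program in an organization receives the same arm, and arms are
    dealt in rotation within each (size, management-score) stratum."""
    random.seed(seed)
    # Collapse programs to one record per organization.
    org_stratum = {}
    for p in programs:
        org_stratum.setdefault(p["org"], (p["size_stratum"], p["mgmt_stratum"]))
    # Group organizations by stratum.
    strata = {}
    for org, stratum in org_stratum.items():
        strata.setdefault(stratum, []).append(org)
    # Shuffle each stratum, then cycle through the arms for balance.
    org_arm = {}
    for members in strata.values():
        random.shuffle(members)
        for i, org in enumerate(members):
            org_arm[org] = arms[i % len(arms)]
    return {p["id"]: org_arm[p["org"]] for p in programs}
```

The rotation within each shuffled stratum keeps arm counts balanced, and keying the assignment by organization rather than by program enforces the contamination rule described above.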
The five states participating in NIATx 200 were divided into two cohorts. Cohort one had three states; cohort two had two states. For each cohort, randomization took place in the two to three weeks before the first six-month intervention began. See Table . Baseline data were gathered in a period of up to three months before randomization.
The state authorities recruited programs to participate in NIATx 200. They promoted the study at meetings and by word of mouth, and wrote letters to the CEOs of programs. They also conducted meetings so the leaders of eligible programs could learn more about the study. At these meetings, the research team and SSA directors explained that programs would use one of four methods to improve processes that affect access and retention. Peer programs with NIATx experience explained the improvements they had made using the same methods and showed changes in data that resulted from the improvements. SSA directors outlined the benefits and responsibilities of programs in the study. Programs would gather pretest data, be randomly assigned to one of four study interventions, and receive 18 months of support. During months 1 to 6, they would focus on reducing waiting time; in months 7 to 12, increasing continuation rates; and in months 13 to 18, increasing admissions. Afterward, the program could join state-led activities to sustain changes. The SSA would send the researchers client data on waiting time, admissions, continuation, and treatment completion. The program CEO would name an influential change leader from the staff. The leader and staff at each program would complete surveys during the pretest period and at 9, 18, and 27 months. These surveys addressed employee turnover, new practices initiated, number of employees, revenue, and operating margin. Staff would complete surveys about how the program makes and sustains changes. The program would receive minimal compensation for reporting these data.
The treatment program is both the unit of randomization and the unit of analysis in the study. For the primary analysis, the protocol calls for aggregating client records to compute monthly averages of each program's waiting time and continuation rates. The unit of analysis will be a vector of program-month results based on these aggregated values. We will fit a mixed-effects regression model to these monthly observations, including terms to isolate state and intervention effects. Admissions are aggregated to a yearly level to compensate for seasonality. Random effects will model the correlation among outcomes from the same program, and organization-level random effects will model the correlation among programs within the same organization. Interventions will be ordered by cost of implementation and compared pair-wise to the lowest-cost intervention (interest circle calls). All randomized programs with available outcome data will be included according to the intent-to-treat principle: the analysis is conducted by originally assigned intervention, regardless of how much programs participated in the learning services. Organizational covariates in the analysis will include program size, management score, and state affiliation.
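As a rough sketch of a model in this family (not the study's actual analysis code), the following fits a random-intercept mixed model to simulated program-month data using statsmodels. The column names, the simulated effect sizes, and the omission of the organization-level random effect are all simplifications made for the example:

```python
# Simplified sketch of the program-month mixed model, assuming statsmodels
# is available; data are simulated, and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_programs, n_months = 40, 9
df = pd.DataFrame({
    "program": np.repeat(np.arange(n_programs), n_months),
    "month": np.tile(np.arange(n_months), n_programs),
})
df["intervention"] = df["program"] % 4   # four study arms
df["state"] = df["program"] % 5          # five states
prog_effect = rng.normal(0, 2, n_programs)  # program-level random intercepts
df["waiting_time"] = (
    20
    - 1.5 * (df["intervention"] == 3)    # simulated effect for one arm
    + prog_effect[df["program"]]
    + rng.normal(0, 3, len(df))          # month-to-month noise
)

# Random intercept per program; fixed effects isolate state and
# intervention, mirroring the terms described in the protocol.
model = smf.mixedlm("waiting_time ~ C(intervention) + C(state)",
                    df, groups=df["program"])
result = model.fit()
print(result.fe_params.round(2))
```

A fuller implementation would add a variance component for organization (e.g., via MixedLM's `vc_formula`) to capture the nesting of programs within organizations.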
Illustration of a program participating in NIATx 200
The following scenario shows what a program assigned to the coaching intervention might have experienced. Assignment to the coaching intervention meant that the program had a process improvement coach visit at the beginning of the study, call every month for the next 18 months, and communicate via e-mail.
A program began its work once the CEO named a change leader and change team. In the first week of participation, this group learned from the research team how to do a walk-through, a tool in which staff members experience dealing with the program as a client does. The research team also assigned a coach to the program. The coach phoned the CEO, change leader, and change team members to introduce herself and set an agenda for a site visit. The coach reviewed with the change team the results of the walk-through, which revealed many issues, including long waiting times. The coach also encouraged change team members to examine case studies on the study website.
The site visit allowed the coach and change team members to get to know each other. The coach explained evidence-based improvement principles to the change team and different ways of reducing waiting time, the goal of the study in the first six months. The change team had to decide on one change to make first to reach this goal. In this example, the program decided to adopt a change they learned about from a case study on the study website: Eliminate appointments and instead invite callers to walk in the next morning, complete intake and assessment, and start treatment by noon. Among other things, the change team learned that making this change had resulted in the outpatient program's significantly increasing its revenue. Using information from the case study, the coach helped the change team review in detail how the program made this change-what data they collected, what steps they took, and what protocols they used to train staff for handling the high volume of walk-ins. Finally, the coach helped the change team figure out how to collect pretest data and start the rapid-cycle change process. In two weeks, the program would have enough pretest data (on about 25 clients) to start the change process.
Once pretest data were collected, the first rapid-cycle change (or plan-do-study-act cycle) began. The first three callers on Monday were invited to come anytime the next morning to be seen right away. Two callers jumped at the chance. One had to work but offered to come after work. The change leader (who was the program's medical director) did the intake and assessment with one client to experience the new process. The clinician did the same with the other client. At the end of the day, the medical director and clinician modified the change to allow walk-ins at the end of the day too. The next version of the change started Thursday and involved the first five people who called and two clinicians. After this change, the staff identified additional concerns and made other adaptations. Throughout the rapid-cycle changes, the change leader worked side-by-side with clinical staff (a key part of the strategy) to understand problems and make modifications. After staff discussions, the group decided on one last change. Now the new process would be tested with anyone who called for the next week (as medical director, the change leader had the authority to implement changes). In the space of three weeks, the overall goal of taking walk-ins was achieved by making and adapting small changes several times, until the process worked well. As a result, the program began serving more clients without adding staff.
During the remainder of the six months when the goal was to reduce time to treatment, the program continued to introduce and refine other changes (see the possibilities in Table ), using the rapid-cycle change model. Starting in month seven, the goal changed to increasing continuation in treatment. The program did another walk-through to initiate a new series of changes to achieve this goal.