The National Institute on Drug Abuse (NIDA) Clinical Trials Network (CTN) is intended to test promising drug abuse treatment models in multi-site clinical trials, and to support adoption of new interventions into clinical practice. Using qualitative research methods, we studied adoption in the context of two multi-site clinical trials, one outside the CTN and one within the CTN. A total of 71 participants, representing 8 organizational levels ranging from clinic staff to clinical trial leaders, were interviewed about their role in the clinical trial, its interactions with clinics, and intervention adoption. Drawing on conceptual themes identified in these interviews, we report strategies that could be applied in the planning, development, and implementation of multi-site studies to better support adoption of tested interventions in study clinics after the trial has ended. Planning for adoption in the early stages of protocol development will enhance integration of new interventions into practice.
The 1998 Institute of Medicine Report (Lamb, Greenlick, McCarty, 1998) is a touchstone for dissemination and adoption of evidence-based practices in drug abuse treatment. This report framed the practice-research “gap” as a metaphor for the low rate of collaboration between researchers and community-based clinics, and it outlined the resulting failures: treatment research often does not address problems of interest to treatment providers, and treatment providers often do not apply research-based interventions in their practice. It offered recommendations designed to improve clinical practices and to increase the value of research to providers, clinicians, and consumers of substance abuse treatment.
First among these recommendations was that the Center for Substance Abuse Treatment (CSAT) and the National Institute on Drug Abuse (NIDA) develop an infrastructure to support research in community-based treatment programs (Lamb et al., 1998). This would bring treatment research out of university clinic settings and into more usual practice settings, and would involve community-based programs in shaping research protocols responsive to their needs. CSAT responded by developing a network of Practice Improvement Collaboratives to support practice-research collaboration at local or regional levels (Clark, 2002), and NIDA responded through the development of the Clinical Trials Network (CTN) starting in 1999 (O’Connor, 2001).
The NIDA CTN aims to improve drug abuse treatment through two goals: first, to determine the effectiveness of promising interventions in multi-site clinical trials and, second, to support the transfer of tested and effective interventions into clinical practice (Hansen, Leshner & Tai, 2002). NIDA has sustained its commitment to these goals over the past 7 years, at a cost of approximately 40 million dollars per year (Betty Tai, personal communication).
In terms of testing promising interventions, the CTN has made progress. The network includes 17 research centers and over 100 Clinical Treatment Programs. More than 7,000 participants have been enrolled into a series of 21 multi-site research protocols in various stages of completion (CTN Bulletin, April 5, 2006). Findings are now available for randomized trials of buprenorphine (Ling et al., 2005), contingency management (Petry et al., 2005; Pierce et al., 2006), and motivational interviewing (Carroll, Ball, Nich et al., 2006), and there are a number of papers commenting on the CTN in other ways (e.g., Amass et al., 2004; Ball et al., 2002; Forman, Bovasso & Woody, 2001; Marinelli-Casey, Domier & Rawson, 2002; Polcin, 2004; Rawson, Marinelli-Casey & Ling, 2002).
In terms of supporting transfer of tested interventions into practice, CTN progress is less clear. The CTN has developed cooperative efforts with the national Addiction Technology Transfer Center (ATTC) network, as well as a process for dissemination of CTN findings through this network. The CTN has also established internal committees concerned with dissemination and utilization of findings in the scientific and practice communities. This approach to technology transfer is traditional in that the clinical trials are concerned only with effectiveness, and dissemination occurs only after the trial is completed.
A different approach to technology transfer is to incorporate planning for dissemination into the multi-site trial itself (Guydish et al., 2005). The rationale is straightforward: clinics participating in clinical trials will have met some conditions associated with adoption of the intervention. Through its participation in the trial, the clinic has knowledge of the intervention, has formed an attitude toward the intervention, has made a decision to implement the intervention, and has implemented the intervention for study purposes, representing four of the five stages in the innovation-decision process (Rogers, 1995a; 2002). Through its participation in the trial, the clinic will have experience with the trialability, compatibility and complexity of the intervention, and can make judgments concerning its relative advantage, all characteristics of the intervention associated with adoption (Rogers, 1995a). Although often not the case (Guydish et al., 2005; Fals-Stewart, Logsdon & Birchler, 2004), adoption of new treatments seems most likely in clinics that were involved in testing those treatments. Conversely, if an effective new intervention is not adopted in clinics where it was tested, then its adoption in other settings seems less likely.
Other researchers have described general research-to-practice issues (Brown & Flynn, 2002; Hamilton Brown, 2004; Rawson, McCann, Huber, Marinelli-Casey, & Williams, 2000; Simpson, 2002; Sloboda & Schildhaus, 2002). We studied adoption of research-based interventions in the context of multi-site randomized clinical trial (RCT) research (Guydish et al., 2005; Guydish, Manser, Tajima & Jessup, 2006). In a previous paper we offered a “spaceship” as a metaphor to describe the experience of some clinics when they participate in an RCT (Guydish et al., 2005). “Spaceship RCT” descends on the pre-existing clinic setting and infuses it with intervention manuals, training and supervision, with what may be alien research protocols and procedures, and with temporary funding to hire research and clinical staff. The metaphor is not only about what the RCT brings to the clinic, but also what it takes away. When the clinical trial ends and “spaceship RCT” departs, it takes with it the training, supervision, and often the staff who were trained in the intervention. In the present paper we ask: How might the technology of multi-site clinical trials be modified to better support adoption of tested interventions? Reflecting the voices of participants in two large multi-site clinical trials, we identify six strategies to counteract the departure of the spaceship RCT and the attendant loss of supports for the intervention.
Using qualitative methods, we studied adoption of interventions in the context of two multi-site clinical trials, the Methamphetamine Treatment Project (MTP) and the CTN Motivational Interviewing and Motivational Enhancement Treatment (MI/MET) protocols. We used a Multi-level Assessment Protocol (MAP) approach to identify and interview key informants at all levels of the multi-site clinical trial structure.
Matrix is a manualized intensive outpatient intervention designed to treat stimulant abuse. Although the format has changed over time, the current Matrix model is delivered primarily in group sessions conducted 3 times per week over 16 weeks. Sessions address early recovery skills, relapse prevention, and family education, and the program emphasizes participation in self-help meetings and urinalysis testing (Obert et al., 2000). Early uncontrolled trials showed favorable outcomes in treating cocaine use (Rawson, Obert, McCann, & Mann, 1986; Rawson, Obert, McCann, & Ling, 1991; Rawson et al., 1995), and suggested that Matrix may work well for methamphetamine users (Huber et al., 1997). To address the growing methamphetamine epidemic, CSAT developed the MTP study to test the effectiveness of Matrix in the treatment of methamphetamine abuse (Herrell, Taylor, Gallagher, & Dawud-Nouri, 2000).
In the MTP study, across 8 participating clinics, 978 participants were randomly assigned to a 16 week Matrix condition or to treatment-as-usual, and followed to 12 months post admission (Rawson et al., 2004). MTP procedures and comparison conditions are reported in Huber et al. (2000) and Galloway et al. (2000). Results showed better treatment attendance and retention, and fewer methamphetamine positive urinalyses during treatment, but no differences between groups at treatment discharge or 6 month follow up (Rawson et al., 2004).
We studied the MTP because it foreshadowed the work of the CTN. It was designed as an effectiveness trial for a promising psychosocial intervention (Matrix), was directed by a steering committee composed of stakeholders, and was supported by a coordinating center. Although independent of the CTN, the MTP study formed a ready testing ground on which to explore intervention adoption among clinic partners after the study ended.
Motivational Interviewing (MI) and Motivational Enhancement Treatment (MET) are designed to enhance motivation for change among substance abuse clients. While MI represents a broader therapeutic approach, MET includes specific emphasis on personalized assessment, feedback, and change plans. MI/MET address client ambivalence toward change, and are guided by the fundamental components of collaboration between interviewer and client, the presumption that the resources for change reside in the client, and client autonomy in self-direction (Miller and Rollnick, 2002). There is a developed literature supporting the efficacy of MI (Burke, Arkowitz & Menchola, 2003; Miller & Willbourne, 2002), although a recent study failed to show better outcomes for MI compared to standard treatment (Miller, Yahne & Tonigan, 2003).
In developing the MI/MET trial, the planning team confronted a real-world problem. Although MI/MET is an individual counseling intervention, many clinics rely almost exclusively on a group treatment format. To address this, two similar protocols were developed. The MET protocol, used in programs having individual counseling, included 3 individual sessions of MET compared to 3 individual sessions of standard care. In the MI protocol, used in programs that relied on group treatment but conducted the initial assessment as an individual session, a single MI session was attached to the initial assessment, and this MI-enhanced assessment condition was compared to standard assessment (Carroll et al., 2002).
A total of 11 sites participated, with 5 sites testing MI and 6 sites testing MET. MI was associated with greater treatment retention at the 28 day assessment, but substance abuse outcomes did not differ between groups at follow-up (Carroll et al., 2006). Findings for the MET study are not yet available. In our study of adoption, for logistic and cost reasons, we interviewed program directors and staff at 3 MI study sites and 2 MET sites in the Western U.S.
We used a qualitative approach to study adoption in an organizational context, and respondents were selected to reflect the broad organization supporting each trial. Organizational structures supporting the MTP and MI/MET trials were similar, but not the same, and it was necessary to identify organizational levels in each study according to their general function.
Both the MTP and MI/MET studies included an intervention developer and clinical trial funder, and both included clinic directors and clinic staff involved in the trial. These organizational levels were directly comparable across studies. Other organizational levels, however, differed. In the MTP trial, executive functions of planning, decision-making, and implementation oversight occurred through a Steering Committee including representatives from each site, from the coordinating center, and from CSAT. A coordinating center was responsible for training, supervision, and monitoring of therapists and research assistants, and for data collection, management and analysis. Investigators and evaluators at each site had local implementation and data collection responsibilities (Herrell et al., 2000).
The organizational structure supporting the MI/MET clinical trial was somewhat different. Although there is a CTN National Steering Committee, this committee is not closely involved in the implementation of any single trial. In the CTN, executive functions of planning, decision-making, and oversight for any given trial are conducted by a protocol team. This team includes a Principal Investigator from a single node, called the “lead node,” and interested investigators from other nodes participating in the trial. Coordinating functions related to staff training and supervision, and data collection and management, are conducted by the lead node. The CTN has historically valued “bidirectional” collaboration between researchers and clinical providers, and both are well-represented in all CTN activities. In the MI/MET protocols, clinical providers were represented in the protocol team and in coordinating work of the lead node. In general, the activities of the MTP Steering Committee, Coordinating Center and Site Investigators were most similar to the activities of the MI/MET Protocol Team Leaders and Node Level Investigators. Last, the MTP trial included site-level evaluators while the MI/MET trial did not, and the MI/MET trial included clinic-level supervisors while the MTP trial did not. These MTP and MI/MET organizational levels are summarized in Table 1, and were used to select respondents representing the different organizational levels in each case.
Individuals at all organizational levels were identified by the study team. In the case of the MTP study, the authors (JG, SM) represented one study site and so were knowledgeable about the MTP organization. In the case of the MI/MET protocols, the study team used the written study protocol and other CTN documents to identify members of the lead node and protocol team. Many participants were easily identifiable, for example, the person who designed the intervention, the project officers who represented the funding agency, and directors of individual clinics where the study was implemented. In cases where individuals had roles in more than one organizational level, participants were selected using two criteria: first, that they were not already interviewed as a member of a different organizational level and, second, that they had a leadership role in the study. To select clinical staff respondents, clinic directors identified staff who were involved in the clinical trial and were still with the clinic or, if they had left the clinic, for whom contact information was available. In the MTP study group, 44 interviews were planned and 42 were completed; the missed interviews were for two counselors from the same program (both had left the clinic and could not be located). In the MI/MET study group, 31 interviews were planned and 29 were completed; the missed interviews were for a clinical supervisor who did not respond to recruitment efforts and a node-level investigator who declined participation.
Among the 71 participants interviewed, 42 represented the MTP trial and 29 represented the MI/MET trial. MTP participants included 23 women and 19 men; 14 had doctoral degrees, 17 had master's degrees, 6 had bachelor's degrees, 1 had high school education only, and educational status was unknown for 4 participants. MI/MET participants included 14 women and 15 men; 2 had MD degrees, 8 had doctoral degrees, 9 had master's degrees, 2 had bachelor's degrees, 3 had some college, and educational status was unknown for 5 participants.
Initial contact was by mail, with telephone follow up to assess willingness to participate and to schedule interviews. Informed consent procedures were completed prior to each interview. Participants who had left their clinical site by the time of the study interview were located and contacted using the same procedures. For each clinic, our goal was to interview one counselor who had provided the Matrix or MI/MET intervention and one who had provided the treatment-as-usual intervention.
Semi-structured interview guides (available from first author) were developed by the study team, informed by organizational theory and reflecting domains that can influence adoption of interventions (see Guydish et al., 2005). These include, for example, organizational structure and culture (Burke & Litwin, 1992; Lamb et al., 1998), organizational readiness for change (Backer, 1995), perception of the intervention (Backer 1991, Rogers 1995a), and resources (Backer 1991; Rogers, 1995b). Interview guides included questions concerning the respondent’s role in the clinical trial, perspectives on how the trial impacted the clinic, and intervention adoption. Interviews were audiotaped and most were conducted in person. Six MTP interviews and four MI/MET interviews were conducted by phone.
The study plan was to interview participants close in time to completion of the clinical trial, but also after some time had elapsed in which clinics could have considered whether to continue to provide the treatment. MTP study recruitment was completed in different clinics at different times, but was completed in all clinics in July 2001, and delivery of the Matrix intervention for study purposes ended in November 2001. Interviews for the current adoption study were conducted between January and December 2002 and occurred, in any given clinic, from 2 to 12 months after the clinic had stopped providing Matrix for research purposes. MI/MET study recruitment was completed in all clinics in February 2003, and study-related intervention was completed one month later. Most interviews for the current adoption study were conducted between February 2003 and June 2004 and occurred, in any given clinic, from 4 to 13 months after the clinic had stopped providing MI/MET for research purposes.
Clinics received $1000 for study participation. At the discretion of the clinic director this amount was paid to the clinic (12 clinics) or distributed to respondents within the clinic (1 clinic). Respondents who were formerly employed by a clinic but who had since left the clinic could not benefit from clinic reimbursement, so they received a cash reimbursement of $50. Financial incentives were not offered to respondents outside the clinic-level (e.g., intervention designer, funder, steering committee member), as these respondents were remunerated for their efforts in the context of the study award. All study procedures were approved by the University of California, San Francisco, institutional review board.
Analysis was conducted using a theoretical analytic framework (Bulmer, 1979) derived from literature on organizational functioning and change theory (Backer 1991; 1995; Burke & Litwin, 1992; Lamb et al., 1998; Rogers 1995a, 1995b). The framework allowed for use of key domains as analytic categories to examine participant perspectives during and after participation in the clinical trial. Analytic categories included organizational structure and culture, readiness for change, attitudes toward research, perception of intervention, resources, dissemination of study results and reinvention. Closed codes were developed using these categories and codes were added as content analysis proceeded (Boyle, 1991).
Transcribed interviews were coded using ATLAS.ti™, a qualitative analysis program. Consistency between raters was supported by coding the first 14 interviews as a team to obtain agreement, and then having the primary coders (BT, SM) independently code 5 interviews, with a review for consistency by a third team member (MJ). Each of the remaining 52 interviews was coded by both primary coders, meaning that each interview was coded twice, and these two sets of codes were merged prior to analysis. Based on the MTP interviews, a total of 69 codes emerged and were applied to each transcript. The codebook was modified based on the later MI/MET interviews, and the final MI/MET codebook included 64 codes. Data attached to each code were discussed by team members in weekly review meetings, where coding questions could be raised and resolved. Resolution of coding questions typically involved discussion and agreement on the meaning and limits of a particular code; assignment of the text in question to a particular code or to multiple codes; and, when the team discovered meaningful data outside the existing coding scheme, the addition of a new code. In the context of collaborative coding and an evolving codebook, inter-rater reliability was not measured. Instead, analytic memos, constant comparison, on-going discussion of the data, and member checks were applied to ensure trustworthiness of the data. Simultaneous data collection and analysis supported dependability and, in the interpretation phase, reflexivity of team members regarding participant narratives was used to enhance trustworthiness (Creswell, 1994; Lipson, 1991; Lincoln & Guba, 1985).
Our interest here is to describe strategies, based on qualitative interviews and analyses, that may support the adoption of tested interventions in the wake of multi-site clinical trials. These include planning for adoption when planning the trial, training senior clinical staff to deliver the experimental intervention, use of regional training and local supervision models, and bringing back to the clinic both intervention training and study findings in the post-study period. The clinical trial (MTP or MI/MET) and the organizational level of respondents are given after each comment quoted below. Where respondents could be identified due to small numbers within an organizational level (intervention designer, clinical trial funder, steering committee member or protocol team leader), the level is given as “elite” in order to preserve anonymity.
In planning a multi-site clinical trial, the goal is to reliably determine effectiveness of the intervention. Whether or not the intervention is adopted by participating clinics, or even adopted more widely in the field, is not a usual concern of clinical trials. The general expectation is that interventions, once effectiveness is demonstrated, will be adopted into practice. As one participant commented:
The simple truth is that there wasn’t a whole lot of thought in the beginning, of any of our studies, about what happens next. The presumption has been, “We’ll find truth out there, and we’ll publish the truth, and we’ll let people know about it, and something will magically happen, and – and the truth will be used….There wasn’t, there still isn’t a plan for “once we find out what works, how do we put it into better practice?” It just hasn’t been thought through thoroughly (MTP Elite).
It is also possible to plan the clinical trial taking into consideration the challenges of dissemination and adoption, and the likelihood of adoption once the trial ends, as reflected in the comment below.
… going into it, my assumption of what was the best way to make dissemination work was that you trained either the clinical director, or assistant clinical director, or someone who … is in a position to really model this to the staff that they supervise, and also supervise them from that standpoint. And if you could get that person trained, and they’re on-site and they were the one who was then supervising, then you might have a higher likelihood of, once this protocol vanishes from their program, that maybe you’d have it continue, because people saw what it could do, and still have that leader in place who was able to supervise from that standpoint. (MI/MET Elite).
In the trials we studied, clinical staff were trained to deliver either Matrix or MI/MET. Although specific protocols differed, staff in both studies were trained to criterion, supervised in delivering the intervention, and their performance was monitored for fidelity. Once study recruitment began, these trained staff continued to develop proficiency through practice. Some respondents observed that it takes a period of months to gain proficiency in delivering either Matrix or MI/MET. By the end of the study, these trained and proficient staff members represent a valuable resource to any clinic intending to adopt the intervention.
Where clinics regard the research protocol as a separate contract to be served for a limited time, they may hire new staff to deliver the intervention. In the comment below, the respondent observes that funding for these positions ends when the study ends, and the trained and proficient staff leave the agency.
…you staff the research study with other people that come in …the grant ends, and they go away. …They as practitioners may take … what they learned in here…off someplace else… (MTP Site Principal Investigator)
Training senior clinical staff to deliver the new intervention builds capacity within the agency, and increases the likelihood that staff trained in the interventions will remain in the clinic once the trial has ended.
So the purpose of the trial is to test the effectiveness of the intervention. But the benefit to the treatment program is, again, trialability. They can try out the therapy. They now have a cohort of clinicians who are trained in the therapy, who can be leaders within their organization and say, “guys, this really worked for me. I had a difficult patient, this is what happened.” And that would be external to the clinical trial itself… once it’s over, I think that’s a resource that’s left within the agency. (MI/MET Elite)
Clinician training was centralized in the MTP study, so that clinicians traveled to southern California for Matrix training and certification. In the MI/MET study clinician training was decentralized, so that the two CTN nodes we studied had regional trainers located in Oregon and California, and clinician trainings were conducted with small groups of staff in those regions. In the excerpts below, an MI/MET respondent describes the train-the-trainer model, and how a decentralized training approach could support sustainability after the study ended.
The reason we were able to do it and the way the training worked in a decentralized model, was that we were able to tap into [a] network of trainers that already existed.
Usually the model would be - have some expert trainer someplace, and bring everybody in and train them and send them back out again…. But what we were able to do is find these sites, say, all right, do you have somebody who’s already been through [MI] training or has done training in MI. … And they all could find somebody. Which was remarkable….So there was already this network that we’d tapped into.
[William Miller] has been doing this motivational interviewing network of trainer thing for years and years. … once a year he offers this big training of trainers… and so they’re everywhere and it’s a world-wide network. So, being able to tap into that made a huge difference… in terms of … sustainability and really running things at a node level, and then that trainer was available to train the [clinical] supervisor. Because we also wanted some durability of the training- we wanted durability of resources, and that [clinical] supervisor is there as a training resource, supervision resource… it remains to be seen, but the idea [is] that it’s a model that builds in sustainability if you’ve got somebody there who can supervise the treatment …
So it was set, because … all the training materials were available, the network was available, the trainers were available, there was enough - word on the streets where enough people had been through workshops that the buy-in was there… (MI/MET elite).
The clinical trials we studied applied different models of clinical supervision for the intervention under study. In the MTP study, supervision of counselors providing the Matrix intervention in separate sites was centralized and conducted by phone. In the MI/MET study, local supervision was conducted by trained supervisors in each clinic. In the comment below an MI/MET respondent offers a rationale for the use of a decentralized, local supervision model.
A reason for training the therapists of the agency itself is, you hope that after the trial goes, the therapists are still there, and still doing the things they learned to do. If that is so, we don’t know… But it’s certainly a good strategy, and much more effective than hiring special therapists to come in and treat people, and then they disappear at the end of the trial and you’ve changed nothing at all. But it’s not at all a foregone conclusion that the practice continues after the trial is over. And how to help that happen, I think, is the better question. What do we need to do to help therapists, both those who’ve been therapists in trials, and those who haven’t. How to help them to acquire this and to actually use it reliably in their own practice. And - feedback and supervision or coaching, I think, are two obvious elements that need to be there. (MI/MET elite).
The voices below reflect comments from clinical staff involved in the MTP study where supervision was centralized (Huber et al., 2000) and external to the clinics, and the MI/MET study where clinical supervision was decentralized and internal to the clinics.
What I think might have been a more effective way was, and I understand at a cost, to have the clinical supervisors in the site explaining the model through clinical supervision, that would have been satisfactory to Matrix, to make this more meaningful and have some other way of checking in on what’s happening there that, I think, could maybe on the sites have made it certainly, um, that the clinicians felt that they were truly being supervised and people understood (MTP Clinic Director).
I think it helps to have the supervisor on site, to be the same supervisor. Because there is a different approach… for instance, learning a new technique and being able to talk to a supervisor that I already know and say, “You know what, I’m feeling very scared,” or “I’m feeling like I’m goofing up” … There is that freedom already because we already developed some kind of relationship in the past, so it’s a person that I already know as opposed to - here comes this total stranger and I’m suppose to be reporting to him on doing this new thing that I’m not quite sure what it is. So I think it had a positive impact on already having this relationship with the supervisor so we could feel totally honest and say, “ You know what. I’m lost! Help me out.” While I think if I had a stranger as a supervisor, it would create a different impact, because I wouldn’t feel as comfortable saying, “I’m lost. I don’t know what I’m supposed to be doing.” (MI/MET Counselor).
The first step in the innovation-decision process is to have knowledge of the intervention (Rogers, 1995a, 2002). Knowledge about an intervention gained in the context of a clinical trial, however, is limited to selected staff who will be delivering the intervention. Randomizing entire clinics to intervention and control conditions normally requires study implementation in a large number of clinics and, in addition to challenges of cost and logistics, introduces the question of variability between clinics. Because of this, a common strategy is to randomly assign clinicians within clinics to deliver either the experimental or control intervention. In small clinics, assignment of 3 to 4 counselors to the experimental intervention may mean that a large proportion of staff are trained in the new intervention. As seen in the comment below, however, the same number of counselors trained in a large clinic means that only a small proportion of staff are trained in the new intervention.
I think knowing that we participated in a large study – as you know, just a few counselors were actually studied: three, and we have 550 employees. So we’re talking hardly anybody was actually trained in MET through this study…We’re going to, hopefully, be training staff long before the results are published. (MI/MET Clinic Director).
The developers of the Matrix intervention have made significant and sustained training efforts over a number of years. However, the concept of bringing training back to staff in MTP participating sites, at some time after study completion, was not a part of the study protocol. In the MI/MET trial, the research protocol included such a plan to bring MI/MET training back to staff in participating sites after study completion.
And then also built in that, once the protocol was over, the expert trainer would then provide broader training to the rest of the clinic that did not get it in the first place, and to hopefully have that node, or that region, have sort of a relationship with the trainer…so you’d build that in locally. (MI/MET Elite).
MI/MET clinic staff were aware of the plan to bring training back to clinic settings, and were interested in such training, although training had not yet come back to MI/MET participating clinics at the time the interviews were conducted.
I know that they did promise us that they would come back and train us, and I think most of us are really looking forward to it. At least, I am, definitely. I would love to learn more about MET and how it works, and how does it work with somebody who has the disease of addiction. So, yeah, I would love to get trained with it. I feel like we’ve done all this labor work; give us something, please (MI/MET Counselor).
The strategies for adoption described so far reflect ways of planning for adoption in the context of a multi-site clinical trial. The feasibility of these strategies was demonstrated in the MI/MET study, where they were considered at the outset and implemented, at least partly, in the study itself. The strategy of giving study findings back to the clinic, early and when the clinic has the resources available to continue the intervention, confronts the research imperative that outcomes should not be released until all data have been collected. This imperative protects the integrity of the research by preventing contamination or bias that could occur if outcomes from one clinic were known while data were still being collected in another clinic. This delay between study completion and the reporting of results, however, usually measured in years, also means that the adoption advantages created in participating clinics are eroded well before the effectiveness data are known. We characterized this time period with a code called “Waiting for Godot,” from the Beckett play where two characters wait endlessly for a third who never arrives. The frustration expressed by the clinical partner below reflects the length of time before study data became available, and also uncertainty about when those data would become available.
Uh, no, I don’t know the findings of the study! But, we are two days before we think we’re going to get some of the findings from the study. That’s another example of uncertainty and mistrust. Many of us believed that when it was done and [the data were] locked down, we would all have access to all of the data. Now it looks like that might not be so. There’s a certain amount of disillusionment and unhappiness about that… (MTP Site Evaluator).
The frustration that may be associated with waiting for study findings was less apparent in MI/MET interviews, where participating clinics were selected into the CTN partly because of their prior experience with research. One MI/MET counselor, for example, when asked about the time it takes for study findings to become available to clinic participants, commented simply, “Sure. But isn’t that the nature of research?” Below, another CTN respondent describes the bureaucracy that slows communication of findings to participating clinics.
Well, I have the advantage, or disadvantage, of being trained in research and knowing that it takes forever to do anything … It’s hard to translate that to the counselors, though…they wanted the results a year ago, the day after they stopped randomizing, and that just doesn’t happen … and the only thing that frustrates me is all the politics involved with cleaning the data … and who’s got permission, who doesn’t have permission, and how much data cleaning there is … I know that it takes time to get stuff out. … I think that there’s …got to be some way to educate the staff about that. But also to provide them something else to make them feel good, as opposed to the results… And I don’t know what that is. (MI/MET Node Level Investigator).
Following the example of Fals-Stewart et al. (2004) in exploring adoption using a qualitative “autopsy” approach, in this section we consider two clinics where adoption occurred, and review the recommended strategies as well as other factors supporting adoption in those settings.
In the MTP study, planning for adoption was not part of the study design, and intervention training and supervision were centralized through the coordinating center. There was no effort to bring Matrix training back to participating clinics following the clinical trial, although numerous regional conferences concerning methamphetamine and featuring the Matrix intervention were produced, and these were hosted by clinics participating in the study. At the time of our interviews, study results had not been released and respondents were unaware of study findings. Adoption of the Matrix intervention occurred in one clinic (Guydish et al., 2005), where a Matrix trained counselor remained with the clinic in the post study period. This is consistent with the strategy of training senior clinical staff in the intervention, as the aim of this suggestion is to support retention of intervention-trained staff. Other features also may have supported adoption in this clinic. The clinic had opened a new program specifically to participate in the MTP trial, so that there was less commitment to another treatment approach within the clinic culture. Adoption of Matrix was also supported because community payors and referral sources, particularly those representing criminal justice, liked the Matrix intervention and wanted to see it continue.
Interviews with MI/MET protocol team leaders indicated that adoption in the post-study period was considered in the planning phase, especially in terms of using regional training and local supervision models. The study protocol also stated the intention to bring MI/MET training back to participating clinics in the post-study period, although this had not occurred at the time of our interviews. The MI/MET study did not emphasize training senior clinical staff to provide the intervention, and did not incorporate a plan to bring study findings back to clinics at an early stage. In one participating clinic where full adoption of the tested intervention occurred, the clinic had conducted staff training in MI prior to study participation, and may have self-selected into this protocol partly as a strategy to develop MI capabilities. One of the regional MI/MET trainers was located in the adopting clinic, and this senior staff member also served as the local supervisor for that clinic. In this adopting clinic, then, MI training and supervision expertise was available through existing staff. Last, the Single State Agency where this clinic was located was, concurrent with the clinical trial, formulating guidelines for evidence based practices in drug treatment settings, and these impending changes may have indirectly supported continued use of MI.
In the case of these adopting clinics the benefit of training senior clinical staff, with the goal of retaining intervention expertise in the clinic, may be observed in both settings. Planning for adoption, including regional training and local supervision models, supported adoption in the MI/MET clinic. These examples also suggest that factors external to the clinic may influence adoption, for example support of local referral sources in one case and state-level changes in another case. The strategies suggested in this paper to support adoption, specifically in the context of multi-site clinical trials, may not be necessary or sufficient in all cases, and their effect on adoption could be countered or obviated by other conditions. The suggested strategies may be additive, however, creating a common framework for supporting adoption in the context of multi-site clinical trials through specific and practical strategies.
The strategies to encourage adoption suggested in this paper are based on an attempt to integrate respondent comments and observations across two protocols, and across their respective organizational levels. Differences can be observed, however, by protocol and by organizational level. Respondents involved in planning the MTP study took a traditional effectiveness approach to adoption, with the expectation that the trial would determine effectiveness and dissemination and adoption would follow by other means. MI/MET respondents, by contrast, considered the adoption needs of participating clinics in the post-trial period, building into the protocol regional training, local supervision, and the intent to bring training back to participating clinics. Also differing between protocols was the level of concern or urgency about learning study results, as MTP evaluators and clinic directors showed greater concern about having these data come back to the clinic. This may be because MTP clinics wanted to apply for continuation funding and knew their applications would be strengthened by results applicable to their clinic. It may also be that MI/MET clinics, selected into the CTN partly for their prior research experience, better understood the usual time required for clinical trials to produce results.
Across both protocols, staff training and supervision were recurrent adoption needs reported by clinic directors and staff. Staff were uniformly interested in intervention training, seeing this as a way to increase and expand their clinical skill set. Clinic directors and clinical supervisors grappled more often with supervision, and considered the time and resources needed to supervise staff who were implementing new interventions. A broader perspective on planning for adoption in the context of clinical trials came from elite respondents in the MI/MET trial. In some cases, where clinic directors were already experiencing pressure to adopt evidence based practices, or had considered increasing use of MI/MET in their clinic, participation in the trial may have been a strategy to build clinic capacity to implement these practices.
We studied adoption of research-based drug abuse treatment interventions in the context of multi-site clinical trials. We reasoned that adoption of new treatments seems most likely in clinics where the treatment was tested and, conversely, if an intervention is not adopted in clinics where it was tested, then its adoption in other settings seems less likely. We were interested in how the technology of multi-site clinical trials could better support adoption of tested interventions, and specifically in how the NIDA CTN could better meet its aim of supporting transfer of tested interventions into practice. Using qualitative methods we identified six strategies which, if incorporated into the development stage of clinical trials, may provide greater support for clinics in adopting the tested intervention once the trial ends. These strategies include planning for adoption, training senior staff to deliver the intervention, using regional training and local supervision models, and bringing intervention training and study findings back to the clinic once the trial has ended. Consideration of these strategies may also allow multi-site clinical trials to do double duty, first by testing effectiveness of interventions and second by promoting adoption of tested interventions.
A multi-site clinical trial, or an organization designed to conduct multiple clinical trials over time, can limit its goals to effectiveness testing. If the goals are defined to include support of adoption of tested interventions into practice, as in the case of the CTN, one strategy is to plan for adoption early in the process of protocol development. Some authors have noted that research planning should include consideration of the needs of decision makers (Tunis, Stryer & Clancy, 2003), and address issues relevant to clinical settings and concerns (Brown & Flynn, 2002; Morgenstern, 2000). Glasgow, Lichtenstein & Marcus (2003) go further, suggesting that effectiveness trials report the level of adoption of tested interventions in the post-study period, and that funders require study plans to include sustainability and implementation components designed to support tested interventions once the study is completed. The strategies to encourage adoption, derived from this study and reflecting observations of participants, offer guidance toward this goal.
Part of this planning may include training senior clinical staff to deliver the new intervention, rather than hiring new staff specifically for study purposes. Turnover among substance abuse counselors may be 25% to 50% in a given year (Gallon, Gabriel & Knudsen, 2003; McLellan & Meyers, 2004). At the same time, many clinics enjoy greater staff stability at senior levels. Training stable existing staff to deliver the new intervention offers a better chance of retaining intervention capability, accrued through intensive training and practice, within the clinic.
Multi-site clinical trials often use a centralized clinical training approach, with all training conducted at the same site and by the same trainers, offering the advantages of simplified logistics and increased control of the training process and curriculum. A decentralized training approach offers advantages for adoption, however, as geographically nearby trainers can more easily offer ongoing training support or additional training in the post-study period. Not all interventions are supported, as MI/MET is, by a pre-existing and decentralized training infrastructure. When such an infrastructure does not exist, the protocol team may consider an approach that places a small number of trainers regionally, where they can provide support to study clinics. Doing so retains key training resources near the sites where future adoption may occur.
The advantages of centralized supervision are similar to those of centralized training: simplified logistics and increased scientific control. The advantage of local supervision, in terms of adoption, is retaining supervisory expertise in the clinical setting once the trial ends. There may be other advantages as well, in terms of what Rogers describes as adaptation or reinvention (Rogers, 1995b) and the impact of intervention “champions” and peer to peer communication (Rogers, 2002). Even in the context of rigorous effectiveness research, experimental interventions will be adapted to local conditions. Supervisors having local knowledge and influence may be more effective in negotiating the distance between the needs of the clinic and the needs of the study, and minimizing adaptation of core elements of the intervention. External supervisors may retain peer status with clinicians based on their clinical experience, but this peer status may be compromised if supervisors are seen as “external” experts.
By the end of the clinical trial, the participating clinic has made a large investment in learning and implementing the intervention. Bringing intervention training back into the clinic after the trial has ended, and extending the reach of training to as many staff as possible, capitalizes on the investment made by the clinic and offers additional support for adoption of the intervention in the post study period.
Perhaps the most challenging strategy offered here is that of bringing study findings back to the clinic setting early enough to inform adoption decisions. Others have observed the long lag time between development and application of innovative practices (Lenfant, 2003; Rogers, 2002), the Clinical Research Roundtable identified slow results as one factor impeding translation of research results into practice (Rosenberg, 2003; Sung et al., 2003), and shortening “research turn-around time” has been advocated as a strategy for improving drug abuse treatment (Turnure & Harrison, 1998). Boyer & Langbein (1991) noted that technology transfer is supported when research findings are available at the time that agencies engage in decision making, and this is embodied in the timeliness principle of the Change Book (Addiction Technology Transfer Centers, 2000). Although one solution will not fit all studies, some ways to bring study findings back to participating clinics include presenting sample characteristics and study process measures, releasing outcome findings to individual clinics as they complete data collection, or releasing findings based on intermediate rather than final follow-up points. From the viewpoint of adoption, a detailed dissemination plan included in the original study protocol, specifying what data will be released and when, is stronger than an implicit guideline that no findings are released until all data have been collected, analyzed, vetted, written, reviewed, and published.
The work reported here bears both similarities and differences with respect to that of Fals-Stewart et al. (2004), who revisited 5 clinics to study adoption of Brief Couples Therapy (BCT) three to five years after conducting BCT trials in those settings. As their work concerned single site studies only, the organizational levels they describe did not include those associated with multi-site trials (e.g., steering committee or coordinating center representatives, or site-level principal investigators or evaluators). They did include patient-level investigation which, while it may be uniquely informative, was not included in the approach used in the current paper. Our study overlapped with Fals-Stewart et al. (2004) in terms of three organizational levels, reflecting input from counselors or clinical staff, supervisors, and administrators or clinic directors. At the counselor level, Fals-Stewart et al. (2004) observed that counselors were sometimes not trained in BCT or were altogether unaware of BCT, reflecting staff turnover as well as administrative commitment to continuing the BCT intervention. Our recommendations to use a regional training model and to bring training back to study sites may address the need for training staff as part of an adoption effort. At the supervisory level, they observed that BCT adoption occurred in one clinic where a counselor, who was trained in BCT in the course of the clinical trial, was later promoted to a supervisory position and encouraged use of BCT as a supervisor (Fals-Stewart et al., 2004). This observation would seem consistent with our recommendation to use a local supervision model, placing and retaining supervisors with experience in the intervention in the clinic where they can support counselors in implementing the intervention. Our recommendations speak only indirectly to the administration-level barriers discussed by Fals-Stewart et al. 
(2004), which focused on testing interventions that are reimbursable through clinic billing procedures, and which are supported by the clinic administration. One advantage of the CTN may be its investment in bidirectional communication, whereby some clinic directors are always involved in determining interventions to be tested.
This work is bounded by several limitations. The strategies for adoption suggested here are derived from two case studies. They represent a rational plan for enhancing intervention adoption in the context of multi-site clinical trials, but have not been empirically tested. For the CTN MI/MET clinical trial we studied adoption in 5 of the 11 participating clinics. Including all 11 MI/MET clinics in the study, along with 8 MTP clinics, would increase study costs without any known or expected added benefit. The 5 MI/MET clinics studied were selected because they were located on the West Coast, so that visiting sites to conduct in-person interviews was more cost-efficient. It is possible that MI/MET adoption, or barriers to adoption or recommendations to support adoption, could differ between the sites that were studied and those that were not. As we did not study the 6 remaining sites, however, we have no data to speak to this question. Last, we studied psychosocial interventions only, and some of the strategies suggested may not apply to clinical trials of pharmacological interventions. Studying the effect of the proposed strategies on adoption, if any, is an area for further research.
Traditional outcomes research directs that intervention effectiveness should be established before dissemination and adoption efforts begin. The strategies suggested in this paper, conversely, consider adoption early in the protocol planning process and through the end of the study. Interventions tested in multi-site clinical trials are selected based on the presence of compelling prior efficacy research. Within the CTN, interventions are also selected in partnership with clinical treatment programs. If the clinical trial should show effectiveness of the promising intervention, the treatment field immediately faces the challenge of dissemination and adoption. The proposed strategies consider the adoption challenge in early study planning, and offer supports to programs interested in adoption during the post-study phase, so that these programs can become ambassadors for dissemination once study results are known. These strategies may also offer a way for the CTN to address its mission of technology transfer, and to expand the use of research-based practices in drug abuse treatment.
This work was supported by National Institute on Drug Abuse (R01 DA-14470), by the California-Arizona research node of the NIDA Clinical Trials Network (U10 DA-105815), and by the NIDA San Francisco Treatment Research Center (P50 DA-09253).