Research on substance use disorders has produced a slew of disappointments in studies designed to confirm basic principles of the technology approach to treatment dissemination. These setbacks should inspire addictions science to pursue complementary paths of inquiry that focus on evidence-based practices delivered under naturalistic conditions. This will require larger accommodations to, and closer partnerships with, the indigenous cultures of everyday care.
Carey, Henson, Carey, and Maisto (2010) examine mechanisms of change in a brief motivational intervention for college-age alcohol users by analyzing mediational effects in two key domains: drinking motivation and drinking norms. The study contains most of the prized features of rigorous mechanisms research: an empirically based treatment (EBT), significant outcome effects in the study sample, mediator constructs with strong conceptual and procedural links to the centerpiece EBT, multidimensional operationalization of study variables, multiple measurement points for mediators and outcomes, and state-of-the-science longitudinal analytic techniques. The study also contains the all-too-common result for mechanisms studies involving substance use treatments: disappointing effects for the presumptive mediators (in this case, readiness to change and perceived costs/benefits of drinking).
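The mediational logic at the heart of such studies can be illustrated with a minimal product-of-coefficients sketch. This is not the analytic model Carey et al. used; the variable names, effect sizes, and simulated data below are hypothetical, chosen only to show the structure of an indirect-effect test:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Simulated trial data (hypothetical): treatment assignment, a candidate
# mediator (e.g., readiness to change), and a drinking outcome.
treatment = rng.integers(0, 2, n)                 # 0 = control, 1 = brief intervention
mediator = 0.5 * treatment + rng.normal(size=n)   # path a built into the simulation
outcome = -0.4 * mediator + rng.normal(size=n)    # path b built into the simulation

def ols_coefs(y, predictors):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Path a: treatment -> mediator.
a = ols_coefs(mediator, [treatment])[1]
# Path b: mediator -> outcome, controlling for treatment.
b = ols_coefs(outcome, [treatment, mediator])[2]

# The indirect (mediated) effect is the product a*b; a bootstrap CI
# indicates whether mediation is statistically credible.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    a_s = ols_coefs(mediator[idx], [treatment[idx]])[1]
    b_s = ols_coefs(outcome[idx], [treatment[idx], mediator[idx]])[2]
    boot.append(a_s * b_s)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {a * b:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A "disappointing" mediation result of the kind described above corresponds to a bootstrap interval for a*b that includes zero, even when the treatment shows a significant total effect on the outcome.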
Addictions research has witnessed a slew of disappointments in small- and large-scale studies designed to confirm basic principles of the technology approach to treatment dissemination. The technology approach attempts to articulate EBT implementation in precise fashion so that the curative elements of a given model can be specified, evaluated, and replicated (see Carroll & Nuro, 2002; Rounsaville, Carroll, & Onken, 2001). This includes standardizing the model in a treatment manual; identifying the population for whom treatment is intended; documenting procedures for selecting, training, and supervising practitioners; and monitoring implementation with valid fidelity tools. Randomized controlled trials, client-treatment matching studies, and mechanisms of change analyses are the research designs of choice for verifying and explaining treatment effects. The exacting standards of the technology approach are intended to promote the feasibility and strength of EBTs when delivered in various clinical settings.
The technology approach has spearheaded enormous gains in laboratory-based research on EBT implementation (via efficacy trials) and has started to make headway in real-world settings (via effectiveness studies). However, several noteworthy glitches in the technology approach have persistently occurred in addictions treatment. Because many of these glitches have arisen in high-profile, well-controlled studies, they are not easily dismissed as random or marginal. Three glitches seem particularly troublesome.
As Morgenstern and McKay (2007) point out, a cornerstone premise of the technology approach is the specificity hypothesis: EBTs produce impacts largely due to the curative effects of model-specific, theory-based techniques that differ from common elements or placebo effects of psychotherapy. The specificity hypothesis is the underlying premise of both mediational research and client-treatment matching studies. In the substance use field, tests of mediation have yielded positive and negative results in equal abundance (Morgenstern & McKay, 2007), and matching studies have failed across the board to confirm hypotheses about which treatments are best suited for which clients (Carroll & Rounsaville, 2007; Morgenstern & McKay, 2007). Even surefire hypotheses regarding the benefits of strong fidelity to core EBT ingredients are frequently upended. Recent studies of both adolescent (Hogue et al., 2008) and adult substance users (Barber et al., 2006) have reported a curvilinear relation between treatment adherence and some client outcomes: Too much adherence (as well as too little) can be a bad thing. And therapist competence in delivering EBTs is often weakly related or unrelated to outcome—when competence can be reliably assessed and differentiated from adherence, a difficult trick to master (Barber, Sharpless, Klostermann, & McCarthy, 2007).
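The curvilinear adherence finding amounts to a significant quadratic term in a regression of outcome on adherence. A minimal sketch of how such an inverted-U relation is detected, using simulated data (the rating scale, effect sizes, and noise level are illustrative assumptions, not values from the cited studies):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical data: adherence rated on a 1-7 scale, with outcomes best at
# moderate adherence (the inverted-U described in the text).
adherence = rng.uniform(1, 7, n)
outcome = -(adherence - 4.0) ** 2 + rng.normal(scale=1.0, size=n)

# Fit outcome = b0 + b1*adherence + b2*adherence^2 by least squares.
X = np.column_stack([np.ones(n), adherence, adherence ** 2])
b0, b1, b2 = np.linalg.lstsq(X, outcome, rcond=None)[0]

# A negative quadratic coefficient signals an inverted-U: outcomes peak at
# moderate adherence and decline at both extremes.
peak = -b1 / (2 * b2)
print(f"quadratic coefficient = {b2:.3f}, outcome peaks near adherence = {peak:.2f}")
```

Fitting only the linear term to data like these would understate or entirely miss the adherence effect, which is why studies testing a strictly linear fidelity-outcome hypothesis can be "upended" by curvilinearity.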
The task of transporting EBTs to everyday settings—known as technology transfer—has proven formidable. A host of factors influences the amenability of community agencies to adopting EBTs, including strength of partnership between EBT developers and providers, belief by clinical personnel in the value of integrating EBTs into existing services, and suitability of agency resources (including personnel) and organizational context for implementing EBTs with fidelity (Simpson, 2002). It appears that technology transfer can work under conditions of extensive support from model developers utilizing quality assurance (QA) “superstructures” to cultivate training, implementation, and monitoring activities on-site. QA superstructures invariably contain four components (see Fixsen, Naoom, Blase, Friedman, & Wallace, 2005): (a) guidelines for selecting adoption-ready sites and identifying qualified staff for training; (b) standardized training toolkits that include a treatment manual, protocol for training workshops, demonstration videos and clinician workbooks, on-site supervision procedures, and fidelity checklists; (c) procedures for ongoing training and consultation from model experts that include observational coaching of clinic cases; and (d) continuous quality improvement procedures to evaluate implementation data, feed selected data back to therapists, and buttress organizational support. However, commitment to QA superstructures demands substantial and costly changes in agency infrastructure, administrative and clinical supervision, material resources, and ongoing technical support. And as yet there is no evidence that EBTs can be sustained in usual care after external support ends, or what level of partnership is needed to maintain a “good enough” QA structure indefinitely.
Perhaps the most surprising technology glitch has been the strong showing of treatment as usual (TAU) conditions in EBT effectiveness research. Available drug treatment services have repeatedly produced outcomes on par with competing EBTs transported with great care into existing agencies (e.g., Miller, Yahne, & Tonigan, 2003; Morgenstern, Blanchard, Morgan, Labouvie, & Hayaki, 2001; Westerberg, Miller, & Tonigan, 2000). Also, a manualized version of drug counseling (aka 12 Step Model), the most widely practiced approach in the substance abuse treatment system, has matched or exceeded various EBTs in two multi-site controlled trials (Crits-Christoph et al., 1999; Project MATCH Research Group, 1997), such that 12 Step merits serious consideration as an EBT itself. And recent efforts by the Clinical Trials Network of the National Institute on Drug Abuse to test the effectiveness of motivational interviewing (MI) in usual care for adult substance users have logged modest EBT victories or split decisions versus TAU: MI delivered during an initial evaluation session produced better early retention in treatment but no superiority in one- or three-month outcomes (Carroll et al., 2006); three-session MI was superior in three-month outcomes for primary alcohol use but not primary drug use (with findings further complicated by site effects), and there were no differences in retention (Ball et al., 2007); and retention and outcome effects for MI were virtually indistinguishable from TAU in a Hispanic sample (Carroll et al., 2009).
Acknowledging that technology glitches are persistent and problematic is not tantamount to disparaging the technology approach. There is little reason to doubt that ongoing advances in mechanisms of change research will yield better understanding of how and why treatments work—and likewise for other facets of the technology approach. For example, germane to Carey et al. (2010), MI has a strong record of success in mediational studies, and results from these and from therapist training studies have been translated into meaningful improvements in the theory and practice of MI (Miller & Rose, 2009). The mixed findings by Carey et al. (2010) do not detract from this body of work so much as challenge the architects of MI (and its offshoots) to understand these findings in context and upgrade model development as needed.
Nevertheless, the technology setbacks encountered for all varieties of EBTs should inspire addictions science to pursue complementary paths of inquiry that are considerably less traveled but potentially as rewarding. These paths would converge in focusing on evidence-based practices (EBPs) delivered under naturalistic conditions. As described below, moving efficiently from lab-developed EBTs to practice-friendly EBPs may require larger accommodations to, and closer partnerships with, the indigenous cultures of everyday care (see also Southam-Gerow, 2004, for a similar perspective regarding mental health treatment).
Concerns about the feasibility of EBT technology transfer have led researchers in both mental health and substance use to advocate a “core elements” approach to increasing use of EBPs in agency settings. The core elements approach emphasizes dissemination of reduced sets of essential treatment techniques common across EBT models for similar populations. The benefits of shifting away from wholesale name-brand EBTs toward distilled core techniques could be profound (Chorpita, Daleiden, & Weisz, 2005): unify and simplify the task of transporting curative ingredients of EBTs into routine care with fidelity; forgo the herculean demand to master a different treatment manual for each clinical disorder; retain the importance of provider judgment about duration, intensity, and sequencing of EBPs; and provide evidence-based options for client groups with diagnostic complexity and/or for whom no manualized EBT exists. Disseminating EBT core elements could also enhance dissemination of full-scale EBTs by augmenting the basic technical competencies of community practitioners and galvanizing the process of adapting discrete manuals to fit usual care (Chorpita & Daleiden, 2009). Core techniques might also be deployed as a first-line option in primary behavioral care, with nonresponders referred to more comprehensive, EBT-based specialty care (Carroll & Rounsaville, 2006). Note that the core elements approach subscribes (at least in principle) to the specificity hypothesis, and progress in generating an evidence base for core techniques in various client populations appears linked to progress in mechanisms of change research.
As mentioned above, QA superstructures that accompany complex EBTs may ultimately prove unsustainable in many clinical settings. It seems likely that model-specific algorithms for reducing, modifying, and reinventing EBTs will become de rigueur to supplement existing QA protocols (Garner, 2009). Also, interactive computer-based training and distance learning methods hold great promise for increasing the accessibility and perhaps precision of technology transfer (Weingardt, 2004). On the other hand, QA superstructures may thrive in large government-operated sectors of care where substance use is prevalent: criminal justice, juvenile justice, welfare, child welfare, even schools. Each sector presents a unique service context—and dissemination opportunity—with regard to resource availability, organizational capacities, and barriers to effective treatment implementation (Institute of Medicine, 2006). Because government is often the sole funder of services and a primary stakeholder in the accountability and quality of those services, strong research-government partnership increases the likelihood that EBTs will take root and be sustained at a systems level (Morgenstern, Hogue, Dauber, Dasaro, & McKay, 2009).
Technology-driven effectiveness studies exert multifaceted top-down influence over treatment implementation. Little is known about whether EBTs can be delivered with fidelity in unadulterated field settings, that is, without significantly changing the working conditions of line therapists. To remedy this, naturalistic studies featuring observational data collection are needed to investigate what interventions work (if any), and how they work, in community agencies, including whether (some) agencies already incorporate EBPs in everyday practice (Weingardt & Gifford, 2007). Does TAU produce good outcomes (or not) via adapted versions of EBTs? Nonspecific “common factors” such as therapeutic alliance? Placebo effects coupled with client self-change processes? Some combination? It is possible that TAU is predominantly bereft of EBPs (e.g., Santa Ana et al., 2008), but that remains to be seen. Another underutilized strategy to increase knowledge about success and failure in usual care is patient-focused research, which includes examination of client self-change, assessment of treatment responsiveness on a session-by-session basis, and feedback of responsiveness data to therapists that permits midstream adjustments (Morgenstern & McKay, 2007; Orford, 2008). In the long run, generating evidence that EBPs are feasible and cost-effective in pure field conditions may be the best approach to encourage mainstream treatment agencies to adopt and support EBTs.
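The patient-focused feedback strategy can be sketched as a simple rule that compares each client's session-by-session scores against an expected-change trajectory and flags "not on track" cases for the therapist. The trajectory, thresholds, and scores below are illustrative assumptions, not parameters from any published feedback system:

```python
# Minimal sketch of patient-focused progress monitoring. A client's observed
# symptom scores are checked against a hypothetical expected-change curve;
# sessions where the client is doing worse than expected are flagged so the
# therapist can make midstream adjustments. All numbers are illustrative.

def expected_score(baseline, session, rate=0.8):
    """Hypothetical expected trajectory: scores decline by `rate` per session."""
    return baseline - rate * session

def flag_off_track(baseline, scores, tolerance=2.0):
    """Return session numbers where the observed score exceeds the
    expected score by more than `tolerance` points."""
    return [
        session for session, observed in enumerate(scores, start=1)
        if observed > expected_score(baseline, session) + tolerance
    ]

# A client starting at 20 whose improvement stalls after session 2:
flags = flag_off_track(20, [19.5, 18.5, 19.0, 19.5])
print(flags)  # -> [4]
```

Real feedback systems derive the expected trajectory empirically (e.g., from normative recovery curves) rather than from a fixed linear rate, but the mechanism of routine measurement plus a deviation rule is the same.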
Will the hoped-for transition from lab-based EBTs to real-world EBPs lead to improvements in standard care outcomes? Lamentably few studies in the addictions field address this question (Carroll & Rounsaville, 2007). The technology approach has admirably led the charge in EBT dissemination research, but shortcomings are apparent, and a diversification in approach seems sensible. Organic alternatives cannot replace technology-driven methods, nor will they solve or prevent future technology glitches. However, they may well increase the accuracy and efficiency with which we determine how and where to plant EBTs for sustainable yield.
Preparation of this article was supported by grant R01 DA019607 from the National Institute on Drug Abuse.