

Drug Alcohol Depend. Author manuscript; available in PMC 2007 July 17.
Published in final edited form as:
PMCID: PMC1922034

Developing Adaptive Treatment Strategies In Substance Abuse Research


For many individuals, substance abuse possesses characteristics of chronic disorders in that individuals experience repeated cycles of cessation and relapse; hence, viewing drug dependence as a chronic, relapsing disorder is increasingly accepted. The development of a treatment for a chronic disorder requires consideration of the ordering of treatments, the timing of changes in treatment, and the use of measures of response, burden, and adherence collected during treatment to make further treatment decisions. Adaptive treatment strategies provide a vehicle through which these issues can be addressed and thus offer a means toward improving and informing the clinical management of chronic substance abuse disorders. The sequential multiple assignment randomized trial (SMART) is particularly useful in developing adaptive treatment strategies. Simple analyses that can be used with the SMART design are described. Furthermore, the SMART design is compared with standard experimental designs.

Keywords: Stepped Care, Individualized Care, Treatment Algorithms, Dynamic Treatment Regimes, Experimental Design

1. Introduction

For many individuals, substance abuse possesses characteristics of chronic disorders in that individuals experience repeated cycles of cessation and relapse (Hser et al., 1997); hence viewing drug dependence as a chronic, relapsing disorder is increasingly accepted by researchers (Donovan, 1986; Brown, 1998; O'Brien and McLellan, 1996; McLellan et al., 2000; McLellan, 2002). Recently, substance abuse researchers have proposed strategies composed of adaptive sequences of therapies to more effectively manage the variable course of this disorder as well as the heterogeneity of response to standard interventions. In some cases these strategies incorporate cost and burden considerations, such as the stepped care models (Sobell and Sobell, 2000; Breslin et al., 1999), which advocate beginning with a minimally intensive but effective therapy and transitioning to more intensive or other types of therapy only if indicated. Other related strategies involve increasing both the intensity of therapy and encouragement to adhere when indicated (Brooner and Kidorf, 2002; Kidorf et al., 2004). Further strategies are designed to deal with acute problems when they arise, and to return to maintenance therapy once acute problems are resolved (McKay et al., 2004).

We group these treatment/management strategies under the umbrella term “adaptive treatment strategies” (Lavori et al., 2000; Collins et al., 2004). Adaptive treatment strategies appear in a variety of health-related areas; for example, they are employed in the treatment of depression (called treatment algorithms; Lavori et al., 2000; Rush et al., 2003) and in acute HIV infection (called structured treatment interruptions; Altfeld and Walker, 2001; or treatment strategies; Albert and Yun, 2001). In all cases, these strategies individualize treatment via decision rules that recommend when and how the treatment should change. The recommendations are based on patient characteristics and on outcomes collected during treatment, such as patient response and adherence. Adaptive treatment strategies operationalize the clinical practice of adapting and re-evaluating treatment options based on patient progress, thereby facilitating systematic study and refinement.

In developing an adaptive treatment strategy, questions that often need to be addressed include the best sequencing of treatments when individuals are not responding and the best timing of transition from more intensive therapies to less intensive therapies or maintenance therapy and vice versa. Because comorbidities such as homelessness, depression, and HIV infection are not uncommon, questions also arise concerning the sequencing or concurrent use of adjunctive therapies targeted at comorbid disorders. We argue that these questions are best addressed using randomization and furthermore since these questions are sequential in nature, the randomizations should also be sequential. We propose an experimental randomized trial design, the sequential multiple assignment randomized trial (SMART), that is suited for developing new or refining existing adaptive treatment strategies. After an adaptive treatment strategy is developed/refined, the strategy can be evaluated against an appropriate alternative such as standard care in a two group randomized trial.

In Section 2, we review adaptive treatment strategies and provide a concrete example. Section 3 introduces SMART, provides examples, and gives an intuitive discussion of how questions such as those listed above can be addressed using data from a SMART design. Section 4 provides, for the first time, a discussion of the advantages and disadvantages of SMART relative to two common alternative experimental designs. And in Section 5, we provide new, yet straightforward, methods for data analyses. It should be noted that although the focus of this paper is on the development of adaptive interventions for the treatment of addictions, much of the discussion applies to treatments for other chronic disorders.

2. Adaptive Treatment Strategies

Adaptive treatment strategies individualize therapy via decision rules that specify how the intensity or type of treatment should change depending on patient variables. These patient variables can include patient characteristics such as family history, versions of the outcome measure assessed during treatment (e.g., substance use), or other variables thought to predict the ultimate outcome (e.g., self-efficacy, counseling attendance and so forth). Here is a simple example.

EXAMPLE 1: Alcohol-dependent patients are treated with an opiate antagonist, Naltrexone (NTX), and medical management (MM; Miller, 2004). If the patient is able to avoid more than one heavy drinking day during the ensuing two months, then the patient is provided a prescription for NTX and Telephone Disease Management (TDM; Oslin et al., 2003). If, however, at any time during the two months the patient incurs a second heavy drinking day, then the conclusion is that the patient’s disorder is not responding to NTX. In this case, if the patient is experiencing minimal side effects from NTX, then the patient is provided NTX + Combined Behavioral Intervention (CBI; Pettinati et al., 2004). Conversely, if the nonresponding patient is experiencing moderate or severe side effects, the patient is provided CBI alone (i.e., NTX is discontinued).
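The decision rule in Example 1 can be written as a short function. This is only an illustrative sketch; the function name and the string labels below are hypothetical, introduced to make the branching logic explicit.

```python
def second_stage_treatment(heavy_drinking_days, side_effects):
    """Map two-month drinking and side-effect data to the next treatment
    (hypothetical encoding of the Example 1 decision rule)."""
    if heavy_drinking_days <= 1:
        # Responder: continue NTX by prescription and add telephone management.
        return "NTX prescription + TDM"
    if side_effects == "minimal":
        # Nonresponder tolerating NTX: intensify by adding CBI.
        return "NTX + CBI"
    # Nonresponder with moderate or severe side effects: discontinue NTX.
    return "CBI alone"
```

For instance, a patient with two heavy drinking days and minimal side effects would be assigned NTX + CBI under this rule.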

In the above example, the decision rules adapt the treatment to the patient using heavy drinking days and side effects. Further discussion of adaptive treatment strategies can be found in Collins et al. (2004).

3. Sequential Multiple Assignment Randomization Trials (SMART)

Sequential multiple assignment randomized trials (SMART) (Lavori and Dawson, 2000, 2003; Ten Have et al., 2003; Murphy, 2005) are intended to be used in the building and refinement of adaptive treatment strategies. In these trials, subjects are randomized multiple times. A number of SMART trials have been, or are being, conducted. These include the CATIE trial for antipsychotic medications in patients with Alzheimer's disease (Schneider et al., 2001), STAR*D for the treatment of depression (Rush et al., 2003; Lavori et al., 2001), and phase II trials at MD Anderson for treating cancer (Thall et al., 2000).

In order to make the discussion of the SMART trial design concrete, consider the following simple example, modeled after a SMART trial currently underway by one of us (Oslin), which includes a number of the interventions described in Example 1 above. This trial was designed to address two questions related to the development of a strategy for treating alcohol-dependent patients with the opiate antagonist Naltrexone (NTX). First, there is a variety of potential timing definitions concerning when a patient should be considered an NTX non-responder. Second, once a subject either responds or fails to respond to NTX, a variety of subsequent treatments are possible. The goal is to minimize the number of heavy drinking days over the 12-month study period.

EXAMPLE 2: Each subject is randomized twice, first to a definition of nonresponse/response (first decision point) and second to a subsequent treatment (second decision point). The two possible definitions of nonresponse are: 2 or more heavy drinking days within a two-month period, or 5 or more heavy drinking days within a two-month period. As soon as a subject meets the assigned definition of nonresponse, the subject is randomized to either NTX + CBI or CBI alone. If the subject does not meet his/her assigned definition of nonresponse in the two-month interval (a responder), then the subject is rerandomized to either a 6-month prescription of NTX or a 6-month prescription of NTX + TDM.

In a SMART design, subjects are randomized at selected critical decision points. In example 2, the critical decisions are first the timing of alterations in therapy and second the choice of the subsequent therapy for responders/nonresponders. Thus each subject is randomized twice, initially and then again once it is known if the subject is a responder or a nonresponder to initial therapy. Frequently and indeed in the example discussed here, the multiple randomizations can be performed prior to initial treatment; see Section 5 for discussion. Note that even though the SMART experimental design involves randomization, once an adaptive treatment strategy has been developed, the delivery of the treatment strategy (e.g. contrast the strategy in Example 1 with the SMART trial in Example 2) does not involve randomization.

Just as with any randomized trial, it is best to power SMART to address only a few primary strategy-building hypotheses so as to minimize sample size requirements. Potential primary hypotheses might address, “Is it useful to continue providing NTX to nonresponding patients in addition to CBI as compared to only providing CBI?” or “Do patients assigned a more stringent definition of nonresponse (2 or more heavy drinking days) do better overall than patients assigned a more lax definition of nonresponse (5 or more heavy drinking days)?” Both of these questions are addressed via two group comparisons (in the latter each group contains ½ of the total study sample; in the former each group contains ½ of the patients in the nonresponding group).

Secondary analyses may test for interactions of clinical and theoretical interest such as “Does the comparison between CBI and NTX+CBI for nonresponders change depending on the definition of nonresponse to initial NTX?” If there are significant interactions between patient variables and treatments then more complex adaptive treatment strategies are indicated. For example, patients who experience certain side effects while receiving NTX may respond differentially to NTX+CBI relative to CBI. A sufficiently strong interaction would indicate that these side effects should be used in deciding which secondary treatment is best for which nonresponder.

Following the analysis of the data from a SMART design, one may choose to proceed directly to a two-group confirmatory trial in which the developed strategy is compared to an appropriate alternative, or one may decide that additional randomized trials are needed. Consider Example 2 once more. Suppose the following tests were planned: (1) test whether NTX alone versus TDM+NTX produces differing outcomes among responders to the initial NTX; (2) test whether, among nonresponders to initial NTX, CBI alone produces different outcomes than CBI+NTX; (3) test whether patients assigned a more stringent definition of nonresponse (2 or more heavy drinking days) do better overall than patients assigned a more lax definition of nonresponse (5 or more heavy drinking days); and (4) test for the following interaction: “Does overall level of side effects to NTX differentiate between nonresponders who do better on CBI alone and nonresponders who do better on CBI+NTX?” We conduct the study and find the following (suppose the primary outcome is the number of heavy drinking days).

EXAMPLE 2 CONTINUED: There is no difference between providing NTX or TDM + NTX for responders, and, on average, better outcomes result if we provide CBI + NTX rather than CBI alone to nonresponders. However, we find that nonresponding individuals who are experiencing substantial side effects to NTX do better if switched to CBI alone as opposed to continuing with NTX and adding CBI. Also, altering treatment earlier is associated with fewer heavy drinking days than altering treatment later.

Given these results, we can choose to forgo further refinement of the strategy and proceed directly to a randomized trial evaluation of the adaptive treatment strategy: “Patients begin on NTX. If a patient experiences 2 or more heavy drinking days within two months then the patient is provided CBI + NTX unless the patient experienced substantial side effects, in which case the patient is provided CBI. On the other hand, if the subject has at most 1 heavy drinking day within two months, then the subject is provided an NTX prescription.”

Alternatively suppose an additional post-hoc analysis yields a strong, unanticipated interaction: among responders to the initial NTX, subjects with poor social support do better on TDM + NTX prescription than these subjects do with NTX prescription alone. Thus a refining trial might seek to replicate this result and/or may add a component to improve social support so that more subjects could be managed effectively with the lower cost intervention (i.e., NTX only). This second trial might only study responders (those who experience one or fewer heavy drinking days in the first two months on NTX).

4. Disadvantages and Advantages of the SMART Design Relative to Other Experimental Approaches

In this section, the SMART design is contrasted with two other plausible experimental approaches; in each case the goal is to develop and/or refine adaptive treatment strategies. To make the discussion concrete, again consider Example 2, which outlines a SMART trial design.

Alternative Experimental Approach 1)

An alternative to SMART is to use results from historical trials and the available literature to pinpoint when one should give up on NTX and move to a second treatment. Suppose this review indicates that once 4 heavy drinking days have occurred, the chance of subsequent response to NTX is very low. Thus a natural approach is to dispense with the randomization to different timings (in Example 2, these are “alter treatment as soon as 2 heavy drinking days occur” versus “alter treatment as soon as 5 heavy drinking days occur”) and plan to alter treatment as soon as 4 heavy drinking days occur. Next, two two-group randomized trials are conducted: one for responders (NTX versus TDM+NTX) and one for nonresponders (CBI versus CBI+NTX).

Two advantages of SMART relative to Experimental Approach 1 are 1) the avoidance of deleterious cohort effects and 2) the ability to detect a synergistic effect in a sequence of treatments. First consider the avoidance of deleterious cohort effects. Subjects who enrolled in, and who remained in, the historical trials that pointed toward adopting the 4-days-of-heavy-drinking cutoff may be quite different from the subjects who enroll in and who remain in a SMART trial. To see this, consider the issue of adherence/attrition: in many historical trials subjects were assigned a fixed treatment; that is, there were no options besides non-adherence or study attrition for subjects who were not improving. The lack of options for non-improving subjects, plus the fact that often the subjects do not know if they are receiving treatment (double-blind studies), may lead to attrition or non-adherence (Chick et al., 2000; Pettinati et al., 2000). As a result, the subjects who remained in the historical trial may be quite different from the subjects who remain in a SMART trial, which provides alternatives for non-improving subjects. In addition, subjects in historical trials may differ from current subjects in unknown ways (Byar, 1980; Green and Byar, 1984; Byar, 1988; Byar, 1991). The result is that the choice of 4 heavy drinking days may be suitable only for people who were most likely to stay in or adhere to the historical trials, and may be inappropriate or deleterious for people who would enroll in a SMART study.

Second, data from a SMART study permit an investigation of synergistic effects in a sequence of treatments. By synergistic effects we mean that the longer term effect of the first treatment is magnified by subsequent treatments or that an initial treatment enables the patient to benefit more substantially from subsequent treatments. The historical trials are likely to have been conducted in a setting in which the subjects did not receive a subsequent treatment or if they did, the subsequent treatments differed from those considered here. As a result synergistic effects will be missed. For example, the longer term evaluation of an initial treatment is, in an historical trial, actually an evaluation of the initial treatment followed by a mixture of subsequent treatments as available in the historical trials. This evaluation may say very little about the usefulness of the initial treatment when that treatment is followed by the specific subsequent treatments available in our study. With these advantages, a SMART design has the potential to detect the synergistic benefit of the sequence of treatments whereas Experimental Approach 1 does not. However, the fact that the SMART trial is new to clinical science is a disadvantage.

Alternative Experimental Approach 2)

A second alternative experimental design to SMART is to survey the available literature, use clinical experience, and employ a variety of established principles (e.g., start patients on the least intensive treatment as in the stepped care model (Sobell and Sobell, 2000), or use the principles espoused by Collins et al. (2004)) to completely formulate the decision rules underlying the adaptive treatment strategy. Further guidance might be obtained from existing clinical practice guidelines, such as those of ASAM (Mee-Lee et al., 1996). Once the decision rules and associated therapies are selected, a randomized confirmatory trial with one group assigned the formulated adaptive treatment strategy and the other group assigned the best alternative treatment available (or other appropriate alternative) is conducted.

The advantages of the SMART trial over Experimental Approach 2 are related to the ability of the clinical scientist to open the “black box” in a SMART trial. First, using data from a SMART trial one can test for evidence regarding whether information concerning nonresponding patients (e.g., adherence to initial treatment) should influence the selection of subsequent treatment; if there is no evidence of a need to treat patients differently, then the developed decision rules will be simpler and potentially easier to implement. For example, using data from a SMART trial, we can test whether adherence during initial treatment is useful in deciding which subsequent treatment to provide nonresponders; the absence of randomization to different subsequent treatments in Experimental Approach 2 precludes this experimental comparison.

Second, as discussed above, the use of the SMART design permits an investigation of synergistic effects and thus improves the chance that the developed adaptive treatment strategy includes treatment sequences and timing decisions that are synergistic and not antagonistic. For example, it may be crucial to match the timing definition of responder/nonresponder with the selected subsequent treatments to ensure, for example, that a responder according to our definition is ready for the selected maintenance treatment and we have given the initial treatment adequate time to work. Also it is important that we detect and avoid unintentional negative interactions; for example, the burden imposed by the initial treatment may reduce adherence to the subsequent treatment.

Third, by proactively and experimentally investigating which of the treatment sequences and timing decisions are important, which treatment decisions interact, and which treatment decisions interact with patient characteristics and outcomes, we can increase the chance that the developed adaptive treatment strategy will exhibit improved performance when evaluated in a future randomized trial. In contrast, when a strategy developed as described in Experimental Approach 2 (using expert clinical judgment or existing treatment recommendations) does not perform as well as expected, analyses to ascertain which components are useful must be nonexperimental (e.g., analyses restricted only to patients who exhibited strong adherence to assigned treatment, or dose-response analyses where the dose is the level of adherence). Due to the multiple randomizations, the SMART approach provides a greater range of experimental information concerning the components of the strategy than the former approach, and thus when investigators must go “back to the drawing board” to redesign the strategy, the chance of a successful redesign is higher.

A common misconception is that the SMART trial will have lower power than Experimental Approach 2. However, the SMART trial in Example 2 is a “full factorial” trial. This means that to test, for example, the main effect of a stringent definition of response versus a less stringent definition of response in Example 2, two groups of patients, each containing ½ of the sample, are used. Also, to test the main effect of CBI vs. CBI+NTX in Example 2, data from all nonresponding patients are used, with each group composed of approximately ½ of the nonresponding patients. It is true that in Example 2 only ¼ of the study subjects follow a particular adaptive treatment strategy; thus a comparison of two strategies involves only ½ of the subjects. This is because the goal in using a SMART trial is to inform the construction/refinement of a strategy, as opposed to evaluating a fully constructed strategy. It is also true that due to the multiple treatment options at different points in time, conducting a SMART trial is likely to be a more complex endeavor than conducting a confirmatory randomized trial.

The SMART design and Experimental Approaches 1) and 2) provide data that can be analyzed in post-hoc analyses. These post-hoc analyses include dose-response analyses in which the dose is the amount of treatment received (for example, the number of counseling sessions attended). These analyses may lend credibility to one or more explanations for the observed effect sizes from the primary analyses. Note, however, that these nonexperimental post-hoc analyses do not rest on randomization and hence are subject to bias or confounding (Green and Byar, 1984).

Moreover, all of the experimental approaches (including SMART) can be followed by further randomized trials. Indeed, since the SMART trial is intended to develop and/or refine an adaptive treatment strategy, it should eventually be followed by a standard randomized trial, with one group assigned the optimized adaptive treatment strategy and the other group assigned an appropriate alternative. Finally, all of these experimental approaches are subject to the difficulties inherent in the conduct of any randomized trial (subjects withdraw consent, incomplete assessments, implementation issues, etc.).

5. Further Statistical Considerations

Design Considerations

As mentioned above, often all of the randomizations can be conducted prior to the onset of treatment. Consider Example 2; here three randomizations can be conducted prior to beginning the trial. The first randomization is to one of the two timings (alter treatment as soon as 2 heavy drinking days occur versus alter treatment as soon as 5 heavy drinking days occur). The second randomization is to NTX + TDM versus NTX, which is used if the subject responds to initial treatment. The third randomization, NTX + CBI versus CBI, is used if the subject does not respond to initial treatment. This approach is equivalent to randomizing each subject to one of the eight possible treatment strategies.
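Assuming equal (½) randomization probabilities at each decision, this pre-randomization scheme amounts to three independent coin flips per subject, yielding one of the 2 × 2 × 2 = 8 strategies. The sketch below is purely illustrative; the label strings and function name are hypothetical.

```python
import random

# The three options lists mirror the three randomizations in Example 2.
TIMING = ["2+ heavy drinking days", "5+ heavy drinking days"]
RESPONDER_TX = ["NTX", "NTX + TDM"]
NONRESPONDER_TX = ["CBI", "NTX + CBI"]

def assign_strategy(rng):
    """One pre-treatment assignment: three independent 50/50 randomizations.
    Only one of the last two components is ever used for a given subject,
    depending on observed responder status."""
    return (rng.choice(TIMING), rng.choice(RESPONDER_TX), rng.choice(NONRESPONDER_TX))

rng = random.Random(0)
assignments = [assign_strategy(rng) for _ in range(400)]
strategies = set(assignments)  # the 8 possible treatment strategies
```

With a moderate sample size, all eight strategy cells are populated, each with roughly ⅛ of the subjects.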

Analysis Considerations

Again consider Example 2, where the outcome might be percent heavy drinking days. As discussed above, many analyses are straightforward two-group comparisons. See Murphy (2005) for further discussion. Other analyses are standard as well. For example, to test for interactions between the timing of treatment alteration (or patient variables) and subsequent treatment, one may use an analysis of covariance or a regression including covariates such as patient variables, responder status, and indicators of both the timing of treatment alteration and the second treatment. An obvious and important question in Example 2 is whether the level of side effects during the initial provision of NTX differentiates between nonresponders who do better on CBI alone as compared to nonresponders who do better on CBI + NTX. To address this question, an interaction between side effects (experienced during initial treatment) and an indicator of the second treatment can be added to the above regression.
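One way to set up this interaction analysis is an ordinary least-squares regression among nonresponders, with side-effect level, a second-treatment indicator, and their product as covariates. The sketch below uses simulated data with hypothetical effect sizes (not trial results); the `ols` helper and all numbers are assumptions for illustration only.

```python
import random

def ols(X, y):
    """Least-squares coefficients via the normal equations (X'X) b = X'y,
    solved by Gaussian elimination with partial pivoting."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

rng = random.Random(1)
rows, y = [], []
for _ in range(500):                 # simulated nonresponders
    side = rng.random()              # side-effect severity, scaled to [0, 1]
    tx = rng.randrange(2)            # randomized: 0 = CBI alone, 1 = NTX + CBI
    # Hypothetical truth: NTX+CBI lowers the outcome overall (-5) but less so
    # at high side-effect levels (+8 interaction), plus noise.
    outcome = 30 + 4 * side - 5 * tx + 8 * side * tx + rng.gauss(0, 3)
    rows.append([1.0, side, tx, side * tx])   # intercept, side, tx, side*tx
    y.append(outcome)

beta = ols(rows, y)   # beta[3] estimates the side-effects-by-treatment interaction
```

A large, reliably estimated `beta[3]` would indicate that side-effect level should enter the decision rule choosing between CBI alone and NTX + CBI.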

There is one important difference between the interpretation of these regressions and the standard regression with which most researchers are familiar. This difference lies in the interpretation of the regression coefficient of the first decision (here, the timing of treatment alteration); this regression coefficient should not be used to assess the significance of the timing decision on the outcome, for the following reasons. First, this particular regression coefficient partially represents a spurious correlation between the timing decision and the outcome (this spurious correlation occurs because responder status is an outcome of the timing decision, and responder status is included as a covariate in the regression). Second, the regression coefficient of the timing decision does not include the effect of this timing decision on the drinking outcome that is mediated by response to initial NTX. This is one of the reasons why clinical trialists avoid including post-treatment outcomes (here, responder status) as covariates when assessing the effect of the initial treatment (Rochon, 1999; ICH, 1999). Instead, to evaluate the effect of the timing decision, a two-group comparison (between patients in the SMART trial assigned to the two different timing criteria) can be conducted. Alternatively, a regression including pretreatment patient characteristics and an indicator of the timing decision as covariates is also useful, particularly if interactions between pretreatment patient characteristics and the timing decision are of interest. However, for the reasons explained above, in general, patient outcomes such as side effect level and adherence to initial treatment would not be included in this latter regression.
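The recommended two-group comparison for the timing decision can be sketched as a Welch two-sample t test in which only the randomized timing indicator defines the groups and no post-treatment covariates (responder status, side effects, adherence) enter. The data below are simulated, with hypothetical means, spreads, and sample sizes.

```python
import math
import random

def welch_t(a, b):
    """Welch two-sample t statistic for mean(a) - mean(b),
    not assuming equal variances."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

rng = random.Random(2)
# Simulated percent heavy drinking days, supposing (hypothetically) that the
# stringent definition (alter after 2 heavy drinking days) lowers the mean.
stringent = [rng.gauss(20, 8) for _ in range(150)]  # assigned the 2-day rule
lax = [rng.gauss(24, 8) for _ in range(150)]        # assigned the 5-day rule

t = welch_t(stringent, lax)   # negative t favors the stringent timing arm
```

Because each arm is defined purely by randomization, this comparison avoids the spurious-correlation and mediation problems described above.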

New methods with an emphasis on improving power are in development (Murphy, 2003; Robins, 2004; Pineau et al., 2006) along with methods that permit a greater variety of outcome variables (Thall et al., 2000) or time-to-an-event outcomes (Lunceford et al., 2002). See Murphy et al. (2006) for further discussion of these issues. Many of these methods generalize well to the analysis of trials with more than two randomizations per subject (e.g. as in STAR*D in which up to 4 randomizations occur).

6. Discussion

The SMART trial is well suited for addressing the sequencing and timing questions that arise in the development of adaptive treatment strategies in the management of drug dependence. However, further experimental design work is required. This is because most adaptive treatment strategies are multi-component treatments: potential components include treatment for the target disorder, adjunctive treatments, and interventions to improve adherence (e.g., motivational interviewing, incentives). Moreover, each component may be delivered through several possible formats (e.g., group therapy versus individual therapy, telephone versus in-person, and so forth). Construction and refinement of adaptive treatment strategies is a challenging process, given that therapeutic components, sequencing, and timing decisions must all be taken into consideration. One way to address this challenge is to embed the SMART experimental design into the MOST paradigm (Collins et al., 2005). The MOST paradigm advocates using a series of experimental trials to sift through the variety of components so as to prospectively determine which components are active prior to evaluating the multi-component treatment. However, at this time it is unclear how the SMART experimental design might best be embedded in MOST; this is an area for future research.


We acknowledge support from NIH grants: R21 DA019800, K02 DA15674, P50 DA10075 (Murphy); DA-005186 (Lynch); K08 MH01599, 5P30MH52129; NIAAA 1R01AA014851, and the Department of Veterans Affairs MERP Award (Oslin); K02-DA00361, R01 AA14850 (McKay); R01 MH61892, P30 MH066270 (Ten Have).


Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Contributor Information

Susan A. Murphy, University of Michigan, Institute for Social Research, 426 Thompson St., Ann Arbor, MI 48106–1248.

Kevin G. Lynch, University of Pennsylvania, 3900 Chestnut Street, Philadelphia, PA 19104–6178.

David Oslin, University of Pennsylvania, Geriatric and Addiction Psychiatry, 3535 Market Street, Room 3002, Philadelphia, PA 19104.

James R. McKay, University of Pennsylvania, 3900 Chestnut St., Philadelphia, PA 19104–6178.

Tom TenHave, University of Pennsylvania, School of Medicine, 607 Blockley Hall, 423 Guardian Drive, Philadelphia, PA 19104–6021.


References

  • Albert JM, Yun H. Statistical advances in AIDS therapy trials. Stat Methods Med Res. 2001;10:85–100.
  • Altfeld M, Walker BD. Less is More? STI in acute and chronic HIV-1 infection. Nat Med. 2001;7:881–884.
  • Breslin FC, Sobell MB, Sobell LC, Cunningham JA, Sdao-Jarvie K, Borson D. Problem drinkers: evaluation of a stepped-care approach. J Subst Abuse. 1999;10(3):217–232.
  • Brooner RK, Kidorf M. Using Behavioral Reinforcement to Improve Methadone Treatment Participation. Sci Pract Perspect. 2002;1:38–48.
  • Brown BS. Drug Use – Chronic and Relapsing or a Treatable Condition? Subst Use Misuse. 1998;33(12):2515–2520.
  • Byar DP. Why data bases should not replace randomized clinical trials. Biometrics. 1980;36(2):337–342.
  • Byar DP. The use of data bases and historical controls in treatment comparisons. Recent Results Cancer Res. 1988;111:95–98.
  • Byar DP. Problems with Using Observational Databases to Compare Treatments. Stat Med. 1991;10:663–666.
  • Chick J, Anton R, Checinski K, Croop R, Drummond DC, Farmer R, Labriola D, Marshall J, Moncrieff J, Morgan MY, Peters T, Ritson B. A multicentre, randomized, double-blind, placebo-controlled trial of naltrexone in the treatment of alcohol dependence or abuse. Alcohol Alcohol. 2000;35(6):587–593.
  • Collins LM, Murphy SA, Bierman KA. A Conceptual Framework for Adaptive Preventive Interventions. Prev Sci. 2004;5:185–196.
  • Collins LM, Murphy SA, Nair V, Strecher V. A Strategy for Optimizing and Evaluating Behavioral Interventions. Ann Behav Med. 2005;30:65–73.
  • Donovan DM. Continuing Care: Promoting the Maintenance of Change. In: Miller WR, Heather N, editors. Treating Addictive Behaviors. 2. Plenum; New York: 1998. pp. 317–336.
  • Green SB, Byar DP. Using Observational Data from Registries to Compare Treatments: The Fallacy of Omnimetrics. Stat Med. 1984;3:361–370.
  • Hser YI, Anglin MD, Grella C, Longshore D, Prendergast ML. Drug Treatment Careers: A Conceptual Framework and Existing Research Findings. J Subst Abuse Treat. 1997;14(6):543–558.
  • ICH (International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use). ICH harmonized tripartite guideline: statistical principles for clinical trials. Stat Med. 1999;18:1905–1942.
  • Kidorf M, Neufeld K, Brooner RK. Combining stepped care approaches with behavioral reinforcement to motivate employment in opioid-dependent outpatients. Subst Use Misuse. 2004;39(13–14):2215–2238.
  • Lavori PW, Dawson R. A design for testing clinical strategies: biased individually tailored within-subject randomization. J R Stat Soc Ser A Stat Soc. 2000;163:29–38.
  • Lavori PW, Dawson R, Rush AJ. Flexible treatment strategies in chronic disease: Clinical and research implications. Biol Psychiatry. 2000;48:605–614.
  • Lavori PW, Rush AJ, Wisniewski SR, Alpert J, Fava M, Kupfer DJ, Nierenberg A, Quitkin FM, Sackeim HA, Thase ME, Trivedi M. Strengthening Clinical Effectiveness Trials: Equipoise-stratified Randomization. Biol Psychiatry. 2001;50(10):792–801.
  • Lavori PW, Dawson R. Dynamic treatment regimes: practical design considerations. Clin Trials. 2004;1(1):9–20.
  • Lunceford J, Davidian M, Tsiatis AA. Estimation of the survival distribution of treatment regimes in two-stage randomization designs in clinical trials. Biometrics. 2002;58:48–57.
  • McKay JR, Lynch KG, Shepard DS, Ratichek S, Morrison R, Koppenhaver J, Pettinati HM. The effectiveness of telephone-based continuing care in the clinical management of alcohol and cocaine use disorders: 12-month outcomes. J Consult Clin Psychol. 2004;72(6):967–979.
  • McLellan AT. Editorial: Have we evaluated addiction treatment correctly? Implications from a chronic care perspective. Addiction. 2002;97:249–252.
  • McLellan AT, Lewis DC, O'Brien CP, Kleber HD. Drug dependence, a chronic medical illness: Implications for treatment, insurance, and outcomes evaluation. JAMA. 2000;284:1689–1695.
  • Mee-Lee D, Gartner L, Miller MM, Shulman GD. Patient Placement Criteria for the Treatment of Substance-Related Disorders (ASAM PPC-2). 2. American Society of Addiction Medicine; Chevy Chase, MD: 1996.
  • Miller WR, editor. COMBINE Monograph Series, Volume 1, Combined Behavioral Intervention Manual: A Clinical Research Guide for Therapists Treating People With Alcohol Abuse and Dependence. Bethesda, MD: NIAAA; 2004. DHHS Publication No. (NIH) 04–5288.
  • Murphy SA, van der Laan MJ, Robins JM, CPPRG. Marginal Mean Models for Dynamic Regimes. J Am Stat Assoc. 2001;96:1410–1423.
  • Murphy SA. Optimal Dynamic Treatment Regimes. J R Stat Soc Ser B Methodol. 2003;65(2):331–366.
  • Murphy SA. An experimental design for the development of adaptive treatment strategies. Stat Med. 2005;24:1455–1481.
  • Murphy SA, Oslin DW, Rush AJ, Zhu J, for MCATS. Methodological Challenges in Constructing Effective Treatment Sequences for Chronic Disorders. Neuropsychopharmacology. 2006. In press.
  • Oslin DW, Sayers S, Ross J, Kane V, Ten Have T, Conigliaro J, Cornelius J. Disease management for depression and at-risk drinking via telephone in an older population of veterans. Psychosom Med. 2003;65(6):931–937.
  • O'Brien CP, McLellan AT. Myths About the Treatment of Addiction. Lancet. 1996;347:237–240.
  • Pettinati HM, Volpicelli JR, Pierce JD Jr, O'Brien CP. Improving naltrexone response: an intervention for medical practitioners to enhance medication compliance in alcohol dependent patients. J Addict Dis. 2000;19(1):71–83.
  • Pettinati HM, Weiss RD, Miller WR, Donovan D, Ernst DB, Rounsaville BJ. COMBINE Monograph Series, Volume 2, Medical Management Treatment Manual: A Clinical Research Guide for Medically Trained Clinicians Providing Pharmacotherapy as Part of the Treatment for Alcohol Dependence. Bethesda, MD: NIAAA; 2004. DHHS Publication No. (NIH) 04–5289.
  • Pineau J, Ghizaru A, Gendron-Bellemare M, Rush AJ, Murphy SA. Improving the management of chronic disorders by learning adaptive treatment strategies. 2006. Submitted.
  • Robins JM. Optimal structural nested models for optimal sequential decisions. In: Lin DY, Heagerty P, editors. Proceedings of the Second Seattle Symposium on Biostatistics. Springer Verlag; New York: 2004. pp. 189–326.
  • Rochon J. Issues in adjusting for covariates arising postrandomization in clinical trials. Drug Inf J. 1999;33:1219–1228.
  • Rush AJ, Crismon ML, Kashner TM, Toprac MG, Carmody TJ, Trivedi MH, Suppes T, Miller AL, Biggs MM, Shores-Wilson K, Witte BP, Shon SP, Rago WV, Altshuler KZ, TMAP Research Group. Texas Medication Algorithm Project, phase 3 (TMAP-3): Rationale and Study Design. J Clin Psychiatry. 2003;64(4):357–369.
  • Schneider LS, Tariot PN, Lyketsos CG, Dagerman KS, Davis KL, Davis S, Hsiao JK, Jeste DV, Katz IR, Olin JT, Pollock BG, Rabins PV, Rosenheck RA, Small GW, Lebowitz B, Lieberman JA. National Institute of Mental Health Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE). Am J Geriatr Psychiatry. 2001;9(4):346–360.
  • Sobell MB, Sobell LC. Stepped Care as a Heuristic Approach to the Treatment of Alcohol Problems. J Consult Clin Psychol. 2000;68(4):573–579.
  • TenHave TR, Coyne J, Salzer M, Katz I. Research to improve the quality of care for depression: alternatives to the simple randomized clinical trial. Gen Hosp Psychiatry. 2003;25:115–123.
  • Thall PF, Millikan RE, Sung HG. Evaluating multiple treatment courses in clinical trials. Stat Med. 2000;19:1011–1028.