J Subst Abuse Treat. Author manuscript; available in PMC 2010 March 1.
PMCID: PMC2692409

Adolescent-only substance abuse treatment: Availability and adoption of components of quality


There are few studies of the availability and quality of adolescent-only treatment programs. Drawing upon existing samples of publicly and privately funded treatment programs, this research considers whether organizational characteristics are associated with the availability of adolescent-only programming and measures components of quality within these programs. Significant organizational correlates of adolescent-only services included organizational size, location within a hospital setting, center accreditation, adherence to a 12-step treatment model, and reliance on public sources of funding. In-depth interviews were then conducted with 154 managers of adolescent-only treatment programs regarding levels of care offered and service quality. The most prevalent levels of care were standard outpatient and intensive outpatient. Analysis of nine domains of treatment quality revealed a medium level of quality. Treatment quality was significantly greater in programs offering more intensive levels of care. These results are largely consistent with other recent research and suggest a need for continued quality improvement efforts in this treatment sector.

Keywords: adolescent substance abuse treatment, quality, health services research

1. Introduction

Research has repeatedly demonstrated that the use and abuse of alcohol, illicit drugs, and tobacco by adolescents is a major health problem in the US (Johnston et al., 2002; Physician Leadership on National Drug Policy, 2002). Substance abuse increases the probability that adolescents will experience a variety of negative consequences, including risks to physical and mental health (Delaney, Broome, Flynn, & Fletcher, 2001; Dennis, Dawud-Noursi, Muck, & McDermeit, 2003; Diamond et al., 2002). Data from the Treatment Episode Data Set (TEDS) documented an increase in adolescent treatment admissions from almost 99,000 in 1993 to about 156,000 in 2003 (Substance Abuse and Mental Health Services Administration, 2004a), although data from 2006 indicate a tapering to about 139,000 admissions (SAMHSA, 2007b). However, only about 10% of adolescents needing treatment actually enter treatment (SAMHSA, 2007a), suggesting a sizable gap between needed treatment and available services.

This gap raises questions about the intertwined issues of the availability and quality of adolescent treatment services. Adolescents needing treatment face two organizational barriers to treatment entry, namely programs’ admission policies and the limited availability of adolescent-only treatment programming. Access to treatment is reduced by explicit policies in some programs that exclude adolescents from admission (SAMHSA, 2004b). Furthermore, the lack of adolescent-only treatment programs further limits adolescents’ access to high-quality services. Historically, adolescents were integrated into programs that served adults (National Institute on Drug Abuse, 1983). Such integration failed to recognize the unique needs of adolescents due to their stage of psychological and physical development (Etheridge, Smith, Rounds-Bryant, & Hubbard, 2001; Flanzer, 2005; Hser et al., 2001) and the possibility that specific problems in family functioning may be tied to the adolescent’s substance abuse (Coatsworth, Santisteban, McBride, & Szapocznik, 2001). In recognition of these unique needs, the Center for Substance Abuse Treatment (CSAT, 1999) advocates the separation of adolescent and adult substance abuse treatment services, suggesting that adolescent-only services represent a minimum threshold of treatment quality.

Analyses of data from the National Survey of Substance Abuse Treatment Services (N-SSATS) highlight the complicated issues related to the availability of adolescent treatment. In 2003, SAMHSA reported that only 52% of facilities admitted adolescent clients, and 32% of all facilities offered “programs or groups” for adolescents (SAMHSA, 2004b). A closer analysis of these same data by Mark et al. (2006) revealed even more restricted availability of specialty services. Mark and colleagues first examined whether programs had at least ten adolescent clients receiving treatment on the day that the N-SSATS survey was completed. Only 18.3% of facilities met this criterion. Just 2,167 of 13,623 facilities (15.9%) reported offering an adolescent-only treatment program or group (and having ten adolescents enrolled). To date, it is unclear whether organizational characteristics differentiate facilities providing adolescent-only treatment services from those that exclude adolescents from admission or offer only integrated adolescent and adult programming.

It is clear that adolescents who present for treatment often have complex needs, requiring an array of services. Differences between adolescents and adults in psychological development result in the need for therapeutic approaches focusing on concrete behavioral issues rather than more abstract forms of reasoning (CSAT, 1999). This population is at high risk of co-occurring mental health problems, such as depression, anxiety, attention deficit/hyperactivity disorder, and conduct disorders (Chan, Dennis, & Funk, 2008; Rowe, Liddle, Greenbaum, & Henderson, 2004). They are also at heightened risk of contracting HIV/AIDS as well as other sexually transmitted diseases (Deas-Nesmith, Brady, White, & Campbell, 1999; Joshi, Hser, Grella, & Houlton, 2001). Adolescents in substance abuse treatment often need educational, employment, legal, and other medical services (Etheridge et al., 2001). Complex issues related to family functioning (Battjes et al., 2004; Deas & Thomas, 2001), gender-specific needs (Rowe et al., 2004; Winters, 1999), and cultural background (Castro & Alarcon, 2002; Coatsworth et al., 2001) may have relevance in their substance use behaviors. While treatment has a positive effect on outcomes (Brown, D’Amico, McCarthy, & Tapert, 2001; Hser et al., 2001; Winters, Stinchfield, Opland, Weller, & Latimer, 2000), attention to continuing care is necessary because a high percentage of treated adolescents relapse following treatment (Winters, 1999). Addressing these complex needs requires thorough assessment and multi-faceted treatment services delivered by qualified staff (Winters, 1999).

Despite some expansion in research on the effectiveness of interventions for substance-abusing adolescents (Deas & Thomas, 2001; Dennis et al., 2004; Muck et al., 2001; Riggs, 2003; Waldron & Kaminer, 2004), few studies have examined the content or quality of “treatment as usual” in community-based adolescent treatment programs. Early work on service delivery using data collected from 37 programs as part of the Drug Abuse Treatment Outcome Studies for Adolescents (DATOS-A) revealed substantial variation in levels of care, treatment techniques, and comprehensive wraparound services (Etheridge et al., 2001). Relative to data collected in the 1970s, DATOS-A also revealed changes in the adolescent treatment system (Etheridge et al., 2001), suggesting an ongoing need to monitor service delivery within this treatment sector.

Two recent studies of service delivery in adolescent treatment have drawn on facility-level data from the N-SSATS dataset. Olmstead & Sindelar (2004) estimated the associations between whether a facility served any adolescent clients and the adoption of three treatment practices endorsed by CSAT’s Treatment Improvement Protocol (TIP) for adolescent treatment: family therapy, aftercare counseling, and mental health assessment. They found high rates of family therapy and aftercare counseling in these adolescent-serving facilities, with more modest rates of mental health assessment. Mark et al. (2006) examined a wide variety of treatment characteristics and components in facilities that had at least ten adolescent clients. In general, these data revealed modest levels of adoption of specific treatment components indicative of high-quality services. Both of these analyses used facility-level data, so neither study described services specifically delivered within the context of adolescent-only programs. Neither study constructed aggregate measures of quality, making it difficult to characterize the overall quality of services within these programs.

The first major study focusing on the content of adolescent-only care and aggregate indicators of treatment quality was conducted by Brannigan, Schackman, Falco, and Millman (2004). Their research examined the quality of adolescent treatment across nine domains in 144 programs that were “highly regarded” by experts in the treatment field (Drug Strategies, 2003). A major contribution of this work was the identification of nine domains of quality: 1) assessment and treatment matching; 2) a treatment approach that is comprehensive with regard to the adolescent’s life; 3) family involvement in treatment; 4) developmentally appropriate programming; 5) strategies to engage and retain adolescents in treatment; 6) the hiring of qualified staff; 7) the tailoring of treatment to address gender and cultural differences; 8) continuing care; and 9) program evaluations of treatment outcomes. Even within this non-random sample of “highly regarded” adolescent treatment programs, there was substantial variability in the extent to which programs contained these elements, with the average program endorsing about half of the 45 program components measured in the study.

The limited number of studies on the availability, content, and quality of adolescent-only treatment means that several key questions remain. First, to what extent is adolescent-only treatment available, and is availability associated with organizational characteristics? What levels of care are available? How much clinical contact and what types of sessions are included in the design of these levels of care? To what extent have these programs adopted components related to the nine domains of quality care? Finally, what organizational and service delivery characteristics are associated with overall treatment quality?

2. Methods

2.1. Samples and Study Eligibility

This research utilizes two existing samples of specialty addiction treatment organizations that are part of the National Treatment Center study (NTCS). Between 2002 and 2004, nationally representative samples of publicly funded centers and privately funded centers were constructed, and face-to-face interviews were conducted to collect organization-level data about the addiction treatment services offered by the center. For both samples, organizations were required to offer a minimum level of care at least equivalent to standard outpatient for the treatment of alcohol and drug abuse (Mee-Lee et al., 1996) and to be open to the public. Program funding was the key distinction that delineated the two samples. Privately funded programs were required to receive less than half of their revenues from government block grants and contracts, while publicly funded programs were those that received more than half of their revenues from these sources. Of organizations deemed eligible based on a brief telephone screening interview, 80% of publicly funded (n = 363) and 88% of privately funded programs (n = 403) participated in this earlier study. A thorough description of the sampling methodology can be found in Knudsen, Ducharme, and Roman (2007).

These treatment organizations served as the population contacted for the present study of adolescent-only treatment programs. (Because a few centers indicated that they had multiple administratively distinct adolescent-only treatment programs, the final pool contained 770 treatment centers.) All were contacted by telephone in order to establish whether the organization was still delivering substance abuse treatment services and whether they were eligible for the study. Based on this screening process, centers were coded into six eligibility categories. Eligibility for the present study was based on two criteria. First, treatment organizations were required to admit clients aged 18 or younger. Of the 770 centers, 39.7% (n = 306) did not admit clients aged 18 or younger, so these programs were coded as “non-admitting centers.” Second, the organization was required to offer at least one adolescent-only treatment program that was, at a minimum, equivalent to standard outpatient treatment. About 21.8% (n = 168) of centers admitted adolescents but did not offer a separate program for them, so they were coded as “integrated centers.” About 4.8% of these organizations were no longer offering any substance abuse treatment services (n = 37), so these were coded as “closed centers.” After repeated attempts, about 3.9% (n = 30) could not be contacted, so eligibility could not be established; these centers were coded as “soft refusals.” Finally, about 9.7% (n = 75) of these organizations met eligibility criteria for participation but refused to participate; these centers were coded as “hard refusals.”

Program managers reporting that their organization admitted clients under the age of 18 and offered a separate adolescent-only level of care were invited to participate in a telephone interview. Of the 229 programs deemed eligible based on the telephone screening process, 154 participated in the telephone interviews (67.2%). Participating treatment programs received a $30 honorarium. These interviews were conducted between July 2005 and March 2007.

2.2. Measures

2.2.1. Organizational characteristics

Eight variables measuring basic organizational characteristics were collected during the face-to-face interviews conducted between 2002 and 2004. First, centers were coded for sample type, indicating whether the organization was part of the publicly funded sample (coded 1) or privately funded sample (coded 0). In addition, ownership was measured in terms of whether the center was government-owned (coded 1) or owned by a non-governmental entity (coded 0). Profit status was also measured (1 = for-profit, 0 = non-profit). Organizational affiliation was coded into three categories: hospital-based programs, community mental health center-based programs, and freestanding programs as the reference category. Center accreditation was indicated by accreditation by either the Joint Commission on Accreditation of Healthcare Organizations or the Commission on Accreditation of Rehabilitation Facilities (1 = accredited, 0 = not accredited). Center administrators identified whether the center used a 12-step treatment model (1 = 12-step, 0 = other treatment model). Center size was measured by the number of full-time equivalent employees at the center, while center age was measured in years. The distributions of size and age revealed substantial skew, so both variables were natural log-transformed.
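As a brief illustration of the transformation described above (the FTE counts below are hypothetical, not values from the study data), a natural log transformation compresses the long right tail of a skewed size measure:

```python
import math

# Hypothetical FTE counts for five centers; the single very large
# center creates the kind of right skew described in the text.
fte_counts = [4, 7, 10, 22, 180]

# Natural log transformation of each center's size.
log_fte = [math.log(x) for x in fte_counts]

# The raw largest-to-smallest ratio is 45:1, but on the log scale
# the spread collapses to ln(45), about 3.81.
raw_spread = max(fte_counts) / min(fte_counts)
log_spread = max(log_fte) - min(log_fte)
```

This sketch only shows the mechanics of the transformation; the study applied it to both center size and center age before analysis.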

2.2.2. Adolescent-only levels of care

Program managers were asked about the levels of adolescent-only care offered, including inpatient, residential, partial hospitalization, intensive outpatient, and standard outpatient. For each available level of care, managers were asked about the weekly frequency of four types of sessions: individual therapy, group therapy, family therapy, and educational sessions. For the outpatient levels of care, managers were also asked to describe the amount of clinical contact in terms of the number of hours per day and number of days per week included in the program’s design. These two measures were multiplied in order to estimate the weekly number of clinical contact hours.
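The weekly contact-hour estimate described above is a simple product of two design features. A minimal sketch, using hypothetical program designs rather than values from the study, is:

```python
def weekly_contact_hours(hours_per_day: float, days_per_week: int) -> float:
    """Estimate weekly clinical contact hours from a program's design,
    as described in the text: hours per day times days per week."""
    return hours_per_day * days_per_week

# Hypothetical examples: an intensive outpatient design with
# 3 hours/day on 3 days/week, versus a standard outpatient design
# with a single 1-hour session per week.
iop_hours = weekly_contact_hours(3.0, 3)   # 9.0 hours/week
op_hours = weekly_contact_hours(1.0, 1)    # 1.0 hour/week
```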

2.2.3. Dimensions of treatment quality

Influenced by the work of Brannigan et al. (2004), this research includes 43 treatment components, which were used to create nine additive measures of treatment quality. The specific components within each domain are presented in Table 3. For each dichotomous indicator, programs were coded 1 if the component was present and 0 if the component was not present. All 43 components were also summed into a measure of overall treatment quality. This method of aggregating data has the advantage of distilling a large number of indicators into substantive domains and has been applied in other health services research (Ducharme et al., 2007; Henderson et al., 2007; Knudsen & Roman, 2004).
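The additive scoring just described can be sketched as follows. The domain names and indicator values below are illustrative placeholders, not the study's actual 43 components (which appear in Table 3):

```python
# Hypothetical dichotomous indicators (1 = component present,
# 0 = absent), grouped into quality domains.
program_components = {
    "assessment_and_matching": [1, 1, 0, 1],
    "family_involvement": [1, 0, 1, 1],
    "continuing_care": [0, 1, 1, 0, 1, 0],
}

# Each domain score is the simple sum of its dichotomous indicators.
domain_scores = {domain: sum(items)
                 for domain, items in program_components.items()}

# Overall quality is the sum across all components, so its possible
# range runs from 0 to the total number of indicators.
overall_quality = sum(domain_scores.values())
```

In the study itself, the analogous overall measure had a possible range of 0 to 43.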

Table 3
Frequency statistics for specific indicators of treatment quality in adolescent-only treatment programs

2.3. Data Analysis

Multinomial logistic regression is a useful tool for examining associations between independent variables and dependent variables that consist of several categories (Long, 1997; Long & Freese, 2003). Exponentiated coefficients can be expressed as relative risk ratios, representing the extent to which an independent variable increases or decreases the odds of being in a particular category relative to the odds of being in a reference category. In the context of this research, a multinomial logistic regression model addresses two issues simultaneously. First, this analytic technique allows for an examination of potential differences between participating organizations and those that did not participate, either by refusal or by being unable to be contacted. In essence, this part of the analysis addresses the issue of response bias. It has the added benefit of identifying characteristics of centers that have closed since the previous round of face-to-face interviews. The second issue that such a model can address is substantively meaningful. The model estimates whether organizational characteristics are predictive of the odds that a center excludes adolescents from admission or integrates adolescents with adults in its programming, relative to the odds that the center offered an adolescent-only program (i.e., was eligible and participated in the study).
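The conversion from a multinomial logit coefficient to a relative risk ratio is simple exponentiation. A minimal sketch (the coefficient values here are hypothetical, chosen only to produce RRRs of the magnitudes discussed later):

```python
import math

def relative_risk_ratio(coefficient: float) -> float:
    """Convert a multinomial logit coefficient for one outcome
    category (vs. the reference category) to a relative risk
    ratio: RRR = exp(b)."""
    return math.exp(coefficient)

# A hypothetical positive coefficient of 0.66 implies the odds of
# being in that category (vs. the reference) nearly double per unit
# increase in the predictor: exp(0.66) is roughly 1.93.
rrr = relative_risk_ratio(0.66)

# A negative coefficient yields an RRR below 1, i.e. reduced
# relative odds: exp(-0.25) is roughly 0.78.
protective = relative_risk_ratio(-0.25)
```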

The remaining analyses focused on the available levels of adolescent-only care and treatment quality. Descriptive statistics related to the adolescent-only levels of care and the measures of treatment quality were calculated. In addition, ordinary least squares (OLS) regression was used to examine whether organizational characteristics and available levels of care are associated with the overall measure of treatment quality. All analyses were conducted using Stata 10.0 (StataCorp, College Station, TX).
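For the bivariate case, the OLS estimates used in these analyses have a simple closed form. The sketch below uses hypothetical data (log center size predicting an overall quality score), not the study's data:

```python
from statistics import mean

def ols_slope_intercept(x, y):
    """Closed-form bivariate OLS: slope = cov(x, y) / var(x),
    intercept = mean(y) - slope * mean(x)."""
    mx, my = mean(x), mean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical observations: x = log center size, y = overall
# quality score (possible range 0-43 in the study).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [18.0, 20.0, 21.0, 24.0, 27.0]
slope, intercept = ols_slope_intercept(x, y)   # slope = 2.2, intercept = 15.4
```

The study itself fit these models in Stata 10.0; this fragment only illustrates the estimator underlying the bivariate associations reported below.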

3. Results

3.1. Multinomial logistic regression model of study eligibility and participation

A multinomial logistic regression model estimated the associations between the eight organizational characteristics and how the center was categorized after the brief telephone screening for study eligibility. Participating centers offering at least one level of adolescent-only care were the reference category in this multivariate analysis. None of the organizational characteristics were significantly associated with the odds that the center was a hard refusal, relative to the odds of participating in the study. (Full results are available by request.) Likewise, none of the organizational characteristics were associated with the odds that the center was a soft refusal. These findings suggest that non-participating centers, either due to hard or soft refusals, were not significantly different from participating centers, which reduces concern about response bias.

Four of the organizational characteristics were associated with the odds that treatment centers excluded adolescents from admission. Compared to participating centers with at least one adolescent-only level of care, publicly funded centers were more likely than privately funded programs to exclude adolescents from admission (relative risk ratio, RRR = 1.93, p<.05). In addition, hospital-based programs were more likely than freestanding facilities to exclude adolescents from admission (RRR = 3.09, p<.001). Accreditation lowered the likelihood that a center excluded adolescents from admission (RRR = .42, p<.01), as did center size, such that larger centers were less likely to bar adolescents from admission (RRR = .78, p<.01).

Two variables were associated with the likelihood that the center only offered integrated adolescent and adult services rather than adolescent-only programming. Centers based on a 12-step treatment model were more likely than centers based on other treatment models to combine adolescent clients with adults in an integrated program (RRR = 1.69, p<.05). Larger centers were less likely than smaller centers to offer integrated adolescent and adult programming (RRR = .73, p<.01).

The last comparison in the multinomial logistic regression model was between participating centers and those centers that had closed. Two variables were significant. First, being based in a hospital rather than being a freestanding facility was a risk factor for center closure. The odds of closure, relative to the odds of being an eligible and participating center, were more than three times greater in hospital-based programs than in freestanding facilities (RRR = 3.61, p<.05). In contrast, center size appeared to be protective against the likelihood of center closure, such that centers with more FTE employees were less likely than smaller centers to have closed (RRR = .59, p<.01).

3.2. Available levels of adolescent-only care

The most common levels of care offered by these adolescent-only programs were standard outpatient (69.1%) and intensive outpatient (50.7%). More intensive levels of care were less common. About 19.1% of programs offered a residential level of adolescent-only care, 15.1% of programs offered an inpatient level of care, and 12.5% of programs offered partial hospitalization. The availability of three of these five levels of care varied between privately funded and publicly funded treatment programs. Privately funded programs were much more likely to offer inpatient treatment (25.6%) than publicly funded programs (2.9%; χ2(1) = 15.22, p<.001). These privately funded programs were also more likely to offer partial hospitalization (20.7%) than publicly funded programs (2.9%, χ2(1) = 11.03, p<.001). However, publicly funded programs were significantly more likely to offer standard outpatient (81.4%) than privately funded programs (58.5%; χ2(1) = 9.26, p<.01).

Table 1 presents descriptive information about key aspects of these levels of care, including amount of clinical contact per week and weekly frequency of different types of sessions. The more intensive levels of care rely heavily on group therapy sessions and educational sessions, with less emphasis on family sessions and individual therapy sessions. With the exception of standard outpatient, there tended to be about three times more group therapy sessions than individual sessions. However, these levels of care did average at least one individual session per week. For standard outpatient, these differences are smaller in magnitude, most likely because there are fewer total hours of clinical contact per week.

Table 1
Characteristics of adolescent-only levels of care

Comparisons of the publicly funded and privately funded programs on weekly contact hours and counseling sessions across the levels of care revealed modest differences. (No comparisons were made for inpatient and partial hospitalization because these were nearly exclusively privately funded programs.) There was not a significant public-private difference in average contact hours for either intensive outpatient or standard outpatient. For residential programs, the only difference, which was marginally significant, was that privately funded programs included more educational sessions per week (mean = 8.14, SD = 5.48) than publicly funded programs (mean = 4.33, SD = 4.75, t(27) = 2.01, p<.06). Within the intensive outpatient level of care, there were no public-private differences in the four types of weekly sessions. For the standard outpatient level of care, there were two significant public-private differences. Specifically, privately funded outpatient programs averaged more family therapy sessions per week (mean = 0.84, SD = 0.53; t(98) = 2.47, p<.05) and more educational sessions per week (mean = 0.93, SD = 0.71; t(97) = 2.72, p<.01) than publicly funded programs (mean family sessions = 0.56, SD = 0.57; mean educational sessions = 0.56, SD = 0.63).

3.3. Treatment quality in adolescent-only care

An overall measure of quality in adolescent-only addiction treatment was constructed by summing the specific components from the nine domains. Although this measure had a possible range of 0 to 43 components, the data revealed an actual range of 12 to 34. The mean was 21.60 (SD = 5.20). Because the average score fell near the midpoint of the scale, these data suggest a medium level of overall quality.

Descriptive statistics for the nine domains of treatment quality appear in Table 2, while the frequencies for the indicators within these domains are presented in Table 3. Most of the nine domains had averages that were near the midpoints of these measures of treatment quality. A notable exception was the measure of comprehensive integrated services. The average for this measure was considerably lower than the midpoint, suggesting that the average center offered only about one-third of these wraparound services.

Table 2
Descriptive statistics for domains of treatment quality in adolescent-only treatment programs

In addition to considering averages, it is also useful to consider the percentage of programs at the maximum for each domain to better understand the percentages of programs attaining very high levels of quality on these measures. Only a limited percentage of programs achieved the maximum values for the nine domains. For the measure of assessment and treatment matching, about 23.7% of the sample met all four indicators, meaning their assessments included standardized measures of the three domains of substance abuse, psychological functioning, and family relationships, they routinely screened for four common co-occurring psychiatric conditions, they used the ASAM criteria, and multiple levels of adolescent-only care were available. The use of manual-guided motivational approaches to engagement and retention was fairly prevalent. About 30.7% of programs reported using both manual-based motivational interviewing and manual-based MET. The dimension representing a comprehensive treatment approach was less strongly endorsed. Just 8.6% of programs offered seven or more of the wraparound services, and only one program reported offering all nine services. For the measure of family involvement, about 22.5% of programs endorsed all four family involvement indicators. About 9.3% of programs endorsed all five items related to developmental appropriateness of their programming, while 9.9% of programs endorsed all five of the measures of gender and cultural competence. For the dimension of qualified staff, just four programs (2.6%) reported having at least one staff member in all seven categories. Very few programs (2.0%) endorsed all six continuing care indicators. The dimension of treatment outcomes included two items about conducting follow-up studies of treatment outcomes and participation in program evaluations. Less than one-fifth of programs (18.5%) indicated that they have engaged in both types of research.

In an additional analysis, publicly funded adolescent programs were compared to privately funded programs on the nine treatment quality domains using t-tests. Two of the nine comparisons were statistically significant. First, privately funded programs endorsed more of the family involvement measures (mean = 2.43, SD = 1.42) than publicly funded programs (mean = 1.90, SD = 1.21, t(149) = 2.46, p<.05). Privately funded programs also reported having qualified staff in a greater number of categories (mean = 3.90, SD = 1.51) than publicly funded programs (mean = 3.11, SD = 1.57, t(150) = 3.143, p<.01). The comparison between these two types of programs on the dimension of continuing care approached significance (private program mean = 3.39, SD = 1.16; public program mean = 3.00, SD = 1.35; t(150) = 1.91, p = .058).

3.4. Associations between treatment quality, organizational characteristics, and available levels of care

A series of bivariate ordinary least squares (OLS) regression models of overall treatment quality were estimated for the eight organizational characteristics. Three of the bivariate associations were statistically significant. First, publicly funded treatment programs scored significantly lower than privately funded programs on this overall measure of quality (β = −.21, p<.05). At the bivariate level, accredited programs reported significantly greater overall quality than non-accredited programs (β = .27, p<.01). Organizational size was positively associated with overall quality (β = .31, p<.001).

Next, a multivariate OLS regression model of overall treatment quality was estimated using the significant organizational characteristics and the five levels of care, which is presented in Table 4. Once the levels of care were entered into the model, the three organizational characteristics were no longer statistically significant. The association for organizational size trended towards, but did not achieve, statistical significance (p = .095). Programs offering residential treatment had greater overall treatment quality compared to programs without residential services, after controlling for organizational characteristics and the other levels of care. Similarly, treatment quality was greater in programs with inpatient services relative to those without inpatient care. The presence of partial hospitalization or intensive outpatient care was positively associated with overall treatment quality. However, the relationship between overall treatment quality and the availability of standard outpatient treatment was not significant.

Table 4
OLS regression model of overall treatment quality on organizational characteristics and levels of care

4. Discussion

This research extends the existing literature on adolescent-only substance abuse treatment by examining service delivery within existing national random samples of publicly and privately funded programs. Consistent with the work of Mark et al. (2006), these data revealed a low prevalence of adolescent-only treatment services. Similar to the results of the 2003 N-SSATS (SAMHSA, 2004b), a substantial percentage of treatment organizations excluded adolescents from admission as a matter of program policy. Additionally, a significant percentage of organizations did not offer an adolescent-only program, but instead integrated adolescents into programs serving adults. Such integration is contrary to the recommendations of CSAT (1999).

This research then considered whether organizational characteristics were associated with the odds that adolescents were excluded from admission or integrated into a program with adults, relative to the odds of offering an adolescent-only program. Several organizational characteristics were relevant. First, organizational size was significant, such that smaller organizations were more likely to either exclude adolescents from admission or to combine them with adults. Smaller numbers of staff likely place practical limits on the number of programs that can be delivered within a single organization. Location within a hospital setting appeared to be an access barrier in two ways. Hospital-based treatment centers were more likely to exclude adolescents from admission, and they were more likely to have closed. To some extent, hospitals may admit adolescents needing inpatient care to psychiatric units that are separate from their substance abuse treatment units; unfortunately, these data cannot assess how frequently this was the case. The finding regarding heightened risk of closure in hospital-based addiction programs is consistent with previous research (Knudsen, Roman, & Ducharme, 2005). Additionally, center accreditation reduced the likelihood that adolescents were excluded from admission, although it was not associated with the odds of the integration of adolescents with adults. While 12-step programs were no more or less likely to exclude adolescents from admission, 12-step programs were more likely than programs based on other models to integrate adolescents with adults.

The most prevalent levels of care were standard and intensive outpatient treatment, which is consistent with broader trends in the substance abuse treatment system. Adolescent-only standard outpatient, intensive outpatient, and partial hospitalization were offered by 69%, 51%, and 13% of these organizations, respectively. Data from the 2006 N-SSATS place the facility-level prevalence of these levels of care at 74%, 45%, and 15%, respectively (SAMHSA, 2007). These similarities provide support for the representativeness of these data.

Clinical sessions were more frequent in the more intensive levels of care. With increasing treatment intensity, weekly group sessions and psycho-educational sessions became particularly frequent, raising two issues. The first relates to the use of group therapy for adolescents. Some have argued that group therapy may actually be detrimental for certain adolescents, although the average main effect for group treatment interventions is often in the positive direction (Waldron & Kaminer, 2004). For example, Battjes et al. (2004) found that while a group treatment intervention was beneficial for marijuana users with more severe use histories, the intervention was associated with greater marijuana use at follow-up for adolescents who entered treatment with less severe use histories. While more research on group therapies is needed (Kaminer, 2005), future health services research should also consider what steps, if any, treatment programs take when organizing therapy groups, such as separating clients with high- and low-severity use histories. It may also be important to study whether treatment programs actively train counselors in group facilitation in order to reduce potential iatrogenic effects.

The second issue raised by these data on clinical sessions relates to the content of the psycho-educational sessions, which represents an important direction for future research. It is not known what topics were covered in these educational sessions and whether such sessions utilize active versus passive models of learning. To some extent, the data on the indicators of developmental appropriateness suggest that some attention is paid to creating a more interactive learning environment in which skills are practiced. However, these measures only offer indirect information about the educational components of adolescent substance abuse treatment. Given the increased frequency of educational sessions in the more intensive levels of care, future research should explore in greater detail what types of information are being communicated and whether the communication methods used are indicative of good clinical practice.

The aggregation of data on the 43 components into nine domains of treatment quality revealed medium levels of quality, with most of these scales having averages near their midpoints. The aggregate measure consisting of all 43 components similarly had an average of about half of the components. None of the programs approached the maximum possible value of this scale. The greatest number of actual components reported by a program was 34, representing about 79% of the possible components. These data suggest that there is room for continued quality improvement efforts in the treatment sector that serves adolescent clients.
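The percentage for the top-scoring program follows directly from the counts reported above; as a quick arithmetic check using only the figures given in the text:

```python
# Quality-scale arithmetic: 43 possible components in the aggregate measure,
# with the highest-scoring program reporting 34 of them.
total_components = 43
highest_reported = 34
share = highest_reported / total_components
print(f"{share:.1%}")  # 79.1%
```

This share (about 79%) is the same ceiling referenced later in the discussion of over-reporting, where the highest ratings reached only about 80% of the maximum possible score.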

Comparisons of publicly funded and privately funded programs yielded only a few significant differences. The first was the nearly exclusive concentration of inpatient and partial hospitalization programs within the privately funded treatment sector. There was also a significant difference in the overall number of quality components reported by the two samples of programs. While significant and indicative of higher average quality in privately funded programs, the difference was modest in magnitude, amounting to about two components. This finding is consistent with recent work by Schackman et al. (2007), who found a positive association between private insurance referrals and a 20-component quality scale among outpatient treatment providers. This public-private difference, however, was no longer significant once levels of care, organizational size, and center accreditation were controlled.

There was evidence of treatment quality being positively associated with more intensive levels of care. The strongest difference was for residential programs, with those offering this level of care scoring significantly higher than programs without residential services. Inpatient and partial hospitalization levels of care were also positively associated with quality. These levels of care are nearly exclusively represented within the privately funded sample, which may explain why the public-private difference at the bivariate-level was no longer significant in the multivariate regression model. Notably, these more intensive levels of care were much less available than standard outpatient treatment, which was the predominant level of care.

Several limitations of this research should be noted. First, the response rate was lower than desired, which raises the risk of bias due to non-response. This issue was evaluated in the multinomial logistic regression analysis, which did not reveal significant differences in basic organizational characteristics between refusing centers and those that participated. However, there may be other characteristics associated with research participation, so it is not possible to fully resolve the issue of response bias.

A second limitation is that these results are not directly comparable to Brannigan et al. (2004) because of some differences in the specific indicators used to measure treatment quality. This study was not intended to directly replicate the design of that survey. Similar to the approach taken by Mark et al. (2006), Brannigan et al.’s nine domains were used as a framework for organizing a large number of items into conceptual dimensions. Despite these differences in measurement, these data are quite similar to the results of Brannigan et al.’s survey of 144 “highly regarded” adolescent treatment programs. These data from random samples of programs revealed a mean of 21.6 out of 43 components, or 50.2% of the possible components. The average quality rating for Brannigan et al.’s highly regarded programs was 23.8 out of a maximum of 45 components, or 52.9% of the possible components. Both studies suggest that more attention should be paid to the quality of adolescent-only addiction treatment.
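The two studies used scales of different lengths (43 versus 45 components), so the comparison above rests on converting each mean to a share of the possible components. A trivial check, using only the figures reported in the text:

```python
# Percent of possible quality components endorsed, as reported in the text.
this_study = 21.6 / 43   # random samples of programs (present study)
brannigan = 23.8 / 45    # "highly regarded" programs (Brannigan et al., 2004)
print(f"{this_study:.1%}")  # 50.2%
print(f"{brannigan:.1%}")   # 52.9%
```

Normalizing to percentages makes the two scales comparable despite their different maximums, and shows the gap between random and highly regarded programs to be only a few percentage points.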

The third notable limitation is that all data were self-reported by program managers, which raises issues related to the validity of their recall and potential bias due to over-reporting of program components. Perhaps the only way to fully avoid this source of bias would be to apply an ethnographic approach of site visits where “treatment as usual” was observed by an external observer; such an approach would be extremely intensive in terms of time and resources. The reliance on self-reports in this study is consistent with the federal surveys of treatment programs (e.g. N-SSATS) and studies conducted by other researchers (Brannigan et al., 2004; Henderson et al., 2007). Furthermore, the data do not lend empirical support to an argument of vast and widespread over-reporting, since the highest ratings on the quality scale only reached about 80% of the maximum score that was possible. Continued attention to how to improve methodologies used by health services researchers is warranted.

Previous research on adolescent treatment has been limited by a reliance on either non-random samples of programs or the federal N-SSATS datasets which do not offer detail on the content of adolescent-only treatment programming. In many ways, these data from random samples of publicly funded and privately funded addiction programs support aspects of the conventional wisdom about adolescent treatment services. The availability of adolescent-only treatment is limited, which raises a significant barrier to helping adolescents who have substance use disorders. Much of the available treatment is delivered on an outpatient basis with group therapy being the most frequent type of clinical session, regardless of the level of care. The average program endorsed only half of the quality components, suggesting the need for continued efforts to improve the quality of usual care in adolescent treatment. Future research is needed to continue to explore the organizational, regulatory, and environmental barriers to the delivery of high quality adolescent-only addiction treatment.


This research was supported by a grant from the Robert Wood Johnson Foundation’s Substance Abuse Policy Research Program (No. 053130). Information on the organizational characteristics of these programs was drawn from data collected through support from the National Institute on Drug Abuse (R01DA13110 and R01DA14482). The author gratefully acknowledges Dr. Paul M. Roman for access to these data.




  • Battjes RJ, Gordon MS, O’Grady KE, Kinlock TW, Katz EC, Sears EA. Evaluation of a group-based substance abuse treatment program for adolescents. Journal of Substance Abuse Treatment. 2004;27:123–134.
  • Brannigan R, Schackman BR, Falco M, Millman RB. The quality of highly regarded adolescent substance abuse treatment programs. Archives of Pediatrics & Adolescent Medicine. 2004;158:904–909.
  • Brown SA, D’Amico EJ, McCarthy DM, Tapert SF. Four-year outcomes from adolescent alcohol and drug treatment. Journal of Studies on Alcohol. 2001;62:381–388.
  • Castro FG, Alarcon ED. Integrating cultural variables into drug abuse prevention and treatment with racial/ethnic minorities. Journal of Drug Issues. 2002;33:783–810.
  • Center for Substance Abuse Treatment. Treatment of Adolescents with Substance Abuse Disorders. Rockville, MD: SAMHSA; 1999. (CSAT Treatment Improvement Protocol Series, No. 32. DHHS Publication No. SMA 99–3283)
  • Chan Y, Dennis ML, Funk RR. Prevalence and comorbidity of major internalizing and externalizing problems among adolescents and adults presenting to substance abuse treatment. Journal of Substance Abuse Treatment. 2008;34:14–24.
  • Coatsworth JD, Santisteban DA, McBride CK, Szapocznik J. Brief strategic family therapy versus community control: Engagement, retention, and an exploration of the moderating role of adolescent symptom severity. Family Process. 2001;40:313–332.
  • Deas-Nesmith D, Brady KT, White R, Campbell S. HIV-risk behaviors in adolescent substance abusers. Journal of Substance Abuse Treatment. 1999;16:169–172.
  • Deas D, Thomas SE. An overview of controlled studies of adolescent substance abuse treatment. The American Journal on Addictions. 2001;10:178–189.
  • Delaney PJ, Broome KM, Flynn PM, Fletcher BW. Treatment service patterns and organizational structures: An analysis of programs in DATOS-A. Journal of Adolescent Research. 2001;16:590–607.
  • Dennis ML, Dawud-Noursi S, Muck R, McDermeit MA. The need for developing and evaluating adolescent treatment methods. In: Stevens SJ, Morral AR, editors. Adolescent Substance Abuse Treatment in the United States: Exemplary Models from a National Evaluation Study. New York: Haworth; 2003. pp. 3–34.
  • Dennis M, Godley SH, Diamond G, Tims FM, Babor T, Donaldson J, Liddle H, Titus JC, Kaminer Y, Webb C, Hamilton N, Funk R. The Cannabis Youth Treatment (CYT) study: Main findings from two randomized trials. Journal of Substance Abuse Treatment. 2004;27:197–213.
  • Diamond G, Godley SH, Liddle HA, Sampl S, Webb C, Tims FM, Myers R. Five outpatient treatment models for adolescent marijuana use: A description of the cannabis youth treatment interventions. Addiction. 2002;97(Supp 1):70–83.
  • Drug Strategies. Treating Teens: A Guide to Adolescent Drug Programs. Washington, DC: Drug Strategies; 2003.
  • Ducharme LJ, Mello H, Roman PM, Knudsen HK, Johnson JA. Service delivery in substance abuse treatment: Reexamining ‘comprehensive’ care. Journal of Behavioral Health Services & Research. 2007;34:121–136.
  • Etheridge RM, Smith JC, Rounds-Bryant JL, Hubbard RL. Drug abuse treatment and comprehensive services for adolescents. Journal of Adolescent Research. 2001;16:563–589.
  • Flanzer J. The status of health services research on adjudicated drug-abusing juveniles: Selected questions and remaining questions. Substance Use & Misuse. 2005;40:887–911.
  • Henderson CE, Young DW, Jainchill N, Hawke J, Farkas S, Davis RM. Program use of effective drug abuse treatment practices for juvenile offenders. Journal of Substance Abuse Treatment. 2007;32:270–290.
  • Hser YI, Grella CE, Hubbard RL, Hsieh SC, Fletcher BF, Brown BS, Anglin MD. An evaluation of drug treatments for adolescents in 4 US cities. Archives of General Psychiatry. 2001;58:689–695.
  • Johnston LD, O’Malley PM, Bachman JG. Monitoring the Future: National Survey Results on Drug Use, 1975–2001. Vol. 2. Washington, DC: National Institute on Drug Abuse; 2002. (NIH Publication No. 02-5107)
  • Joshi V, Hser Y, Grella CE, Houlton R. Sex-related HIV risk reduction behavior among adolescents in DATOS-A. Journal of Adolescent Research. 2001;16:642–660.
  • Kaminer Y. Challenges and opportunities of group therapy for adolescent substance abuse: A critical review. Addictive Behaviors. 2005;30:1765–1774.
  • Knudsen HK, Ducharme LJ, Roman PM. The use of antidepressant medications in substance abuse treatment: The public-private distinction, organizational compatibility, and the environment. Journal of Health & Social Behavior. 2007;48:195–210.
  • Knudsen HK, Roman PM. Modeling the use of innovations in private treatment organizations: The role of absorptive capacity. Journal of Substance Abuse Treatment. 2004;26:353–361.
  • Knudsen HK, Roman PM, Ducharme LJ. Does service diversification enhance organizational survival? Evidence from the private substance abuse treatment system. Journal of Behavioral Health Services & Research. 2005;32:241–252.
  • Long JS. Regression Models for Categorical and Limited Dependent Variables. Thousand Oaks, CA: Sage; 1997.
  • Long JS, Freese J. Regression Models for Categorical Dependent Variables Using Stata. College Station, TX: StataCorp; 2003.
  • Mark TL, Song X, Vandivort R, Duffy S, Butler J, Coffey R, Schabert VF. Characterizing substance abuse programs that treat adolescents. Journal of Substance Abuse Treatment. 2006;31:59–65.
  • Mee-Lee DL, Gartner L, Miller MM, Shulman GD, Wilford BB. Patient Placement Criteria for the Treatment of Substance-Related Disorders, Second Edition. Chevy Chase, MD: ASAM; 1996.
  • Muck R, Zempolich KA, Titus JC, Fishman M, Godley MD, Schwebel R. An overview of the effectiveness of adolescent substance abuse treatment models. Youth & Society. 2001;33:143–168.
  • National Institute on Drug Abuse. Data from the National Drug and Alcoholism Treatment Utilization Survey (NDATUS): Main Findings for Drug Abuse Treatment Units. Washington, DC: DHHS; 1983. (DHHS Publication No. ADM 83-1284)
  • Olmstead T, Sindelar JL. To what extent are key services offered in treatment programs for special populations? Journal of Substance Abuse Treatment. 2004;27:9–15.
  • Physician Leadership on National Drug Policy. Adolescent Substance Abuse: A Public Health Priority. Providence, RI: Center for Alcohol and Addiction Studies, Brown University; 2002.
  • Riggs PD. Treating adolescents for substance abuse and comorbid psychiatric disorders. Science & Practice Perspectives. 2003;2:18–28.
  • Rowe CL, Liddle HA, Greenbaum PE, Henderson CE. Impact of psychiatric comorbidity on treatment of adolescent drug users. Journal of Substance Abuse Treatment. 2004;26:129–140.
  • Schackman BR, Rojas EG, Gans J, Falco M, Millman RB. Does higher cost mean better quality? Evidence from highly-regarded adolescent drug treatment programs. Substance Abuse Treatment, Prevention, and Policy. 2007;2. doi:10.1186/1747-597X-2-23.
  • Substance Abuse and Mental Health Services Administration (SAMHSA). Treatment Episode Data Set (TEDS): 1993–2003. National Admissions to Substance Abuse Treatment Services. Rockville, MD: SAMHSA; 2004a. (DASIS Series: S-29, DHHS Publication No. SMA 05-4118). Accessed on February 19, 2008.
  • SAMHSA. National Survey of Substance Abuse Treatment Services (N-SSATS): 2003. Data on Substance Abuse Treatment Facilities. Rockville, MD: SAMHSA; 2004b. (DHHS Publication No. SMA 04-3966). Accessed on February 19, 2008.
  • SAMHSA. Results from the 2006 National Household Survey on Drug Use and Health: National Findings (NSDUH Series H-32, DHHS Publication No. SMA 07-4293). Rockville, MD: SAMHSA; 2007a. Accessed on February 22, 2008.
  • SAMHSA. Treatment Episode Data Set (TEDS) Highlights—2006 National Admissions to Substance Abuse Treatment Services (OAS Series #S-40, DHHS Publication No. SMA 08-4313). Rockville, MD: SAMHSA; 2007b. Accessed on February 19, 2008.
  • Waldron HB, Kaminer Y. On the learning curve: The emerging evidence supporting cognitive-behavioral therapies for adolescent substance abuse. Addiction. 2004;99(Suppl 2):93–105.
  • Winters KC. Treating adolescents with substance use disorders: An overview of practice issues and treatment outcome. Substance Abuse. 1999;20:203–225.
  • Winters KC, Stinchfield RD, Opland E, Weller C, Latimer WW. The effectiveness of the Minnesota Model approach in the treatment of adolescent drug abusers. Addiction. 2000;95:601–612.