Prev Sci. Author manuscript; available in PMC 2018 January 1.
Published in final edited form as:
PMCID: PMC5680036

A Meta-Analysis of the Effectiveness of Interactive Middle School Cannabis Prevention Programs


This meta-analysis examines the effectiveness of interactive middle school-based drug prevention programs on adolescent cannabis use in North America, as well as program characteristics that could moderate these effects. Interactive programs, compared to more didactic, lecture style programs, involve participants in skill-building activities and focus on interaction among participants. A systematic literature search was conducted for English-language studies from January 1998 to March 2014. Studies included evaluations using random assignment or a quasi-experimental design of interactive school-based substance use prevention programs delivered to adolescents (aged 12–14) in North American middle schools (grades 6–8). Data were extracted using a coding protocol. The outcomes of interest were post-treatment cannabis use, intent to use, and refusal skills compared across intervention and control groups. Effect sizes (Cohen’s d) were calculated from continuous measures, and dichotomous measures were converted to the d index. A total of 30 studies yielding 23 independent samples were included. The random effects pooled effect size for cannabis use (k=21) was small (d=−0.07, p<0.01) and favorable for the prevention programs. The pooled effect sizes for intention to use (k=3) and refusal skills (k=3) were not significant. Moderator analyses indicated significant differences in program effectiveness between instructor types, with teachers found to be most effective (d =−0.08, p=0.02). The findings provide further support for the use of interactive school-based programs to prevent cannabis use among middle school students in North America.

Keywords: adolescent health, cannabis use, meta-analysis, school-based prevention

Adolescent substance abuse is a critical public health priority in both the United States (US) and Canada. In particular, cannabis use is gaining increasing attention as several US states have passed marijuana legalization laws and there are concerns that prevalence rates of abuse among adolescents may increase (Johnston, O’Malley, Miech, Bachman, & Schulenberg, 2015). According to the 2014 Monitoring the Future survey, the lifetime cannabis use rate for US youth in Grades 8 through 12 was 30.5%, with past-month use continuing to exceed cigarette use among high school seniors, and—for the first time—daily use being higher than cigarette use (Johnston et al., 2015). In Canada, the lifetime cannabis use rate among youth age 15 to 19 years was 25.8%, with the average age of initiation at 15.1 years (Government of Canada, 2015).

Research demonstrates both short and long-term risks associated with cannabis use during the adolescent developmental period. Early cannabis use in adolescence has been linked to advanced drug use and addiction in adulthood (Chen, Storr, & Anthony, 2009; Lopez-Quintero et al., 2011). Moreover, students who use marijuana have been found more likely to have lower grades, less class participation, and poorer attendance rates as compared to non-users (Cox, Zhang, Johnson, & Bender, 2007; Finn, 2012). Marijuana use has also been linked to impairments in cognitive functioning in adolescents through its impacts on working memory performance (Harvey, Sellman, Porter, & Frampton, 2007; Wright, 2015). Some studies also have demonstrated increased risk for developing depression and anxiety later in adulthood after early cannabis use (Patton et al., 2002; Wright, 2015). Therefore, targeting adolescents in middle school may be critical for school prevention efforts that aim to modify expectations, beliefs, and behaviors related to cannabis use (Government of Canada, 2015; Substance Abuse and Mental Health Services Administration, 2014).

Systematic reviews and meta-analyses also support the importance of implementing substance abuse prevention programs during the middle school years (Gottfredson & Wilson, 2003; Norberg, Kezelman, & Lim-Howe, 2013). In particular, previous meta-analyses have found that interactive programs, which involve participants in skill-building and engagement with other participants, are more effective at preventing adolescent substance abuse than didactic programs, which use a lecture style of delivery (Cuijpers, 2002; Faggiano et al., 2008; Porath-Waller, Beasley, & Beirness, 2010; Tobler et al., 2000). Previous findings have also indicated that substance abuse prevention programs delivered by clinicians and peers are more effective (Faggiano et al., 2008; Gottfredson & Wilson, 2003; Norberg et al., 2013; Porath-Waller et al., 2010). To date, however, only two meta-analyses and two systematic reviews have focused specifically on cannabis use (Norberg et al., 2013; Porath-Waller et al., 2010; Tobler, Lessard, Marshall, Ochshorn, & Roona, 1999; White & Pitts, 1998). None of these cannabis-specific reviews discusses the treatment of effects from studies that include more than one treatment condition with a shared control group, a situation that creates statistical dependence between effect sizes (Card, 2012; Scammacca, Roberts, & Stuebing, 2014). Although studies of school-based programs tend to have clustering effects because samples are drawn from more than one school, none of the four reviews of cannabis outcomes reports adjustments for clustering. Moreover, age categories are mixed in other substance abuse prevention program reviews, making it hard to discern the effects of programs delivered specifically during middle school (Norberg et al., 2013; Porath-Waller et al., 2010; Tobler et al., 1999).
Only one meta-analysis of school-based substance abuse prevention programs has distinguished the specific effects of programs on elementary, middle, and high school students (Gottfredson & Wilson, 2003). Given the limitations of these prior reviews, the current body of rigorous and homogeneous studies of interactive, skills-based program designs, and the recent cultural shifts and legislation related to marijuana use and related prevention efforts, the purpose of this meta-analysis was to evaluate the effectiveness of interactive, school-based North American middle school drug prevention programs on adolescent cannabis use, as well as to identify program characteristics that could moderate these effects.


Methods

Methods and findings are reported following the guidelines outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (Moher, Liberati, Tetzlaff, Altman, & the PRISMA Group, 2009). The protocol for this review is available by request from the authors.

Eligibility Criteria

For inclusion in this meta-analysis, studies had to meet the following criteria: 1) evaluate a middle school-based substance use prevention program; 2) report measures of cannabis use; 3) use random assignment or a quasi-experimental design; 4) be conducted in North America; 5) be published in English; 6) be conducted during 1998 or later; 7) present sufficient statistical information to calculate effect sizes indexing the magnitude and direction of effects; 8) include interactive program components, such as skill building and student engagement; and 9) deliver the program to adolescents (aged 12–14) in middle school (grades 6–8). Additional details on inclusion criteria definitions are available online.

Studies were excluded from the meta-analysis for six reasons: 1) the primary program components were delivered mainly in the home or in a community setting; 2) the authors described program components as "extracurricular" activities, such as social groups, athletics, arts, and academic tutoring; 3) the primary program components were delivered in elementary or high school; 4) the only comparison group was another prevention program; 5) the study assessed only program completers rather than using an intent-to-treat strategy; or 6) differential attrition caused the sample groups to be non-equivalent.

Information Sources and Search Strategy

The systematic search strategy followed the recommendations of Card (2012) and Littell and Maynard (2014) to identify relevant published and unpublished studies. The following databases were identified in consultation with a university librarian and searched February 28 through March 6, 2014: Social Work Abstracts (ProQuest); Social Services Abstracts (ProQuest); Sociological Abstracts (ProQuest); PsycINFO (EBSCO); ERIC (EBSCO); Academic Search Complete (EBSCO); PubMed (Medline); Cochrane Library. Eight other databases and websites were queried to identify unpublished, non-peer-reviewed reports: Dissertations and Theses (ProQuest); ERIC (EBSCO); Google Scholar; National Criminal Justice Reference Service Abstracts; National Institute for Health and Clinical Excellence; Office of Juvenile Justice and Delinquency Prevention; RAND Corporation; and Washington State Institute for Public Policy. Query terms were: ((adolescent* OR middle school OR teen*) AND (school*) AND (*drug OR alcohol OR substance) AND (prevention)). Additional studies were identified from citations in prior reviews and eligible studies through April 27, 2015.

Data Collection and Coding

Data were extracted using a coding protocol and form (Card, 2012; Littell & Maynard, 2014). The form included items on study citation, research design, sample characteristics, outcome measures, statistical results, and program characteristics. Program characteristics included dosage (number of sessions); booster sessions delivered in the next semester or grade level (yes/no); delivery setting (during regular school hours, after regular school hours, or both); and instructor type (i.e., teacher, clinician, police officer, and peer with adult). The clinician category included instructors described as persons with clinical, counseling, or substance abuse prevention training who worked as staff or volunteers with a university or social service organization. Also coded were the geographical setting (urban, rural, or mixed); percent male (program group); percent non-White (program group); and low SES, indicated by program participants qualifying for free or reduced-price lunch or coming from a household with median income below the poverty level (yes/no). Treatment and control sample sizes were recorded at baseline and at all follow-ups. Statistical information used to compute effect sizes included treatment and control sample sizes at baseline and follow-up, number of cluster locations (if a clustered design), ICC values, and outcome measure results. To assess the risk of bias in individual studies, the coded study characteristics included: research design (RCT, quasi-experimental); fidelity adherence (reporting how implementation followed program design, yes/no); follow-up time (months); control/comparison condition (no programming, treatment as usual with minimal instruction, or alternative health instruction); and whether the study lacked clarity on how it addressed non-completers (yes/no, explained below).
Some researchers have found that the program developer's involvement in the evaluation as a researcher significantly increases the effect size (Petrosino & Soydan, 2005; Washington State Institute for Public Policy, 2015). Accordingly, studies were coded as developer-involved when the report named the developer as an author. Information not available from the reports was retrieved from other reports or program websites, or requested from authors. During the coding process, the coding team identified five studies that appeared to use an intent-to-treat design but lacked clear details to confirm this. In these cases, the lead authors were contacted, and three confirmed the design. For the studies whose authors did not respond, this uncertainty was treated as a variable to explore risk of bias.

A university faculty member and two graduate students coded the studies. First, training involved all researchers coding six studies. Afterwards, two graduate students coded remaining studies, and the lead author reviewed these for accuracy and consistency. The coding team met weekly and resolved discrepancies by group consensus in order to attain full agreement on all coding. The coding team entered the information from each form into an Excel database. The lead researcher reviewed all data entered into the database to ensure accuracy.

Synthesis of Results

Computing effect sizes

Cannabis use outcome measures included initiation (ever used cannabis or used in the past year); recent use (amount of cannabis used or any used in the past month or week); intent to use (likelihood of using during the next week or month); and refusal skills (ability to resist offers to use). Program effects were measured by computing the standardized mean difference effect size (d), standardized by the pooled standard deviation of the treatment and comparison groups (Card, 2012; Lipsey & Wilson, 2001). Wherever possible, the sample sizes reported with the outcome statistics were used rather than the initial counts at baseline. Equations reported in Card (2012) and Lipsey and Wilson (2001) were used for computing effect sizes from a variety of continuous measures. The Cox transformation was used with dichotomous outcomes to approximate the standardized mean effect size (Sánchez-Meca, Marín-Martinez, & Chacón-Moscoso, 2003), computed using procedures discussed in Washington State Institute for Public Policy (2015). An effect size was used as reported in a study only if sufficient information was provided to confirm that the method was the same as that used in the present meta-analysis. Effect sizes from studies that reported no significant effects (without any other statistical information) were coded by assigning a value of 0.0 with a one-tailed p-value of 0.5, as recommended by Rosenthal (1995). A negative sign was assigned to effect sizes to denote that the treatment group had lower use and intent-to-use outcomes than the control group. A positive effect size for refusal skill outcomes denotes treatment groups scoring higher on refusal skills than control groups.
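The two computations described above (a standardized mean difference from continuous measures, and the Cox transformation of an odds ratio from dichotomous measures) can be sketched as follows. This is a minimal illustration rather than the authors' code; the function names are ours.

```python
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference, using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def cox_d(p_t, p_c):
    """Approximate d for a dichotomous outcome via the Cox transformation,
    d = ln(odds ratio) / 1.65 (Sanchez-Meca et al., 2003)."""
    odds_ratio = (p_t / (1 - p_t)) / (p_c / (1 - p_c))
    return math.log(odds_ratio) / 1.65

# Hypothetical example: 10% past-month use in the program group vs. 15%
# among controls yields a small negative d, favoring the program.
print(round(cox_d(0.10, 0.15), 3))  # -0.28
```

With these sign conventions, a negative d for use outcomes favors the prevention program, matching the convention used throughout the paper.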

Statistical analyses

Meta-analyses and moderator and risk of bias analyses were performed using Comprehensive Meta-Analysis (CMA) Version 2 (Borenstein, Hedges, Higgins, & Rothstein, 2005). Effect size calculations and descriptive analyses were performed using Microsoft Excel. Effect sizes from subgroups and multiple follow-up points were averaged into single effect sizes for each sample using fixed effects models with inverse variance weights to account for differences in subgroup sizes. Additional details about multiple studies contributing to effect sizes from the same samples are available online. To ensure statistical independence of effect sizes in analyses, the initiation measure was selected when studies reported both initiation and recent use. Individual effect sizes were combined in CMA using random effects models employing inverse variance weights for each effect size (Card, 2012; Lipsey & Wilson, 2001). Random effects models were used to account for possible variation other than sampling error among effect sizes as well as heterogeneity in programs and participants (Card, 2012; Raudenbush, 2009). Homogeneity was tested using the Q statistic, and the I-squared (I2) statistic was computed to describe the proportion of variation across studies due to heterogeneity (Higgins & Thompson, 2002).
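The pooling procedure described above can be illustrated with a short sketch of DerSimonian-Laird random-effects pooling, including the Q and I2 statistics. This is a simplified stand-in for what CMA computes, not the authors' code, and the function name is ours.

```python
import math

def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling with inverse-variance
    weights; returns the pooled d, its standard error, Q, tau^2, and I^2."""
    w = [1.0 / v for v in variances]
    d_fixed = sum(wi * di for wi, di in zip(w, effects)) / sum(w)
    # Q: weighted squared deviations from the fixed-effect mean
    q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0   # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]       # random-effects weights
    d_pooled = sum(wi * di for wi, di in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    # I^2: share of total variation attributable to heterogeneity
    i2 = max(0.0, (q - df) / q * 100.0) if q > 0 else 0.0
    return d_pooled, se, q, tau2, i2
```

When all studies report identical effects, Q and tau2 collapse to zero and the random-effects pool reduces to the fixed-effect mean, which is why the I2 of 0% reported later for the intent and refusal outcomes indicates no study heterogeneity.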

Treating sample sizes for clustered studies

Cluster or nested designs are commonly used in school-based studies, in which individual students are the units of analysis but assignment to treatment or comparison conditions occurs by school or district. Because students within the same school or district tend to be more similar to one another than to students elsewhere, ignoring this clustering understates the standard error and narrows the confidence interval of the estimated program effect. To account for this, an effective sample size was estimated for each clustered sample to avoid inflating the precision of the effect size (McKenzie, Ryan, & Di Tanna, 2014). Many studies reported an intracluster correlation coefficient (ICC), which was used to estimate the effective sample size. For studies that did not report an ICC, the average ICC value for each outcome was estimated from a sample of ICC scores that the authors compiled from the ICCs reported in the included studies, as well as those reported in separate analyses (Murray & Hannan, 1990; Scheier, Griffin, Doyle, & Botvin, 2002).
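The effective-sample-size adjustment amounts to dividing each group's n by the design effect, 1 + (m - 1) * ICC, where m is the average cluster size. The sketch below is our own illustration of this standard formula, not the authors' code.

```python
def effective_n(n, avg_cluster_size, icc):
    """Shrink a clustered sample's n by the design effect
    DEFF = 1 + (m - 1) * ICC, where m is the average cluster size."""
    deff = 1.0 + (avg_cluster_size - 1.0) * icc
    return n / deff

# Hypothetical example: 1,000 students in classes averaging 25 students,
# with ICC = 0.02, gives an effective sample of about 676.
print(round(effective_n(1000, 25, 0.02)))  # 676
```

Even a small ICC materially reduces the effective n when clusters are large, which is why the adjustment matters for school-based trials.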

Moderator analysis

Moderator analyses were conducted on the pooled effect size for any cannabis use with the program and study characteristic variables described above. Mixed effects models were used for categorical variables, and meta-regression analyses with method of moments models were used for continuous moderators (Borenstein et al., 2005). The between-group heterogeneity (QBetween) was estimated to assess the reliability of moderation of effect sizes (Card, 2012). The Tau-squared (τ2) statistic was also computed to show the variance of the random effects distribution. For categorical variables with more than two categories, separate analyses were conducted on each category to assess the degree of variation between types. A Holm-Bonferroni-adjusted alpha criterion was used to control for Type I error when interpreting the statistical significance of the Q statistic (Holm, 1979). Risk of bias in individual studies was assessed with bivariate moderator analysis using random effects models on five study characteristics: study design (RCT or quasi-experimental); fidelity adherence (yes/no); follow-up length (number of months); comparison group condition; and whether an intent-to-treat design could not be confirmed.
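For a categorical moderator, the between-group heterogeneity statistic compares each category's pooled effect against the grand weighted mean. The following is a simplified sketch of that idea (our illustration; CMA's mixed-effects computation additionally folds the random-effects variance into the weights):

```python
def q_between(subgroup_ds, subgroup_vars):
    """Between-group heterogeneity for a categorical moderator: each
    subgroup's pooled effect is weighted by the inverse of its variance
    and compared against the grand weighted mean."""
    w = [1.0 / v for v in subgroup_vars]
    grand = sum(wi * di for wi, di in zip(w, subgroup_ds)) / sum(w)
    return sum(wi * (di - grand) ** 2 for wi, di in zip(w, subgroup_ds))

# Identical subgroup effects give QBetween = 0 (no moderation); under the
# null, QBetween is referred to a chi-square distribution with
# (number of subgroups - 1) degrees of freedom.
```

A large QBetween relative to its degrees of freedom, as reported later for instructor type, indicates that subgroup effects differ more than sampling error alone would predict.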


Results

The search of eight electronic databases yielded 8,303 records for the broader review of substance use prevention programs (Figure 1). A total of 7,056 records remained for screening after removing duplicates and adding 56 records identified from other sources. After review of titles and abstracts, 6,412 records were excluded. Another 424 records were excluded after screening full-text articles, and two articles could not be obtained. The remaining 218 studies were coded. After coding, 47 records were identified for study inclusion, of which 30 studies yielding 23 independent samples had cannabis outcomes and were included in the present meta-analysis. Of the 188 excluded studies, four were not in English or not from North America, 66 were not impact evaluations, 57 did not include predominantly middle-school youth, 29 had no cannabis or other drug or alcohol use outcomes, 20 were conducted before 1998, and 12 had methodological problems (such as differential attrition or insufficient statistical data).

Figure 1
Flow Diagram for Studies Included in the Meta-Analysis

Study Characteristics

Study design and participants

Table 1 presents descriptive information on all studies included in the meta-analysis. All but one study were from the US, with the remaining study from Canada (DeWit et al., 2000). One study (D’Amico & Edelen, 2007) did not use random assignment of participants; its authors used matched samples based on demographic variables and baseline substance use measures to generate a weighted sample of students from a control school that was comparable with the sample of participants in the program school. The follow-up times ranged from immediate post-test in three studies to 72 months in one study (M=16 months, SD=17). Two studies reported only intent and refusal skills outcomes (Clark, Ringwalt, Hanley, & Shamblen, 2010; Longshore, Ellickson, McCaffrey, & Clair, 2007). All but two studies included clear descriptions of how the program was implemented with fidelity to the design. All but two studies used a confirmed intent-to-treat design. Demographic information about participants is reported in Table 1 for the sample sizes at the time of program delivery. Sample sizes varied from 42 to 5,756 (M=1,613, SD=1,672). The gender distribution was relatively equal (M=49%, SD=5%). The racial composition varied across studies, with the percentage of participants identifying as non-White ranging from 3% to 99% (M=50%, SD=31%). Additionally, 46% of studies reported that some percentage of participants came from low socioeconomic status backgrounds.

Table 1
Description of Study Characteristics and Effect Sizes Included in the Meta-Analysis

Program characteristics

Table 2 presents the program characteristics from each study. Nine studies were conducted with students in predominantly urban locations, four in rural locations, and ten in mixed settings. A variety of curricula were represented: five studies implemented Project ALERT, three implemented Life Skills Training, three implemented All Stars, and the remaining twelve implemented different program curricula. The delivery setting was a combination of during- and after-school activities in four studies, after school in two studies, and during school in the remaining 17 studies. A clinician delivered the program in nine studies; a police officer, a trained adult with a peer leader, or a community volunteer led the program in three studies; and teachers delivered the curricula in the remaining ten studies. The number of sessions ranged from 3 to 40 (M=17, SD=9). Booster sessions given six months to one year or later occurred in 10 studies. Participants were compared to students who received no programming in 10 studies, health information in two studies, or treatment as usual in the remaining 11 studies.

Table 2
Description of Program Characteristics from Studies Included in the Meta-Analysis

Risk of Bias within Studies

Categorical moderator analyses assessed clarity of intent-to-treat design, comparison condition, evidence of fidelity, and developer involvement in the evaluation. Effect sizes differed across these categories, but none of the differences was significantly associated with effect size moderation, as indicated by the between-group heterogeneity values. A meta-regression analysis of the number of follow-up months on effect size was also not significant. These results suggest no evidence of bias due to study quality or characteristics.

Individual Study Results and Synthesis

The effect sizes for individual studies are summarized in the forest plots shown in Figures 2 through 4 (for cannabis use, intent to use, and refusal skills, respectively), showing each study in the meta-analysis with follow-up length in months, standardized mean differences with confidence intervals, and the pooled result calculated with random effects weights. Two studies had medium effect sizes, and the rest had small effects. Two studies had no effect, and four studies had negative effects favoring the control groups. The overall program effect on adolescent cannabis use was small and significant, favoring the programs (Figure 2). The random-effects weighted mean effect size for any cannabis use (k=21) was d=−0.07, 95% CI [−0.12, −0.02], p<0.01. The homogeneity test suggested marginal heterogeneity (τ2=0.004, Q=29.78, df=20, p=0.07, I2=32.83%). For the one study that used a quasi-experimental design rather than random assignment (D’Amico & Edelen, 2007), the effect size for cannabis use (d=−0.36, 95% CI [−0.73, 0.01], p=0.07) exceeded the pooled effect size from studies using random assignment (d=−0.07, 95% CI [−0.11, −0.02], p<0.01). The pooled result for intention to use cannabis (k=3) was small and did not achieve statistical significance (d=−0.046, 95% CI [−0.10, 0.01], p=0.09) (Figure 3). The pooled result for refusal skill outcomes (k=3) was also not statistically significant (d=0.01, 95% CI [−0.02, 0.05], p=0.49) (Figure 4). The homogeneity tests were null and the I2 statistic was 0% for the latter two outcomes, indicating no study heterogeneity.

Figure 2
Forest Plot Showing Effect Size (Standardized Mean Differences) and 95% Confidence Interval with Random Effects Mean for Cannabis Use Outcomes
Figure 3
Forest Plot Showing Effect Size (Standardized Mean Differences) and 95% Confidence Interval with Random Effects Mean for Intention to Use Cannabis Outcomes
Figure 4
Forest Plot Showing Effect Size (Standardized Mean Differences) and 95% Confidence Interval with Random Effects Mean for Refusal Skills to Resist Cannabis Use Outcomes

The possibility of publication bias was assessed first by visually inspecting funnel plots of standard error by standardized difference in means for the three outcome measures. The funnel plot for the any-cannabis-use outcome was symmetrical, suggesting no indication of study bias. A regression test of the association between study effect sizes and sample sizes was not statistically significant (b=−0.45, SE=0.47, p=0.35). A trim and fill analysis (Card, 2012; Duvall, 2005) trimmed one study to the right of the mean, but the change in the pooled effect was trivial (d=−0.06, 95% CI [−0.11, −0.02]). Publication bias tests for the intention to use and refusal skills outcomes were omitted because there were too few cases for assessment.

Results of the moderator analysis indicated significant differences only for program instructor type (QBetween=11.40, df=4, p=0.02). The within-group weighted mean effect sizes for teachers (d=−0.08, 95% CI [−0.15, −0.01], p=0.02) and clinicians (d=−0.10, 95% CI [−0.20, −0.01], p=0.04) significantly favored reduced cannabis use. The clinician category included the one study that used a quasi-experimental design rather than random assignment (D’Amico & Edelen, 2007). When this single study was removed, the effect size for clinicians became smaller and non-significant (d=−0.08, p=0.10), while the effect for teachers remained significant (d=−0.08, p=0.02). The one program delivered by a youth peer with an adult significantly favored increased cannabis use (d=0.62, 95% CI [0.18, 1.06], p=0.01), with significant differences between categories (QBetween=9.53, df=1, p<0.01). The instructor categories, however, explained a trivial proportion of variation, with I2=13.73% in the clinician category and I2=0.0% in the other categories. None of the other moderator variables was statistically significant.

Discussion and Conclusions

This meta-analysis is the first to examine the effectiveness of interactive North American school-based substance use prevention programs delivered exclusively during middle school on cannabis outcomes. The overall effect size suggests that interactive middle school-based programs can delay or prevent cannabis use among North American adolescents. This finding is consistent with earlier reviews (Norberg et al., 2013; Porath-Waller et al., 2010). As such, interactive cannabis use prevention programs offered during middle school present an effective strategy to consider, particularly as rates of cannabis use have been found to increase with age. The small overall effect size, however, may signal the need for future studies to further explore the linkage between program theory, program content, and program exposure to help identify what may be contributing to these limited program effects. While the effect size is small, it is still important to consider the practical significance of even a small effect (McCartney & Rosenthal, 2000), especially if these more effective substance use programs were implemented on a large scale.

The type of program instructor appears to moderate program effectiveness in reducing cannabis use. Delivery of these programs by teachers was found to be significantly more effective, and there was a large negative effect for delivery by youth peers with adults. Both of these findings contradict prior reviews. However, the finding related to youth peers should not be generalized, since it was based on the single study with an effect size in the peer category. This signals the need for studies that further explore how instructor type may influence differences in program effectiveness, especially given considerations around scalability and cost-effectiveness for schools that implement prevention programming. No other moderating factors were found to be significant. One explanation is that the interactive programs included in this study had very similar theoretical and delivery design characteristics. The marginal differences in effects might be explained by other factors that were either not coded (e.g., single vs. multiple modality with family or community-based components) or external to the programs (e.g., exposure to other drug prevention or health information).

Some study limitations should be considered. Non-significant moderator analysis results may be due to the small number of studies, which reduced statistical power to detect effects (Card, 2012). There are many ways to assess research quality, and the present meta-analysis did so by testing individual research components for risk of bias assessment rather than by scoring items in a scale. In addition, all of the studies utilized self-report measures of outcomes. Self-report accuracy could have differed across conditions in these studies and thus influenced the results of those studies, and ultimately, the inferences made in this meta-analysis. This study also did not code programs based on whether they were universal, selective, or indicated. Therefore, differences based on this typology could not be explored. Finally, the present review focused on programs implemented within the school setting.

As Norberg et al. (2013) show, effects of school-based cannabis prevention programs may differ by modality and external components, such as family and community-based features that potentially enhance prevention. As major cultural shifts continue related to acceptance of marijuana use, adolescent substance abuse prevention still remains a critical priority in the US and Canada. School-based programs can be a cost-effective component of wider prevention strategies (Lemon, Pennucci, Hanley, & Aos, 2014). While more rigorous studies continue to be needed that evaluate the impact of these programs, the present meta-analysis provides further support for the use of interactive programs in middle schools to prevent cannabis use by adolescents in North America.

Supplementary Material



This project was supported by contract number A201611015A with the South Carolina Department of Health and Human Services (SCDHHS). This work was also supported by the National Institute on Drug Abuse (not cited for blind copy). Points of view in this document are those of the authors and do not necessarily represent the official position or policies of SCDHHS or the National Institutes of Health.


Compliance with Ethical Standards

Disclosure of potential conflicts of interest

The authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Informed consent

Not applicable -- for this type of study formal consent is not required.


Note: References with an asterisk (*) indicate studies included in the meta-analysis.

* Apsler R, Formica S, Fraster B, McMahan R. Promoting positive adolescent development for at-risk students with a student assistance program. The Journal of Primary Prevention. 2006;27(6):533–554. [PubMed]
* Aseltine RJ, Dupre M, Lamlein P. Mentoring as a drug prevention strategy: an Evaluation of Across Ages. Adolescent and Family Health. 2000;1(1):11–20.
* Bacon TP, Hall BW, Ferron JM. Technical Report: One Year Study of the Effects of the Too Good for Drugs Prevention on Middle School Students. 2013 Unpublished manuscript.
Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. Comprehensive Meta-Analysis. Englewood, NJ: Biostat; 2005.
* Botvin GJ, Griffin KW, Diaz T, Ifill-Williams M. Drug abuse prevention among minority adolescents: Posttest and one-year follow-up of a school-based preventive intervention. Prevention Science. 2001;2(1):1–13. [PubMed]
Card NA. Applied Meta-Analysis for Social Science Research. New York: The Guilford Press; 2012.
Chen CY, Storr CL, Anthony JC. Early-onset drug use and risk for drug dependence problems. Addictive Behaviors. 2009;34(3):319–322. [PMC free article] [PubMed]
* Clark HK, Ringwalt CL, Hanley S, Shamblen SR. Project ALERT’s effects on adolescents’ prodrug beliefs: A replication and extension study. Health Education & Behavior. 2010;37(3):357–376. [PubMed]
Cox RG, Zhang L, Johnson WD, Bender DR. Academic performance and substance use: findings from a state survey of public high school students. Journal of School Health. 2007;77(3):109–115. [PubMed]
Cuijpers P. Effective ingredients of school-based drug prevention programs: A systematic review. Addictive Behaviors. 2002;27(6):1009–1023. [PubMed]
* D’Amico EJ, Edelen MO. Pilot test of Project CHOICE: A voluntary afterschool intervention for middle school youth. Psychology of Addictive Behaviors. 2007;21(4):592–598. [PubMed]
* DeWit DJ, Steep B, Silverman G, Stevens-Lavigne A, Ellis K, Smythe C, … Wood E. Evaluating an in-school drug prevention program for at-risk youth. Alberta Journal of Educational Research. 2000;46(2):117–133.
Duval S. The trim and fill method. In: Rothstein HR, Sutton AJ, Borenstein M, editors. Publication bias in meta-analysis: Prevention, assessment, and adjustments. Hoboken, NJ: Wiley; 2005. pp. 127–144.
* Eisen M. Intermediate outcomes from a life skills education program with a media literacy component. In: Mass media and drug prevention. Claremont Symposium on Applied Social Psychology; 2001 Nov. pp. 187–214.
* Eisen M, Zellman GL, Murray DM. Evaluating the Lions-Quest "Skills for Adolescence" drug education program: Second-year behavior outcomes. Addictive Behaviors. 2003;28(5):883–897. [PubMed]
* Eisen M, Zellman GL, Massett HA, Murray DM. Evaluating the Lions-Quest "Skills for Adolescence" drug education program: First-year behavior outcomes. Addictive Behaviors. 2002;27:619–632. [PubMed]
* Ellickson PL, McCaffrey DF, Ghosh-Dastidar B, Longshore DL. New inroads in preventing adolescent drug use: Results from a large-scale trial of Project ALERT in middle schools. American Journal of Public Health. 2003;93(11):1830–1836. [PubMed]
Faggiano F, Vigna-Taglianti FD, Versino E, Zambon A, Borraccino A, Lemma P. School-based prevention for illicit drugs' use: A systematic review. Preventive Medicine. 2008;46(5):385–396. [PubMed]
Finn KV. Marijuana use at school and achievement-linked behaviors. High School Journal. 2012;95(3):3–13.
* Fosco GM, Frank JL, Stormshak EA, Dishion TJ. Opening the "black box": Family Check-Up intervention effects on self-regulation that prevents growth in problem behavior and substance use. Journal of School Psychology. 2013;51(4):455–468. [PMC free article] [PubMed]
* Ghosh-Dastidar B, Longshore DL, Ellickson PL, McCaffrey DF. Modifying pro-drug risk factors in adolescents: Results from Project ALERT. Health Education & Behavior. 2004;31(3):318–334. [PubMed]
* Gottfredson DC, Cross A, Wilson D, Rorie M, Connell N. An experimental evaluation of the All Stars prevention curriculum in a community after school setting. Prevention Science. 2010;11(2):142–154. [PubMed]
Gottfredson DC, Wilson DB. Characteristics of effective school-based substance abuse prevention. Prevention Science. 2003;4(1):27–38. [PubMed]
Government of Canada. Canadian Tobacco, Alcohol and Drugs Survey (CTADS) 2015. Table 8. Illicit drug use (past 12 months and lifetime) by sex and age group, 2013.
* Griffin KW, Botvin GJ, Nichols TR, Doyle MM. Effectiveness of a Universal drug abuse prevention approach for youth at high risk for substance use initiation. Preventive Medicine. 2003;36(1):1–7. [PubMed]
* Griffin JP, Jr, Holliday RC, Frazier E, Braithwaite RL. The BRAVE (Building Resiliency and Vocational Excellence) Program: evaluation findings for a career-oriented substance abuse and violence preventive intervention. Journal of Health Care for the Poor and Underserved. 2009;20(3):798–816. [PubMed]
Harvey M, Sellman J, Porter R, Frampton C. The relationship between non-acute adolescent cannabis use and cognition. Drug and Alcohol Review. 2007;26(3):309–319. [PubMed]
Hazen E, Schlozman S, Beresin E. Adolescent psychological development: a review. Pediatrics in Review. 2008;29(5):161–8. [PubMed]
* Hecht ML, Graham JW, Elek E. The drug resistance strategies intervention: Program effects on substance use. Health Communication. 2006;20(3):267–276. [PubMed]
* Hecht ML, Marsiglia FF, Elek E, Wagstaff DA, Kulis S, Dustman P, Miller-Day M. Culturally grounded substance use prevention: an evaluation of the Keepin’ it REAL curriculum. Prevention Science. 2003;4(4):233–248. [PubMed]
Higgins JPT, Thompson SG. Quantifying heterogeneity in a meta-analysis. Statistics In Medicine. 2002;21(11):1539–1558. [PubMed]
Holm S. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics. 1979;6(2):65–70.
Johnston LD, O’Malley PM, Miech RA, Bachman JG, Schulenberg JE. Monitoring the Future National Survey Results on Drug Use: 1975–2014: Overview, Key Findings on Adolescent Drug Use. Institute for Social Research, the University of Michigan; Ann Arbor: 2015.
Lemon M, Pennucci A, Hanley S, Aos S. Preventing and treating youth marijuana use: An updated review of the evidence. Olympia, WA: Washington Institute for Public Policy; 2014. (Doc. No. 14-10-3201)
Lipsey MW, Wilson DB. Practical Meta-Analysis. Thousand Oaks, CA: Sage; 2001.
Littell JH, Maynard BR. Systematic review methods: The science of research synthesis. Preconference workshop presented at the Society for Social Work Research Conference; San Antonio, TX; January 16, 2014.
* Longshore D, Ellickson PL, McCaffrey DF, Clair PAS. School-based drug prevention among at-risk adolescents: Effects of ALERT plus. Health Education & Behavior. 2007;34(4):651–668. [PubMed]
Lopez-Quintero C, Pérez de los Cobos J, Hasin DS, Okuda M, Wang S, Grant BF, Blanco C. Probability and predictors of transition from first use to dependence on nicotine, alcohol, cannabis, and cocaine: Results of the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) Drug and Alcohol Dependence. 2011;115(1):120–130. [PMC free article] [PubMed]
McKenzie J, Ryan R, Di Tanna GL. Cluster randomised controlled trials. Melbourne: Cochrane Consumers and Communication Review Group; Mar, 2014. Retrieved from
McCartney K, Rosenthal R. Effect size, practical importance, and social policy for children. Child development. 2000;71(1):173–180. [PubMed]
* McNeal RB, Hansen WB, Harrington NG, Giles SM. How All Stars works: An examination of program effects on mediating variables. Health Education & Behavior. 2004;31(2):165–178. [PubMed]
Moher D, Liberati A, Tetzlaff J, Altman DG. PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Annals of Internal Medicine. 2009;151(4):264–269. [PubMed]
Murray DM, Hannan PJ. Planning for the appropriate analysis in school-based drug-use prevention studies. Journal of Consulting and Clinical Psychology. 1990;58(4):458–468. [PubMed]
Norberg MM, Kezelman S, Lim-Howe N. Primary prevention of cannabis use: a systematic review of randomized control trials. PLoS ONE. 2013;8(1):e53187. [PMC free article] [PubMed]
* Parent AP. Doctoral dissertation. Rutgers University-Graduate School of Applied and Professional Psychology; NJ: 2010. Effects of a comprehensive substance use prevention program with urban adolescents.
Patton GC, Coffey C, Carlin JB, Degenhardt L, Lynskey M, Hall W. Cannabis use and mental health in young people: cohort study. British Medical Journal. 2002;325(7374):1195–1198. [PMC free article] [PubMed]
Petrosino A, Soydan H. The impact of program developers as evaluators on criminal recidivism: Results from meta-analyses of experimental and quasi-experimental research. Journal of Experimental Criminology. 2005;1(4):435–450.
Porath-Waller AJ, Beasley E, Beirness DJ. A meta-analytic review of school- based prevention for cannabis use. Health Education & Behavior. 2010;37(5):709–723. [PubMed]
Raudenbush SW. Analyzing effect sizes: random-effects models. In: Cooper H, Hedges LV, Valentine JC, editors. The handbook of research synthesis and meta-analysis. 2. New York: Russell Sage Foundation; 2009. pp. 295–315.
* Ringwalt C, Clark HK, Hanley S, Shamblen SR, Flewelling RL. Project ALERT: A cluster randomized trial. Archives of Pediatrics & Adolescent Medicine. 2009;163(7):625–632. [PubMed]
* Ringwalt CL, Clark HK, Hanley S, Shamblen SR, Flewelling RL. The effects of Project ALERT one year past curriculum completion. Prevention Science. 2010;11(2):172–184. [PubMed]
Rosenthal R. Writing meta-analytic reviews. Psychological Bulletin. 1995;118(2):183–192.
Sánchez-Meca J, Marín-Martinez F, Chacón-Moscoso S. Effect-size indices for dichotomized outcomes in meta-analysis. Psychological Methods. 2003;8(4):448–467. [PubMed]
Scammacca N, Roberts G, Stuebing KK. Meta-analysis with complex research designs: dealing with dependence from multiple measures and multiple group comparisons. Review of Educational Research. 2014;84(3):328–364. [PMC free article] [PubMed]
Scheier LM, Griffin KW, Doyle MM, Botvin GJ. Estimates of intragroup dependence for drug use and skill measures in school-based drug abuse prevention trials: An empirical study of three independent samples. Health Education & Behavior. 2002;29(1):85–103. [PubMed]
* Slater MD, Kelly KJ, Edwards RW, Thurman PJ, Plested BA, Keefe TJ, … Henry KL. Combining in-school and community-based media efforts: reducing marijuana and alcohol uptake among younger adolescents. Health Education Research. 2006;21(1):157–167. [PubMed]
* Sloboda Z, Stephens RC, Stephens PC, Grey SF, Teasdale B, Hawthorne RD, … Marquette JF. The Adolescent Substance Abuse Prevention Study: A randomized field trial of a universal substance abuse prevention program. Drug and Alcohol Dependence. 2009;102(1):1–10. [PubMed]
* Smith EA, Swisher JD, Vicary JR, Bechtel LJ, Minner D, Henry KL, Palmer R. Evaluation of Life Skills Training and Infused-Life Skills Training in a rural setting: outcomes at two years. Journal of Alcohol & Drug Education. 2004;48(1):51–70.
* Spoth RL, Randall GK, Trudeau L, Shin C, Redmond C. Substance use outcomes 5½ years past baseline for partnership-based, family-school preventive interventions. Drug and Alcohol Dependence. 2008;96(1):57–68. [PMC free article] [PubMed]
* St Pierre TL, Osgood DW, Mincemoyer CC, Kaltreider DL, Kauh TJ. Results of an independent evaluation of Project ALERT delivered in schools by cooperative extension. Prevention Science. 2005;6(4):305–317. [PubMed]
Substance Abuse and Mental Health Services Administration. Results from the 2013 National Survey on Drug Use and Health: Summary of National Findings, NSDUH Series H-48. Rockville, MD: Author; 2014. HHS Publication No. (SMA) 14-4863.
Tobler NS, Roona MR, Ochshorn P, Marshall DG, Streke AV, Stackpole KM. School-based adolescent drug prevention programs: 1998 meta-analysis. Journal of Primary Prevention. 2000;20(4):275–336.
Tobler NS, Lessard T, Marshall D, Ochshorn P, Roona M. Effectiveness of school-based drug prevention programs for marijuana use. School Psychology International. 1999;20(1):105–137.
* Turner-Musa JO, Rhodes WA, Harper PH, Quinton SL. Hip-Hop to prevent substance use and HIV among African-American youth: a preliminary investigation. Journal of Drug Education. 2008;38(4):351–365. [PubMed]
* Vicary JR, Henry KL, Bechtel LJ, Swisher JD, Smith EA, Wylie R, Hopkins AM. Life skills training effects for high and low risk rural junior high school females. Journal of Primary Prevention. 2004;25(4):399–416.
* Vicary JR, Smith EA, Swisher JD, Hopkins AM, Elek E, Bechtel LJ, Henry KL. Results of a 3-year study of two methods of delivery of Life Skills Training. Health Education & Behavior. 2006;33(3):325–339. [PubMed]
Washington State Institute for Public Policy. Benefit-cost technical documentation. Olympia, WA: Author; Dec, 2015. Retrieved from
White D, Pitts M. Educating young people about drugs: a systematic review. Addiction. 1998;93(10):1475–87. [PubMed]
Wright MJ. Legalizing marijuana for medical purposes will increase risk of long-term, deleterious consequences for adolescents. Drug and Alcohol Dependence. 2015;149:298–303. [PubMed]