Prev Sci. Author manuscript; available in PMC 2011 June 21.
Published in final edited form as:
Published online 2006 December 19. doi: 10.1007/s11121-006-0056-z
PMCID: PMC3119706
NIHMSID: NIHMS303006

Fidelity of Implementation in Project Towards No Drug Abuse (TND): A Comparison of Classroom Teachers and Program Specialists

Abstract

This paper presents the results of an effectiveness trial of Project Towards No Drug Abuse (TND), in which we compared program delivery by regular classroom teachers and program specialists within the same high schools. Within 18 schools that were randomly assigned to the program or control conditions, health classrooms were assigned to program delivery by teachers or (outside) specialists. Classroom sessions were observed by pairs of observers to assess three domains of implementation fidelity: adherence, classroom process, and perceived student acceptance of the program. Pre- and immediate posttest survey data were collected from 2331 students. Of the four composite indexes of implementation fidelity that were examined, only one (quality of delivery) showed a difference between specialists and teachers, with marginally higher ratings of specialists (p < .10). Both teachers and program specialists achieved effects on three of the five immediate outcome measures, including program-specific knowledge, addiction concern, and social self-control. Students’ posttest ratings of the program overall and the quality of program delivery failed to reveal differences between the teacher- and specialist-led classrooms. These results suggest that motivated, trained classroom teachers can implement evidence-based prevention programs with fidelity and achieve immediate effects.

Keywords: Substance abuse, Fidelity, Program providers, Implementation

Introduction

There is now substantial empirical evidence indicating that a number of school-based programs are effective in preventing or reducing substance abuse among adolescents (Tobler & Stratton, 1997; Tobler, Roona, Ochshorn, Marshall, Streke, & Stackpole, 2000). The body of evidence for substance abuse prevention has been generated primarily through efficacy trials, which employ randomized controlled designs and other rigorous methods to determine whether specific approaches reduce substance use and abuse (Clarke, 1995; Flay, 1986; Green & Glasgow, 2006; Glasgow, Lichtenstein, & Marcus, 2003). Efficacy research, however, often has limited application to “real-world” settings (Glasgow et al., 2003). To determine how efficacious programs will work in typical school contexts, effectiveness trials must be conducted (Flay, 1986; Greenwald & Cullen, 1985).

Two research questions that are pertinent to effectiveness and dissemination studies are: (1) Who should deliver the prevention program? and (2) How much and what type of training is required for providers to implement the program successfully? In efficacy trials, researchers carefully select, train, and supervise program implementers in order to ensure standardization and high quality of program delivery (e.g., Hansen, Graham, Wolkenstein, & Rohrbach, 1991). Typically, the implementers are health education specialists, mental health specialists (e.g., counselors or clinicians), or certified teachers who are employed solely to deliver the program being evaluated rather than having multiple competing responsibilities (Green & Glasgow, 2006). In contrast, in effectiveness trials programs generally are delivered by classroom teachers or other appropriate school staff (e.g., nurses or counselors) who have many other responsibilities, and who volunteer or are selected by their school administrators to participate in the study (e.g., Basen-Engquist, O'Hara-Tompkins, Lovato, Lewis, Parcel, & Gingiss, 1994; Botvin, Baker, Dusenbury, Tortu, & Botvin, 1990; Rohrbach, Graham, & Hansen, 1993).

Another difference between efficacy trials and typical school contexts relates to the training provided to implementers prior to and during program delivery. Typically, in efficacy trials the training approach is more extensive, including instruction on the theoretical rationale for the program, demonstration and practice of the skills required to teach the program, and ongoing coaching or technical assistance (Joyce & Showers, 1980; Tortu & Botvin, 1989). Sometimes, it also involves observation of program delivery by experienced specialists prior to implementation (e.g., Hansen et al., 1991; Rohrbach, English, Hansen, & Johnson, 2000). When prevention programs are transported to typical school settings, one barrier commonly encountered is that teachers have little time available for training (Rohrbach, D'Onofrio, Backer, & Montgomery, 1996). Usually, teacher training is limited to one- or two-day workshops that present an overview of the program theory, content, and teaching methods, but provide few or no opportunities for practice (e.g., Basen-Engquist et al., 1994; Perry, Murray, & Griffin, 1990; Rohrbach, Graham, & Hansen, 1993; Tortu & Botvin, 1989).

A critical issue in effectiveness research is the extent to which regular classroom teachers will implement evidence-based prevention programs with fidelity to the program design. The importance of program fidelity has been demonstrated (Dusenbury, Brannigan, Falco, & Hansen, 2003); low fidelity results in smaller or no program effects on behavioral outcomes (Botvin et al., 1990; Botvin, Baker, James-Ortiz, Botvin, & Kerner, 1992) and program mediators (Resnicow et al., 1998; Rohrbach et al., 1993). However, little is known about the most effective models for improving teachers’ implementation fidelity. The few studies that have addressed this question have suggested that workshops are no more effective than other teacher training modalities, such as videos, in increasing implementation fidelity (Botvin et al., 1990; Basen-Engquist et al., 1994).

As public policies such as the federal Safe and Drug-Free Schools and Communities Act (U.S. Department of Education, 2005) move us toward broad dissemination of evidence-based substance abuse prevention programs, there is a need for more research on all aspects of program implementation (Dusenbury et al., 2003; Domitrovich & Greenberg, 2000; Rohrbach, Grana, Sussman, & Valente, 2006; Schoenwald & Hoagwood, 2001). Specifically, studies that examine whether proven prevention programs are effective when implemented in “real world” settings by providers who differ from those in the efficacy trials are important for at least two reasons. First, there is a paucity of studies on this topic. We found only two studies that systematically manipulated the type of implementer of a school-based substance abuse prevention program (Cameron et al., 1999; McNeal, Hansen, Harrington, & Giles, 2004). The study by McNeal and colleagues showed larger effects for classroom teachers, relative to outside program specialists, on substance use outcomes and mediating variables (McNeal et al., 2004). Cameron and colleagues found that teachers’ and school nurses’ effects on smoking rates varied by school risk level, with no difference between provider types in high-risk schools and larger effects for nurses in low-risk schools (Cameron et al., 1999). Meta-analysis is another method that has been used to compare the relative effectiveness of different types of prevention program providers (e.g., teachers, peers, police, and mental health clinicians). Two studies have shown no differences by provider type in program effect size (Gottfredson & Wilson, 2003; Tobler & Stratton, 1997). However, the utility of meta-analytic comparisons is limited by confounding factors such as interactions between program modality and type of provider, and combinations of provider types in the same study (Gottfredson & Wilson, 2003). A second reason to conduct research on prevention program providers is that the findings of such studies may increase our understanding of why prevention programs succeed or fail when implemented on a large scale (e.g., St. Pierre, Osgood, Mincemoyer, Kaltreider, & Kauh, 2005).

The present paper describes the first effectiveness trial of Project Towards No Drug Abuse (TND), a high school-based program that has been shown to be effective in reducing adolescent substance use (Dent, Sussman, McCuller, & Stacy, 2001; Sun, Skara, Sun, Dent, & Sussman, 2006; Sussman, Dent, & Stacy, 2002). The aim of the study was to compare program delivery within the same schools by regular classroom teachers and (outside) program specialists.

Methods

School selection and experimental design

Nine school districts from two counties in southern California participated in this study. Eligible districts contained at least one regular high school (RHS) and one continuation high school (CHS)1, each with an enrollment between 50 and 2000 students. We selected one RHS and one CHS from each district, yielding a total sample of 18 schools. Using a randomized blocking procedure, these schools were assigned to one of three conditions: cognitive-perception-information-only curriculum, combined cognitive-perception-information + behavioral skills curriculum (“combined”), or standard care (control), which resulted in a sample of six schools per condition. Schools were blocked by estimates of student drug use prevalence, ethnic composition, total enrollment, standardized achievement test scores, and school type (RHS or CHS). Nine RHS-CHS pairs were matched into three sets of three using a linear composite of factor scores (Graham, Flay, Johnson, Hansen, Grossman, & Sobel, 1984), and randomly assigned within matched sets to the three conditions.

Project staff collaborated with the administrator of each program school to select a health teacher who was willing to participate in training and implement the program, and a second teacher in whose classroom the program would be implemented by an outside program specialist. In each program school, eight health classes were randomly selected to receive the program (four for delivery by the health teacher and four by the program specialist) and complete the surveys. In each control school, four health classes were randomly selected for participation in the surveys only. Students in all experimental conditions were given a questionnaire assessment at pretest and immediately after the program implementation period (posttest).

Intervention

Project TND is a 12-session high school-based substance abuse prevention program that incorporates motivation, skills, and decision-making components (Sussman et al., 2004). The present study examined the efficacy of a new curriculum (cognitive-perception-information-only) relative to the complete (combined) curriculum that had been evaluated in the third Project TND experimental trial (Sussman et al., 2002). For a detailed description of the differences between the two curricula, see Skara, Rohrbach, Sun, and Sussman (2005).

Training of program specialists involved several components. First, they observed delivery of each Project TND session by an experienced specialist in a typical high school classroom setting. Second, the specialist trainees practiced delivery of the program during pilot testing. Third, both the specialists and the classroom teachers recruited from participating schools attended a one-and-one-half-day training session that reviewed the theoretical underpinnings and evidence base of the curriculum, provided detailed instruction on each lesson followed by practice with feedback, and included an overview of classroom control techniques. In contrast to the specialists, the classroom teachers participated in the workshop training session only.

At each program school, delivery of the curricula (both versions) took place over a four-week period. Dates of delivery (October through June) were balanced across conditions.

Each regular classroom teacher received $1000 compensation for his/her participation in the training and assistance in collecting parental consent forms.

Subjects

Implementers

Program implementers included 13 regular classroom teachers and five program specialists. The majority of implementers were female (72%), and 56% were white, 17% were Hispanic, 17% were African American, and 10% were Asian. Their mean age was 44 years (SD = 10). Slightly more than one-fourth (28%) had a master's degree. On average, the implementers had five years (SD = 5) of prevention program implementation experience.

Students

All students in the randomly selected classrooms were given parental consent forms to take home for signature, indicating parental approval or refusal of student participation in the study surveys. Subjects also assented to their involvement. Project staff telephoned the homes of students who failed to return a consent form to request verbal parental consent for survey participation.

In total, 3908 high school students were enrolled in the selected classes. Of these, 2735 students (70% of the enrollment roster) were recruited to participate in the study. Reasons for non-enrollment were chronic absenteeism (80% of those not enrolled), refusal of participation by the parent or student (5%), and absence on testing days (15%).

Of the students who completed pretest surveys, 2331 (85.2% of those recruited) also completed immediate post-program surveys. The sample of 2331 students constitutes our analysis sample. Student subjects ranged in age from 13 to 19 years, with a mean age of 15 (SD = 1). The sample was 52% male; 18% white, 62% Hispanic, 8% Asian, 8% African American, and 3% other ethnicity. One-quarter of the students were enrolled in continuation high schools.

Data collection procedures and measures

Fidelity of implementation: Classroom observation

Our approach to assessing fidelity of implementation emphasized program process, or the way in which the program was delivered (Mowbray, Holter, Teague, & Bybee, 2003). Our goal was to observe each program implementer delivering the same two highly interactive curriculum sessions to two separate classes in each school. In total, 117 classroom sessions were observed, including 90 sessions at which two observers were present and 27 sessions at which one observer was present. Analyses of inter-rater reliability are based on the 90 sessions that had paired observers. All other analyses of the fidelity data are based on the total number of observations (n = 207).

The 12 observers were graduate students and project data collectors who had participated in two training sessions on the use of the observation instrument. The first training involved review of the measures, practice in the use of the items after watching selected videotaped segments of in-school prevention program delivery, and discussion of the practice ratings until at least 90% inter-rater agreement was obtained. To prevent observer “drift,” we conducted a second training that involved additional viewing of videotaped program delivery, practice ratings, and feedback.

Fidelity measures

The observation instrument assessed three domains of implementation fidelity that have been identified in the literature: adherence, classroom process (i.e., quality of delivery), and perceived student acceptance of the curriculum (Dane & Schneider, 1998; Dusenbury et al., 2003). The specific fidelity items were adapted from previous studies (Dent, Sussman, Hennesy, Craig, Moss, & Stacy, 1998; Hansen et al, 1991; Rohrbach et al., 1993; Sussman, Dent, Burton, Stacy, & Flay, 1995; Sussman et al., 1997). An important feature of the observation instrument is that it included rating scales that specified behaviorally anchored criteria for the end- and mid-points on the scales (Mowbray et al., 2003).

The fidelity items were combined to create four indices. For each index, the items were standardized (mean = 0; SD = 1) and averaged. The adherence index assessed whether the lesson was shortened, and adherence to the manual when delivering the introductory and core activities (3 items; α = 0.69). The classroom process index assessed the extent to which the objectives of the lesson were met and the implementer elicited student responses during the program activities, and how well the session was implemented overall (3 items; α = 0.73). The quality of delivery index averaged implementer enthusiasm, implementer confidence, the extent to which he/she treated students respectfully, and the pace of the lesson (4 items; α = 0.66). The perceived student acceptance index assessed how interested the students appeared to be, how much they seemed to like the implementer, and class control (3 items; α = 0.76).
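The index construction described above (standardizing each item to mean 0 and SD 1 across observations, then averaging the standardized items) can be sketched as follows. The rating values are hypothetical, for illustration only.

```python
import numpy as np

def fidelity_index(items):
    """Composite fidelity index as described in the text: z-score each
    item across observations, then average the items per observation."""
    items = np.asarray(items, dtype=float)                # (n_observations, n_items)
    z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)
    return z.mean(axis=1)                                 # one composite per observation

# Hypothetical observer ratings for four sessions on a 3-item index
ratings = [[3, 4, 5],
           [2, 3, 4],
           [4, 5, 5],
           [1, 2, 3]]
scores = fidelity_index(ratings)
```

Because every standardized item has mean 0, the composite is centered at 0; sessions rated above average on all items receive positive scores, and sessions rated below average receive negative scores.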

Student assessment procedures

Student self-report surveys were administered over one class period at pretest and immediate posttest. At both waves, packets that included the survey and instructions for its completion were left for students who were absent on testing days. If absentee students failed to return the posttest survey, they were contacted for telephone-based survey administration. Students were told that they were not expected to complete the full questionnaire. They were asked to complete as many items as they were able to in the one class period. At five minutes before the end of the posttest period, students in the program conditions were instructed to stop completing any other item and begin completing the program evaluation items appended to the end of the survey.

Immediate outcomes and program process measures

Student questionnaire measures were adapted from previous studies of Project TND (Sussman et al., 2002). Demographic characteristics included age (in years), gender, and ethnicity (coded as white, Hispanic, Black, Asian, Native American, or other).

We examined four theoretical mediators of Project TND that have been associated with substance use in previous studies (Rohrbach, Sussman, Dent, & Sun, 2005; Sussman & Dent, 1996; Sussman, McCuller, & Dent, 2003). Substance use intentions is an index of four items assessing how likely the student is to use cigarettes, alcohol, marijuana, and hard drugs (cocaine, amphetamines, heroin, etc.), respectively, in the next 12 months (α = 0.81). Addiction concern is an index of three items (Sussman & Dent, 1996) that assess students’ perceptions of the likelihood that they will become a drug abuser, addict, and alcoholic, respectively (α = 0.79). Social self-control is an average of ten items that assess self-control skills in social situations (α = 0.73; Sussman et al., 2003). Program-specific knowledge was measured with 27 items designed to assess learning of the content of the two separate curricula. Items were scored as correct or incorrect, summed, and converted to a percentage correct score for analysis.
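The knowledge scoring rule (items marked correct or incorrect, summed, and converted to a percentage) amounts to the following; the answer strings and the 4-item length are made up for illustration (the real instrument had 27 items).

```python
def knowledge_score(answers, key):
    """Percent-correct score: mark each item right or wrong against the
    answer key, sum the correct items, and convert to a percentage."""
    correct = sum(a == k for a, k in zip(answers, key))
    return 100.0 * correct / len(key)

# Hypothetical 4-item test: 3 of 4 answers match the key
score = knowledge_score(["a", "b", "c", "d"], ["a", "b", "x", "d"])
```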

In the process evaluation section of the posttest, students rated the program on nine positive adjectives (e.g., “enjoyable,” “believable,” “interesting”), which were averaged to comprise the program acceptance index (α = .90). Using measures parallel to those in the observation instrument, students rated the implementer's confidence, ability to elicit student participation, respectfulness toward students, ability to understand students, and likeability during the program lessons. These five items were averaged to comprise the evaluation of implementer index (α = 0.93). The overall program rating index averaged how much students liked each of the 12 curriculum lessons (α = 0.94). The classroom climate index averaged five items derived from Moos’ Classroom Environment scale (Moos, 1979) that assessed perceived attention to the program lessons by other students (α = 0.85).

Data analysis

Inter-rater reliability was calculated for each implementation fidelity item. For the continuous items, we calculated the intra-class correlation, or one minus the proportion of total variance due to observers nested within observed classroom sessions (Shrout & Fleiss, 1979). For the dichotomous variable, a weighted kappa was calculated (Cohen, 1968).
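Read literally, the reliability quantity described above — one minus the share of total rating variance attributable to observers nested within sessions — can be computed as below. This is a simplified variance-decomposition sketch with made-up paired ratings, not Shrout and Fleiss's exact ANOVA-based ICC estimator.

```python
from statistics import mean, pvariance

def icc_within_session(paired_ratings):
    """One minus the proportion of total variance due to observers nested
    within sessions (simplified sketch of the ICC described in the text)."""
    all_ratings = [r for session in paired_ratings for r in session]
    total_var = pvariance(all_ratings)                        # variance of every rating
    within_var = mean(pvariance(s) for s in paired_ratings)   # observer disagreement
    return 1.0 - within_var / total_var

# Hypothetical ratings from five sessions, two observers each
pairs = [[4, 4], [3, 4], [5, 5], [2, 2], [4, 3]]
icc = icc_within_session(pairs)
```

Perfect observer agreement drives the within-session variance to zero, giving a reliability of 1; as observers disagree more relative to the spread across sessions, the value falls toward 0.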

To test for a main effect of implementer type on fidelity and immediate student outcomes, we used a mixed-linear model (Murray & Hannan, 1990) as implemented in the SAS® statistical package (SAS Institute, 2004). In each of the models, implementer type (regular classroom teacher or program specialist) was considered as a fixed effect and class nested within school was considered as a random factor. In models using observer ratings, observer (nested within session) was also considered as a random effect. Models that examined the effect of implementer type on immediate program outcomes were conditional, including the pretest score on the respective variable as a covariate.

Results

Program fidelity

Table 1 presents the mean, standard deviation, and inter-rater reliability for each variable that comprised the fidelity indexes. Inter-rater reliability ranged from .60 to .89, which is considered good to excellent (Nunnally, 1978; Tinsley & Weiss, 1975).

Table 1
Means, standard deviations, and inter-rater reliability for fidelity variables

The results of the multi-level models that tested the effect of implementer type on program fidelity are shown in Table 2. The quality of delivery rating for program specialists was marginally significantly higher than that for classroom teachers (p = .08). On the other fidelity index variables, ratings of specialists and teachers were not significantly different.

Table 2
Main effect of implementer type on program fidelity

Immediate outcomes

Table 3 shows that there was a significant main effect for experimental condition (i.e., both program groups vs. controls) on program-specific knowledge and social self-control (p's < .05), and a marginally significant effect on addiction concern (p < .10). There was no main effect on the remaining two immediate outcome variables, intentions regarding future drug use and motivation to improve.

Table 3
Main and implementer type effects on immediate outcomes

Post-hoc comparisons indicated there were no differences between program specialist-led and teacher-led classrooms in pre- to post-test changes in knowledge, self-control skills, and addiction concern.

Table 4 compares the mean ratings of students in the two program conditions on four process evaluation composite variables. Both groups rated the program and the implementers fairly highly. Students in the program specialist-led classrooms perceived a more positive classroom climate for the program relative to students in the teacher-led classrooms (p < .05).

Table 4
Program students’ ratings on process evaluation measures, by implementer type

Discussion

The results of the present study indicate that both trained high school teachers and outside program specialists achieved effects on three of the five immediate outcomes of Project TND. Furthermore, classroom teachers implemented the program with levels of fidelity that were nearly as high as those demonstrated by the program specialists. Similarly, students in teacher-led and specialist-led classrooms provided comparable posttest ratings of the program overall and the quality of program implementation.

One of the strengths of this study is its design, which allowed for comparison of program delivery by two types of implementers while controlling for the potentially confounding effects of the school setting. Previous studies have not randomly assigned both implementer types within the same schools (e.g., Cameron et al., 1999; McNeal et al., 2004). The second strength of our study is the use of observer assessments of program fidelity. To date, only a few prevention studies have used observational methods to collect fidelity data (e.g., Botvin, Baker, Dusenbury, Botvin, & Diaz, 1995; Harrington, Giles, Hoyle, Feeney, & Yungbluth, 2001; Rohrbach et al., 1993), even though observations are thought to be more objective, valid, and reliable than implementer self-reports (Dane & Schneider, 1998; Dusenbury et al., 2003; Hansen & McNeal, 1999). Using multiple raters to assess key dimensions of fidelity, we demonstrated good-to-excellent inter-rater reliability for all variables.

The study focused on implementation fidelity, program-specific knowledge, drug use intentions and beliefs, and students’ responsiveness as immediate program outcomes. While the results suggest that classroom teachers can and will implement Project TND with fidelity and achieve effects on immediate program outcomes, it is important to demonstrate that teachers will achieve effects on student drug use. It was disappointing that neither program specialists nor teachers produced a main effect on drug use intentions at immediate posttest, and it is possible that neither program condition will demonstrate an effect on drug use at the one-year follow-up. At present we are collecting one-year follow-up data, and in future papers we will examine the effectiveness of the program overall, as well as the comparability of program effects achieved by the two types of program implementers.

One limitation of the study that should be noted is that the immediate outcome variables we examined are hypothesized to mediate program effects, but to date we have not conducted studies that identify specific mediators of Project TND. Of the immediate outcome variables we examined, addiction concern, self-control skills, and drug use intentions have been shown to predict adolescent substance use in previous studies (Rohrbach et al., 2005; Sussman & Dent, 1996; Sussman et al., 2003), suggesting that these may be significant mediators. In addition, student acceptance of prevention programs has been positively associated with implementation fidelity (e.g., Rohrbach et al., 1993). However, more research is needed to determine whether these immediate outcomes are mediators of Project TND. In our analyses of the one-year outcomes of the effectiveness trial, we will examine the relations of these items to the motivation-skills-decision-making program model (Sussman et al., 2004) and as mediators of changes in student substance use.

A second limitation of the study is that classroom teachers were selected by their school principal on the basis of their enthusiasm for the program and willingness to implement it. Further, teachers received generous compensation for participating in the training and completing self-report background surveys. These study conditions differ from the “real world” of prevention programming in schools, in that teachers rarely receive extra compensation for implementing prevention programs and they may be asked to implement a new program regardless of their level of enthusiasm for it (Rohrbach et al., 1996). The teachers in the present study may have been much more motivated than those one would typically encounter in high school settings. Thus, the results of the study may be generalizable only to school settings in which substantial incentives for program implementation are provided. However, we view this study as fitting along a continuum of research on Project TND, from efficacy to dissemination. The study is what Flay (1986) has called a “treatment effectiveness” trial, in which the optimized delivery of evidence-based programming in real-world settings is carefully assessed. One might argue that the use of payments to teachers in a study of this type was justified to optimize program delivery. In the future, we plan to conduct “implementation effectiveness” and “dissemination” (Flay, 1986) studies of Project TND, which will carefully assess the relatively natural delivery of the program and the conditions that facilitate or impede its widespread use.

In conclusion, the study results indicate that there was little difference in fidelity and immediate program effectiveness between regular classroom teachers and outside specialists. The results suggest that motivated, trained classroom teachers can implement evidence-based prevention programs with fidelity and produce immediate effects. However, more randomized studies are needed to determine if proven prevention programs achieve longer-term program effects when implemented in typical school contexts by teachers, or other providers different from those in efficacy research.

Acknowledgements

This study was funded by grants from the National Institute on Drug Abuse (Grant numbers R01-DA13814 and R01-DA-16090). We wish to thank Jason McCuller for his project management.

Footnotes

1In California, alternative high schools are referred to as “continuation” high schools. Students in continuation high schools have transferred out of the regular school system due to functional problems (e.g., lack of credits, drug use). These youth are at risk for dropout, but they have transferred to an alternative school to fulfill a California mandate that all youth receive at least part-time education until they are 18 years of age.

References

  • Basen-Engquist K, O'Hara-Tompkins N, Lovato CY, Lewis MJ, Parcel GS, Gingiss P. The effect of two types of teacher training on implementation of Smart Choices: A tobacco prevention curriculum. Journal of School Health. 1994;64:334–339. [PubMed]
  • Botvin GJ, Baker E, Dusenbury L, Tortu S, Botvin EM. Preventing adolescent drug abuse through a multimodal cognitive-behavioral approach: Results of a 3-year study. Journal of Consulting and Clinical Psychology. 1990;58:437–446. [PubMed]
  • Botvin GJ, Baker E, Dusenbury L, Botvin EM, Diaz T. Long-term follow-up results of a randomized drug abuse prevention trial in a white middle-class population. Journal of the American Medical Association. 1995;273:1106–1112. [PubMed]
  • Botvin GJ, Baker E, James-Ortiz S, Botvin EM, Kerner J. Smoking prevention among urban minority youth: Assessing effects on outcome and mediating variables. Health Psychology. 1992;11:290–299. [PubMed]
  • Cameron R, Brown KS, Best JA, Pelkman CL, Madill CL, Manske SR, Payne ME. Effectiveness of a social influences smoking prevention program as a function of provider type, training method, and school risk. American Journal of Public Health. 1999;89:1827–1831. [PubMed]
  • Clarke GN. Improving the transition from basic efficacy research to effectiveness studies: Methodological issues and procedures. Journal of Consulting and Clinical Psychology. 1995;63(5):718–725. [PubMed]
  • Cohen J. Weighted kappa: Nominal scale agreement with provision for scale disagreement or partial credit. Psychological Bulletin. 1968;70:213–220. [PubMed]
  • Dane AV, Schneider BH. Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review. 1998;18(1):23–45. [PubMed]
  • Dent CW, Sussman S, McCuller WJ, Stacy AW. Project towards no drug abuse: Generalizability to a general high school sample. Preventive Medicine. 2001;32:514–520. [PubMed]
  • Dent CW, Sussman S, Hennesy M, Craig S, Moss MA, Stacy AW. Implementation, process, and immediate outcomes evaluation of the project towards no drug abuse classroom program for high risk youth. Journal of Drug Education. 1998;28:361–375. [PubMed]
  • Domitrovich CE, Greenberg MT. The study of implementation: current findings from effective programs that prevent mental disorders in school-aged children. Journal of Educational and Psychological Consultation. 2000;11(2):193–221.
  • Dusenbury L, Brannigan R, Falco M, Hansen WB. A review of research on fidelity of implementation: implications for drug abuse prevention in school settings. Health Education Research. 2003;18(2):237–256. [PubMed]
  • Flay BR. Efficacy and effectiveness trials (and other phases of research) in the development of health promotion programs. Preventive Medicine. 1986;15:451–474. [PubMed]
  • Glasgow RE, Lichtenstein E, Marcus AG. Why don't we see more translation of health promotion research to practice? Re-thinking the efficacy-to-effectiveness transition. American Journal of Public Health. 2003;93(8):1261–1266. [PubMed]
  • Gottfredson DC, Wilson DB. Characteristics of effective school-based substance abuse prevention. Prevention Science. 2003;4(1):27–38. [PubMed]
  • Graham JW, Flay BR, Johnson CA, Hansen WB, Grossman LM, Sobel JL. Reliability of self-report measures of drug use in prevention research: Evaluation of the project SMART questionnaire via the test-retest reliability matrix. Journal of Drug Education. 1984;14:175–193. [PubMed]
  • Green LW, Glasgow RE. Evaluating the relevance, generalization, and applicability of research. Evaluation & the Health Professions. 2006;29(1):126–153. [PubMed]
  • Greenwald P, Cullen JW. The new emphasis in cancer control. Journal of the National Cancer Institute. 1985;74:543–551. [PubMed]
  • Hansen WB, Graham JW, Wolkenstein BH, Rohrbach LA. Program integrity as a moderator of prevention program effectiveness: Results for fifth grade students in the adolescent alcohol prevention trial. Journal of Studies on Alcohol. 1991;52:568–579. [PubMed]
  • Hansen WB, McNeal RB. Drug education practice: Results of an observational study. Health Education Research. 1999;14:85–97. [PubMed]
  • Harrington NG, Giles SM, Hoyle RH, Feeney GJ, Yungbluth SC. Evaluation of the all stars character education and problem behavior prevention program: Effects on mediator and outcome variables for middle school students. Health Education & Behavior. 2001;28(5):533–546. [PubMed]
  • Joyce B, Showers B. Improving inservice training: The messages of research. Educational Leadership. 1980;37:379–385.
  • McNeal RB, Hansen WB, Harrington NG, Giles SM. How all stars works: An examination of program effects on mediating variables. Health Education and Behavior. 2004;31(2):1–14. [PubMed]
  • Moos RH. Evaluating educational environments. Jossey-Bass; San Francisco, CA: 1979.
  • Mowbray CT, Holter MC, Teague GB, Bybee D. Fidelity criteria: Development, measurement, and validation. American Journal of Evaluation. 2003;24(3):315–340.
  • Murray DM, Hannan PJ. Planning for the appropriate analysis in school-based drug-use prevention studies. Journal of Consulting and Clinical Psychology. 1990;58(4):458–468. [PubMed]
  • Nunnally JC. Psychometric theory. 2nd ed. McGraw-Hill; New York: 1978.
  • Perry CL, Murray DM, Griffin G. Evaluating the statewide dissemination of smoking prevention curricula: Factors in teacher compliance. Journal of School Health. 1990;60:501–504. [PubMed]
  • Resnicow K, Davis M, Smith M, Lazarus-Yaroch A, Baranowski T, Baranowski J, Doyle C, Wang DT. How best to measure implementation of school health curricula: a comparison of three measures. Health Education Research. 1998;13(2):239–250. [PubMed]
  • Rohrbach LA, D'Onofrio CN, Backer TE, Montgomery SB. Diffusion of school-based substance abuse prevention programs. American Behavioral Scientist. 1996;39(7):919–934.
  • Rohrbach LA, English J, Hansen WB, Johnson CA. Development and pilot testing of project SMART. In: Sussman S, editor. Handbook of program development for health behavior research and practice. Sage Publications; Thousand Oaks, CA: 2000. pp. 425–446.
  • Rohrbach LA, Graham JW, Hansen WB. Diffusion of a school-based substance abuse program: Predictors of program implementation. Preventive Medicine. 1993;22:237–260. [PubMed]
  • Rohrbach LA, Grana R, Sussman S, Valente TW. Type II translation: Transporting prevention interventions from research to real-world settings. Evaluation & the Health Professions. 2006;29(3):1–32. [PubMed]
  • Rohrbach LA, Sussman S, Dent CW, Sun P. Tobacco, alcohol, and other drug use among high-risk young people: a five-year longitudinal study from adolescence to emerging adulthood. Journal of Drug Issues. 2005;35(2):333–356.
  • SAS Institute, Inc. SAS release 9.0. SAS Institute, Inc.; Cary, NC: 2004.
  • Schoenwald SK, Hoagwood K. Effectiveness, transportability, and dissemination of interventions: What matters when? Psychiatric Services. 2001;52(9):1190–1197. [PubMed]
  • Shrout PE, Fleiss JL. Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin. 1979;86:420–427. [PubMed]
  • Skara S, Rohrbach LA, Sun P, Sussman S. An evaluation of the fidelity of implementation of a school-based drug abuse prevention program: Project Towards No Drug Abuse (TND). Journal of Drug Education. 2005;35(4):305–329. [PubMed]
  • St. Pierre TL, Osgood DW, Mincemoyer CC, Kaltreider DL, Kauh TJ. Results of an independent evaluation of Project ALERT delivered in schools by cooperative extension. Prevention Science. 2005;6(4):305–317. [PubMed]
  • Sun W, Skara S, Sun P, Dent CW, Sussman S. Project Towards No Drug Abuse: Long-term substance use outcomes evaluation. Preventive Medicine. 2006;42:188–192. [PubMed]
  • Sussman S, Dent CW. Correlates of addiction concern among adolescents at high risk for substance abuse. Journal of Substance Abuse. 1996;8:361–370. [PubMed]
  • Sussman S, Dent CW, Burton D, Stacy AW, Flay BR. Developing school-based tobacco use prevention and cessation programs. Sage Publications, Inc.; Thousand Oaks, CA: 1995.
  • Sussman S, Dent CW, Stacy AW. Project Towards No Drug Abuse: A review of the findings and future directions. American Journal of Health Behavior. 2002;26(5):354–365. [PubMed]
  • Sussman S, Earleywine M, Wills T, Cody C, Biglan A, Dent CW, Newcomb MD. The motivation, skills, and decision-making model of ‘drug abuse’ prevention. Substance Use & Misuse. 2004;39(10–12):1971–2016. [PubMed]
  • Sussman S, Galaif ER, Newman T, Hennesy M, Pentz MA, Dent CW, Stacy AW, Moss MA, Craig S, Simon TR. Implementation and process evaluation of a “school-as-community” group: A component of a school-based drug abuse prevention program. Evaluation Review. 1997;21:94–123. [PubMed]
  • Sussman S, McCuller WJ, Dent CW. The associations of social self-control, personality disorders, and demographics with drug use among high-risk youth. Addictive Behaviors. 2003;28:1159–1166. [PubMed]
  • Tinsley HE, Weiss DJ. Interrater reliability and agreement of subjective judgments. Journal of Counseling Psychology. 1975;22:358–376.
  • Tobler NS, Stratton HH. Effectiveness of school-based drug prevention programs: A meta-analysis of the research. Journal of Primary Prevention. 1997;18(1):71–128.
  • Tobler NS, Roona MR, Ochshorn P, Marshall DG, Streke AV, Stackpole KM. School-based adolescent drug prevention programs: 1998 meta-analysis. Journal of Primary Prevention. 2000;20(4):275–336.
  • Tortu S, Botvin GJ. School-based smoking prevention: The teacher training process. Preventive Medicine. 1989;18:280–289. [PubMed]
  • U.S. Department of Education [DOE]. Preliminary overview of programs and changes included in the No Child Left Behind Act of 2001: Safe and drug-free schools and communities (Title IV, Part A). 2005. Retrieved October 6, 2005, from http://www.ed.gov/nclb/overview/intro/progsum.