
Conflict of Interest in the Evaluation and Dissemination of “Model” School-based Drug and Violence Prevention Programs

Abstract

Conflict of interest refers to a set of conditions in which professional judgment concerning the validity of research might be influenced by a secondary competing interest. The competing interest that has received most attention in the literature addressing the prevalence and effects of such conflicts on the practice of empirical research has been that of financial relationships between investigators and research sponsors. The potential for conflicts of interest to arise in the evaluation of drug prevention programs was raised by Moskowitz in this journal in 1993, but to date there has been no attempt to estimate the scope of this problem. The present study addressed this issue using a sample of "model" school-based drug and violence prevention interventions by, first, identifying the types of relationships that exist between program developers and program distributors and, second, assessing how many of the evaluations of these programs published in peer-reviewed journals had been conducted by the developers of the programs compared to independent evaluation teams. The data presented indicate that there are relatively few published evaluations that do not involve program developers and that there are few instances in which there is complete separation between the program developer and program distributor. Using the open-systems model of the Institute of Medicine Committee on Research Integrity as a framework, it is argued that the culture and norms of the program developer and those of the program evaluator are fundamentally distinct, and that failure to separate these roles therefore produces a high potential for conflict of interest to arise.

1. Introduction

1.1 Evaluation of Drug Prevention Programs

One and a half decades ago, Joel Moskowitz (1993) published a paper in this journal that raised serious questions about the quality of outcome research conducted in the field of drug prevention program evaluation. He concluded that the shortcomings present in the design, implementation and data analysis of evaluations were not simply the result of limitations of resources but rather stemmed from the broader structural and institutional context within which research was conducted. Among the institutional pressures, Moskowitz included conflicts of interest, noting that:

“Unfortunately, much of the drug abuse prevention research conducted to date suffers from real or apparent conflicts of interest. Evaluations are often conducted to prove that a program merits funding or to market the program on a broader scale. Many investigators evaluate programs that they, or their institutions, have developed and intend to market. Thus, the financial interests of the investigators and their institutions may be directly affected by the outcomes of the research, increasing the likelihood of bias in reporting methods and results” (Moskowitz, 1993, p. 7).

Since the publication of Moskowitz's paper, the types of drug education programs he discussed have become the mainstay of prevention policy in the United States (National Institute on Drug Abuse, 2003; Schinke, Brounstein & Gardner, 2002). While concerns about the quality of evaluations of these programs continue to be raised (e.g., Brown, 2001; Gandhi et al., 2007; Gorman, 1998; 2002; Manski, Pepper & Petrie, 2001), these are never mentioned in reviews of the literature written by program developers (e.g., Botvin & Griffin, 2004; Catalano et al., 2004) or in documents that describe so-called “model” or “research-based” programs (e.g., Schinke, Brounstein & Gardner, 2002; US Department of Education Safe, Disciplined, and Drug-Free Schools Expert Panel, 2002). Indeed, drug prevention evaluation has become a field of research in which critical debate about issues pertaining to the design, implementation and analysis of the most widely advocated programs is almost entirely absent (Gorman, 2005). It is therefore hardly surprising that the issue of conflict of interest is almost never raised in the drug prevention literature.

1.2 Conflict of Interest

As Tobin (2003) observes, the term "conflict of interest" refers to "a set of conditions in which professional judgment concerning a primary interest, such as the validity of research, might be influenced by a secondary interest, such as financial gain" (Tobin, 2003, p. 1161). Tobin further draws a distinction between conflict of interest and bias. The latter occurs when a researcher's judgment concerning his/her primary interest (i.e., the production of objective knowledge) has been clearly influenced by some secondary and competing interest. In contrast, a conflict of interest exists irrespective of whether the researcher's judgment and behavior can be demonstrated to have been adversely influenced – that is, it exists simply as a condition of the researcher having two competing interests.

The competing interest that has received most attention in the literature addressing the prevalence and effects of such conflicts on the practice of empirical research has been that of financial relationships between investigators and research sponsors. The primary focus of studies that have addressed this issue has been on the scope and influence of the pharmaceutical industry's funding of biomedical research. These studies show that financial relationships between the pharmaceutical industry and researchers are widespread (about 25% of researchers have industry affiliations) and that there exists a systematic bias in the reporting of study outcomes favoring the products of those companies sponsoring the research (Bekelman, Li & Gross, 2003; Lexchin et al., 2003; Stelfox et al., 1998). Industry sponsorship has also been found to be associated with the use of inappropriate study designs, irregular data analysis and reporting practices (e.g., selective reporting among numerous outcome variables, multiple subgroup analysis), and publication delay (see also Gotzsche, 1989; Bero & Rennie, 1996; Melander et al., 2003).

Recently these types of analyses have been extended beyond the examination of biomedical research. Cosgrove and colleagues (2006), for example, assessed the pharmaceutical industry's relationship with members of the advisory panels that recommend changes in the Diagnostic and Statistical Manual of Mental Disorders, and found that 56% of these individuals had financial ties with drug companies. And although the literature is less comprehensive, studies have also been conducted that examine the influence of other large industries on the quality of empirical research that they sponsor, notably the tobacco and food industries (Bero, Galbraith & Rennie, 1994; Barnes & Bero, 1998; Lesser et al., 2007; Levine et al., 2003).

1.3 The Institute of Medicine's Open Systems Model of Conflict of Interest

It is easy to be skeptical about research funded by multinational industries whose profit motive clearly outweighs concern for public health. However, from a purely fiscal standpoint there is nothing unique to the industries mentioned above when it comes to having a vested interest in the outcome of studies into the effectiveness of their products. As Tobin (2003) observes, the primary obligation of any manufacturer is to deliver a sound financial return on an investment, and hence there is an inevitable vested interest in the products that are manufactured being perceived as effectively performing the functions for which they were intended with minimal adverse side effects. This is as true of a school-based prevention curriculum as it is of a pharmaceutical pain-killer, a soft drink or a cigarette. It is this interest in the success of the product that is fundamentally at odds with the disinterested orientation that is so basic to the norms of the practice of science (Tobin, 2003; Merton, 1973, ch.13).

The divergence between these two norms and their influence upon the practice of research can best be understood using the open-systems model employed in the Institute of Medicine's (2002) recent report on research integrity. The model highlights the fact that conflict of interest typically occurs within a complex organizational system that has a structure and culture, requires inputs of financial and human resources in order to function, and exists in order to produce outputs in the form of goods, products, services, and activities. Using this model, we argue that the following basic differences exist between the organizational culture of the typical institution that functions solely to develop drug and violence prevention intervention programs and the organizational culture of the typical institution that functions solely to evaluate such programs.1

Program Developer Organizational Culture

  • Mission: develop and disseminate programs
  • Audience: consumers, practitioners, policy-makers
  • Commitment to belief system and/or financial return from product
  • Culture: advocacy and promoting of products
  • Norms: interest, commitment, belief

Program Evaluator Organizational Culture

  • Mission: produce and disseminate knowledge
  • Audience: researchers, scientists, scholars, practitioners, policy-makers
  • Commitment to science, objectivity, “truth”
  • Norms: skeptical, critical, rational, inquisitorial

It can be seen that while there is some overlap in the audience to which their work is targeted, there are also fundamental differences between the two organizational cultures. Specifically, the researcher's skepticism and commitment to unearthing the truth is likely to conflict, at least on some occasions, with the program developer's belief in, and advocacy of, his/her program. Indeed, if one accepts the principle of equipoise (that is, that evaluations should only be conducted when there is genuine doubt about the efficacy of an intervention), then one would expect the results of an evaluation to conflict with the developer's positive expectations in about half of the studies conducted. Thus, given the context within which the evaluation occurs, the potential for conflicts of interest to occur is high when either the program evaluator and program developer are employed by the same organization (Figure 1a) or when the program developer and program evaluator are one and the same person (Figure 1b).

Figure 1
Open-systems Models of Organizational Contexts in Which Conflict of Interest Exists between Program Developer and Program Evaluator (Adapted from the Institute of Medicine, 2002)

1.4 Study Objectives

The present study examined the relationship between program developers and program evaluators using a sample of "model" school-based drug and violence prevention interventions. Specifically, we addressed the following two questions. First, what is the nature of the relationship between the developers of these "model" programs and the organizations that distribute them? Second, what proportion of the evaluations of these programs published in peer-reviewed journals has been conducted by the developers of the programs as compared to independent evaluation teams?

2. Methods

2.1 Sample of School-Based Drug Prevention Programs

The sample of "model" drug and violence prevention programs used in the analysis was taken from the Substance Abuse and Mental Health Services Administration's (SAMHSA) National Registry of Effective and Promising Programs (NREPP). We chose the NREPP list as it is the most comprehensive of the best-practice drug prevention lists available and has been influential within the field given its sponsoring agency. The research reported here is part of a larger study, begun in late 2005, designed to assess the types of data analysis and presentation practices used in drug prevention research. Thus, the NREPP model program list used in this study was the one available on the SAMHSA website at that time. While the NREPP rating criteria and selection procedure have subsequently been revised (Substance Abuse and Mental Health Services Administration, 2006), the list of 66 model programs used in our larger study is still accessible on the SAMHSA website, as are the materials (such as program fact-sheets) produced for each.

According to the fact-sheets that appear on the SAMHSA webpage for each program, eight were designated “treatment”. Of the remaining 58 prevention programs, 34 were designated “school-based” or “school-based/community” (hereafter referred to as “school-based”) and 16 were designated just “community” on their fact sheets. Of the remaining eight programs, three were designated “environmental”, four “workplace”, and one was a secondary prevention program targeted at heavy drinking college students. Here we focused on the 34 school-based interventions since these are the most extensively evaluated group of programs and the ones most often packaged in a form that can be sold commercially (typically as curricula).

2.2 Data Collection: Identification of Evaluation Studies

Two types of data pertaining to the 34 school-based NREPP programs were collected for the present study. First, we sought to identify all evaluations of the programs that had been published in peer-reviewed journals. The initial step in this process was to conduct a literature search using the general search engine of the Texas A&M University library system, which searches the following databases: the University Library catalog; the Medical Sciences Library catalog; Academic Search Premier (EBSCO); MLA Bibliography (EBSCO); PsycINFO 1872–current (CSA); Science Direct; ERIC (EBSCO); and CAB Abstracts (Ovid). Each program name was searched first on its own and then in combination with the name of the program developer. In the next step in identifying appropriate publications to review, the abstracts obtained through the search were reviewed by one of the authors (DMG) to ensure that each publication directly pertained to the NREPP program in question. The lists that resulted from this process (which included journal articles, book chapters and books) were then sent to the program developers, who were informed that we were interested in the material that had been used in establishing the program as a NREPP model intervention. Since it was difficult to establish exactly when this status had been conferred, we sent each developer the complete list of evaluation publications identified through the two steps described above and asked each to review the list and amend it as necessary. Thirty-two of the program developers agreed to review the list and make appropriate edits, one refused, and one stated upon her initial review that she thought we had missed some relevant papers but failed to provide additional information upon subsequent requests.

For the present analysis, we limited our focus to publications that had appeared in peer reviewed journals. The revised lists that we received from the program developers (along with the two lists that were not reviewed by the developers) were further reviewed to ensure that the publication described an evaluation of the program. In order to be included as an “evaluation”, the publication had to include a description of the study design and data pertaining to a process or implementation evaluation and/or an outcome evaluation (including studies focused just on mediational analysis or cost-effectiveness data). The definition was broad in the sense that it set no restrictions on the type of study design, type of data reported, or the length of follow-up. Multiple reports from a single study were also included. Review articles summarizing findings from a series of published evaluations were excluded, as were those publications that only described the conceptual basis of the program or its development and components. Those that just described the design of the evaluation study or that used study data to test etiological models were also excluded.

2.3 Data Collection: Program Developer-Program Distributor Relations

The second type of data we collected pertained to the relationship between the program developer and program distributor. Each SAMHSA fact-sheet lists the program developer, and either the fact-sheet or the SAMHSA website states how the program can be obtained. In many cases it was clear what type of institution or agency distributed the program (for example, a university). In those instances in which it was not clear, we searched ReferenceUSA and LexisNexis using the program's name in order to obtain more details. Finally, for those programs that were distributed by a third party (i.e., not by an organization that the developer owned or directed, or by which he/she was employed), we reviewed publications to identify disclosure statements that specified the type of financial relationship between the developer and distributor. In those instances where this information could not be found in the public domain, we contacted the developer to ask what type of financial arrangement he/she had with the distributor (e.g., licensing agreement, royalty payment).

3. Results

3.1 Distribution of Programs

Table 1 presents details of the relationships between the developers of the 34 school-based NREPP model programs and the distributors of the programs. We grouped the relationships into five broad categories (rows a–e), with two of the programs being placed in two categories (row f) since their developers both directed companies that provided training in the use of the program and distributed the programs through a publishing company. Two additional points should also be noted. First, one developer had three programs on the list (all of which were distributed by the university at which he worked) and another had two (each of which was distributed by a publishing company). Second, the relationships described in the table are those that existed in late 2006. This is especially important in the case of those programs that are distributed by a third party, as most of these relationships were established some time after the program was initially developed (indeed, in some instances many years later). Publishers typically purchase and distribute established programs that have been evaluated in one or more studies. Thus, the distribution mechanisms for some programs may well have changed over time (e.g., from a university to a publishing company).

Table 1
NREPP School-based Programs: Relationship between Program Developer and Program Distributor (n=34)

The most direct financial relationships between the program developer and distributor exist in those cases where the former owns or directs the company that distributes the program (or provides training in it) or receives remuneration from a third party (typically a publishing company) that sells the program. The distribution of 17 of the 34 programs involved such relationships. The majority of the remaining programs (15) were distributed by the organization for which the developer worked either as an employee or a consultant. Nine of these were universities (row d) and six were private companies (row c). The remaining two programs were distributed by a third party from whom the developers received no royalty payment. In one case this was a charitable foundation and in the other a voluntary health organization.

3.2 Evaluation of Programs

The search procedures described in the methods section produced a total of 246 evaluation studies. For two of the programs (both in the category Developer Employed by Private Company that Distributes Program) there were no evaluations published in peer-reviewed journals. In addition, one program was a component of a multi-component intervention that was also on the NREPP list; these two programs (both in the category Developer Distributes Program through Third Party with which he/she has a Financial Relationship) were treated as one in the present analysis, since we did not want to double-count the two papers that pertained to both programs. The range of published evaluations across the remaining 31 programs was 1 to 37.

The majority (193 of 246) of the published evaluation reports included the program developer as an author. Of the 53 publications on which the program developer was not an author, only 27 were totally independent in the sense that no association could be identified between the developer and the authors. These 27 publications came from evaluations of just nine of the programs. In the case of the remaining 26 publications, while the program developer was not an author, there was some association between him/her and the authors. Specifically, the developer had either published previously with at least one of the authors of the publication, or worked in the same organization as the author(s), or was a co-investigator on the project from which the publication came, or was acknowledged by the authors in the publication for contributions to the project.

4. Discussion

This examination of the 34 school-based programs that appear on the NREPP list of model drug prevention programs suggests that little has been done to address Moskowitz's concern that “much of the drug abuse prevention research conducted to date suffers from real or apparent conflicts of interest” (Moskowitz, 1993, p. 7). The data presented indicate that there are relatively few published evaluations of these programs that do not involve program developers and that there are few instances in which there is complete separation between the program developer and program distributor.

With regard to the first of these issues, it was argued in the introduction that, given the difference between the organizational culture of an agency that develops intervention programs and that of one that evaluates them, separation of the roles of program developer and program evaluator is preferable in the assessment of the effectiveness of interventions, at least if one's primary goal is to limit conflicts of interest and reduce the potential for bias or distortion that can result from advocacy of the intervention. Examples from the drug prevention field of such separation of roles include most of the evaluations of the DARE program (e.g., Clayton, Cattarello & Johnstone, 1996; Ennett et al., 1994) and the Hutchinson Smoking Prevention Project (Peterson et al., 2000). These studies produced little evidence of program effectiveness. This is consistent with other areas of evaluation research, which show that studies in which program evaluators were significantly involved in program delivery report substantially larger effect sizes than independent evaluations (Lipsey, 1995; Petrosino & Soydan, 2005).2

Lipsey (1995) argues that the most plausible explanation of the association between developer involvement in an evaluation and increased effect size is implementation integrity; program developers are likely to ensure that the program is delivered in the appropriate manner and with sufficient intensity. He contrasts this idea with a "cynical view" that attributes the finding to bias or "wish fulfilling" emanating from the developer's vested interest in the outcome of the evaluation. This is clearly an area that requires further study, and we hope to be able to shed some light on this issue in our larger study by examining the types of data analysis and presentation practices used in evaluations that include program developers and those that are conducted by independent evaluation teams.

As for the relationship between program developers and program distributors, we found that in 32 of 34 cases the developer had a financial relationship with the distributor. The nature of this relationship varied: in some cases the developer owned the distribution company, in some he/she received royalty or consulting payments from the distributor, and in some he/she was the distributor's employee. The latter case included developers who distribute their programs through a university. We did not examine the nature of the financial relationships here, for example whether the revenue goes into salary savings or other discretionary accounts of the developer. In addition, even when there is no financial conflict in such relationships, there is the potential for what might be termed "ideological conflict of interest," which arises from a different set of institutional pressures. This type of conflict is especially relevant to the evaluation of intervention policies and programs intended to prevent undesirable behaviors such as drug use, and results from adherence to a specific set of beliefs, values or theories that are resistant to rejection or modification when faced with conflicting evidence (Bachrach & Newcomer, 2002; MacCoun, 2006). A number of the programs on the NREPP list are part of a much broader theoretical or conceptual model that the developer/evaluator has also developed and built a research career upon. This type of potential conflict is probably unavoidable, since it is desirable that interventions be theory-based (Chen & Rossi, 1983) and researchers who are knowledgeable about drug use are likely to produce a better intervention than those who know little or nothing about this behavior. However, psychological theories are at times very resistant to modification (Meehl, 1978), and so independent evaluations of all prevention programs – not just those for which there exists a financial conflict of interest – are desirable.

The analysis described herein is exploratory in nature and limited by its focus on just 34 programs and 246 publications. Since conflict of interest is an important issue, it deserves further empirical analysis, especially in light of the emphasis now placed on the identification and dissemination of evidence-based interventions in the field of drug and violence prevention. While the organizations that develop and disseminate these programs obviously have a different mission from businesses such as the food and pharmaceutical industries, it is likely that many of the same institutional pressures will arise as the marketplace for these interventions becomes more lucrative. As noted in one of the independent evaluations that we reviewed: "In 21st century America, education materials are a significant business, one that is becoming increasingly sophisticated and competitive" (Seifer et al., 2004, p. 482).

Acknowledgements

This work was supported by grant number R01 NS 49611-01 from the National Institute on Alcohol Abuse and Alcoholism. We thank the program developers who provided us with information concerning the evaluation and distribution of their programs.

Footnotes


1The characteristics of the program evaluator and program developer cultures that we present are premised on two assumptions. First, following Campbell (1984; 1988), we consider evaluation research to be a form of science and critical thinking to be the distinguishing feature of the practice of science. Accordingly, evaluators should be distrustful, skeptical and disputatious, and committed to the use of critical tests capable of falsifying the hypothesis that "Program X works" (see Gorman, 2005 for details). Second, and again in line with Campbell (1988), we maintain that it is highly probable that program developers and program administrators will identify with their programs and advocate (if not "over-advocate") for their use and dissemination. We acknowledge that there are many in the evaluation field who do not share these assumptions, and that they are fundamentally at odds with the basic tenets of many other evaluation models (e.g., the constructivist approach or the responsive approach) (see Alkin, 2004 and Stufflebeam, 2001 for detailed discussions of these issues). We also acknowledge that what we present are "ideal type" organizational cultures. However, there have been instances in which evaluations of drug and violence prevention programs have been conducted by totally independent research teams using what appears to be an entirely critical and skeptical approach to their subject matter (e.g., Clayton et al., 1996; Hallfors et al., 2006; Peterson et al., 2000). Likewise, there are clear instances where the claims of effectiveness made by program advocates go far beyond existing empirical evidence (Gandhi et al., 2007; Gorman, 1998).

2It should be noted that a number of the independent evaluations of the SAMHSA model programs examined herein found little evidence of program effectiveness (e.g., Bauer et al., 2007; Hallfors et al., 2006; Harrington et al., 2001; Orpinas et al., 2000; St. Pierre et al., 2005; Vicary et al., 2006). Thus, the results of these studies are at odds with those conducted by the program developers.

Contributor Information

Dennis M. Gorman, Department of Epidemiology & Biostatistics, Texas A&M Health Science Center School of Rural Public Health, College Station, TX 77845.

Eugenia Conde, Department of Sociology, Texas A&M University, College Station, TX 77845.

References

  • Alkin MC. Evaluation Roots: Tracing Theorists’ Views and Influences. Sage; Thousand Oaks, CA: 2004.
  • Bachrach C, Newcomer SF. Addressing bias in intervention research: summary of a workshop. Journal of Adolescent Health. 2002;31:311–321. [PubMed]
  • Barnes DE, Bero LA. Why review articles on the health effects of passive smoking reach different conclusions. Journal of the American Medical Association. 1998;279:1566–1570. [PubMed]
  • Bauer NS, Lozano P, Rivara FP. The effectiveness of the Olweus Bullying Prevention Program in public middle schools: A controlled trial. Journal of Adolescent Health. 2007;40:266–274. [PubMed]
  • Bekelman JE, Li Y, Gross CP. Scope and impact of financial conflicts of interest in biomedical research. Journal of the American Medical Association. 2003;289:454–465. [PubMed]
  • Bero LA, Rennie D. Influences on the quality of published drug studies. International Journal of Technology Assessment in Health Care. 1996;12:209–237. [PubMed]
  • Bero LA, Galbraith A, Rennie D. Sponsored symposia on environmental tobacco smoke. Journal of the American Medical Association. 1994;271:612–617. [PubMed]
  • Botvin GJ, Griffin KW. Life Skills Training: empirical findings and future directions. Journal of Primary Prevention. 2004;25:211–232.
  • Brown JH. Youth, drugs and resilience education. Journal of Drug Education. 2001;31:83–122. [PubMed]
  • Campbell DT. Can we be scientific in applied social science? In: Connor RF, Altman DG, Jackson C, editors. Evaluation Studies: Review Annual. Vol. 9. Sage; Beverly Hills, CA: 1984. pp. 26–48.
  • Campbell DT. The experimenting society. In: Overman ES, editor. Methodology and Epistemology for Social Science: Selected Papers - Donald T. Campbell. University of Chicago Press; Chicago: 1988. pp. 290–314.
  • Catalano RF, Berglund ML, Ryan JAM, Lonczak HS, Hawkins JD. Positive youth development in the United States: Research findings on evaluations of positive youth development programs. Annals of the American Academy of Political & Social Science. 2004;591:98–124.
  • Chen HT, Rossi P. Evaluating with sense: the theory-driven approach. Evaluation Review. 1983;7:283–302.
  • Clayton RR, Cattarello A, Johnstone BM. The effectiveness of Drug Abuse Resistance Education (Project DARE): 5-year follow-up results. Preventive Medicine. 1996;25:307–318. [PubMed]
  • Cosgrove L, Krimsky S, Vijayaraghavan M, Schneider L. Financial ties between DSM-IV panel members and the pharmaceutical industry. Psychotherapy and Psychosomatics. 2006;75:154–160. [PubMed]
  • Ennett ST, Rosenbaum DP, Flewelling RL, Bieler GS, Ringwalt CL, Bailey SL. Long-term evaluation of Drug Abuse Resistance Education. Addictive Behaviors. 1994;19:113–125. [PubMed]
  • Gandhi AG, Murphy-Graham E, Petrosino A, Chrismer SS, Weiss CH. The devil is in the details: examining the evidence for "proven" school-based drug abuse prevention programs. Evaluation Review. 2007;31:43–74. [PubMed]
  • Gorman DM. The irrelevance of evidence in the development of school-based drug prevention policy, 1986-1996. Evaluation Review. 1998;22:118–146. [PubMed]
  • Gorman DM. Defining and operationalizing “research-based” prevention: a critique (with case studies) of the US Department of Education's Safe, Disciplined and Drug-Free Schools Exemplary Programs. Evaluation and Program Planning. 2002;25:295–302.
  • Gorman DM. Drug and violence prevention: Rediscovering the critical rational dimension of evaluation research. Journal of Experimental Criminology. 2005;1:1–23.
  • Gorman DM. Conflicts of interest in the evaluation and dissemination of drug use prevention programs. In: Kleinig J, Einstein S, editors. Intervening in Drug Use: Ethical Challenges. Office of International Criminal Justice, Sam Houston State University; Huntsville, TX: 2006. pp. 171–187.
  • Gotzsche PC. Methodology and overt and hidden bias in reports of 196 double-blind trials of nonsteroidal anti-inflammatory drugs in rheumatoid arthritis. Controlled Clinical Trials. 1989;10:31–56. [PubMed]
  • Hallfors D, Cho H, Sanchez V, Khatapoush S, Kim HM, Bauer D. Efficacy vs effectiveness trial results of an indicated “model” substance abuse program: implications for public health. American Journal of Public Health. 2006;96:2254–2259. [PubMed]
  • Harrington NG, Giles SM, Hoyle RH, Feeney GJ, Yungbluth SC. Evaluation of the All Stars Character Education and Problem Behavior Prevention Program: effects on mediator and outcome variables for middle school students. Health Education & Behavior. 2001;28:533–546. [PubMed]
  • Institute of Medicine. Integrity in Scientific Research: Creating an Environment that Promotes Responsible Conduct. National Academies Press; Washington, DC: 2002.
  • Lesser LI, Ebbeling CB, Goozner M, Wypij D, Ludwig DS. Relationship between funding source and conclusion among nutrition-related scientific articles. PLoS Medicine. 2007. p. e5. (Available at www.plosmedicine.org) [PubMed]
  • Levine J, Gussow JD, Hastings D, Eccher A. Authors’ financial relationships with the food and beverage industry and their published positions on the fat substitute olestra. American Journal of Public Health. 2003;93:664–669. [PubMed]
  • Lexchin J, Bero LA, Djulbegovic B, Clark O. Pharmaceutical industry sponsorship and research outcome and quality: systematic review. British Medical Journal. 2003;326:1167–1176. [PMC free article] [PubMed]
  • Lipsey MW. What do we learn from 400 research studies on the effectiveness of treatment with juvenile delinquents? In: McGuire J, editor. What works? Reducing Reoffending. Wiley; New York: 1995. pp. 63–78.
  • MacCoun R. Conflicts of interest in public policy. In: Moore DA, Cain DM, Loewenstein G, Bazerman M, editors. Conflicts of Interest: Challenges and Solutions in Business, Law, Medicine, and Public Policy. Cambridge University Press; London: 2006. pp. 233–262.
  • Manski CF, Pepper JV, Petrie CV. Informing America's Policy on Illegal Drugs: What we don't know keeps hurting us. National Academy Press; Washington, DC: 2001.
  • Meehl PE. Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology. 1978;46:806–834.
  • Melander H, Ahlqvist-Rastad J, Meijer G, Beermann B. Evidence b(i)ased medicine – selective reporting from studies sponsored by pharmaceutical industry: review of studies in new drug applications. British Medical Journal. 2003;326:1171–1175. [PMC free article] [PubMed]
  • Merton RK. The Sociology of Science. University of Chicago Press; Chicago, IL: 1973.
  • Moskowitz JM. Why reports of outcome evaluations are often biased or uninterpretable: Examples from evaluations of drug abuse prevention programs. Evaluation and Program Planning. 1993;16:1–9.
  • National Institute on Drug Abuse. Preventing Drug Use among Children and Adolescents: A Research-Based Guide for Parents, Educators, and Community Leaders. Second Edition. US Department of Health and Human Services; Bethesda, MD: 2003.
  • Orpinas P, Kelder S, Frankowski R, Murray N, Zhang Q, McAlister A. Outcome evaluation of a multi-component violence-prevention program for middle schools: the Students for Peace project. Health Education Research. 2000;15:45–58. [PubMed]
  • Peterson AV, Kealey KA, Mann SL, Marek PM, Sarason IG. Hutchinson Smoking Prevention Project: long-term randomized trial in school-based tobacco use prevention – results on smoking. Journal of the National Cancer Institute. 2000;92:1979–1991. [PubMed]
  • Petrosino A, Soydan H. The impact of program developers as evaluators on criminal recidivism: results from offender treatment meta-analyses. Journal of Experimental Criminology. 2005;1:435–450.
  • Schinke S, Brounstein P, Gardner S. Science-Based Prevention Programs and Principles, 2002. Center for Substance Abuse Prevention, Substance Abuse and Mental Health Services Administration; Rockville, MD: 2002.
  • Seifer R, Gouley K, Miller AL, Zakriski A. Implementation of the PATHS curriculum in an urban elementary school. Early Education and Development. 2004;15:471–485.
  • St. Pierre TL, Osgood DW, Mincemoyer CC, Kaltreider DL, Kauh TJ. Results of an independent evaluation of Project ALERT delivered in schools by cooperative extension. Prevention Science. 2005;6:305–317. [PubMed]
  • Stelfox HT, Chua G, O'Rourke K, Detsky AS. Conflict of interest in the debate over calcium-channel antagonists. New England Journal of Medicine. 1998;338:101–106. [PubMed]
  • Stufflebeam DL. Evaluation Models (New Directions for Evaluation, Number 89). Jossey-Bass; San Francisco, CA: 2001.
  • Substance Abuse and Mental Health Services Administration. SAMHSA Model Programs. 2005. [February 1, 2007]. (Last updated March 7, 2005). Available at http://www.modelprograms.samhsa.gov/template_cf.cfm?page=model_list.
  • Substance Abuse and Mental Health Services Administration. NREPP: SAMHSA's National Registry of Evidence-based Programs and Practices. 2006. [February 1, 2007]. Available at http://manila-test.manilaconsulting.net/nreppsamhsa/index.htm.
  • Tobin MJ. Conflict of interest and AJRCCM: restating policy and a new form to upload. American Journal of Respiratory and Critical Care Medicine. 2003;167:1161–1164. [PubMed]
  • US Department of Education Safe, Disciplined, and Drug-Free Schools Expert Panel. Exemplary and Promising Safe, Disciplined, and Drug-Free Schools Programs 2001. US Department of Education; Jessup, MD: 2002.
  • Vicary JR, Smith EA, Swisher JD, Bechtel LJ, Elek E, Henry KL, Hopkins AM. Results of a 3-year study of two methods of delivery of Life Skills Training. Health Education & Behavior. 2006;33:325–339. [PubMed]