Account Res. Author manuscript; available in PMC 2014 January 1.
Published in final edited form as:
PMCID: PMC3726025

The Cycle of Bias in Health Research: A Framework and Toolbox for Critical Appraisal Training

Donna H. Odierna, DrPH, MS,1 Susan R. Forsyth, RN, MS,2 Jenny White, MSc, MPH,1 and Lisa A. Bero, PhD1


Recognizing bias in health research is crucial for evidence-based decision making. We worked with eight community groups to develop materials for nine modular, individualized critical appraisal workshops we conducted with 102 consumers (four workshops), 43 healthcare providers (three workshops), and 33 journalists (two workshops) in California. We presented workshops using a “cycle of bias” framework, and developed a toolbox of presentations, problem-based small group sessions, and skill-building materials to improve participants’ ability to evaluate research for financial and other conflicts of interest, bias, validity, and applicability. Participant feedback indicated that the adaptability of the toolbox and our focus on bias were critical elements in the success of our workshops.

Keywords: Conflicts of interest, industry sponsorship of health studies, bias in research, evidence-based medicine, informed decision making, research ethics


The ability to recognize the risk of bias in research is an important critical appraisal skill. Bias related to methodological flaws or conflicts of interest (COI) can produce poor-quality, inaccurate studies. Policy and practice guidelines that are based on poor-quality research can have negative effects on individuals and populations. Patients, consumers, and other end-users of research may be able to make better health and healthcare decisions if they can critically appraise evidence for bias, validity, and credibility. Shared evidence-based decision making is improved when patients and consumers are empowered by the ability to understand the relative risks and benefits of proposed treatments and health policies; such understanding is crucial for balanced, effective communications among healthcare providers, patients, payers, and policy makers (Greenhalgh 2010; Berger et al. 2010; Cassel and Guest 2012; Rodwin 2001).

Risk of bias is increased by methodological study characteristics that can introduce systematic error into the results. These characteristics include unreliable methods for generating the allocation sequence, poor allocation concealment, and failure to blind participants, personnel, and outcome assessors (Higgins and Green 2008). Furthermore, methodological biases in studies have been associated with overestimates of the efficacy of test treatments (Ioannidis 2005). In controlled clinical drug trials, studies with a high risk of bias, such as those lacking randomization, blinding, and allocation concealment, produce larger treatment effect sizes, thus falsely inflating the efficacy of the drugs, compared to studies that have these design features (Schulz et al. 1995; Schulz and Grimes 2002a, 2002b). Scientific articles that overstate benefits and applicability or understate harms of medical interventions may lead evidence-minded practitioners and patients to make suboptimal treatment decisions. This can result in overutilization of medical services with increased wasteful spending and harm to patients (Grady and Redberg 2010). Moreover, studies of research sponsored by the tobacco, pharmaceutical, and other industries show that industry-funded studies are disproportionately likely to produce findings that favor the sponsor’s intervention or that support public health policies that benefit the funder (White and Bero 2010). A recent Cochrane review has shown that pharmaceutical industry-sponsored studies overestimate the efficacy and underestimate the harm of their treatments, even when controlling for methodological biases (Lundh et al. 2012). Thus, study funding sources and financial COIs of investigators should be considered as risks of bias (Roseman et al. 2011; Roseman et al. 2012).

Because COIs increase the risk that studies may be biased in design, conduct, or reporting, study sponsorship and the industry ties of investigators need to be taken into account to enable practitioners, consumers, and policy makers to use research to make good evidence-based decisions, and to enable journalists to accurately report on health research (Cook et al. 2007; Moynihan 2003; Bero and Jadad 1997). The ability to evaluate the presence or absence of bias is equally important. The perception of bias even when it may not be present can undermine readers’ confidence in particular studies as well as in scientific integrity overall (Shamoo and Resnik 2009). COI-related bias is not the only concern; methodological flaws threaten a study’s internal and external validity, reducing both its ability to answer its research question and the relevance of its findings to particular patients, populations, and situations.

Media reports that are based on biased or poor-quality studies can reflect these studies’ flaws, often omitting detailed accounts of effect sizes or salient study design features such as exclusion/inclusion criteria or comparability of intervention and control dosages (Cassels et al. 2003; Moynihan et al. 2000; Cook et al. 2007). Industry-funded studies are often used in marketing campaigns, for example in pharmaceutical advertisements that appear in medical journals, promotional materials distributed by drug companies, and direct-to-consumer marketing of drugs (Lundh et al. 2012; Steinman et al. 2006; Stryer and Bero 1996). End-users of research and other health information may lack the critical appraisal skills to adequately evaluate validity, applicability, and risk of bias (Jewell and Bero 2008; Akl et al. 2011), undermining their ability to use health evidence for their own benefit.

This paper describes the toolbox of learning aids we used in critical appraisal training workshops we conducted for consumer and patient groups, healthcare workers, and journalists (Odierna et al. 2012). These workshops focused on increasing participants’ ability to recognize bias and use unbiased, methodologically sound research in communicating with others and in making their own healthcare decisions, including decisions about tobacco use and exposure to second-hand smoke, which is a priority for our funder (FAMRI 2007). Previous academic-led workshops teaching critical appraisal skills have focused primarily on medical doctors and other clinicians (Berger et al. 2010; Coomarasamy, Taylor, and Khan 2003; Green and Ellis 1997; Hyde et al. 2011; Nabulsi et al. 2007; Parkes et al. 2001). We designed our workshops to meet the needs of consumers/patients, health care journalists, and non-prescribing healthcare workers. We developed a modular toolbox that we used in two pilot trainings in 2009 and then adapted for nine critical appraisal workshops that we conducted in 2010 in California, USA.

The Workshops

The nine workshops were attended by 178 participants (102 consumers/patients, 43 non-prescribing healthcare workers, and 33 journalists), and were offered in community-based settings. We used our toolbox of materials to help us meet the aims of our workshops: to improve participants’ understanding of the basic concepts of evidence-based medicine and potential sources of bias in health research, to increase their confidence in their ability to understand research, and to support their use of evidence. The workshops are described in detail elsewhere (Odierna, White, Boland et al. 2010; Odierna et al. 2012). They were developed in collaboration with community and professional organizations, with the advice of key community informants, and in consultation with an academic/community partnership (Clinical and Translational Science Institute 2012). We offered the workshops through community organizations, and they were co-facilitated by academics and community members.

The workshops consisted of didactic and participatory modules and varied in length from three to seven hours. We chose a problem-based “user pull” approach that makes training more relevant to learners than a systematic “push” of pre-selected topics and encourages the application of research evidence to local policy and practice (Lavis et al. 2006; Oliver et al. 2004; Dorsch et al. 1990). As a result, the workshops, subject matter, and materials varied according to our community partners’ priorities, and were designed to accommodate varied learning styles. Throughout the project, we used an iterative process to modify and add materials in response to feedback from our community partners and workshop participants.

We organized workshops around a “cycle of bias” framework (Figure 1), which we developed to provide a strong and easily understood map of places in the research process that are particularly vulnerable to intentional or unintentional bias. Throughout the workshops, we asked participants to think about how research questions are framed by authors and to determine whether a study’s methodology and conduct allow it to answer its research questions. We discussed inclusion and exclusion criteria, and asked participants whether or not results could be confidently applied to particular people or situations. We examined studies’ stated methodology, protocols, and conduct. Finally, we discussed publication, including presentation of findings, selective outcome reporting, and downstream dissemination of results in, for example, media reports and marketing campaigns. Our use of the cycle of bias perspective differs from typical critical appraisal trainings, which focus primarily on methodological risks of bias and internal validity. During lectures and small group practice sessions, participants identified research questions and potential sources of intentional and unintentional bias using the PICO/PECO mnemonic (population, intervention/exposure, comparison, outcome). They also used PICO/PECO to determine whether a study was likely to apply to particular people or situations, what treatment was being tested and what it was being compared to, and whether or not the health outcomes being studied were meaningful to them, or in the case of providers and journalists, to their patients or audience. They were encouraged to think about how study funding sources and investigators’ financial conflicts of interest might affect study design, methodology, conduct, presentation of results (e.g., absolute vs. relative risk reduction), and characterization of findings by both a study’s authors and subsequent media reports as significant, important, or widely applicable.
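One way to make the PICO/PECO breakdown concrete is to record a study's framing as a small structured checklist. A minimal sketch in Python; the study details below are hypothetical and are not drawn from any article used in the workshops:

```python
from dataclasses import dataclass

@dataclass
class PECO:
    """PECO breakdown of an observational study.

    For a trial, PICO substitutes Intervention for Exposure.
    """
    population: str
    exposure: str
    comparison: str
    outcome: str

# Hypothetical study, for illustration only.
study = PECO(
    population="never-smoking adults aged 40-69",
    exposure="regular second-hand smoke exposure at home",
    comparison="no reported second-hand smoke exposure",
    outcome="incident coronary heart disease over 10 years",
)

# Applicability check: who was studied, and what was actually measured?
print(f"Applies to: {study.population}")
print(f"Outcome measured: {study.outcome}")
```

Filling in the four fields forces a reader to locate who was studied and what was measured before weighing whether the findings apply to their own situation.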

Figure 1
Cycle of Bias Framework for Evaluating Health Studies

The Toolbox

The toolbox includes materials that we developed or adapted in response to the concerns of our key informants and community partners. We also adapted and incorporated elements from other problem-based critical appraisal trainings (Dickersin et al. 2001; Dorsch et al. 1990; Green and Ellis 1997; Guyatt et al. 2008; Sackett et al. 2000; Shaneyfelt et al. 2006; Hadley, Wall, and Khan 2007). The limited availability of our community partner groups placed us under substantial time constraints. We could offer half-day or full-day sessions, depending on the needs of each individual group. Half-day workshops included a single 1.5-hour small-group session; full-day workshops included two. Most of our community partners asked us not to pre-assign reading materials because they thought this might discourage members from attending, or that members would not do the reading in advance. Nonetheless, we were committed to offering workshops in real-world settings, to people who might not otherwise have access to this type of training. Therefore, we supplied participants with tools to help them quickly identify research questions, results, and potential sources of bias and misinformation in journal articles and media reports within the time allotted in the small-group sessions:

Journal Article and Media Report Markups

We provided annotated handouts of scientific journal and mainstream media articles on topics of interest to our participants for use in the small-group sessions. Although we annotated and presented entire journal articles, we emphasized abstracts and sections of media reports because most of our workshop participants did not have free access to the scientific literature. Throughout the project we modified materials in response to the needs of our participants. Examples of markup excerpts, aggregated here for space, are shown in Figures 2–4 below.

Figure 2
Excerpts from a Journal Article Abstract1 and Disclosures, Annotated with the PECO Mnemonic (Population, Exposure, Comparison, Outcome) and Commentary
Figure 4
Excerpts from a Journal Article1 Abstract and Disclosures, and Subsequent Media Reports2-4 Annotated to Show Study Population, Outcome, Conflicts of Interest, and Presentation of Results

Scientific journal articles

We selected papers for each workshop after consulting with our community partners. We included at least one tobacco-relevant paper among the selections we offered, explaining that this was in concordance with the priorities of our funder and the research focus of several of the authors (LB, JW, and SF). Participants used these journal article markups during the experiential small-group learning portions of the workshops. We used Adobe Acrobat X Pro (Adobe Systems Incorporated) to annotate article PDFs in advance to show PICO/PECO, research questions, results, absolute and relative risk differences, conclusions, and financial disclosures. The markups were used as teaching tools and springboards for discussions. For example, we asked participants to consider how health activists, payers, and radiotherapy equipment manufacturers might characterize what authors of a publicly-funded study (Hughes et al. 2004) reported as a statistically significant but modest 1% absolute risk reduction in breast cancer recurrence in older women who received adjuvant radiotherapy, when the same results could also be described as a relative difference in risk of over 300%.
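The contrast between absolute and relative presentations of the same result can be made concrete with a short calculation. A minimal sketch in Python, using hypothetical event rates rather than figures from any study cited here:

```python
def risk_summary(control_rate, treatment_rate):
    """Express the same treatment effect three ways from two event rates."""
    arr = control_rate - treatment_rate               # absolute risk reduction
    rrr = arr / control_rate                          # relative risk reduction
    rel_excess = control_rate / treatment_rate - 1    # relative excess risk without treatment
    return arr, rrr, rel_excess

# Hypothetical event rates: 4 per 100 untreated, 1 per 100 treated.
arr, rrr, rel_excess = risk_summary(0.04, 0.01)
print(f"Absolute risk reduction: {arr:.0%}")              # → 3%
print(f"Relative risk reduction: {rrr:.0%}")              # → 75%
print(f"Excess risk without treatment: {rel_excess:.0%}") # → 300%
```

The same 3-percentage-point difference sounds modest as an absolute reduction and dramatic as a relative one; which framing a report leads with can shape readers’ impressions of a treatment’s value.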

Media reports

We also annotated and distributed “downstream” reports from various newspapers and other media sources that were based on the journal articles we used in the small group sessions, enabling the participants to see how research findings are translated into news stories and how these may change over time (Figure 4). We discussed how headlines vary following the publication of high-visibility papers. For example, in the week following the publication of an industry-funded study that reported beneficial effects of statins on a broad population of patients (Ridker et al. 2008), stories appeared in the New York Times entitled Cholesterol-Fighting Drugs Show Wider Benefit (Belluck 11-10-2008); Who Should Take a Statin? (New York Times Editorial 11-17-2008); and A Call for Caution in the Rush to Statins (Parker-Pope 11-18-2008). We provided explicit examples of how results can be reported in the press. For example, we showed how the same reduced risk of adverse outcomes can be reported as relative risk reductions (“54% fewer,” “almost 50% less likely”) or as an absolute risk reduction (“…reduced by less than one percentage point”). We also suggested that participants compare mainstream media reporting of COI with COI disclosures in journal publications.

Guides for Participants

We distributed worksheets for participants to complete with PICO/PECO questions during the facilitated small groups, one-page summaries of study findings using the PICO/PECO format for reference, and spaces for their own comments. We provided each participant with laminated critical appraisal pocket guides, adapted from Guyatt et al. 2008, listing important skills and key points.

Guides for Facilitators

Small groups and discussions were co-facilitated by academic researchers and community members, some of whom did not have extensive training in research methods. We supplied all facilitators with discussion guides for the studies used in the small group sessions. These included key points in the cycle of bias for the studies.


Glossary

Participants in the earliest workshops said that they needed a glossary to use during and after the workshops. We consulted the literature (Last 2001; Polit and Tatano-Beck 2008; Greenhalgh 2010; Aveyard 2007; Crombie 1996) to produce a glossary of short, understandable definitions for terms we used throughout the workshops.

Resource Lists and Other Materials

We provided and discussed lists of books, articles, websites, critical appraisal guides, and online courses on critical appraisal, bias in research, and evidence-based medicine. We modified the lists according to the needs and interests of the participant groups. We gave journalists a “Conflict of Interest” tipsheet and links to an article with a juried list of independent medical/research professionals (i.e., who had not received industry funding in the previous five years) who were identified as available sources of health information (Lenzer and Brownlee 2008).

Participant Feedback

In oral and written workshop evaluations, participants praised the tools, particularly the visual props, the PICO/PECO format, and the tools used in hands-on learning sessions. The markups of “real papers” and “concrete examples” were most frequently named as “the best thing about today’s workshop” on evaluation forms. Comments included “Walking through the case studies and analyzing each section was very helpful to grasp the information” and “I took multiple stats and research classes and still found it helpful to have some basic idea of what is most important to look for.” Many participants spoke of the value of learning about the difference between absolute and relative risk. Several participants questioned whether our choice of materials was influenced by our funder’s focus on the harms of tobacco and second-hand smoke; we had disclosed our funding affiliation at the start of each workshop. We used these comments as a springboard for discussions about different types of COI and affiliation bias. This issue becomes ever more relevant to students of critical appraisal as funding agencies and journals increasingly require disclosure of investigators’ competing interests and apparent conflicts of interest (U.S. Department of Health and Human Services 2011; Institute of Medicine 2009; Graf et al. 2007) and such information becomes available to readers. However, we reiterated that COI does not necessarily lead to biased findings, and pointed out that some bias is methodological and can be ascribed to error rather than misrepresentation.

Some participants said that although they found the annotated articles helpful, it was hard to fully understand papers without actually reading them through. Even so, the overall results of our workshops, reported in detail elsewhere (Odierna et al. 2012), appeared promising. Briefly, participants showed immediate and sustained significant (p< .001) improvements in knowledge (absolute increases over baseline of about 20% post-workshop; 15% at the 12-month follow-up) and confidence (absolute increases over baseline of about 29% post-workshop; 27% at follow-up). At follow-up, more than half of the participants said that they had applied workshop skills in healthcare communication and decision making as well as in critical appraisal of health information. About a third of the responses indicated that participants felt more knowledgeable and confident, while only 2% said they had not used the workshop skills or materials at all, and more than 10%, mostly journalists, used them at work. Throughout the project, and in follow-up comments, workshop participants expressed particular concern about bias related to COI, and said the workshops left them better prepared to evaluate this when they use health information.


The tools we developed for this project extended the focus of critical appraisal training to risks of bias beyond those that are traditionally covered. Our cycle of bias framework encouraged participants to identify and evaluate not only studies’ methodological biases but also the potentially biasing effects of funding sources, investigator conflicts of interest, and author affiliations. The tools were also intended to help participants evaluate published study findings, consider the potential risk of selective outcome reporting, and recognize inaccuracies and spin in media reports. Moreover, we emphasized the importance of considering the composition of study populations and inclusion/exclusion criteria when deciding whether or not findings can be applied with confidence to particular patients, populations, and real-world situations. Although we did not evaluate the tools separately, the success of our workshops indicates that an adaptive approach can increase the ability of diverse groups of patients and other community-based learners to identify potential biases and critically appraise journal articles and media reporting on health interventions.

Although industry-sponsored studies may have a high risk of bias (Lundh et al. 2012; Yank, Rennie, and Bero 2007; Bero et al. 2007; Barnes and Bero 1998) and be less likely than publicly-funded studies to include minority and disadvantaged populations (Dickerson et al. 2009; Rochon et al. 1998; Rochon et al. 2004), research can vary in methodological quality, accuracy, completeness, and generalizability of results regardless of sponsorship, other COIs, or author ideology or affiliation (Resnik 1998). Therefore, it is crucial for all stakeholders to understand how to detect bias and to know where bias may be introduced in the design, conduct, and reporting of health research.

The participants in our workshops were relatively healthy people who were interested in learning how to use the best evidence to support their well-being. Even before the workshops began, over 90% of them said they were concerned about COI in research. The level of interest may vary among target populations, but there may nonetheless be benefits to incorporating critical appraisal training, including appraisal of potential biases, into educational interventions to increase health literacy in other groups of learners. Trainings could be offered to patients, patient advocates, and providers who share concerns about health issues or particular medical conditions, leading to collaborative efforts for unbiased, valid evidence-based policy and practice. Educating patients and others who are potential research participants could also be beneficial in other ways. A recent systematic review (Kirkby et al. 2012) found that about half of potential participants in medical research wanted information about investigator COI. Their level of interest varied, particularly when research was considered to be high-risk, but the limited ability of potential participants to articulate their concerns or evaluate the implications of COI may hinder the effective disclosure of this information (Weinfurt et al. 2006).

Adaptable tools for teaching the detection of bias could have broad utility in online and mixed-venue courses for diverse groups of participants, including non-traditional learners. These groups could learn to critically appraise health information for validity, risk of bias, and generalizability. This has the potential to make such information accessible for use in shared healthcare decision making and health advocacy. However, to expand critical appraisal training to participants from underserved and disadvantaged populations, it will be necessary to identify resources for developing tools and curricula in close cooperation with community partners. This may require, for example, providing payment and other incentives to train community members to recruit participants and facilitate workshops.

The tools we used in our trainings were adapted for each workshop after consultation with representatives of our partner organizations. The tools evolved over the course of the project in response to participants’ suggestions. We used an individualized approach, simplifying scientific terms and mathematical concepts while maintaining the core concepts. We used subject matter chosen by our community partners. Our ability to recommend a single curriculum or complete set of tools for teaching diverse learners about sources of bias in research is limited by our adaptive approach. However, individualized modifications can make critical appraisal skills accessible for patients and other populations, and they are relatively easy to implement when conducting workshops developed from basic modular designs such as ours.

Figure 3
Annotated Meta-Analysis1 Forest Plots and Summary Statistics of the Relationship between Industry Sponsorship and Research Outcome


Acknowledgments

The authors gratefully acknowledge the UCSF/CTSI Community Engagement consultants for advice on partnering with community groups, Maureen Boland for research assistance, and Lisa Hirsch for proofreading and editing. We give special thanks to our community partners: the Newcomers Health Program and the San Francisco Community Clinic Consortium at the San Francisco Department of Public Health, the Ohlone Herbal Center, Breast Cancer Action, the Disability Rights Education and Defense Fund (DREDF), the Kaiser Foundation Hospital Professional Performance Committee in Hayward, CA, the California Association of Retired Americans (CARA), the UC Berkeley Graduate School of Journalism, and the Northern California Association of Healthcare Journalists.

Funding: This project was funded by the Flight Attendant Medical Research Institute (FAMRI), Miami FL USA. This project was also supported by NIH/NCRR UCSF-CTSI Grant Number UL1 RR024131. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH.


Human Subjects: The project was approved by the Committee on Human Research at the University of California, San Francisco, approval numbers H2758-33589 and 10-02507.


References

  • Akl EA, Oxman AD, Herrin J, Vist GE, Terrenato I, Sperati F, Costiniuk C, Blank D, Schunemann H. Using alternative statistical formats for presenting risks and risk reductions. Cochrane Database Syst Rev. 2011;3 CD006776. [PubMed]
  • Aveyard H. Doing a Literature Review in Health and Social Science: A Practical Guide. McGraw-Hill; New York: 2007.
  • Barnes DE, Bero LA. Why review articles on the health effects of passive smoking reach different conclusions. JAMA. 1998;279(19):1566–70. [PubMed]
  • Belluck P. Cholesterol-fighting drugs show wider benefit. New York Times. 2008 Nov 10.
  • Berger B, Steckelberg A, Meyer G, Kasper J, Muhlhauser I. Training of patient and consumer representatives in the basic competencies of evidence-based medicine: a feasibility study. BMC Med Educ. 2010;10:16. [PMC free article] [PubMed]
  • Bero LA, Jadad AR. How consumers and policymakers can use systematic reviews for decision making. Ann Intern Med. 1997;127(1):37–42. [PubMed]
  • Bero L, Oostvogel F, Bacchetti P, Lee K. Factors associated with findings of published trials of drug-drug comparisons: why some statins appear more efficacious than others. PLoS Med. 2007;4(6):e184. [PMC free article] [PubMed]
  • Cassel CK, Guest JA. Choosing wisely: Helping physicians and patients make smart decisions about their care. JAMA: The Journal of the American Medical Association. 2012;307(17):1801–1802. [PubMed]
  • Cassels A, Hughes MA, Cole C, Mintzes B, Lexchin J, McCormack JP. Drugs in the news: an analysis of Canadian newspaper coverage of new prescription drugs. CMAJ. 2003;168(9):1133–7. [PMC free article] [PubMed]
  • Clinical and Translational Science Institute. Community Engagement & Health Policy. University of California, San Francisco; 2012 [cited August 9, 2012].
  • Cochrane Collaboration. Nicotine replacement therapy for smoking cessation. 2008. [PubMed]
  • Cook DM, Boyd EA, Grossmann C, Bero LA. Reporting science and conflicts of interest in the lay press. PLoS One. 2007;2(12):e1266. [PMC free article] [PubMed]
  • Coomarasamy A, Taylor R, Khan KS. A systematic review of postgraduate teaching in evidence-based medicine and critical appraisal. Med Teach. 2003;25(1):77–81. [PubMed]
  • Crombie IK. The Pocket Guide to Critical Appraisal. BMJ Publishing Group; London: 1996.
  • Dickersin K, Braun L, Mead M, Millikan R, Wu AM, Pietenpol J, Troyan S, Anderson B, Visco F. Development and implementation of a science training course for breast cancer activists: Project LEAD (leadership, education and advocacy development) Health Expect. 2001;4(4):213–20. [PubMed]
  • Dickerson K, Leeman RF, Mazure CM, O’Malley SS. The inclusion of women and minorities in smoking cessation clinical trials: a systematic review. Am J Addict. 2009;18(1):21–8. [PMC free article] [PubMed]
  • Dorsch JL, Frasca MA, Wilson ML, Tomsic ML. A multidisciplinary approach to information and critical appraisal instruction. Bull Med Libr Assoc. 1990;78(1):38–44. [PMC free article] [PubMed]
  • Enstrom JE, Kabat GC. Environmental tobacco smoke and tobacco related mortality in a prospective study of Californians, 1960-98. BMJ. 2003 May 17;326(7398):1057. [PMC free article] [PubMed]
  • FAMRI. Statements. Flight Attendant Medical Research Institute; 2007 [cited August 9, 2012].
  • Grady D, Redberg RF. Less is more: how less health care can result in better health. Archives of Internal Medicine. 2010;170(9):759–50. [PubMed]
  • Graf C, Wager E, Bowman A, Fiack S, Scott-Lichter D, Robinson A. Best Practice Guidelines on Publication Ethics: a Publisher’s Perspective. International Journal of Clinical Practice. 2007;61:1–26. [PubMed]
  • Green ML, Ellis PJ. Impact of an evidence-based medicine curriculum based on adult learning theory. J Gen Intern Med. 1997;12(12):742–50. [PMC free article] [PubMed]
  • Greenhalgh T. How to Read a Paper: The Basics of Evidence-based Medicine. 4th ed Wiley-Blackwell; UK: 2010.
  • Guyatt G, Rennie D, Meade MO, Cook DJ. Users’ Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice. JAMA & Archives Journals. Second ed McGraw-Hill Medical; New York: 2008.
  • Hadley JA, Wall D, Khan KS. Learning needs analysis to guide teaching evidence-based medicine: knowledge and beliefs amongst trainees from various specialities. BMC Med Educ. 2007;7:11. [PMC free article] [PubMed]
  • Higgins JPT, Green S. In: Cochrane Handbook for Systematic Reviews of Interventions. Cochrane Book Series. Cochrane Collaboration, editor. Wiley-Blackwell; Chichester, England: 2008.
  • Hughes KS, Schnaper LA, Berry D, Cirrincione C, McCormick B, Shank B, Wheeler J, Champion LA, Smith TJ, Smith BL, Shapiro C, Muss HB, Winer E, Hudis C, Wood W, Sugarbaker D, Henderson IC, Norton L. Lumpectomy plus tamoxifen with or without irradiation in women 70 years of age or older with early breast cancer. N Engl J Med. 2004;351(10):971–7. [PubMed]
  • Hyde C, Parkes J, Deeks J, Milne R. Systematic review of effectiveness of teaching critical appraisal (Structured abstract) Database of Abstracts of Reviews of Effects. 2011;(4)
  • Institute of Medicine, Committee on Conflict of Interest in Medical Research, Education, and Practice, Board on Health Sciences Policy. Conflict of Interest in Medical Research, Education, and Practice. National Academies Press; Washington, DC: 2009.
  • Ioannidis J. Why Most Published Research Findings Are False. PLoS Med. 2005;2(8):e124. [PMC free article] [PubMed]
  • Jewell CJ, Bero LA. “Developing Good Taste in Evidence”: Facilitators of and Hindrances to Evidence-Informed Health Policymaking in State Government. Milbank Quarterly. 2008;86(2):177–208. [PubMed]
  • Kirkby HM, Calvert M, Draper H, Keeley T, Wilson S. What potential research participants want to know about research: a systematic review. BMJ Open. 2012;2(3) [PMC free article] [PubMed]
  • Last JM. A Dictionary of Epidemiology. 4th ed. Oxford University Press; New York: 2001.
  • Lavis JN, Lomas J, Hamid M, Sewankambo NK. Assessing country-level efforts to link research to action. Bulletin of the World Health Organization. 2006;84:620–628. [PubMed]
  • Lenzer J, Brownlee S. Naming names: is there an (unbiased) doctor in the house? BMJ. 2008;337:a930. [PubMed]
  • Lexchin J, Bero LA, Djulbegovic B, Clark O. Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ. 2003;326(7400):1167–70. [PMC free article] [PubMed]
  • Lundh A, Sismondo S, Lexchin J, Busuioc OA, Bero LA. Industry sponsorship and research outcome. Cochrane Database of Systematic Reviews. 2012 Dec 12;12 MR000033. doi: 10.1002/14651858.MR000033.pub2. [PubMed]
  • Moynihan R. Making medical journalism healthier. Lancet. 2003;361(9375):2097–8. [PubMed]
  • Moynihan R, Bero LA, Ross-Degnan D, Henry D, Lee K, Watkins J, Mah C, Soumerai SB. Coverage by the news media of the benefits and risks of medications. N Engl J Med. 2000;342(22):1645–50. [PubMed]
  • Nabulsi M, Harris J, Letelier L, Ramos K, Hopayian K, Parkin C, Porzsolt F, Sestini P, Slavin M, Summerskill W, et al. Effectiveness of education in evidence-based healthcare: the current state of outcome assessments and a framework for future evaluations. Int J Evid Based Healthc. 2007;5:468–476. [PubMed]
  • Who should take a statin? [editorial]. New York Times. 2008 Nov 17.
  • Odierna DH, White J, Boland M, Bero LA. The role of critical appraisal training in healthcare decision making and communication. Paper presented at the Cochrane and Campbell Colloquium; 2010.
  • Odierna DH, White J, Forsyth S, Bero LA. Critical appraisal training increases understanding and confidence, and enhances the use of evidence in diverse categories of learners. Health Expectations. 2012 Dec 12. doi: 10.1111/hex.12030. [Epub ahead of print] [PubMed]
  • Oliver S, Clarke-Jones L, Rees R, Milne R, Buchanan P, Gabbay J, Gyte G, Oakley A, Stein K. Involving consumers in research and development agenda setting for the NHS: developing an evidence-based approach. Health Technol Assess. 2004;8(15):1–148. III–IV. [PubMed]
  • Parker-Pope T. A call for caution in the rush to statins. New York Times. 2008 Nov 18.
  • Parkes J, Hyde C, Deeks JJ, Milne R. Teaching critical appraisal skills in health care settings. Cochrane Database Syst Rev. 2001;(3):CD001270. [PubMed]
  • Polit DF, Tatano-Beck C. Nursing Research: Generating and Assessing Evidence for Nursing Practice. 8th ed. Lippincott Williams and Wilkins; Philadelphia: 2008.
  • Resnik DB. Conflicts of Interest in Science. Perspectives on Science. 1998;6(4):381–408.
  • Ridker PM, Danielson E, Fonseca FA, Genest J, Gotto AM, Jr, Kastelein JJ, Koenig W, Libby P, Lorenzatti AJ, MacFadyen JG, Nordestgaard BG, Shepherd J, Willerson JT, Glynn RJ, JUPITER Study Group. Rosuvastatin to prevent vascular events in men and women with elevated C-reactive protein. N Engl J Med. 2008;359(21):2195–207. [PubMed]
  • Rochon PA, Clark JP, Binns MA, Patel V, Gurwitz JH. Reporting of gender-related information in clinical trials of drug therapy for myocardial infarction. CMAJ. 1998;159(4):321–7. [PMC free article] [PubMed]
  • Rochon PA, Mashari A, Cohen A, Misra A, Laxer D, Streiner DL, Clark JP, Dergal JM, Gold J. The inclusion of minority groups in clinical trials: problems of under representation and under reporting of data. Account Res. 2004;11(3-4):215–23. [PubMed]
  • Rodwin MA. The Politics of Evidence-Based Medicine. Suffolk University Health & Biomedical Law. 2001; Paper 3. [PubMed]
  • Roseman M, Milette K, Bero LA, et al. Reporting of conflicts of interest in meta-analyses of trials of pharmacological treatments. JAMA: The Journal of the American Medical Association. 2011;305(10):1008–1017. [PubMed]
  • Roseman M, Turner ET, Lexchin J, Coyne JC, Bero LA, Thombs BD. Reporting of conflicts of interest from drug trials in Cochrane reviews: cross sectional study. BMJ. 2012;345. [PubMed]
  • Sackett D, Straus SE, Richardson WS, Rosenberg W, Haynes RB. Evidence-Based Medicine: How to Practice and Teach EBM. 2nd ed. Churchill Livingstone; New York: 2000.
  • Schulz KF, Grimes DA. Allocation concealment in randomised trials: defending against deciphering. The Lancet. 2002a;359(9306):614–618. [PubMed]
  • Schulz KF, Grimes DA. Blinding in randomised trials: hiding who got what. The Lancet. 2002b;359(9307):696–700. [PubMed]
  • Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias: Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA: The Journal of the American Medical Association. 1995;273(5):408–412. [PubMed]
  • Shamoo AE, Resnik DB. Conflicts of Interest and Scientific Objectivity. In: Shamoo AE, Resnik DB, editors. Responsible Conduct of Research. Oxford University Press; Oxford: 2009 (original edition 2003).
  • Shaneyfelt T, Baum KD, Bell K, Feldstein D, Houston TK, Kaatz S, Whelan C, Green M. Instruments for evaluating education in evidence-based practice: a systematic review. JAMA. 2006;296(9):1116–27. [PubMed]
  • Steinman MA, Bero LA, Chren MM, Landefeld CS. Narrative review: the promotion of gabapentin: an analysis of internal industry documents. Ann Intern Med. 2006;145(4):284–93. [PubMed]
  • Stryer D, Bero LA. Characteristics of materials distributed by drug companies. Journal of General Internal Medicine. 1996;11:575–583. [PubMed]
  • U.S. Department of Health and Human Services. Promoting Objectivity in Research for which Public Health Service Funding is Sought and Responsible Prospective Contractors. Federal Register: Rules and Regulations. National Institutes of Health; 2011. [PubMed]
  • Weinfurt KP, Friedman JY, Allsbrook J, Dinan MA, Hall M, Sugarman J. Views of Potential Research Participants on Financial Conflicts of Interest: Barriers and Opportunities for Effective Disclosure. J Gen Intern Med. 2006;21(9):901–906. [PMC free article] [PubMed]
  • White J, Bero LA. Corporate manipulation of research: Strategies are similar across five industries. Stanford Law & Policy Review. 2010;21(1):105–134.
  • Yank V, Rennie D, Bero LA. Financial ties and concordance between results and conclusions in meta-analyses: retrospective cohort study. BMJ. 2007;335(7631):1202–5. [PMC free article] [PubMed]