Recognizing bias in health research is crucial for evidence-based decision making. We worked with eight community groups to develop materials for nine modular, individualized critical appraisal workshops we conducted with 102 consumers (four workshops), 43 healthcare providers (three workshops), and 33 journalists (two workshops) in California. We presented workshops using a “cycle of bias” framework, and developed a toolbox of presentations, problem-based small group sessions, and skill-building materials to improve participants’ ability to evaluate research for financial and other conflicts of interest, bias, validity, and applicability. Participant feedback indicated that the adaptability of the toolbox and our focus on bias were critical elements in the success of our workshops.
The ability to recognize the risk of bias in research is an important critical appraisal skill. Bias related to methodological flaws or conflicts of interest (COI) can produce poor-quality, inaccurate studies. Policy and practice guidelines that are based on poor-quality research can have negative effects on individuals and populations. Patients, consumers, and other end-users of research may be able to make better health and healthcare decisions if they can critically appraise evidence for bias, validity, and credibility. Shared evidence-based decision making is improved when patients and consumers are empowered by the ability to understand the relative risks and benefits of proposed treatments and health policies; such understanding is crucial for balanced, effective communications among healthcare providers, patients, payers, and policy makers (Greenhalgh 2010; Berger et al. 2010; Cassel 2012; Rodwin 2001).
Risk of bias is increased by methodological study characteristics that can introduce systematic error into the results. These characteristics include unreliable methods for generating the allocation sequence, poor allocation concealment, and failure to blind participants, personnel, and outcome assessors (Higgins and Green 2008). Furthermore, methodological biases in studies have been associated with overestimates of the efficacy of test treatments (Ioannidis 2005). In controlled clinical drug trials, studies with a high risk of bias, such as those lacking randomization, blinding, and allocation concealment, produce larger treatment effect sizes, thus falsely inflating the efficacy of the drugs, compared to studies that have these design features (Schulz et al. 1995; Schulz and Grimes 2002a, 2002b). Scientific articles that overstate benefits and applicability or understate harms of medical interventions may lead evidence-minded practitioners and patients to make suboptimal treatment decisions. This can result in overutilization of medical services with increased wasteful spending and harm to patients (Grady and Redberg 2010). Moreover, studies of research sponsored by the tobacco, pharmaceutical, and other industries show that industry-funded studies are disproportionately likely to produce findings that favor the sponsor’s intervention or that support public health policies that benefit the funder (White and Bero 2010). A recent Cochrane review has shown that pharmaceutical industry-sponsored studies overestimate the efficacy and underestimate the harm of their treatments, even when controlling for methodological biases (Lundh et al. 2012). Thus, study funding sources and financial COIs of investigators should be considered risks of bias (Roseman et al. 2011; Roseman et al. 2012).
Because COIs increase the risk that studies may be biased in design, conduct, or reporting, study sponsorship and the industry ties of investigators need to be taken into account to enable practitioners, consumers, and policy makers to use research to make good evidence-based decisions, and to enable journalists to accurately report on health research (Cook et al. 2007; Moynihan 2003; Bero and Jadad 1997). The ability to evaluate the presence or absence of bias is equally important. The perception of bias even when it may not be present can undermine readers’ confidence in particular studies as well as in scientific integrity overall (Shamoo and Resnik 2009). COI-related bias is not the only concern; methodological flaws threaten a study’s internal and external validity, reducing both its ability to answer its research question and the relevance of its findings to particular patients, populations, and situations.
Media reports that are based on biased or poor-quality studies can reflect these studies’ flaws, often omitting detailed accounts of effect sizes or salient study design features such as exclusion/inclusion criteria or comparability of intervention and control dosages (Cassels et al. 2003; Moynihan et al. 2000; Cook et al. 2007). Industry-funded studies are often used in marketing campaigns, for example in pharmaceutical advertisements that appear in medical journals, promotional materials distributed by drug companies, and direct-to-consumer marketing of drugs (Lundh et al. 2012; Steinman et al. 2006; Stryer and Bero 1996). End-users of research and other health information may lack the critical appraisal skills to adequately evaluate validity, applicability, and risk of bias (Jewell and Bero 2008; Akl et al. 2011), undermining their ability to use health evidence for their own benefit.
This paper describes the toolbox of learning aids we used in critical appraisal training workshops we conducted for consumer and patient groups, healthcare workers, and journalists (Odierna et al. 2012). These workshops focused on increasing participants’ ability to recognize bias and use unbiased, methodologically sound research in communicating with others and in making their own healthcare decisions, including decisions about tobacco use and exposure to second-hand smoke, which is a priority for our funder (FAMRI 2007). Previous academic-led workshops teaching critical appraisal skills have focused primarily on medical doctors and other clinicians (Berger et al. 2010; Coomarasamy, Taylor, and Khan 2003; Green and Ellis 1997; Hyde et al. 2011; Nabulsi et al. 2007; Parkes et al. 2001). We designed our workshops to meet the needs of consumers/patients, health care journalists, and non-prescribing healthcare workers. We developed a modular toolbox that we used in two pilot trainings in 2009, then adapted for nine critical appraisal workshops that we conducted in 2010 in California, USA.
The nine workshops were attended by 178 participants (102 consumers/patients, 43 non-prescribing healthcare workers, and 33 journalists), and were offered in community-based settings. We used our toolbox of materials to help us meet the aims of our workshop: to improve participants’ understanding of the basic concepts of evidence-based medicine and potential sources of bias in health research, to increase their confidence in their ability to understand research, and to support their use of evidence. The workshops are described in detail elsewhere (Odierna, White, Boland et al. 2010; Odierna et al. 2012). They were developed in collaboration with community and professional organizations, with the advice of key community informants, and in consultation with an academic/community partnership (Clinical and Translational Science Institute 2012). We offered the workshops through community organizations. They were co-facilitated by academics and community members.
The workshops consisted of didactic and participatory modules and varied in length from three to seven hours. We chose a problem-based “user pull” approach that makes training more relevant to learners than a systematic “push” of pre-selected topics and encourages the application of research evidence to local policy and practice (Lavis et al. 2006; Oliver et al. 2004; Dorsch et al. 1990). As a result, the workshops, subject matter, and materials varied according to our community partners’ priorities, and were designed to accommodate varied learning styles. Throughout the project, we used an iterative process to modify and add materials in response to feedback from our community partners and workshop participants.
We organized workshops around a “cycle of bias” framework (Figure 1), which we developed to provide a strong and easily understood map of places in the research process particularly vulnerable to intentional or unintentional bias. Throughout the workshops, we asked participants to think about how research questions are framed by authors and to determine whether a study’s methodology and conduct allow it to answer its research questions. We discussed inclusion and exclusion criteria, and asked participants whether or not results could be confidently applied to particular people or situations. We examined studies’ stated methodology, protocols, and conduct. Finally, we discussed publication, including presentation of findings, selective outcome reporting, and downstream dissemination of results in, for example, media reports and marketing campaigns. Our use of the cycle of bias perspective differs from typical critical appraisal trainings that focus primarily on methodological risks of bias and internal validity. During lectures and small group practice sessions, participants identified research questions and potential sources of intentional and unintentional bias using the PICO/PECO mnemonic (population, intervention/exposure, comparison, outcome). They also used PICO/PECO to determine whether a study was likely to apply to particular people or situations, what treatment was being tested and what it was being compared to, and whether or not the health outcomes being studied were meaningful to them, or in the case of providers and journalists, to their patients or audience. They were encouraged to think about how study funding sources and investigators’ financial conflicts of interest might affect study design, methodology, conduct, presentation of results (e.g., absolute vs. relative risk reduction), and characterization by both a study’s authors and subsequent media reports of findings as significant, important, or widely applicable.
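As a concrete illustration (our sketch, not a tool distributed in the workshops), the PICO/PECO breakdown can be represented as a simple structure that a reader fills in for any study; the example study details below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PicoAppraisal:
    """One reader's PICO/PECO breakdown of a single study."""
    population: str    # who was studied; inclusion/exclusion criteria
    intervention: str  # or exposure, for observational (PECO) studies
    comparison: str    # what the intervention was compared against
    outcome: str       # what was measured, and whether it matters to the reader

# Hypothetical example, not drawn from any study used in the workshops:
appraisal = PicoAppraisal(
    population="adults over 65 with no prior cardiovascular disease",
    intervention="daily statin",
    comparison="placebo",
    outcome="cardiovascular events over 5 years",
)
print(appraisal)
```

Filling in each field forces the applicability questions the workshops emphasized: does the population resemble me (or my patients or audience), and is the outcome one I care about?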
The toolbox includes materials that we developed or adapted in response to the concerns of our key informants and community partners. We also adapted and incorporated elements from other problem-based critical appraisal trainings (Dickersin et al. 2001; Dorsch et al. 1990; Green and Ellis 1997; Guyatt et al. 2008; Sackett et al. 2000; Shaneyfelt et al. 2006; Hadley, Wall, and Khan 2007). The limited time availability of our community partner groups placed substantial constraints on us. We could offer half-day or full-day sessions, depending on the needs of each individual group. Half-day workshops included a single 1.5-hour small-group session; full-day workshops included two. Most of our community partners asked us not to pre-assign materials for their members to read because they thought this might discourage them from attending, or that they would not do the reading in advance. Nonetheless, we were committed to offering workshops in real-world settings, to people who might not otherwise have access to this type of training. Therefore, we supplied participants with tools to help them quickly identify research questions, results, and potential sources of bias and misinformation in journal articles and media reports within the time allotted in the small-group sessions:
We provided annotated handouts of scientific journal and mainstream media articles on topics of interest to our participants for use in the small-group sessions. Although we annotated and presented entire journal articles, we emphasized abstracts and sections of media reports because most of our workshop participants did not have free access to the scientific literature. Throughout the project we modified materials in response to the needs of our participants. Examples of markup excerpts, aggregated here for space, are shown in Figures 2–4 below.
We selected papers for each workshop after consulting with our community partners. We included at least one tobacco-relevant paper among the selections we offered, explaining that this was in concordance with the priorities of our funder and the research focus of several of the authors (LB, JW, and SF). Participants used these journal article markups during the experiential small-group learning portions of the workshops. We used Adobe Acrobat X Pro (Adobe Systems Incorporated) to annotate article PDFs in advance to show PICO/PECO, research questions, results, absolute and relative risk differences, conclusions, and financial disclosures. The markups were used as teaching tools and springboards for discussions. For example, we asked participants to consider how health activists, payers, and radiotherapy equipment manufacturers might characterize what authors of a publicly-funded study (Hughes et al. 2004) reported as a statistically significant but modest 1% absolute risk reduction in breast cancer recurrence in older women who received adjuvant radiotherapy, when the same results could also be described as a relative difference in risk of over 300%.
We also annotated and distributed “downstream” reports from various newspapers and other media sources that were based on the journal articles we used in the small group sessions, enabling the participants to see how research findings are translated into news stories and how these may change over time (Figure 4). We discussed how headlines vary following the publication of high-visibility papers. For example, in the week following the publication of an industry-funded study that reported beneficial effects of statins on a broad population of patients (Ridker et al. 2008), stories appeared in the New York Times entitled Cholesterol-Fighting Drugs Show Wider Benefit (Belluck 11-10-2008); Who Should Take a Statin? (New York Times Editorial 11-17-2008); and A Call for Caution in the Rush to Statins (Parker-Pope 11-18-2008). We provided explicit examples of how results can be reported in the press. For example, we showed how the same results showing reduced risk of adverse outcomes can be reported as relative risk reductions: “54% fewer,” “almost 50% less likely,” and absolute risk reductions: “…reduced by less than one percentage point.” We also suggested that participants compare mainstream media reporting of COI with COI disclosures in journal publications.
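The arithmetic behind these contrasting framings is simple. A minimal sketch, using hypothetical event rates chosen only to match the quoted phrasings (they are not figures from the Ridker study):

```python
def absolute_risk_reduction(control_risk, treated_risk):
    """Difference in event rates, in percentage points."""
    return control_risk - treated_risk

def relative_risk_reduction(control_risk, treated_risk):
    """Reduction expressed as a fraction of the control group's risk."""
    return (control_risk - treated_risk) / control_risk

# Hypothetical event rates (percent), for illustration only:
control, treated = 1.8, 0.83

arr = absolute_risk_reduction(control, treated)
rrr = relative_risk_reduction(control, treated)

print(f"Absolute risk reduction: {arr:.2f} percentage points")  # under 1 point
print(f"Relative risk reduction: {rrr:.0%}")                    # "54% fewer"
```

The same pair of event rates yields both a headline-friendly “54% fewer” and a far less dramatic “reduced by less than one percentage point,” which is why the workshops treated the choice of framing itself as a potential source of spin.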
We distributed worksheets for participants to complete with PICO/PECO questions during the facilitated small groups, one-page summaries of study findings using the PICO/PECO format for reference, and spaces for their own comments. We provided each participant with laminated critical appraisal pocket guides, adapted from Guyatt et al. 2008, listing important skills and key points.
Small groups and discussions were co-facilitated by academic researchers and community members, some of whom did not have extensive training in research methods. We supplied all facilitators with discussion guides for the studies used in the small group sessions. These included key points in the cycle of bias for the studies.
Participants in the earliest workshops said that they needed a glossary to use during and after the workshops. We consulted the literature (Last 2001; Polit and Tatano-Beck 2008; Greenhalgh 2010; Aveyard 2007; Crombie 1996) to produce a glossary of short, understandable definitions for terms we used throughout the workshops.
We provided and discussed lists of books, articles, websites, critical appraisal guides, and online courses on critical appraisal, bias in research, and evidence-based medicine. We modified the lists according to the needs and interests of the participant groups. We gave journalists a “Conflict of Interest” tipsheet and links to an article with a juried list of independent medical/research professionals (i.e., who had not received industry funding in the previous five years) who were identified as available sources of health information (Lenzer and Brownlee 2008).
In oral and written workshop evaluations, participants praised the tools, particularly visual props, the PICO/PECO format, and tools used in hands-on learning sessions. The markups of “real papers” and “concrete examples” were most frequently named as “the best thing about today’s workshop” on evaluation forms. Comments included “Walking through the case studies and analyzing each section was very helpful to grasp the information,” “I took multiple stats and research classes and still found it helpful to have some basic idea of what is most important to look for.” Many participants spoke of the value of learning about the difference between absolute and relative risk. Several participants questioned whether our choice of materials was influenced by our funder’s focus on the harms of tobacco and second-hand smoke; we had disclosed our funding affiliation at the start of each workshop. We used these comments as a springboard for discussions about different types of COI and affiliation bias. This issue becomes ever more relevant to students of critical appraisal as funding agencies and journals increasingly require disclosure of investigators’ competing interests and appearance of conflicts of interest (U.S. Department of Health and Human Services 2011; Institute of Medicine 2009; Graf et al. 2007) and such information becomes available to readers. However, we reiterated that COI does not necessarily lead to biased findings, and pointed out that some bias is methodological and can be ascribed to error rather than misrepresentation.
Some participants said that although they found the annotated articles helpful, it was hard to fully understand papers without actually reading them through. Even so, the overall results of our workshops, reported in detail elsewhere (Odierna et al. 2012), appeared promising. Briefly, participants showed immediate and sustained significant (p < .001) improvements in knowledge (absolute increases over baseline of about 20% post-workshop; 15% at the 12-month follow-up) and confidence (absolute increases over baseline of about 29% post-workshop; 27% at follow-up). At follow-up, more than half of the participants said that they had applied workshop skills in healthcare communication and decision making as well as in critical appraisal of health information. About a third of the responses indicated that participants felt more knowledgeable and confident, more than 10%, mostly journalists, used the skills at work, and only 2% said they had not used the workshop skills or materials at all. Throughout the project, and in follow-up comments, workshop participants expressed particular concern about bias related to COI, and said the workshops left them better prepared to evaluate this when they use health information.
The tools we developed for this project extended the focus of critical appraisal training to risks of bias beyond those traditionally covered. Our cycle of bias framework encouraged participants to identify and evaluate not only studies’ methodological biases but also the potentially biasing effects of funding sources, investigator conflicts of interest, and author affiliations. The tools were also intended to help participants evaluate published study findings, consider the potential risk of selective outcome reporting, and recognize inaccuracies and spin in media reports. Moreover, we emphasized the importance of considering the composition of study populations and inclusion/exclusion criteria when deciding whether or not findings can be applied with confidence to particular patients, populations, and real-world situations. Although we did not evaluate the tools separately, the success of our workshops indicates that an adaptive approach can increase the ability of diverse groups of patients and other community-based learners to identify potential biases and critically appraise journal articles and media reporting on health interventions.
Although industry-sponsored studies may have a high risk of bias (Lundh et al. 2012; Yank, Rennie, and Bero 2007; Bero et al. 2007; Barnes and Bero 1998) and be less likely than publicly-funded studies to include minority and disadvantaged populations (Dickerson et al. 2009; Rochon et al. 1998; Rochon et al. 2004), research can vary in methodological quality, accuracy, completeness, and generalizability of results regardless of sponsorship, other COIs, or author ideology or affiliation (Resnik 1998). Therefore, it is crucial for all stakeholders to understand how to detect bias and to know where bias may be introduced in the design, conduct, and reporting of health research.
The participants in our workshops were relatively healthy people who were interested in learning how to use the best evidence to support their well-being. Even before the workshops began, over 90% of them said they were concerned about COI in research. The level of interest may vary among target populations, but nonetheless there may be benefits to including training in critical appraisal of research, including appraisal of potential biases, in educational interventions to increase health literacy in other groups of learners. Trainings could be offered to patients, patient advocates, and providers who share concerns about health issues or particular medical conditions, leading to collaborative efforts for unbiased, valid evidence-based policy and practice. Educating patients and others who are potential research participants could also be beneficial in other ways. A recent systematic review (Kirkby et al. 2012) found that about half of potential participants in medical research wanted information about investigator COI. Their level of interest varied, particularly when research was considered to be high-risk, but the limited ability of potential participants to articulate their concerns or evaluate the implications of COI may hinder the effective disclosure of this information (Weinfurt et al. 2006).
Adaptable tools for teaching the detection of bias could have broad utility in online and mixed-venue courses for diverse groups of participants, including non-traditional learners. These groups could learn to critically appraise health information for validity, risk of bias, and generalizability. This has the potential to make such information accessible for use in shared healthcare decision making and health advocacy. However, to expand critical appraisal training to participants from underserved and disadvantaged populations, it will be necessary to identify resources for developing tools and curricula in close cooperation with community partners. This may require, for example, providing payment and other incentives to train community members to recruit participants and facilitate workshops.
The tools we used in our trainings were adapted for each workshop after consultation with representatives of our partner organizations. The tools evolved over the course of the project in response to participants’ suggestions. We used an individualized approach, simplifying scientific terms and mathematical concepts while maintaining the core concepts. We used subject matter chosen by our community partners. Our ability to recommend a single curriculum or complete set of tools for teaching diverse learners about sources of bias in research is limited by our adaptive approach. However, individualized modifications can make critical appraisal skills accessible for patients and other populations, and they are relatively easy to implement when conducting workshops developed from basic modular designs such as ours.
The authors gratefully acknowledge the UCSF/CTSI Community Engagement consultants for advice on partnering with community groups, Maureen Boland for research assistance, and Lisa Hirsch for proofreading and editing. We give special thanks to our community partners: the Newcomers Health Program and the San Francisco Community Clinic Consortium at the San Francisco Department of Public Health, the Ohlone Herbal Center, Breast Cancer Action, the Disability Rights Education and Defense Fund (DREDF), the Kaiser Foundation Hospital Professional Performance Committee in Hayward, CA, the California Association of Retired Americans (CARA), the UC Berkeley Graduate School of Journalism, and the Northern California Association of Healthcare Journalists.
Funding: This project was funded by the Flight Attendant Medical Research Institute (FAMRI), Miami FL USA. This project was also supported by NIH/NCRR UCSF-CTSI Grant Number UL1 RR024131. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH.
Human Subjects: The project was approved by the Committee on Human Research at the University of California, San Francisco, approval numbers H2758-33589 and 10-02507.