The constant barrage of fear and hope in the health news can make your head spin. Nearly every day we hear that almost everything we do (or fail to do) leads to cancer, suffering, and death. Just as frequently (and often on the very same day) we hear about new breakthroughs, tests, and miracle drugs that may save us. Fortunately, we know the fears are usually wildly exaggerated. Unfortunately, we know the hopes usually are, too.
Where does this exaggerated fear and hope come from? Consider the recent media coverage (Table 1) of a New England Journal of Medicine article about a new cancer treatment: olaparib, a poly(ADP-ribose) polymerase (PARP) inhibitor (1). In this phase I uncontrolled study of 60 patients with a variety of treatment-refractory solid tumors, the drug appeared to have an effect in one subgroup: 12 of the 19 patients with BRCA1 or BRCA2 mutations and breast, ovarian, or prostate cancer experienced either improvement or no tumor progression (according to radiological findings or tumor markers) sustained for at least 4 months.
The study received prominent television coverage—ABC, CBS, and NBC national news all ran stories. The coverage was full of hope. NBC's story began: “some are calling this the most important cancer breakthrough of the decade” (2). But the effect of treatment was not quantified, so viewers had no way to know how well it worked. All they heard were two dramatic anecdotes about patients who did well. NBC's chief medical correspondent concluded that “these drugs look like they will eventually save thousands of lives.”
The enthusiasm in the NBC story was only tempered by a single caution: the study was small. Two fundamental cautions were missing. First, without a control group, there is no way to know if the drug accounted for the findings. Second, there is no way to know if “stable or improved radiological and tumor markers” translate into a clinically meaningful outcome such as longer life.
Of course the news is also full of fear. Reporting on a study published in the Journal (4), CNN's Sanjay Gupta told American women to worry more about cancer: “there is no level of alcohol consumption that can be considered safe when it comes to cancer” (4). That is a lot of worrying: almost three-quarters of US adult women have consumed alcohol (5). CNN was not alone. The Today Show also covered the story (6). And The Washington Post ran a front-page article, “A drink a day raises women's risk of cancer.” Unfortunately, the coverage did not provide the magnitude of the risk. Comparing the highest level of drinking (≥15 drinks per week) to the lowest (one to two drinks per week), the investigators observed a 0.6% absolute increase in the risk of breast cancer diagnosis: from 2% to 2.6% over more than 7 years.
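The gap between the relative and absolute framings is easy to make concrete. A minimal sketch, using only the risk figures quoted above (2% rising to 2.6% over more than 7 years), shows how the same finding can be reported as a barely perceptible 0.6 percentage point absolute increase or as a far scarier-sounding 30% relative increase:

```python
# Illustrative arithmetic using the figures quoted above;
# the variable names are ours, not the study's.
baseline_risk = 0.020   # lowest drinking level (one to two drinks per week)
exposed_risk = 0.026    # highest drinking level (>=15 drinks per week)

# Absolute framing: how many extra diagnoses per 100 women?
absolute_increase = exposed_risk - baseline_risk

# Relative framing: how much bigger is the exposed risk than the baseline?
relative_increase = (exposed_risk - baseline_risk) / baseline_risk

print(f"Absolute risk increase: {absolute_increase:.1%}")  # 0.6%
print(f"Relative risk increase: {relative_increase:.0%}")  # 30%
```

Both numbers describe the same data; a news story that gives only the relative increase leaves readers unable to judge how much the risk actually matters to them.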
Perhaps most importantly, none of these news stories highlighted the most fundamental limitation of this observational study: confounding. There may have been something else about the women who happened to drink alcohol—compared with those who did not—that explains the findings. If more health-conscious women drank alcohol regularly (eg, one drink a day because of its purported heart benefit), it is possible that these women were more likely to adhere to regular screening mammography. So it could be that more screening—not more alcohol—explains the higher risk of breast cancer diagnosis. Journalists (and journals) should remind readers that confounding is always a threat to the validity of observational studies.
It would be easy to pin all the blame for exaggeration on journalists. After all, they have to grab their readers' (or listeners') attention. Screaming headlines and breathless reporting come in handy. And many health journalists lack the medical or statistical training needed to appraise research critically. Curiously, many fail to approach medical research with the same skepticism they routinely apply to political reporting. Nonetheless, blaming journalists for all exaggeration would be unfair. Many health journalists (and their editors) do a great job. For example, only one major newspaper covered the New England Journal of Medicine PARP study (3). The rest passed (in our opinion the right decision). And while not perfect, this story was at least somewhat tentative (“may help … , early stage work … , preliminary”) and quantified the result (Table 1).
When it comes to exaggeration of health hazards and medical breakthroughs, there is plenty of blame to go around. Researchers contribute to the problem. Research takes years of dedication and sacrifice—investigators would not last long if they did not believe passionately in what they do. And good press can help advance careers. The combination of strong beliefs and self-interest can be an irresistible recipe for exaggeration. Although we know of no systematic examination of how investigators talk about their own research to the media, a study of press releases issued by academic medical centers found exaggeration to be common: Almost all press releases included investigator quotes—one-quarter of which overstated the importance of the findings (7).
Exaggeration, however, starts with medical journal articles themselves. This is ironic because the journals work hard to fairly represent study findings, provide context, and acknowledge important limitations.
But journals sometimes drop the ball. Important elements that journalists (and, really, all readers) need are sometimes missing or hard to find in the published articles. For example, in six high-profile journals, two-thirds of articles reporting ratio measures failed to provide the underlying absolute risks in the abstract (8) [and one-third failed to provide them anywhere in the article—despite the recommendation to do so whenever possible by the International Committee of Medical Journal Editors (9)]. Effects expressed in relative terms alone have been repeatedly shown to seem more impressive than the same effects expressed in absolute terms in studies of physicians (10,11), policy makers (12), and patients (13,14). Nor are study limitations routinely highlighted in journal abstracts (the exception is the Annals of Internal Medicine)—and sometimes they too are missing from articles altogether. All studies have limitations, which need to be highlighted to ensure that readers are aware of them and take them into account when interpreting findings.
Journal press releases—the most direct way that journals communicate with journalists—can also be a problem. Press releases issued by nine of the most prominent journals (according to the Institute for Scientific Information impact factor listings) were also missing fundamental information (15): Only half of the press releases reporting on differences between study groups provided absolute risks; less than one-quarter noted any study limitation.
Can we really expect journalists to do a better job than the medical journals, researchers, or their university public relations offices?
Medical journals can and should work harder to promote the accurate translation of research into news. The most obvious way is to make it easier for journalists to get it right: ensuring that both journal articles and the corresponding press releases routinely use the absolute risks found in the study (or, when possible, estimated in case–control studies) to describe the effects of interventions, and that both routinely highlight study limitations.
Some journals have already taken the lead. As noted, the Annals of Internal Medicine requires a “Limitations” header in abstracts. Several journals, including the JNCI, now publish a box with the editors' take on articles: JNCI's box is called “Context and Caveats.” The British Medical Journal is beginning to implement a new one-page abridged format for research articles in the print version of the journal. The format requires the use of absolute risks for results and includes a mandatory header, “bias, confounding, and other reasons for caution” (16). We hope all journals will adopt similar practices.
The JNCI is also determined to do more to help journalists. The Journal is launching a Web site for science and health journalists to help them “get it right” (we also think medical students, residents, practicing physicians, and of course the public will find these materials helpful).
The first posting is a set of tip sheets (Figure 1) (http://www.oxfordjournals.org/our_journals/jnci/resource/reporting_on_cancer.html) we developed for our book Know Your Chances: Understanding Health Statistics (17) and adapted for journalists attending the annual Medicine in the Media workshop (sponsored by the National Institutes of Health, The Dartmouth Institute for Health Policy and Clinical Practice, and the Department of Veterans Affairs) (18).
The first tip sheet consists of two glossaries with definitions and examples of the common numbers and statistics used in medical journals. The “Numbers Glossary” includes various ways of expressing effect sizes such as absolute risks, relative risks, and number needed to treat. The “Statistics Glossary” covers P values, confidence intervals, and statistics especially relevant to screening (survival and mortality).
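The relationships among the glossary terms are simple arithmetic. A brief sketch, using made-up risks (not drawn from any study cited here), shows how absolute risk reduction, relative risk, and number needed to treat are all derived from the same two event rates:

```python
# Hypothetical event rates for illustration only.
control_risk = 0.04   # 4% of untreated patients have the event
treated_risk = 0.03   # 3% of treated patients have the event

# Absolute risk reduction: percentage-point difference between groups.
absolute_risk_reduction = control_risk - treated_risk

# Relative risk: treated rate as a fraction of the control rate
# (0.75 here, often reported as a "25% lower risk").
relative_risk = treated_risk / control_risk

# Number needed to treat: patients treated to prevent one event.
number_needed_to_treat = 1 / absolute_risk_reduction

print(absolute_risk_reduction, relative_risk, round(number_needed_to_treat))
```

In this hypothetical, a "25% lower risk" amounts to one percentage point in absolute terms, meaning 100 patients must be treated to prevent a single event—exactly the kind of context the tip sheets ask journalists to report.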
The second tip sheet includes a section called “Questions to guide your reporting” to help reporters understand what study findings mean, how much they matter, and whether they might be wrong. If journalists cannot get answers to these questions, we suggest they consider skipping the story. The tip sheet also includes a section called “How to highlight study cautions,” which stresses limitations inherent in various research designs (eg, the critical limitations of the PARP study: it was uncontrolled and only measured a surrogate endpoint rather than a true health outcome; or confounding in the alcohol study). The tip sheet provides suggested language for journalists to use or adapt (rather than reinventing the wheel each time) when they write about these recurring issues.
We hope that these efforts—within medical journals and those directed toward journalists—will help foster healthy skepticism in the news: setting a higher bar for covering very preliminary or inherently weak research, routinely providing data to support claims, and always highlighting study limitations. Oh yes, and approaching investigator quotes about their own work with great care. Otherwise you may read about us saying “JNCI has solved all problems with medicine in the media.”
Drs. Woloshin and Schwartz were supported by the National Cancer Institute (grant R01CA104721).
The authors’ views are their own and do not necessarily represent official positions of the Department of Veterans Affairs or the Department of Health and Human Services.