1.  A Comparison of Cost Effectiveness Using Data from Randomized Trials or Actual Clinical Practice: Selective Cox-2 Inhibitors as an Example 
PLoS Medicine  2009;6(12):e1000194.
Tjeerd-Pieter van Staa and colleagues estimate the likely cost effectiveness of selective Cox-2 inhibitors prescribed during routine clinical practice, as compared to the cost effectiveness predicted from randomized controlled trial data.
Background
Data on absolute risks of outcomes and patterns of drug use in cost-effectiveness analyses are often based on randomised clinical trials (RCTs). The objective of this study was to evaluate the external validity of published cost-effectiveness studies by comparing the data used in these studies (typically based on RCTs) to observational data from actual clinical practice. Selective Cox-2 inhibitors (coxibs) were used as an example.
Methods and Findings
The UK General Practice Research Database (GPRD) was used to estimate the exposure characteristics and individual probabilities of upper gastrointestinal (GI) events during current exposure to nonsteroidal anti-inflammatory drugs (NSAIDs) or coxibs. A basic cost-effectiveness model was developed evaluating two alternative strategies: prescription of a conventional NSAID or coxib. Outcomes included upper GI events as recorded in GPRD and hospitalisation for upper GI events recorded in the national registry of hospitalisations (Hospital Episode Statistics) linked to GPRD. Prescription costs were based on the prescribed number of tablets as recorded in GPRD and the 2006 cost data from the British National Formulary. The study population included over 1 million patients prescribed conventional NSAIDs or coxibs. Only a minority of patients used the drugs long-term and daily (34.5% of conventional NSAIDs and 44.2% of coxibs), whereas coxib RCTs required daily use for at least 6–9 months. The mean cost of preventing one upper GI event as recorded in GPRD was US$104k (ranging from US$64k with long-term daily use to US$182k with intermittent use) and US$298k for hospitalisations. The mean costs (for GPRD events) over calendar time were US$58k during 1990–1993 and US$174k during 2002–2005. Using RCT data rather than GPRD data for event probabilities, the mean cost was US$16k with the VIGOR RCT and US$20k with the CLASS RCT.
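The arithmetic at the core of such a model is an incremental cost-effectiveness ratio: the extra cost of the coxib strategy divided by the number of upper GI events it prevents. A minimal sketch in Python, with invented inputs (the costs, risks, and function name below are illustrative, not the GPRD estimates):

```python
def cost_per_event_prevented(cost_nsaid, cost_coxib, risk_nsaid, risk_coxib):
    """Incremental cost per upper GI event prevented by prescribing
    a coxib instead of a conventional NSAID.

    cost_* -- expected treatment cost per patient
    risk_* -- probability of an upper GI event per patient
    """
    if risk_nsaid <= risk_coxib:
        raise ValueError("the coxib must prevent events relative to the NSAID")
    return (cost_coxib - cost_nsaid) / (risk_nsaid - risk_coxib)

# Invented example: a coxib course costing US$300 more per patient that
# lowers event risk from 1.0% to 0.7% prevents 3 events per 1,000 patients,
# i.e., about US$100,000 per event prevented.
print(round(cost_per_event_prevented(200.0, 500.0, 0.010, 0.007)))  # 100000
```

This also makes the paper's point concrete: with intermittent real-world use, the risk reduction in the denominator shrinks, so the cost per event prevented rises.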
Conclusions
The published cost-effectiveness analyses of coxibs lacked external validity, did not represent patients in actual clinical practice, and should not have been used to inform prescribing policies. External validity should be an explicit requirement for cost-effectiveness analyses.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Before a new treatment for a specific disease becomes an established part of clinical practice, it goes through a long process of development and clinical testing. This process starts with extensive studies of the new treatment in the laboratory and in animals and then moves into clinical trials. The most important of these trials are randomized controlled trials (RCTs), studies in which the efficacy and safety of the new drug and an established drug are compared by giving the two drugs to randomized groups of patients with the disease. The final hurdle that a drug or any other healthcare technology often has to jump before being adopted for widespread clinical use is a health technology assessment, which aims to provide policymakers, clinicians, and patients with information about the balance between the clinical and financial costs of the drug and its benefits (its cost-effectiveness). In England and Wales, for example, the National Institute for Health and Clinical Excellence (NICE), which promotes clinical excellence and the effective use of resources within the National Health Service, routinely commissions such assessments.
Why Was This Study Done?
Data on the risks of various outcomes associated with a new treatment are needed for cost-effectiveness analyses. These data are usually obtained from RCTs, but although RCTs are the best way of determining a drug's potency in experienced hands under ideal conditions (its efficacy), they may not be a good way to determine a drug's success in an average clinical setting (its effectiveness). In this study, the researchers compare the data from RCTs that have been used in several published cost-effectiveness analyses of a class of drugs called selective cyclooxygenase-2 inhibitors (“coxibs”) with observational data from actual clinical practice. They then ask whether the published cost-effectiveness studies, which generally used RCT data, should have been used to inform coxib prescribing policies. Coxibs are nonsteroidal anti-inflammatory drugs (NSAIDs) that were developed in the 1990s to treat arthritis and other chronic inflammatory conditions. Conventional NSAIDs can cause gastric ulcers and bleeding from the gut (upper gastrointestinal events) if taken for a long time. Coxibs were developed to reduce this problem.
What Did the Researchers Do and Find?
The researchers extracted data on the real-life use of conventional NSAIDs and coxibs and on the incidence of upper gastrointestinal events from the UK General Practice Research Database (GPRD) and from the national registry of hospitalizations. Only a minority of the million patients who were prescribed conventional NSAIDs (average cost per prescription US$17.80) or coxibs (average cost per prescription US$47.04) for a variety of inflammatory conditions took them on a long-term daily basis, whereas in the RCTs of coxibs, patients with a few carefully defined conditions took NSAIDs daily for at least 6–9 months. The researchers then developed a cost-effectiveness model to evaluate the costs of the alternative strategies of prescribing a conventional NSAID or a coxib. The mean additional cost of preventing one gastrointestinal event recorded in the GPRD by using a coxib instead of an NSAID, they report, was US$104,000; the mean cost of preventing one hospitalization for such an event was US$298,000. By contrast, the mean cost of preventing one gastrointestinal event by using a coxib instead of an NSAID calculated from data obtained in RCTs was about US$20,000.
What Do These Findings Mean?
These findings suggest that the published cost-effectiveness analyses of coxibs greatly underestimate the cost of preventing gastrointestinal events by replacing prescriptions of conventional NSAIDs with prescriptions of coxibs. That is, if data from actual clinical practice had been used in cost-effectiveness analyses rather than data from RCTs, the conclusions of the published cost-effectiveness analyses of coxibs would have been radically different and may have led to different prescribing guidelines for this class of drug. More generally, these findings provide a good illustration of how important it is to ensure that cost-effectiveness analyses have “external” validity by using realistic estimates for event rates and costs rather than relying on data from RCTs that do not always reflect the real-world situation. The researchers suggest, therefore, that health technology assessments should move from evaluating cost-efficacy in ideal populations with ideal interventions to evaluating cost-effectiveness in real populations with real interventions.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000194.
The UK National Institute for Health Research provides information about health technology assessment
The National Institute for Health and Clinical Excellence Web site describes how this organization provides guidance on promoting good health within the National Health Service in England and Wales
Information on the UK General Practice Research Database is available
Wikipedia has pages on health technology assessment and on selective cyclooxygenase-2 inhibitors (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
doi:10.1371/journal.pmed.1000194
PMCID: PMC2779340  PMID: 19997499
2.  Promotional Tone in Reviews of Menopausal Hormone Therapy After the Women's Health Initiative: An Analysis of Published Articles 
PLoS Medicine  2011;8(3):e1000425.
Adriane Fugh-Berman and colleagues analyze a selection of published opinion pieces on hormone therapy and show that there may be a connection between receiving industry funding for speaking, consulting, or research and the tone of such opinion pieces.
Background
Even after the Women's Health Initiative (WHI) found that the risks of menopausal hormone therapy (hormone therapy) outweighed benefit for asymptomatic women, about half of gynecologists in the United States continued to believe that hormones benefited women's health. The pharmaceutical industry has supported publication of articles in medical journals for marketing purposes. It is unknown whether author relationships with industry affect promotional tone in articles on hormone therapy. The goal of this study was to determine whether promotional tone could be identified in narrative review articles regarding menopausal hormone therapy and whether articles identified as promotional were more likely to have been authored by those with conflicts of interest with manufacturers of menopausal hormone therapy.
Methods and Findings
We analyzed tone in opinion pieces on hormone therapy published in the four years after the estrogen-progestin arm of the WHI was stopped. First, we identified the ten authors with four or more MEDLINE-indexed reviews, editorials, comments, or letters on hormone replacement therapy or menopausal hormone therapy published between July 2002 and June 2006. Next, we conducted an additional search using the names of these authors to identify other relevant articles. Finally, after author names and affiliations were removed, 50 articles were evaluated by three readers for scientific accuracy and for tone. Scientific accuracy was assessed based on whether or not the findings of the WHI were accurately reported using two criteria: (1) Acknowledgment or lack of denial of the risk of breast cancer diagnosis associated with hormone therapy, and (2) acknowledgment that hormone therapy did not benefit cardiovascular disease endpoints. Determination of promotional tone was based on the assessment by each reader of whether the article appeared to promote hormone therapy. Analysis of inter-rater consistency found moderate agreement for scientific accuracy (κ = 0.57) and substantial agreement for promotional tone (κ = 0.65). After discussion, readers found 86% of the articles to be scientifically accurate and 64% to be promotional in tone. Themes that were common in articles considered promotional included attacks on the methodology of the WHI, arguments that clinical trial results should not guide treatment for individuals, and arguments that observational studies are as good as or better than randomized clinical trials for guiding clinical decisions. The promotional articles we identified also implied that the risks associated with hormone therapy have been exaggerated and that the benefits of hormone therapy have been or will be proven. Of the ten authors studied, eight were found to have declared payment for speaking or consulting on behalf of menopausal hormone manufacturers or for research support (seven of these eight were speakers or consultants). Thirty of 32 articles (94%) evaluated as promoting hormone therapy were authored by those with potential financial conflicts of interest, compared to 11 of 18 articles (61%) by those without such conflicts (p = 0.0025). Articles promoting the use of menopausal hormone therapy were 2.41 times (95% confidence interval 1.49–4.93) as likely to have been authored by authors with conflicts of interest as by authors without conflicts of interest. In articles from three authors with conflicts of interest, some of the same text was repeated word-for-word in different articles.
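The agreement statistics quoted here (κ = 0.57 and κ = 0.65) are Cohen's kappa, and the risk comparison is a ratio of proportions. A minimal sketch of both calculations, assuming the standard definitions; the helper names and toy data are ours, not the study's:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)

def relative_risk(events1, n1, events2, n2):
    """Risk ratio: proportion in group 1 over proportion in group 2."""
    return (events1 / n1) / (events2 / n2)

# Toy ratings: 1 = promotional, 0 = not promotional
rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
rater_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(round(cohens_kappa(rater_a, rater_b), 2))   # 0.58 on these toy labels
print(round(relative_risk(24, 30, 10, 30), 2))    # 2.4 on toy counts
```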
Conclusion
There may be a connection between receiving industry funding for speaking, consulting, or research and the publication of promotional opinion pieces on menopausal hormone therapy.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Over the past three decades, menopausal hormones have been heavily promoted for preventing disease in women. However, the Women's Health Initiative (WHI) study—which enrolled more than 26,000 women in the US and which was published in 2004—found that estrogen-progestin and estrogen-only formulations (often prescribed to women around the age of menopause) increased the risk of stroke, deep vein thrombosis, dementia, and incontinence. Furthermore, this study found that the estrogen-progestin therapy increased rates of breast cancer. In fact, the estrogen-progestin arm of the WHI study was stopped in 2002 due to harmful findings, and the estrogen-only arm was stopped in 2004, also because of harmful findings. The study also found that neither therapy reduced cardiovascular risk or markedly benefited health-related quality of life measures.
Despite these results, two years after the results of the WHI study were published, a survey of over 700 practicing gynecologists—the specialists who prescribe the majority of menopausal hormone therapies—in the US found that almost half did not find the findings of the WHI study convincing and that 48% disagreed with the decision to stop the trial early. Furthermore, follow-up surveys found similar results.
Why Was This Study Done?
It is unclear why gynecologists and other physicians continue to prescribe menopausal hormone therapies despite the results of the WHI. Some academics argue that published industry-funded reviews and commentaries may be designed to convey specific, but subtle, marketing messages and several academic analyses have used internal industry documents disclosed in litigation cases. So this study was conducted to investigate whether hormone therapy–promoting tone could be identified in narrative review articles and if so, whether these articles were more likely to have been authored by people who had accepted funding from hormone manufacturers.
What Did the Researchers Do and Find?
The researchers conducted a comprehensive literature search that identified 340 relevant articles published between July 2002 and June 2006—the four years following the cessation of the estrogen-progestin arm of the Women's Health Initiative study. Ten authors had published four to six articles, 47 authored two or three articles, and 371 authored one article each. The researchers focused on authors who had published four or more articles in the four-year period under study and, after author names and affiliations were removed, 50 articles were evaluated by three readers for scientific accuracy and for tone. After individually analyzing a batch of articles, the readers met to provide their initial assessments, to discuss them, and to reach consensus on tone and scientific accuracy. After the papers were evaluated, each author was identified, and the researchers searched for authors' potential financial conflicts of interest, defined as publicly disclosed information that the authors had received payment for research, speaking, or consulting on behalf of a manufacturer of menopausal hormone therapy.
Common themes in the 50 articles included arguments that clinical trial results should not guide treatment for individuals and suggestions that the risks associated with hormone therapy have been exaggerated and that the benefits of hormone therapy have been or will be proven. Furthermore, of the ten authors studied, eight were found to have received payment for research, speaking, or consulting on behalf of menopausal hormone manufacturers, and 30 of 32 articles evaluated as promoting hormone therapy were authored by those with potential financial conflicts of interest. Articles promoting the use of menopausal hormone therapy were more than twice as likely to have been written by authors with conflicts of interest as by authors without conflicts of interest. Furthermore, three authors who were identified as having financial conflicts of interest were authors on articles where sections of their previously published articles were repeated word-for-word without citation.
What Do These Findings Mean?
The findings of this study suggest that there may be a link between receiving industry funding for speaking, consulting, or research and the publication of apparently promotional opinion pieces on menopausal hormone therapy. Furthermore, such publications may encourage physicians to continue prescribing these therapies to women of menopausal age. Therefore, physicians and other health care providers should interpret the content of review articles with caution. In addition, medical journals should follow the International Committee of Medical Journal Editors Uniform Requirements for Manuscripts, which require that all authors submit signed statements of their participation in authorship and full disclosure of any conflicts of interest.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000425.
The US National Heart, Lung, and Blood Institute has more information on the Women's Health Initiative
The US National Institutes of Health provide more information about the effects of menopausal hormone replacement therapy
The Office of Women's Health, U.S. Department of Health and Human Services provides information on menopausal hormone therapy
The International Committee of Medical Journal Editors Uniform Requirements for Manuscripts presents Uniform Requirements for Manuscripts published in biomedical journals
The National Women's Health Network, a consumer advocacy group that takes no industry money, has factsheets and articles about menopausal hormone therapy
PharmedOut, a Georgetown University Medical Center project, has many resources on pharmaceutical marketing practices
doi:10.1371/journal.pmed.1000425
PMCID: PMC3058057  PMID: 21423581
3.  Selection in Reported Epidemiological Risks: An Empirical Assessment 
PLoS Medicine  2007;4(3):e79.
Background
Epidemiological studies may be subject to selective reporting, but empirical evidence thereof is limited. We empirically evaluated the extent of selection of significant results and large effect sizes in a large sample of recent articles.
Methods and Findings
We evaluated 389 articles of epidemiological studies that reported, in their respective abstracts, at least one relative risk for a continuous risk factor in contrasts based on median, tertile, quartile, or quintile categorizations. We examined the proportion and correlates of reporting statistically significant and nonsignificant results in the abstract and whether the magnitude of the relative risks presented (coined to be consistently ≥1.00) differs depending on the type of contrast used for the risk factor. In 342 articles (87.9%), ≥1 statistically significant relative risk was reported in the abstract, while only 169 articles (43.4%) reported ≥1 statistically nonsignificant relative risk in the abstract. Reporting of statistically significant results was more common with structured abstracts, and was less common in US-based studies and in cancer outcomes. Among 50 randomly selected articles in which the full text was examined, a median of nine (interquartile range 5–16) statistically significant and six (interquartile range 3–16) statistically nonsignificant relative risks were presented (p = 0.25). Paradoxically, the smallest presented relative risks were based on the contrasts of extreme quintiles; on average, the relative risk magnitude was 1.41-, 1.42-, and 1.36-fold larger in contrasts of extreme quartiles, extreme tertiles, and above-versus-below median values, respectively (p < 0.001).
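The contrast effect is easy to reproduce: for the same continuous risk factor, a top-versus-bottom-quintile comparison mechanically yields a larger relative risk than an above-versus-below-median split, so a weak association can be made to look stronger by choosing a more extreme contrast. A minimal simulation sketch (NumPy assumed; all data synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(size=n)                    # continuous risk factor
p = 1 / (1 + np.exp(-(-3.0 + 0.3 * x)))   # weak log-linear effect on risk
events = rng.random(n) < p                # simulated outcomes

def contrast_rr(x, events, lo_q, hi_q):
    """Relative risk comparing subjects above hi_q vs. below lo_q quantiles."""
    lo, hi = np.quantile(x, [lo_q, hi_q])
    return events[x >= hi].mean() / events[x <= lo].mean()

print(round(contrast_rr(x, events, 0.20, 0.80), 2))  # extreme quintiles: largest RR
print(round(contrast_rr(x, events, 0.50, 0.50), 2))  # median split: clearly smaller RR
```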
Conclusions
Published epidemiological investigations almost universally highlight significant associations between risk factors and outcomes. For continuous risk factors, investigators selectively present contrasts between more extreme groups, when relative risks are inherently lower.
An evaluation of published articles reporting epidemiological studies found that they almost universally highlight significant associations between risk factors and outcomes.
Editors' Summary
Background.
Medical and scientific researchers use statistical tests to try to work out whether their observations—for example, seeing a difference in some characteristic between two groups of people—might have occurred as a result of chance alone. Statistical tests cannot determine this for sure; rather, they can only give a probability that the observations would have arisen by chance. When researchers have many different hypotheses, and carry out many statistical tests on the same set of data, they run the risk of concluding that there are real differences where in fact there are none. At the same time, it has long been known that scientific and medical researchers tend to pick out which findings to report in their papers. Findings that are more interesting, impressive, or statistically significant are more likely to be published. This is termed “publication bias” or “selective reporting bias.” Therefore, some people are concerned that the published scientific literature might contain many false-positive findings, i.e., findings that are not true but are simply the result of chance variation in the data. This would have a serious impact on the accuracy of the published scientific literature and would tend to overestimate the strength and direction of relationships being studied.
Why Was This Study Done?
Selective reporting bias has already been studied in detail in the area of randomized trials (studies where participants are randomly allocated to receive an intervention, e.g., a new drug, versus an alternative intervention or “comparator,” in order to understand the benefits or safety of the new intervention). These studies have shown that very many of the findings of trials are never published, and that statistically significant findings are more likely to be included in published papers than nonsignificant findings. However, much medical research is carried out that does not use randomized trial methods, either because that method is not useful to answer the question at hand or is unethical. Epidemiological research is often concerned with looking at links between risk factors and the development of disease, and this type of research would generally use observation rather than experiment to uncover connections. The researchers here were concerned that selective reporting bias might be just as much of a problem in epidemiological research as in randomized trials research, and wanted to study this specifically.
What Did the Researchers Do and Find?
In this investigation, the researchers searched PubMed, a database of biomedical research studies, for epidemiological studies published between January 2004 and October 2005. The researchers wanted to specifically look at studies reporting the effects of continuous risk factors on health or disease outcomes (a continuous risk factor is something like age or glucose concentration in the blood, is a number, and can have any value on a sliding scale). Three hundred and eighty-nine original research studies were found, and the researchers pulled out from the abstracts and full text of these papers the relative risks that were reported along with the results of statistical tests for them. (Relative risk is the chance of getting an outcome, say disease, in one group as compared to another group.) The researchers found that nearly 90% of these studies had one or more statistically significant risks reported in the abstract, but only 43% reported one or more risks that were not statistically significant. When looking at all of the findings reported anywhere in the full text for 50 of these studies, the researchers saw that papers overall reported more statistically significant risks than nonsignificant risks. Finally, it seemed that in the set of papers studied here, the way in which statistical analyses were done produced a bias towards more extreme findings: for datasets showing small relative risks, papers were more likely to report a comparison between extreme subsets of the data so as to report larger relative risks.
What Do These Findings Mean?
These findings suggest that there is a tendency among epidemiology researchers to highlight statistically significant findings and to avoid highlighting nonsignificant findings in their research papers. This behavior may be a problem, because many of these significant findings could in future turn out to be “false positives.” At present, registers exist for researchers to describe ongoing clinical trials, and to set out the outcomes that they plan to analyze for those trials. These registers will go some way towards addressing some of the problems described here, but only for clinical trials research. Registers do not yet exist for epidemiological studies, and therefore it is important that researchers and readers are aware of and cautious about the problem of selective reporting in epidemiological research.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040079.
Wikipedia entry on publication bias (note: Wikipedia is an internet encyclopedia that anyone can edit)
The International Committee of Medical Journal Editors gives guidelines for submitting manuscripts to its member journals, and includes comments about registration of ongoing studies and the obligation to publish negative studies
ClinicalTrials.gov and the ISRCTN register are two registries of ongoing clinical trials
doi:10.1371/journal.pmed.0040079
PMCID: PMC1808481  PMID: 17341129
4.  Misrepresentation of Randomized Controlled Trials in Press Releases and News Coverage: A Cohort Study 
PLoS Medicine  2012;9(9):e1001308.
A study conducted by Amélie Yavchitz and colleagues examines the factors associated with “spin” (specific reporting strategies, intentional or unintentional, that emphasize the beneficial effect of treatments) in press releases of clinical trials.
Background
Previous studies indicate that in published reports, trial results can be distorted by the use of “spin” (specific reporting strategies, intentional or unintentional, emphasizing the beneficial effect of the experimental treatment). We aimed to (1) evaluate the presence of “spin” in press releases and associated media coverage; and (2) evaluate whether findings of randomized controlled trials (RCTs) based on press releases and media coverage are misinterpreted.
Methods and Findings
We systematically searched for all press releases indexed in the EurekAlert! database between December 2009 and March 2010. Of the 498 press releases retrieved and screened, we included press releases for all two-arm, parallel-group RCTs (n = 70). We obtained a copy of the scientific article to which the press release related and we systematically searched for related news items using Lexis Nexis.
“Spin,” defined as specific reporting strategies (intentional or unintentional) emphasizing the beneficial effect of the experimental treatment, was identified in 28 (40%) scientific article abstract conclusions and in 33 (47%) press releases. From bivariate and multivariable analysis assessing the journal type, funding source, sample size, type of treatment (drug or other), results of the primary outcomes (all nonstatistically significant versus other), author of the press release, and the presence of “spin” in the abstract conclusion, the only factor associated with “spin” in the press release was “spin” in the article abstract conclusions (relative risk [RR] 5.6, [95% CI 2.8–11.1], p<0.001). Findings of RCTs based on press releases were overestimated for 19 (27%) reports. News items were identified for 41 RCTs; 21 (51%) were reported with “spin,” mainly the same type of “spin” as those identified in the press release and article abstract conclusion. Findings of RCTs based on the news items were overestimated for ten (24%) reports.
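Risk ratios with confidence intervals of this kind (RR 5.6, 95% CI 2.8–11.1) are conventionally computed on the log scale from the underlying 2×2 counts. A minimal sketch, assuming the standard Wald interval for a log risk ratio; the counts below are placeholders, not the study's cross-tabulation:

```python
import math

def rr_with_ci(a, n1, b, n2, z=1.96):
    """Risk ratio of group 1 vs. group 2 with a Wald CI on the log scale.

    a/n1 -- e.g., "spin" in the press release when the abstract had "spin"
    b/n2 -- "spin" in the press release when the abstract did not
    """
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(RR)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

rr, lo, hi = rr_with_ci(24, 28, 9, 42)  # placeholder counts
print(f"RR {rr:.1f} (95% CI {lo:.1f}-{hi:.1f})")  # RR 4.0 (95% CI 2.2-7.3)
```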
Conclusion
“Spin” was identified in about half of press releases and media coverage. In multivariable analysis, the main factor associated with “spin” in press releases was the presence of “spin” in the article abstract conclusion.
Editors' Summary
Background
The mass media play an important role in disseminating the results of medical research. Every day, news items in newspapers and magazines and on the television, radio, and internet provide the general public with information about the latest clinical studies. Such news items are written by journalists and are often based on information in “press releases.” These short communications, which are posted on online databases such as EurekAlert! and sent directly to journalists, are prepared by researchers or more often by the drug companies, funding bodies, or institutions supporting the clinical research and are designed to attract favorable media attention to newly published research results. Press releases provide journalists with the information they need to develop and publish a news story, including a link to the peer-reviewed journal (a scholarly periodical containing articles that have been judged by independent experts) in which the research results appear.
Why Was This Study Done?
In an ideal world, journal articles, press releases, and news stories would all accurately reflect the results of health research. Unfortunately, the findings of randomized controlled trials (RCTs—studies that compare the outcomes of patients randomly assigned to receive alternative interventions), which are the best way to evaluate new treatments, are sometimes distorted in peer-reviewed journals by the use of “spin”—reporting that emphasizes the beneficial effects of the experimental (new) treatment. For example, a journal article may interpret nonstatistically significant differences as showing the equivalence of two treatments although such results actually indicate a lack of evidence for the superiority of either treatment. “Spin” can distort the transposition of research into clinical practice and, when reproduced in the mass media, it can give patients unrealistic expectations about new treatments. It is important, therefore, to know where “spin” occurs and to understand the effects of that “spin”. In this study, the researchers evaluate the presence of “spin” in press releases and associated media coverage and analyze whether the interpretation of RCT results based on press releases and associated news items could lead to the misinterpretation of RCT results.
What Did the Researchers Do and Find?
The researchers identified 70 press releases indexed in EurekAlert! over a 4-month period that described two-arm, parallel-group RCTs. They used Lexis Nexis, a database of news reports from around the world, to identify associated news items for 41 of these press releases and then analyzed the press releases, news items, and abstracts of the scientific articles related to each press release for “spin”. Finally, they interpreted the results of the RCTs using each source of information independently. Nearly half the press releases and article abstract conclusions contained “spin” and, importantly, “spin” in the press releases was associated with “spin” in the article abstracts. The researchers overestimated the benefits of the experimental treatment from the press release as compared to the full-text peer-reviewed article for 27% of reports. Factors that were associated with this overestimation of treatment benefits included publication in a specialized journal and having “spin” in the press release. Of the news items related to press releases, half contained “spin”, usually of the same type as identified in the press release and article abstract. Finally, the researchers overestimated the benefit of the experimental treatment from the news item as compared to the full-text peer-reviewed article in 24% of cases.
What Do These Findings Mean?
These findings show that “spin” in press releases and news reports is related to the presence of “spin” in the abstract of peer-reviewed reports of RCTs and suggest that the interpretation of RCT results based solely on press releases or media coverage could distort the interpretation of research findings in a way that favors experimental treatments. This interpretation shift is probably related to the presence of “spin” in peer-reviewed article abstracts, press releases, and news items and may be partly responsible for a mismatch between the perceived and real beneficial effects of new treatments among the general public. Overall, these findings highlight the important role that journal reviewers and editors play in disseminating research findings. These individuals, the researchers conclude, have a responsibility to ensure that the conclusions reported in the abstracts of peer-reviewed articles are appropriate and do not over-interpret the results of clinical research.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001308.
The PLOS Hub for Clinical Trials, which collects PLOS journals relating to clinical trials, includes some other articles on “spin” in clinical trial reports
EurekAlert! is a free online database of science press releases
The UK National Health Service Choices website includes Beyond the Headlines, a resource that provides an unbiased and evidence-based analysis of health stories that make the news for both the public and health professionals
The US-based organization HealthNewsReview, a project supported by the Foundation for Informed Medical Decision Making, also provides expert reviews of news stories
doi:10.1371/journal.pmed.1001308
PMCID: PMC3439420  PMID: 22984354
5.  Anatomy of the Epidemiological Literature on the 2003 SARS Outbreaks in Hong Kong and Toronto: A Time-Stratified Review 
PLoS Medicine  2010;7(5):e1000272.
Weijia Xing and colleagues review the published epidemiological literature on SARS and show that less than a quarter of papers were published during the epidemic itself, suggesting that the research published lagged substantially behind the need for it.
Background
Outbreaks of emerging infectious diseases, especially those of a global nature, require rapid epidemiological analysis and information dissemination. The final products of those activities usually comprise internal memoranda and briefs within public health authorities and original research published in peer-reviewed journals. Using the 2003 severe acute respiratory syndrome (SARS) epidemic as an example, we conducted a comprehensive time-stratified review of the published literature to describe the different types of epidemiological outputs.
Methods and Findings
We identified and analyzed all published articles on the epidemiology of the SARS outbreak in Hong Kong or Toronto. The analysis was stratified by study design, research domain, data collection, and analytical technique. We compared the SARS-case and matched-control non-SARS articles published according to the timeline of submission, acceptance, and publication. The impact factors of the publishing journals were examined according to the time of publication of SARS articles, and the numbers of citations received by SARS-case and matched-control articles submitted during and after the epidemic were compared. Descriptive, analytical, theoretical, and experimental epidemiology concerned, respectively, 54%, 30%, 11%, and 6% of the studies. Only 22% of the studies were submitted, 8% accepted, and 7% published during the epidemic. The submission-to-acceptance and acceptance-to-publication intervals of the SARS articles submitted during the epidemic period were significantly shorter than the corresponding intervals of matched-control non-SARS articles published in the same journal issues (p<0.001 and p<0.01, respectively). The differences of median submission-to-acceptance intervals and median acceptance-to-publication intervals between SARS articles and their corresponding control articles were 106.5 d (95% confidence interval [CI] 55.0–140.1) and 63.5 d (95% CI 18.0–94.1), respectively. The median numbers of citations of the SARS articles submitted during the epidemic and over the 2 y thereafter were 17 (interquartile range [IQR] 8.0–52.0) and 8 (IQR 3.2–21.8), respectively, significantly higher than the median numbers of control article citations (15, IQR 8.5–16.5, p<0.05, and 7, IQR 3.0–12.0, p<0.01, respectively).
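The interval comparisons reported here are order statistics: medians with interquartile ranges, and differences of medians between matched groups. A minimal sketch with NumPy; the day counts are invented for illustration, not the study's data:

```python
import numpy as np

def median_iqr(values):
    """Median and interquartile range (25th and 75th percentiles)."""
    q25, q50, q75 = np.percentile(values, [25, 50, 75])
    return float(q50), (float(q25), float(q75))

# Invented submission-to-acceptance intervals in days: SARS articles
# submitted during the epidemic vs. matched control articles
sars_days = [30, 41, 55, 60, 72, 88, 95]
control_days = [120, 140, 160, 162, 180, 200, 240]

m_sars, iqr_sars = median_iqr(sars_days)
m_ctrl, _ = median_iqr(control_days)
print(m_sars, iqr_sars)   # 60.0 (48.0, 80.0)
print(m_ctrl - m_sars)    # difference of medians: 102.0
```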
Conclusions
A majority of the epidemiological articles on SARS were submitted after the epidemic had ended, although the corresponding studies had relevance to public health authorities during the epidemic. To minimize the lag between research and the exigency of public health practice in the future, researchers should consider adopting common, predefined protocols and ready-to-use instruments to improve timeliness, and thus, relevance, in addition to standardizing comparability across studies. To facilitate information dissemination, journal managers should reengineer their fast-track channels, which should be adapted to the purpose of an emerging outbreak, taking into account the requirement of high standards of quality for scientific journals and competition with other online resources.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Every now and then, a new infectious disease appears in a human population or an old disease becomes much more common or more geographically widespread. Recently, several such “emerging infectious diseases” have become major public health problems. For example, HIV/AIDS, hepatitis C, and severe acute respiratory syndrome (SARS) have all emerged in the past three decades and spread rapidly round the world. When an outbreak (epidemic) of an emerging infectious disease occurs, epidemiologists (scientists who study the causes, distribution, and control of diseases in populations) swing into action, collecting and analyzing data on the new threat to human health. Epidemiological studies are rapidly launched to identify the causative agent of the new disease, to investigate how the disease spreads, to define diagnostic criteria for the disease, to evaluate potential treatments, and to devise ways to control the disease's spread. Public health officials then use the results of these studies to bring the epidemic under control.
Why Was This Study Done?
Clearly, epidemics of emerging infectious diseases can only be controlled rapidly and effectively if the results of epidemiological studies are made widely available in a timely manner. Public health bulletins (for example, the Morbidity and Mortality Weekly Report from the US Centers for Disease Control and Prevention) are an important way of disseminating information, as is the publication of original research in peer-reviewed academic journals. But how timely is this second dissemination route? Submission, peer-review, revision, re-review, acceptance, and publication of a piece of academic research can be a long process, the speed of which is affected by the responses of both authors and journals. In this study, the researchers analyze how the results of academic epidemiological research are submitted and published in journals during and after an emerging infectious disease epidemic using the 2003 SARS epidemic as an example. The first case of SARS was identified in Asia in February 2003, and the disease rapidly spread around the world. In total, 8,098 people became ill with SARS and 774 died before the epidemic was halted in July 2003.
What Did the Researchers Do and Find?
The researchers identified more than 300 journal articles covering epidemiological research into the SARS outbreak in Hong Kong, China, and Toronto, Canada (two cities strongly affected by the epidemic) that were published online or in print between January 1, 2003 and July 31, 2007. The researchers' analysis of these articles shows that more than half of them were descriptive epidemiological studies, investigations that focused on describing the distribution of SARS; a third were analytical epidemiological studies that tried to discover the cause of SARS. Overall, 22% of the journal articles were submitted for publication during the epidemic. Only 8% of the articles were accepted for publication and only 7% were actually published during the epidemic. The median (average) submission-to-acceptance and acceptance-to-publication intervals for SARS articles submitted during the epidemic were 55 and 77.5 days, respectively, much shorter intervals than those for non-SARS articles published in the same journal issues. After the epidemic was over, the submission-to-acceptance and acceptance-to-publication intervals for SARS articles were similar to those of non-SARS articles.
What Do These Findings Mean?
These findings show that, although the academic response to the SARS epidemic was rapid, most articles on the epidemiology of SARS were published after the epidemic was over even though SARS was a major threat to public health. Possible reasons for this publication delay include the time taken by authors to prepare and undertake their studies, to write and submit their papers, and, possibly, their tendency to first submit their results to high profile journals. The time then taken by journals to review the studies, make decisions about publication, and complete the publication process might also have delayed matters. To minimize future delays in the publication of epidemiological research on emerging infectious diseases, epidemiologists could adopt common, predefined protocols and ready-to-use instruments, which would improve timeliness and ensure comparability across studies, suggest the researchers. Journals, in turn, could improve their fast-track procedures and could consider setting up online sections that could be activated when an emerging infectious disease outbreak occurred. Finally, journals could consider altering their review system to speed up the publication process provided the quality of the final published articles was not compromised.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000272.
The US National Institute of Allergy and Infectious Diseases provides information on emerging infectious diseases
The US Centers for Disease Control and Prevention also provides information about emerging infectious diseases, including links to other resources, and information on SARS
Wikipedia has a page on epidemiology (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The World Health Organization has information on SARS (in several languages)
doi:10.1371/journal.pmed.1000272
PMCID: PMC2864302  PMID: 20454570
6.  Trial Publication after Registration in ClinicalTrials.Gov: A Cross-Sectional Analysis 
PLoS Medicine  2009;6(9):e1000144.
Joseph Ross and colleagues examine publication rates of clinical trials and find low rates of publication even following registration in ClinicalTrials.gov.
Background
ClinicalTrials.gov is a publicly accessible, Internet-based registry of clinical trials managed by the US National Library of Medicine that has the potential to address selective trial publication. Our objectives were to examine completeness of registration within ClinicalTrials.gov and to determine the extent and correlates of selective publication.
Methods and Findings
We examined reporting of registration information among a cross-section of trials that had been registered at ClinicalTrials.gov after December 31, 1999 and updated as having been completed by June 8, 2007, excluding phase I trials. We then determined publication status among a random 10% subsample by searching MEDLINE using a systematic protocol, after excluding trials completed after December 31, 2005 to allow at least 2 y for publication following completion. Among the full sample of completed trials (n = 7,515), nearly 100% reported all data elements mandated by ClinicalTrials.gov, such as intervention and sponsorship. Optional data element reporting varied, with 53% reporting trial end date, 66% reporting primary outcome, and 87% reporting trial start date. Among the 10% subsample, less than half (311 of 677, 46%) of trials were published, among which 96 (31%) provided a citation within ClinicalTrials.gov of a publication describing trial results. Trials primarily sponsored by industry (40%, 144 of 357) were less likely to be published when compared with nonindustry/nongovernment sponsored trials (56%, 110 of 198; p<0.001), but there was no significant difference when compared with government sponsored trials (47%, 57 of 122; p = 0.22). Among trials that reported an end date, 75 of 123 (61%) completed prior to 2004, 50 of 96 (52%) completed during 2004, and 62 of 149 (42%) completed during 2005 were published (p = 0.006).
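The sponsorship comparison (40% versus 56% published, p<0.001) can be checked with a pooled two-proportion z-test; the abstract does not state which test the authors used, so the choice of test here is an assumption. A minimal sketch using only the standard library:

```python
import math

def two_proportion_z(a, n1, b, n2):
    """Two-sided z-test comparing proportions a/n1 and b/n2."""
    p1, p2 = a / n1, b / n2
    pooled = (a + b) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Counts from the abstract: 144/357 industry-sponsored trials published
# vs. 110/198 nonindustry/nongovernment-sponsored trials
z, p = two_proportion_z(144, 357, 110, 198)
print(round(z, 2), round(p, 5))  # -3.45 0.00058, consistent with p<0.001
```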
Conclusions
Reporting of optional data elements varied and publication rates among completed trials registered within ClinicalTrials.gov were low. Without greater attention to reporting of all data elements, the potential for ClinicalTrials.gov to address selective publication of clinical trials will be limited.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
People assume that whenever they are ill, health care professionals will make sure they get the best available treatment. But how do clinicians know which treatment is most appropriate? In the past, clinicians used their own experience to make treatment decisions. Nowadays, they rely on evidence-based medicine—the systematic review and appraisal of the results of clinical trials, studies that investigate the efficacy and safety of medical interventions in people. However, evidence-based medicine can only be effective if all the results from clinical trials are published promptly in medical journals. Unfortunately, the results of trials in which a new drug did not perform better than existing drugs or in which it had unwanted side effects often remain unpublished or only appear in the public domain many years after the drug has been approved for clinical use by the US Food and Drug Administration (FDA) and other governmental bodies.
Why Was This Study Done?
The extent of this “selective” publication, which can impair evidence-based clinical practice, remains unclear but is thought to be substantial. In this study, the researchers investigate the problem of selective publication by systematically examining the extent of publication of the results of trials registered in ClinicalTrials.gov, a Web-based registry of US and international clinical trials. ClinicalTrials.gov was established in 2000 by the US National Library of Medicine in response to the 1997 FDA Modernization Act. This act required preregistration of all trials of new drugs to provide the public with information about trials in which they might be able to participate. Mandatory data elements for registration in ClinicalTrials.gov initially included the trial's title, the condition studied in the trial, the trial design, and the intervention studied. In September 2007, the FDA Amendments Act expanded the mandatory requirements for registration in ClinicalTrials.gov by making it necessary, for example, to report the trial start date and to report primary and secondary outcomes (the effect of the intervention on predefined clinical measurements) in the registry within 2 years of trial completion.
What Did the Researchers Do and Find?
The researchers identified 7,515 trials that were registered within ClinicalTrials.gov after December 31, 1999 (excluding phase I safety trials), and whose record indicated trial completion by June 8, 2007. Most of these trials reported all the mandatory data elements that were required by ClinicalTrials.gov before the FDA Amendments Act, but reporting of optional data elements was less complete. For example, only two-thirds of the trials reported their primary outcome. Next, the researchers randomly selected 10% of the trials and, after excluding trials whose completion date was after December 31, 2005 (to allow at least two years for publication), determined the publication status of this subsample by systematically searching MEDLINE (an online database of articles published in selected medical and scientific journals). Fewer than half of the trials in the subsample had been published, and the citation for only a third of these publications had been entered into ClinicalTrials.gov. Only 40% of industry-sponsored trials had been published compared to 56% of nonindustry/nongovernment-sponsored trials, a difference that is unlikely to have occurred by chance. Finally, 61% of trials with a completion date before 2004 had been published, but only 42% of trials completed during 2005 had been published.
What Do These Findings Mean?
These findings indicate that, over the period studied, critical trial information was not included in the ClinicalTrials.gov registry. The FDA Amendments Act should remedy some of these shortcomings but only if the accuracy and completeness of the information in ClinicalTrials.gov is carefully monitored. These findings also reveal that registration in ClinicalTrials.gov does not guarantee that trial results will appear in a timely manner in the scientific literature. However, they do not address the reasons for selective publication (which may be, in part, because it is harder to publish negative results than positive results), and they are potentially limited by the methods used to discover whether trial results had been published. Nevertheless, these findings suggest that the FDA, trial sponsors, and the scientific community all need to make a firm commitment to minimize the selective publication of trial results to ensure that patients and clinicians have access to the information they need to make fully informed treatment decisions.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000144.
PLoS Medicine recently published two related articles on selective publication by Ida Sim and colleagues and by Lisa Bero and colleagues and an editorial discussing the FDA Amendments Act
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials, and a fact sheet detailing the requirements of the FDA Amendments Act 2007 for trial registration
The US Food and Drug Administration provides further information about drug approval in the US for consumers and health care professionals
doi:10.1371/journal.pmed.1000144
PMCID: PMC2728480  PMID: 19901971
7.  Completeness of Reporting of Patient-Relevant Clinical Trial Outcomes: Comparison of Unpublished Clinical Study Reports with Publicly Available Data 
PLoS Medicine  2013;10(10):e1001526.
Beate Wieseler and colleagues compare the completeness of reporting of patient-relevant clinical trial outcomes between clinical study reports and publicly available data.
Background
Access to unpublished clinical study reports (CSRs) is currently being discussed as a means to allow unbiased evaluation of clinical research. The Institute for Quality and Efficiency in Health Care (IQWiG) routinely requests CSRs from manufacturers for its drug assessments.
Our objective was to determine the information gain from CSRs compared to publicly available sources (journal publications and registry reports) for patient-relevant outcomes included in IQWiG health technology assessments (HTAs) of drugs.
Methods and Findings
We used a sample of 101 trials with full CSRs received for 16 HTAs of drugs completed by IQWiG between 15 January 2006 and 14 February 2011, and analyzed the CSRs and the publicly available sources of these trials. For each document type we assessed the completeness of information on all patient-relevant outcomes included in the HTAs (benefit outcomes, e.g., mortality, symptoms, and health-related quality of life; harm outcomes, e.g., adverse events). We dichotomized the outcomes as “completely reported” or “incompletely reported.” For each document type, we calculated the proportion of outcomes with complete information per outcome category and overall.
We analyzed 101 trials with CSRs; 86 had at least one publicly available source, 65 at least one journal publication, and 50 a registry report. The trials included 1,080 patient-relevant outcomes. The CSRs provided complete information on a considerably higher proportion of outcomes (86%) than the combined publicly available sources (39%). With the exception of health-related quality of life (57%), CSRs provided complete information on 78% to 100% of the various benefit outcomes (combined publicly available sources: 20% to 53%). CSRs also provided considerably more information on harms. The differences in completeness of information for patient-relevant outcomes between CSRs and journal publications or registry reports (or a combination of both) were statistically significant for all types of outcomes.
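Operationally, this comparison is a per-outcome tally: each of the 1,080 patient-relevant outcomes is marked complete or incomplete in each document type, and proportions are computed per outcome category. A minimal sketch of that aggregation; the records and field layout below are invented:

```python
from collections import defaultdict

# Each record: (outcome_category, document_type, completely_reported)
records = [
    ("mortality", "csr", True), ("mortality", "public", False),
    ("quality_of_life", "csr", True), ("quality_of_life", "public", True),
    ("adverse_events", "csr", True), ("adverse_events", "public", False),
]

tally = defaultdict(lambda: [0, 0])  # (category, doc type) -> [complete, total]
for category, doc_type, complete in records:
    tally[(category, doc_type)][0] += complete
    tally[(category, doc_type)][1] += 1

for key, (complete, total) in sorted(tally.items()):
    print(key, f"{complete}/{total} = {complete / total:.0%}")
```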
The main limitation of our study is that our sample is not representative because only CSRs provided voluntarily by pharmaceutical companies upon request could be assessed. In addition, the sample covered only a limited number of therapeutic areas and was restricted to randomized controlled trials investigating drugs.
Conclusions
In contrast to CSRs, publicly available sources provide insufficient information on patient-relevant outcomes of clinical trials. CSRs should therefore be made publicly available.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
People assume that, when they are ill, health care professionals will ensure that they get the best available treatment. In the past, clinicians used their own experience to make decisions about which treatments to offer their patients, but nowadays, they rely on evidence-based medicine—the systematic review and appraisal of clinical trials, studies that investigate the benefits and harms of drugs and other medical interventions in patients. Evidence-based medicine can guide clinicians, however, only if all the results of clinical research are available for evaluation. Unfortunately, the results of trials in which a new drug performs better than existing drugs are more likely to be published than those in which the new drug performs badly or has unwanted side effects (publication bias). Moreover, trial outcomes that support the use of a new treatment are more likely to be published than those that do not support its use (outcome reporting bias). Both types of bias pose a substantial threat to informed medical decision-making.
Why Was This Study Done?
Recent initiatives, such as making registration of clinical trials in a trial registry (for example, ClinicalTrials.gov) a precondition for publication in medical journals, aim to prevent these biases but are imperfect. Another way to facilitate the unbiased evaluation of clinical research might be to increase access to clinical study reports (CSRs)—detailed but generally unpublished accounts of clinical trials. Notably, information from CSRs was recently used to challenge conclusions based on published evidence about the efficacy and safety of the antiviral drug oseltamivir and the antidepressant reboxetine. In this study, the researchers compare the information available in CSRs and in publicly available sources (journal publications and registry reports) for the patient-relevant outcomes included in 16 health technology assessments (HTAs; analyses of the medical implications of the use of specific medical technologies) for drugs; the HTAs were prepared by the Institute for Quality and Efficiency in Health Care (IQWiG), Germany's main HTA agency.
What Did the Researchers Do and Find?
The researchers searched for published journal articles and registry reports for each of 101 trials for which the IQWiG had requested and received full CSRs from drug manufacturers during HTA preparation. They then assessed the completeness of information on the patient-relevant benefit and harm outcomes (for example, symptom relief and adverse effects, respectively) included in each document type. Eighty-six of the included trials had at least one publicly available data source; the results of 15% of the trials were not available in either journals or registry reports. Overall, the CSRs provided complete information on 86% of the patient-related outcomes, whereas the combined publicly available sources provided complete information on only 39% of the outcomes. For individual outcomes, the CSRs provided complete information on 78%–100% of the benefit outcomes, with the exception of health-related quality of life (57%); combined publicly available sources provided complete information on 20%–53% of these outcomes. The CSRs also provided more information on patient-relevant harm outcomes than the publicly available sources.
What Do These Findings Mean?
These findings show that, for the clinical trials considered here, publicly available sources provide much less information on patient-relevant outcomes than CSRs. The generalizability of these findings may be limited, however, because the trials included in this study are not representative of all trials. Specifically, only CSRs that were voluntarily provided by drug companies were assessed, a limited number of therapeutic areas were covered by the trials, and the trials investigated only drugs. Nevertheless, these findings suggest that access to CSRs is important for the unbiased evaluation of clinical trials and for informed decision-making in health care. Notably, in June 2013, the European Medicines Agency released a draft policy calling for the proactive publication of complete clinical trial data (possibly including CSRs). In addition, the European Union and the European Commission are considering legal measures to improve the transparency of clinical trial data. Both these initiatives will probably only apply to drugs that are approved after January 2014, however, and not to drugs already in use. The researchers therefore call for CSRs to be made publicly available for both past and future trials, a recommendation also supported by the AllTrials initiative, which is campaigning for all clinical trials to be registered and fully reported.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001526.
Wikipedia has pages on evidence-based medicine, publication bias, and health technology assessment (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The ClinicalTrials.gov website is a searchable register of federally and privately supported clinical trials in the US; it provides information about all aspects of clinical trials
The European Medicines Agency (EMA) provides information about all aspects of the scientific evaluation and approval of new medicines in the European Union, and guidance on the preparation of clinical study reports; its draft policy on the release of data from clinical trials is available
Information about IQWiG is available (in English and German); Informed Health Online is a website provided by IQWiG that provides objective, independent, and evidence-based information for patients (also in English and German)
doi:10.1371/journal.pmed.1001526
PMCID: PMC3793003  PMID: 24115912
8.  Reporting Bias in Drug Trials Submitted to the Food and Drug Administration: Review of Publication and Presentation 
PLoS Medicine  2008;5(11):e217.
Background
Previous studies of drug trials submitted to regulatory authorities have documented selective reporting of both entire trials and favorable results. The objective of this study is to determine the publication rate of efficacy trials submitted to the Food and Drug Administration (FDA) in approved New Drug Applications (NDAs) and to compare the trial characteristics as reported by the FDA with those reported in publications.
Methods and Findings
This is an observational study of all efficacy trials found in approved NDAs for New Molecular Entities (NMEs) from 2001 to 2002 inclusive and all published clinical trials corresponding to the trials within the NDAs. For each trial included in the NDA, we assessed its publication status, primary outcome(s) reported and their statistical significance, and conclusions. Seventy-eight percent (128/164) of efficacy trials contained in FDA reviews of NDAs were published. In a multivariate model, trials with favorable primary outcomes (OR = 4.7, 95% confidence interval [CI] 1.33–17.1, p = 0.018) and active controls (OR = 3.4, 95% CI 1.02–11.2, p = 0.047) were more likely to be published. Forty-one primary outcomes from the NDAs were omitted from the papers. Papers included 155 outcomes that were in the NDAs, 15 additional outcomes that favored the test drug, and two other neutral or unknown additional outcomes. Excluding outcomes with unknown significance, there were 43 outcomes in the NDAs that did not favor the NDA drug. Of these, 20 (47%) were not included in the papers. The statistical significance of five of the remaining 23 outcomes (22%) changed between the NDA and the paper, with four changing to favor the test drug in the paper (p = 0.38). Excluding unknowns, 99 conclusions were provided in both NDAs and papers, nine conclusions (9%) changed from the FDA review of the NDA to the paper, and all nine did so to favor the test drug (100%, 95% CI 72%–100%, p = 0.0039).
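The reported statistics for the conclusion changes are consistent with an exact binomial (sign) test, although the abstract does not name the test used: under a 50:50 null, the two-sided probability that all nine of nine changes favor the test drug is 2 × (0.5)^9 ≈ 0.0039, and the reported 72% lower confidence bound matches a one-sided exact bound, 0.05^(1/9) ≈ 0.72. A minimal Python check, offered as an illustration rather than a reconstruction of the authors' analysis:

    # Illustrative check; assumes an exact binomial sign test underlies p = 0.0039.
    from scipy.stats import binomtest

    result = binomtest(k=9, n=9, p=0.5)  # all 9 of 9 conclusion changes favor the drug
    print(result.pvalue)                 # 0.00390625, matching the reported p = 0.0039
    print(0.05 ** (1 / 9))               # ~0.717, matching the reported 72% lower bound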
Conclusions
Many trials were still not published 5 y after FDA approval. Discrepancies between the trial information reviewed by the FDA and information found in published trials tended to lead to more favorable presentations of the NDA drugs in the publications. Thus, the information that is readily available in the scientific literature to health care professionals is incomplete and potentially biased.
Lisa Bero and colleagues review the publication status of all efficacy trials carried out in support of new drug approvals from 2001 and 2002, and find that a quarter of trials remain unpublished.
Editors' Summary
Background.
All health-care professionals want their patients to have the best available clinical care—but how can they identify the optimum drug or intervention? In the past, clinicians used their own experience or advice from colleagues to make treatment decisions. Nowadays, they rely on evidence-based medicine—the systematic review and appraisal of clinical research findings. So, for example, before a new drug is approved for the treatment of a specific disease in the United States and becomes available for doctors to prescribe, the drug's sponsors (usually a pharmaceutical company) must submit a “New Drug Application” (NDA) to the US Food and Drug Administration (FDA). The NDA tells the story of the drug's development from laboratory and animal studies through to clinical trials, including “efficacy” trials in which the efficacy and safety of the new drug and of a standard drug for the disease are compared by giving groups of patients the different drugs and measuring several key (primary) “outcomes.” FDA reviewers use this evidence to decide whether to approve a drug.
Why Was This Study Done?
Although the information in NDAs is publicly available, clinicians and patients usually learn about new drugs from articles published in medical journals after drug approval. Unfortunately, drug sponsors sometimes publish the results only of the trials in which their drug performed well and in which statistical analyses indicate that the drug's improved performance was a real effect rather than a lucky coincidence. Trials in which a drug did not show a “statistically significant benefit” or where the drug was found to have unwanted side effects often remain unpublished. This “publication bias” means that the scientific literature can contain an inaccurate picture of a drug's efficacy and safety relative to other therapies. This may lead to clinicians preferentially prescribing newer, more expensive drugs that are not necessarily better than older drugs. In this study, the researchers test the hypothesis that not all the trial results in NDAs are published in medical journals. They also investigate whether there are any discrepancies between the trial data included in NDAs and in published articles.
What Did the Researchers Do and Find?
The researchers identified all the efficacy trials included in NDAs for totally new drugs that were approved by the FDA in 2001 and 2002 and searched the scientific literature for publications between July 2006 and June 2007 relating to these trials. Only three-quarters of the efficacy trials in the NDAs were published; trials with favorable outcomes were nearly five times as likely to be published as those without favorable outcomes. Although 155 primary outcomes were in both the papers and the NDAs, 41 outcomes were only in the NDAs. Conversely, 17 outcomes were only in the papers; 15 of these favored the test drug. Of the 43 primary outcomes reported in the NDAs that showed no statistically significant benefit for the test drug, only half were included in the papers; for five of the reported primary outcomes, the statistical significance differed between the NDA and the paper and generally favored the test drug in the papers. Finally, nine out of 99 conclusions differed between the NDAs and the papers; each time, the published conclusion favored the test drug.
What Do These Findings Mean?
These findings indicate that the results of many trials of new drugs are not published 5 years after FDA approval of the drug. Furthermore, unexplained discrepancies between the data and conclusions in NDAs and in medical journals are common and tend to paint a more favorable picture of the new drug in the scientific literature than in the NDAs. Overall, these findings suggest that the information on the efficacy of new drugs that is readily available to clinicians and patients through the published scientific literature is incomplete and potentially biased. The recent introduction in the US and elsewhere of mandatory registration of all clinical trials before they start and of mandatory publication in trial registers of the full results of all the predefined primary outcomes should reduce publication bias over the next few years and should allow clinicians and patients to make fully informed treatment decisions.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050217.
This study is further discussed in a PLoS Medicine Perspective by An-Wen Chan
PLoS Medicine recently published a related article by Ida Sim and colleagues: Lee K, Bacchetti P, Sim I (2008) Publication of clinical trials supporting successful new drug applications: A literature analysis. PLoS Med 5: e191. doi:10.1371/journal.pmed.0050191
The Food and Drug Administration provides information about drug approval in the US for consumers and for health-care professionals; detailed information about the process by which drugs are approved is on the Web site of the FDA Center for Drug Evaluation and Research (in English and Spanish)
NDAs for approved drugs can also be found on this Web site
The ClinicalTrials.gov Web site provides information about the US National Institutes of Health clinical trial registry, background information about clinical trials, and a fact sheet detailing the requirements of the FDA Amendments Act 2007 for trial registration
The World Health Organization's International Clinical Trials Registry Platform is working toward setting international norms and standards for the reporting of clinical trials (in several languages)
doi:10.1371/journal.pmed.0050217
PMCID: PMC2586350  PMID: 19067477
9.  Persistence with Statins and Onset of Rheumatoid Arthritis: A Population-Based Cohort Study 
PLoS Medicine  2010;7(9):e1000336.
In a retrospective cohort study, Gabriel Chodick and colleagues find a significant association between persistence with statin therapy and reduced risk of developing rheumatoid arthritis, but only a modest decrease in risk of osteoarthritis.
Background
The beneficial effects of statins in rheumatoid arthritis (RA) have been suggested previously, but it is unclear whether statins may prevent its development. The aim of this retrospective cohort study was to explore whether persistent use of statins is associated with onset of RA.
Methods and Findings
The computerized medical databases of a large health organization in Israel were used to identify diagnosed RA cases among adults who began statin therapy between 1998 and 2007. Persistence with statins was assessed by calculating the mean proportion of follow-up days covered (PDC) with statins for every study participant. To assess the possible effects of healthy user bias, we also examined the risk of osteoarthritis (OA), a common degenerative joint disease that is unlikely to be affected by use of statins.
A total of 211,627 and 193,770 individuals were eligible for the RA and OA cohort analyses, respectively. During the study follow-up period, there were 2,578 incident RA cases (3.07 per 1,000 person-years) and 17,878 incident OA cases (24.34 per 1,000 person-years). The crude incidence density rate of RA among nonpersistent patients (PDC <20%) was 51% higher (3.89 per 1,000 person-years) than that among highly persistent patients, who were covered with statins for at least 80% of the follow-up period. After adjustment for potential confounders, highly persistent patients had a hazard ratio of 0.58 (95% confidence interval 0.52–0.65) for RA compared with nonpersistent patients. Larger differences were observed in younger patients and in patients initiating treatment with high efficacy statins. In the OA cohort analysis, high persistence with statins was associated with only a modest reduction in risk (hazard ratio = 0.85; 95% CI 0.81–0.88) compared to nonpersistent patients.
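The two measures driving these findings are straightforward to compute. A minimal sketch (the dispensing-record format is hypothetical, and the person-year total is back-calculated from the reported rate rather than taken from the study):

    # Sketch of the persistence and incidence measures described above.
    def proportion_days_covered(dispensings, followup_days):
        """PDC: distinct follow-up days covered by a statin supply / follow-up days."""
        covered = set()
        for start_day, days_supplied in dispensings:  # e.g., (0, 90): 90 days from day 0
            covered.update(range(start_day, min(start_day + days_supplied, followup_days)))
        return len(covered) / followup_days

    def incidence_density_per_1000(cases, person_years):
        """Crude incidence density rate per 1,000 person-years."""
        return 1000 * cases / person_years

    print(proportion_days_covered([(0, 90), (100, 90)], followup_days=365))  # ~0.49
    print(incidence_density_per_1000(2578, 840_000))  # ~3.07, the reported RA rate

Patients would then be classed as nonpersistent (PDC < 20%) or highly persistent (PDC ≥ 80%), as in the study.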
Conclusions
The present study demonstrates an association between persistence with statin therapy and reduced risk of developing RA. The relationship between continuation of statin use and OA onset was weak and limited to patients with short-term follow-up.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
The role of statins in the management of diseases that have an inflammatory component is unclear. There is some evidence that statins may have anti-inflammatory and immunomodulatory properties, demonstrated by their ability to reduce levels of C-reactive protein, and these properties may be important in chronic inflammatory diseases such as rheumatoid arthritis—a chronic condition that is a major cause of disability. Some small studies have suggested a modest effect of statins in decreasing disease activity in patients with rheumatoid arthritis, but a recent larger study involving over 30,000 patients with rheumatoid arthritis showed no beneficial effect. Furthermore, it has been suggested that statins may have a role in the primary prevention of rheumatoid arthritis, but so far there has been no solid evidence base to support this hypothesis. Before statins can potentially be included in the treatment options for rheumatoid arthritis, or possibly prescribed for the prevention of this condition, there needs to be a much stronger evidence base, such as larger studies with longer follow-up periods, that clearly demonstrates any significant clinical benefits of statin use.
Why Was This Study Done?
This large study (more than 200,000 patients) with a long follow-up period (average of 10 years) was conducted to discover whether there was any kind of association between persistent use of statins and the onset of rheumatoid arthritis.
What Did the Researchers Do and Find?
The researchers conducted a retrospective cohort study among the members of Maccabi Healthcare Services (a health maintenance organization [HMO]) in Israel, which has 1.8 million enrollees and covers every section of the Israeli population, to identify statin users who were at least 18 years of age and did not have RA or a related disease at study entry. The cohort covered the period 1998–2007 and included members who were continuously enrolled in the HMO from 1995 to 1998. The researchers then analyzed the incidence of newly diagnosed rheumatoid arthritis, recording the date of first diagnostic codes (International Classification of Diseases, 9th revision [ICD-9]) associated with rheumatoid arthritis during the study follow-up period. To assess any potential effects of “healthy adherer” bias (good adherence to medication in patients with a chronic illness may be more likely to lead to better health and improved survival), the researchers also examined any possible association between persistent statin use and the development of osteoarthritis, a common degenerative joint disease that is unlikely to be affected by statin use.
During the study follow-up period, there were 2,578 incident cases of rheumatoid arthritis and 17,878 incident cases of osteoarthritis. The crude incidence density rate of rheumatoid arthritis among patients who did not persistently take statins was 51% higher than that of patients who used statins for at least 80% of the follow-up period. Furthermore, patients who persistently used statins had a hazard ratio of 0.58 for rheumatoid arthritis compared with patients who did not persistently use statins. In the osteoarthritis cohort analysis, high persistence with statin use was associated with a modest reduction in risk (hazard ratio 0.85) compared to patients who did not persist with statins.
What Do These Findings Mean?
This study suggests that there is an association between persistence with statin therapy and reduced risk of developing rheumatoid arthritis. Although the researchers took into account the possibility of healthy adherer bias (by comparing results with the osteoarthritis cohort), this study has other limitations, such as the retrospective design, and the nonrandomization of statin use, which could affect the interpretation of the results. However, the observed associations were greater than those that would be expected from methodological biases alone. Larger, systematic, controlled, prospective studies with high efficacy statins, particularly in younger adults who are at increased risk for rheumatoid arthritis, are needed to confirm these findings and to clarify the exact nature of the biological relationship between adherence to statin therapy and the incidence of rheumatoid arthritis.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000336.
Arthritis Research UK provides a wide range of information on arthritis research
The American College of Rheumatology provides information on rheumatology research
Patient information on rheumatoid arthritis is available at Patient UK
Extensive information about statins is available at statin answers
doi:10.1371/journal.pmed.1000336
PMCID: PMC2935457  PMID: 20838658
10.  Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals 
PLoS Medicine  2013;10(12):e1001566.
Agnes Dechartres and colleagues searched ClinicalTrials.gov for completed drug RCTs with results reported and then searched for corresponding studies in PubMed to evaluate timeliness and completeness of reporting.
Background
The US Food and Drug Administration Amendments Act requires results from clinical trials of Food and Drug Administration–approved drugs to be posted at ClinicalTrials.gov within 1 y after trial completion. We compared the timing and completeness of results of drug trials posted at ClinicalTrials.gov and published in journals.
Methods and Findings
We searched ClinicalTrials.gov on March 27, 2012, for randomized controlled trials of drugs with posted results. For a random sample of these trials, we searched PubMed for corresponding publications. Data were extracted independently from ClinicalTrials.gov and from the published articles for trials with results both posted and published. We assessed the time to first public posting or publishing of results and compared the completeness of results posted at ClinicalTrials.gov versus published in journal articles. Completeness was defined as the reporting of all key elements, according to three experts, for the flow of participants, efficacy results, adverse events, and serious adverse events (e.g., for adverse events, reporting of the number of adverse events per arm, without restriction to statistically significant differences between arms for all randomized patients or for those who received at least one treatment dose).
Of the 600 randomly sampled trials with results posted at ClinicalTrials.gov, 50% (n = 297) had no corresponding published article. For trials with both posted and published results (n = 202), the median time between primary completion date and first results publicly posted was 19 mo (first quartile = 14, third quartile = 30 mo), and the median time between primary completion date and journal publication was 21 mo (first quartile = 14, third quartile = 28 mo). Reporting was significantly more complete at ClinicalTrials.gov than in the published article for the flow of participants (64% versus 48% of trials, p<0.001), efficacy results (79% versus 69%, p = 0.02), adverse events (73% versus 45%, p<0.001), and serious adverse events (99% versus 63%, p<0.001).
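The timing comparison reduces to medians and quartiles of month counts between two dates per trial. A minimal sketch (the dates are hypothetical; the study used primary completion, posting, and publication dates from ClinicalTrials.gov and PubMed records):

    # Sketch: median and quartiles of months from primary completion to first posting.
    from datetime import date
    from statistics import median, quantiles

    def months_between(d1, d2):
        return (d2.year - d1.year) * 12 + (d2.month - d1.month)

    pairs = [(date(2009, 1, 15), date(2010, 8, 2)),   # (completion, first posting)
             (date(2009, 6, 1), date(2011, 1, 20)),
             (date(2010, 3, 10), date(2011, 7, 5))]
    delays = sorted(months_between(a, b) for a, b in pairs)
    print(median(delays), quantiles(delays, n=4))     # median with quartile cut points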
The main study limitation was that we considered only the publication describing the results for the primary outcomes.
Conclusions
Our results highlight the need to search ClinicalTrials.gov for both unpublished and published trials. Trial results, especially serious adverse events, are more completely reported at ClinicalTrials.gov than in the published article.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
When patients consult a doctor, they expect to be recommended what their doctor believes is the most effective treatment with the fewest adverse effects. To determine which treatment to recommend, clinicians rely on sources that include research studies. Among studies, the best evidence is generally agreed to come from systematic reviews and randomized controlled clinical trials (RCTs), studies that test the efficacy and safety of medical interventions by comparing clinical outcomes in groups of patients randomly chosen to receive different interventions. Decision-making based on the best available evidence is called evidence-based medicine. However, evidence-based medicine can only guide clinicians if trial results are published in a timely and complete manner. Unfortunately, underreporting of trials is common. For example, an RCT in which a new drug performs better than existing drugs is more likely to be published than one in which the new drug performs badly or has unwanted adverse effects (publication bias). There can also be a delay in publishing the results of negative trials (time-lag bias) or a failure to publish complete results for all the prespecified outcomes of a trial (reporting bias). All three types of bias threaten informed medical decision-making and the health of patients.
Why Was This Study Done?
One initiative that aims to prevent these biases was included in the 2007 US Food and Drug Administration Amendments Act (FDAAA). The Food and Drug Administration (FDA) is responsible for approving drugs and devices that are marketed in the US. The FDAAA requires that results from clinical trials of FDA-approved drugs and devices conducted in the United States be made publicly available at ClinicalTrials.gov within one year of trial completion. ClinicalTrials.gov—a web-based registry that includes US and international clinical trials—was established in 2000 in response to the 1997 FDA Modernization Act, which required mandatory registration of trial titles and designs and of the conditions and interventions under study. The FDAAA expanded these mandatory requirements by requiring researchers studying FDA-approved drugs and devices to report additional information such as the baseline characteristics of the participants in each arm of the trial and the results of primary and secondary outcome measures (the effects of the intervention on predefined clinical measurements) and their statistical significance (an indication of whether differences in outcomes might have happened by chance). Researchers of other trials registered in ClinicalTrials.gov are welcome to post trial results as well. Here, the researchers compare the timing and completeness (i.e., whether all relevant information was fully reported) of results of drug trials posted at ClinicalTrials.gov with those published in medical journals.
What Did the Researchers Do and Find?
The researchers searched ClinicalTrials.gov for reports of completed phase III and IV (late-stage) RCTs of drugs with posted results. For a random sample of 600 eligible trials, they searched PubMed (a database of biomedical publications) for corresponding publications. Only 50% of trials with results posted at ClinicalTrials.gov had a matching published article. For 202 trials with both posted and published results, the researchers compared the timing and completeness of the results posted at ClinicalTrials.gov and of results reported in the corresponding journal publication. The median time between the study completion date and the first results being publicly posted at ClinicalTrials.gov was 19 months, whereas the time between completion and publication in a journal was 21 months. The flow of participants through trials was completely reported in 64% of the ClinicalTrials.gov postings but in only 48% of the corresponding publications. Results for the primary outcome measure were completely reported in 79% and 69% of the ClinicalTrials.gov postings and corresponding publications, respectively. Finally, adverse events were completely reported in 73% of the ClinicalTrials.gov postings but in only 45% of the corresponding publications, and serious adverse events were reported in 99% and 63% of the ClinicalTrials.gov postings and corresponding publications, respectively.
What Do These Findings Mean?
These findings suggest that the reporting of trial results is significantly more complete at ClinicalTrials.gov than in published journal articles reporting the main trial results. Certain aspects of this study may affect the accuracy of this conclusion. For example, the researchers compared the results posted at ClinicalTrials.gov only with the results in the publication that described the primary outcome of each trial, even though some trials had multiple publications. Importantly, these findings suggest that, to enable patients and physicians to make informed treatment decisions, experts undertaking assessments of drugs should consider seeking efficacy and safety data posted at ClinicalTrials.gov, both for trials whose results are not published yet and for trials whose results are published. Moreover, they suggest that the use of templates to guide standardized reporting of trial results in journals and broader mandatory posting of results may help to improve the reporting and transparency of clinical trials and, consequently, the evidence available to inform treatment of patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001566.
Wikipedia has pages on evidence-based medicine and on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The US Food and Drug Administration provides information about drug approval in the US for consumers and health-care professionals, plus detailed information on the 2007 Food and Drug Administration Amendments Act
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials, and a fact sheet detailing the requirements of the 2007 Food and Drug Administration Amendments Act
PLOS Medicine recently launched a Reporting Guidelines Collection, an open access collection of reporting guidelines, commentary, and related research on guidelines from across PLOS journals that aims to help advance the efficiency, effectiveness, and equitability of the dissemination of biomedical information; a 2008 PLOS Medicine editorial discusses the 2007 Food and Drug Administration Amendments Act
doi:10.1371/journal.pmed.1001566
PMCID: PMC3849189  PMID: 24311990
11.  The Chilling Effect: How Do Researchers React to Controversy? 
PLoS Medicine  2008;5(11):e222.
Background
Can political controversy have a “chilling effect” on the production of new science? This is a timely concern, given how often American politicians are accused of undermining science for political purposes. Yet little is known about how scientists react to these kinds of controversies.
Methods and Findings
Drawing on interview (n = 30) and survey data (n = 82), this study examines the reactions of scientists whose National Institutes of Health (NIH)-funded grants were implicated in a highly publicized political controversy. Critics charged that these grants were “a waste of taxpayer money.” The NIH defended each grant and no funding was rescinded. Nevertheless, this study finds that many of the scientists whose grants were criticized now engage in self-censorship. About half of the sample said that they now remove potentially controversial words from their grant applications, and a quarter reported eliminating entire topics from their research agendas. Four researchers reported that they had moved into more secure positions, either outside academia or in jobs with guaranteed salaries. About 10% of the group reported that this controversy strengthened their commitment to complete their research and disseminate it widely.
Conclusions
These findings provide evidence that political controversies can shape what scientists choose to study. Debates about the politics of science usually focus on the direct suppression, distortion, and manipulation of scientific results. This study suggests that scholars must also examine how scientists may self-censor in response to political events.
Drawing on interview and survey data, Joanna Kempner's study finds that political controversies shape what many scientists choose not to study.
Editors' Summary
Background.
Scientific research is an expensive business and, inevitably, the organizations that fund this research—governments, charities, and industry—play an important role in determining the directions that this research takes. Funding bodies can have both positive and negative effects on the acquisition of scientific knowledge. They can pump money into topical areas such as the human genome project. Alternatively, by withholding funding, they can discourage some types of research. So, for example, US federal funds cannot be used to support many aspects of human stem cell research. “Self-censoring” by scientists can also have a negative effect on scientific progress. That is, some scientists may decide to avoid areas of research in which there are many regulatory requirements or political pressures, or in which there is substantial pressure from advocacy groups. A good example of this last type of self-censoring is the withdrawal of many scientists from research that involves certain animal models, like primates, because of pressure from animal rights activists.
Why Was This Study Done?
Some people think that political controversy might also encourage scientists to avoid some areas of scientific inquiry, but no studies have formally investigated this possibility. Could political arguments about the value of certain types of research influence the questions that scientists pursue? An argument of this sort occurred in the US in 2003 when Patrick Toomey, who was then a Republican Congressional Representative, argued that National Institutes of Health (NIH) grants supporting research into certain aspects of sexual behavior were “much less worthy of taxpayer funding” than research on “devastating diseases,” and proposed an amendment to the 2004 NIH appropriations bill (which regulates the research funded by NIH). The Amendment was rejected, but more than 200 NIH-funded grants, most of which examined behaviors that affect the spread of HIV/AIDS, were internally reviewed later that year; NIH defended each grant, so none were curtailed. In this study, Joanna Kempner investigates how the scientists whose US federal grants were targeted in this clash between politics and science responded to the political controversy.
What Did the Researchers Do and Find?
Kempner interviewed 30 of the 162 principal investigators (PIs) whose grants were reviewed. She asked them to describe their research, the grants that were reviewed, and their experience with NIH before, during, and after the controversy. She also asked them whether this experience had changed their research practice. She then used the information from these interviews to design a survey that she sent to all the PIs whose grants had been reviewed; 82 responded. About half of the scientists interviewed and/or surveyed reported that they now remove “red flag” words (for example, “AIDS” and “homosexual”) from the titles and abstracts of their grant applications. About one-fourth of the respondents no longer included controversial topics (for example, “abortion” and “emergency contraception”) in their research agendas, and four researchers had made major career changes as a result of the controversy. Finally, about 10% of respondents said that their experience had strengthened their commitment to see their research completed and its results published although even many of these scientists also engaged in some self-censorship.
What Do These Findings Mean?
These findings show that, even though no funding was withdrawn, self-censoring is now common among the scientists whose grants were targeted during this particular political controversy. Because this study included researchers in only one area of health research, its findings may not be generalizable to other areas of research. Furthermore, because only half of the PIs involved in the controversy responded to the survey, these findings may be affected by selection bias. That is, the scientists most anxious about the effects of political controversy on their research funding (and thus more likely to engage in self-censorship) may not have responded. Nevertheless, these findings suggest that the political environment might have a powerful effect on self-censorship by scientists and might dissuade some scientists from embarking on research projects that they would otherwise have pursued. Further research into what Kempner calls the “chilling effect” of political controversy on scientific research is now needed to ensure that a healthy balance can be struck between political involvement in scientific decision making and scientific progress.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050222.
The Consortium of Social Science Associations, an advocacy organization that provides a bridge between the academic research community and Washington policymakers, has more information about the political controversy initiated by Patrick Toomey
Some of Kempner's previous research on self-censorship by scientists is described in a 2005 National Geographic news article
doi:10.1371/journal.pmed.0050222
PMCID: PMC2586361  PMID: 19018657
12.  Relationship between Funding Source and Conclusion among Nutrition-Related Scientific Articles 
PLoS Medicine  2007;4(1):e5.
Background
Industrial support of biomedical research may bias scientific conclusions, as demonstrated by recent analyses of pharmaceutical studies. However, this issue has not been systematically examined in the area of nutrition research. The purpose of this study is to characterize financial sponsorship of scientific articles addressing the health effects of three commonly consumed beverages, and to determine how sponsorship affects published conclusions.
Methods and Findings
Medline searches of worldwide literature were used to identify three article types (interventional studies, observational studies, and scientific reviews) about soft drinks, juice, and milk published between 1 January 1999 and 31 December 2003. Financial sponsorship and article conclusions were classified by independent groups of coinvestigators. The relationship between sponsorship and conclusions was explored by exact tests and regression analyses, controlling for covariates. 206 articles were included in the study, of which 111 declared financial sponsorship. Of these, 22% had all industry funding, 47% had no industry funding, and 32% had mixed funding. Funding source was significantly related to conclusions when considering all article types (p = 0.037). For interventional studies, the proportion with unfavorable conclusions was 0% for all industry funding versus 37% for no industry funding (p = 0.009). The odds ratio of a favorable versus unfavorable conclusion was 7.61 (95% confidence interval 1.27 to 45.73), comparing articles with all industry funding to no industry funding.
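For orientation, a crude (unadjusted) odds ratio comes straight from a 2×2 table of funding source against conclusion; the study's OR of 7.61 came from regression models controlling for covariates, so the counts below are purely hypothetical:

    # Illustrative only: crude odds ratio and exact test from a hypothetical 2x2 table.
    from scipy.stats import fisher_exact

    #        favorable  unfavorable conclusions
    table = [[20,  1],   # all industry funding
             [30, 15]]   # no industry funding
    odds_ratio, p_value = fisher_exact(table)
    print(odds_ratio, p_value)  # crude OR = (20 * 15) / (1 * 30) = 10.0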
Conclusions
Industry funding of nutrition-related scientific articles may bias conclusions in favor of sponsors' products, with potentially significant implications for public health.
In 111 scientific articles on nonalcoholic beverages, articles with all industry funding were more than 7 times more likely to have favorable conclusions compared with articles with no industry funding.
Editors' Summary
Background.
Much of the money available for doing medical research comes from companies, as opposed to government agencies or charities. There is some evidence that when a research study is sponsored by an organization that has a financial interest in the outcome, the study is more likely to produce results that favor the funder (this is called “sponsorship bias”). This phenomenon is worrying, because if our knowledge about the effectiveness and safety of medicines is based on biased findings, patients could suffer. However, it is not clear whether sponsorship bias extends beyond drug research to other types of research that are in the public interest. For example, research into the health benefits, or otherwise, of different types of food and drink may affect government guidelines, regulations, and the behavior patterns of members of the public. Were sponsorship bias also to exist in this area of research, the health of the wider public could be affected.
Why Was This Study Done?
There is not a great deal of evidence about whether sponsorship bias affects nutritional research (scientific studies that look at the relationship between food and/or drink, and health or disease states). Therefore, the group of researchers here set out to collect information from published nutritional research papers, to see if the type of sponsorship for the research studies was in any way linked with whether the main conclusions were favorable or unfavorable to the sponsor.
What Did the Researchers Do and Find?
The research study reported here used the scientific literature as a source of data. The researchers chose to examine one particular area of nutrition (nonalcoholic drinks including soft drinks, juices, and milk), so that their investigation would not be affected too much by variability between the different types of nutritional research. Using literature searches, the researchers identified all original research and scientific review articles published between January 1999 and December 2003 that examined soft drinks, juices, and milk; described research carried out in humans; and at the same time drew conclusions relevant to health or disease. Then, information from each published article was categorized: the conclusions were coded as either favorable, unfavorable, or neutral in relation to the health effects of the products being studied, and the article's funding was coded as either all industry (ie, food/drinks companies), no industry, or mixed. 206 published articles were analyzed and only 54% declared funding. The researchers found that, overall, there was a strong association between the type of funding available for these articles and the conclusions that were drawn. Articles sponsored exclusively by food/drinks companies were four to eight times more likely to have conclusions favorable to the financial interests of the sponsoring company than articles which were not sponsored by food or drinks companies.
What Do These Findings Mean?
These findings suggest that a high potential for bias exists in research into the health benefits or harms of nonalcoholic drinks. It is not clear from this research study why or how this bias comes about, but there are many different mechanisms that might cause it. The researchers suggest that certain initiatives might help to reduce bias, for example, increasing independent funding of nutrition research.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040005.
Conflict of Interest definition from Wikipedia (Wikipedia is an internet encyclopedia that anyone can edit)
The International Committee of Medical Journal Editors provides standard guidelines for practices at medical journals, including a section on sponsorship, authorship, and accountability
The Committee on Publication Ethics is a forum for journal editors to discuss issues related to the integrity of the scientific record, and it provides guidelines for editors and case studies for reference
The Good Publication Practice guidelines outline standards for responsible publication of research sponsored by pharmaceutical companies
doi:10.1371/journal.pmed.0040005
PMCID: PMC1764435  PMID: 17214504
13.  Threats to Validity in the Design and Conduct of Preclinical Efficacy Studies: A Systematic Review of Guidelines for In Vivo Animal Experiments 
PLoS Medicine  2013;10(7):e1001489.
Background
The vast majority of medical interventions introduced into clinical development prove unsafe or ineffective. One prominent explanation for the dismal success rate is flawed preclinical research. We conducted a systematic review of preclinical research guidelines and organized recommendations according to the type of validity threat (internal, construct, or external) or programmatic research activity they primarily address.
Methods and Findings
We searched MEDLINE, Google Scholar, Google, and the EQUATOR Network website for all preclinical guideline documents published up to April 9, 2013 that addressed the design and conduct of in vivo animal experiments aimed at supporting clinical translation. To be eligible, documents had to provide guidance on the design or execution of preclinical animal experiments and represent the aggregated consensus of four or more investigators. Data from included guidelines were independently extracted by two individuals for discrete recommendations on the design and implementation of preclinical efficacy studies. These recommendations were then organized according to the type of validity threat they addressed. A total of 2,029 citations were identified through our search strategy. From these, we identified 26 guidelines that met our eligibility criteria—most of which were directed at neurological or cerebrovascular drug development. Together, these guidelines offered 55 different recommendations. Some of the most common recommendations included performance of a power calculation to determine sample size, randomized treatment allocation, and characterization of disease phenotype in the animal model prior to experimentation.
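The most frequently recommended practice, an a priori power calculation, fixes group sizes before the experiment. A minimal sketch using the standard normal-approximation formula for comparing two group means (a generic textbook calculation, not drawn from any specific guideline in the review):

    # Normal-approximation sample size for a two-group comparison of means.
    from scipy.stats import norm

    def n_per_group(effect_size, alpha=0.05, power=0.80):
        """Approximate animals needed per group to detect a standardized effect size."""
        z_alpha = norm.ppf(1 - alpha / 2)
        z_beta = norm.ppf(power)
        return 2 * ((z_alpha + z_beta) / effect_size) ** 2

    print(round(n_per_group(effect_size=1.0)))  # ~16 animals per group for d = 1.0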
Conclusions
By identifying the most recurrent recommendations among preclinical guidelines, we provide a starting point for developing preclinical guidelines in other disease domains. We also provide a basis for the study and evaluation of preclinical research practice.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
The development process for new drugs is lengthy and complex. It begins in the laboratory, where scientists investigate the causes of diseases and identify potential new treatments. Next, promising interventions undergo preclinical research in cells and in animals (in vivo animal experiments) to test whether the intervention has the expected effect and to support the generalization (extension) of this treatment–effect relationship to patients. Drugs that pass these tests then enter clinical trials, where their safety and efficacy are tested in selected groups of patients under strictly controlled conditions. Finally, the government bodies responsible for drug approval review the results of the clinical trials, and successful drugs receive a marketing license, usually a decade or more after the initial laboratory work. Notably, only 11% of agents that enter clinical testing (investigational drugs) are ultimately licensed.
Why Was This Study Done?
The frequent failure of investigational drugs during clinical translation is potentially harmful to trial participants. Moreover, the costs of these failures are passed on to healthcare systems in the form of higher drug prices. It would be good, therefore, to reduce the attrition rate of investigational drugs. One possible explanation for the dismal success rate of clinical translation is that preclinical research, the key resource for justifying clinical development, is flawed. To address this possibility, several groups of preclinical researchers have issued guidelines intended to improve the design and execution of in vivo animal studies. In this systematic review (a study that uses predefined criteria to identify all the research on a given topic), the authors identify the experimental practices that are commonly recommended in these guidelines and organize these recommendations according to the type of threat to validity (internal, construct, or external) that they address. Internal threats to validity are factors that confound reliable inferences about treatment–effect relationships in preclinical research. For example, experimenter expectation may bias outcome assessment. Construct threats to validity arise when researchers mischaracterize the relationship between an experimental system and the clinical disease it is intended to represent. For example, researchers may use an animal model for a complex multifaceted clinical disease that only includes one characteristic of the disease. External threats to validity are unseen factors that frustrate the transfer of treatment–effect relationships from animal models to patients.
What Did the Researchers Do and Find?
The researchers identified 26 preclinical guidelines that met their predefined eligibility criteria. Twelve guidelines addressed preclinical research for neurological and cerebrovascular drug development; other disorders covered by guidelines included cardiac and circulatory disorders, sepsis, pain, and arthritis. Together, the guidelines offered 55 different recommendations for the design and execution of preclinical in vivo animal studies. Nineteen recommendations addressed threats to internal validity. The most commonly included recommendations of this type called for the use of power calculations to ensure that sample sizes are large enough to yield statistically meaningful results, random allocation of animals to treatment groups, and “blinding” of researchers who assess outcomes to treatment allocation. Among the 25 recommendations that addressed threats to construct validity, the most commonly included recommendations called for characterization of the properties of the animal model before experimentation and matching of the animal model to the human manifestation of the disease. Finally, six recommendations addressed threats to external validity. The most commonly included of these recommendations suggested that preclinical research should be replicated in different models of the same disease and in different species, and should also be replicated independently.
What Do These Findings Mean?
This systematic review identifies a range of investigational recommendations that preclinical researchers believe address threats to the validity of preclinical efficacy studies. Many of these recommendations are not widely implemented in preclinical research at present. Whether the failure to implement them explains the frequent discordance between the results on drug safety and efficacy obtained in preclinical research and in clinical trials is currently unclear. These findings provide a starting point, however, for the improvement of existing preclinical research guidelines for specific diseases, and for the development of similar guidelines for other diseases. They also provide an evidence-based platform for the analysis of preclinical evidence and for the study and evaluation of preclinical research practice. These findings should, therefore, be considered by investigators, institutional review bodies, journals, and funding agents when designing, evaluating, and sponsoring translational research.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001489.
The US Food and Drug Administration provides information about drug approval in the US for consumers and for health professionals; its Patient Network provides a step-by-step description of the drug development process that includes information on preclinical research
The UK Medicines and Healthcare Products Regulatory Agency (MHRA) provides information about all aspects of the scientific evaluation and approval of new medicines in the UK; its My Medicine: From Laboratory to Pharmacy Shelf web pages describe the drug development process from scientific discovery, through preclinical and clinical research, to licensing and ongoing monitoring
The STREAM website provides ongoing information about policy, ethics, and practices used in clinical translation of new drugs
The CAMARADES collaboration offers a “supporting framework for groups involved in the systematic review of animal studies” in stroke and other neurological diseases
doi:10.1371/journal.pmed.1001489
PMCID: PMC3720257  PMID: 23935460
14.  Epidemiology and Reporting Characteristics of Systematic Reviews 
PLoS Medicine  2007;4(3):e78.
Background
Systematic reviews (SRs) have become increasingly popular with a wide range of stakeholders. We set out to capture a representative cross-sectional sample of published SRs and examine them in terms of a broad range of epidemiological, descriptive, and reporting characteristics, including emerging aspects not previously examined.
Methods and Findings
We searched Medline for SRs indexed during November 2004 and written in English. Citations were screened and those meeting our inclusion criteria were retained. Data were collected using a 51-item data collection form designed to assess the epidemiological and reporting details and the bias-related aspects of the reviews. The data were analyzed descriptively. In total 300 SRs were identified, suggesting a current annual publication rate of about 2,500, involving more than 33,700 separate studies including one-third of a million participants. The majority (272 [90.7%]) of SRs were reported in specialty journals. Most reviews (213 [71.0%]) were categorized as therapeutic, and included a median of 16 studies involving 1,112 participants. Funding sources were not reported in more than one-third (122 [40.7%]) of the reviews. Reviews typically searched a median of three electronic databases and two other sources, although only about two-thirds (208 [69.3%]) of them reported the years searched. Most (197/295 [66.8%]) reviews reported information about quality assessment, while few (68/294 [23.1%]) reported assessing for publication bias. A little over half (161/300 [53.7%]) of the SRs reported combining their results statistically, of which most (147/161 [91.3%]) assessed for consistency across studies. Few (53 [17.7%]) SRs reported being updates of previously completed reviews. No review had a registration number. Only half (150 [50.0%]) of the reviews used the term “systematic review” or “meta-analysis” in the title or abstract. There were large differences between Cochrane reviews and non-Cochrane reviews in the quality of reporting several characteristics.
Conclusions
SRs are now produced in large numbers, and our data suggest that the quality of their reporting is inconsistent. This situation might be improved if more widely agreed upon evidence-based reporting guidelines were endorsed and adhered to by authors and journals. These results substantiate the view that readers should not accept SRs uncritically.
Data were collected on the epidemiological, descriptive, and reporting characteristics of recent systematic reviews. A descriptive analysis found inconsistencies in the quality of reporting.
Editors' Summary
Background.
In health care it is important to assess all the evidence available about what causes a disease or the best way to prevent, diagnose, or treat it. Decisions should not be made simply on the basis of—for example—the latest or biggest research study, but after a full consideration of the findings from all the research of good quality that has so far been conducted on the issue in question. This approach is known as “evidence-based medicine” (EBM). A report that is based on a search for studies addressing a clearly defined question, a quality assessment of the studies found, and a synthesis of the research findings, is known as a systematic review (SR). Conducting an SR is itself regarded as a research project and the methods involved can be quite complex. In particular, as with other forms of research, it is important to do everything possible to reduce bias. The leading role in developing the SR concept and the methods that should be used has been played by an international network called the Cochrane Collaboration (see “Additional Information” below), which was launched in 1992. However, SRs are now becoming commonplace. Many articles published in journals and elsewhere are described as being systematic reviews.
Why Was This Study Done?
Since systematic reviews are claimed to be the best source of evidence, it is important that they should be well conducted and that bias should not have influenced the conclusions drawn in the review. Just because the authors of a paper that discusses evidence on a particular topic claim that they have done their review “systematically,” it does not guarantee that their methods have been sound and that their report is of good quality. However, if they have reported details of their methods, then it can help users of the review decide whether they are looking at a review with conclusions they can rely on. The authors of this PLoS Medicine article wanted to find out how many SRs are now being published, where they are being published, and what questions they are addressing. They also wanted to see how well the methods of SRs are being reported.
What Did the Researchers Do and Find?
They picked one month and looked for all the SRs added to the main list of medical literature in that month. They found 300, on a range of topics and in a variety of medical journals. They estimate that about 20% of reviews appearing each year are published by the Cochrane Collaboration. They found many cases in which important aspects of the methods used were not reported. For example, about a third of the SRs did not report how (if at all) the quality of the studies found in the search had been assessed. An important assessment, which analyzes for “publication bias,” was reported as having been done in only about a quarter of the cases. Most of the reporting failures were in the “non-Cochrane” reviews.
What Do These Findings Mean?
The authors concluded that the standards of reporting of SRs vary widely and that readers should, therefore, not accept the conclusions of SRs uncritically. To improve this situation, they urge that guidelines be drawn up regarding how SRs are reported. The writers of SRs and also the journals that publish them should follow these guidelines.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040078.
An editorial discussing this research article and its relevance to medical publishing appears in the same issue of PLoS Medicine
A good source of information on the evidence-based approach to medicine is the James Lind Library
The Web site of the Cochrane Collaboration is a good source of information on systematic reviews. In particular there is a newcomers' guide and information for health care “consumers”. From this Web site, it is also possible to see summaries of the SRs published by the Cochrane Collaboration (readers in some countries can also view the complete SRs free of charge)
Information on the practice of evidence-based medicine is available from the US Agency for Healthcare Research and Quality and the Canadian Agency for Drugs and Technologies in Health
doi:10.1371/journal.pmed.0040078
PMCID: PMC1831728  PMID: 17388659
15.  Reporting and Methods in Clinical Prediction Research: A Systematic Review 
PLoS Medicine  2012;9(5):e1001221.
Walter Bouwmeester and colleagues investigated the reporting and methods of prediction studies in 2008, in six high-impact general medical journals, and found that the majority of prediction studies do not follow current methodological recommendations.
Background
We investigated the reporting and methods of prediction studies, focusing on aims, designs, participant selection, outcomes, predictors, statistical power, statistical methods, and predictive performance measures.
Methods and Findings
We used a full hand search to identify all prediction studies published in 2008 in six high impact general medical journals. We developed a comprehensive item list to systematically score conduct and reporting of the studies, based on recent recommendations for prediction research. Two reviewers independently scored the studies. We retrieved 71 papers for full text review: 51 were predictor finding studies, 14 were prediction model development studies, three addressed an external validation of a previously developed model, and three reported on a model's impact on participant outcome. Study design was unclear in 15% of studies, and a prospective cohort was used in most studies (60%). Descriptions of the participants and definitions of predictor and outcome were generally good. Despite many recommendations against doing so, continuous predictors were often dichotomized (32% of studies). The number of events per predictor as a measure of statistical power could not be determined in 67% of the studies; of the remainder, 53% had fewer than the commonly recommended value of ten events per predictor. Methods for a priori selection of candidate predictors were described in most studies (68%). A substantial number of studies relied on a p-value cut-off of p<0.05 to select predictors in the multivariable analyses (29%). Predictive model performance measures, i.e., calibration and discrimination, were reported in 12% and 27% of studies, respectively.
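As a hedged illustration of the events-per-predictor check described above, here is a minimal Python sketch (not part of the study; the counts, function name, and threshold handling are hypothetical): the measure is simply the number of outcome events divided by the number of candidate predictors, with at least ten commonly recommended.

    def events_per_variable(n_events, n_candidate_predictors):
        """Events per candidate predictor, a rough gauge of statistical
        power in multivariable prediction modelling."""
        return n_events / n_candidate_predictors

    # Hypothetical study: 80 outcome events and 12 candidate predictors.
    epv = events_per_variable(80, 12)
    print(f"EPV = {epv:.1f}")  # 6.7, below the commonly recommended 10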
Conclusions
The majority of prediction studies in high impact journals do not follow current methodological recommendations, limiting their reliability and applicability.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
There are often times in our lives when we would like to be able to predict the future. Is the stock market going to go up, for example, or will it rain tomorrow? Being able to predict future health is also important, both to patients and to physicians, and there is an increasing body of published clinical “prediction research.” Diagnostic prediction research investigates the ability of variables or test results to predict the presence or absence of a specific diagnosis. So, for example, one recent study compared the ability of two imaging techniques to diagnose pulmonary embolism (a blood clot in the lungs). Prognostic prediction research investigates the ability of various markers to predict future outcomes such as the risk of a heart attack. Both types of prediction research can investigate the predictive properties of patient characteristics, single variables, tests, or markers, or combinations of variables, tests, or markers (multivariable studies). Both types can also include studies that build multivariable prediction models to guide patient management (model development), that test the performance of models (validation), or that quantify the effect of using a prediction model on patient and physician behaviors and outcomes (impact assessment).
Why Was This Study Done?
With the increase in prediction research, there is an increased interest in the methodology of this type of research because poorly done or poorly reported prediction research is likely to have limited reliability and applicability and will, therefore, be of little use in patient management. In this systematic review, the researchers investigate the reporting and methods of prediction studies by examining the aims, design, participant selection, definition and measurement of outcomes and candidate predictors, statistical power and analyses, and performance measures included in multivariable prediction research articles published in 2008 in several general medical journals. In a systematic review, researchers identify all the studies undertaken on a given topic using a predefined set of criteria and systematically analyze the reported methods and results of these studies.
What Did the Researchers Do and Find?
The researchers identified all the multivariable prediction studies meeting their predefined criteria that were published in 2008 in six high impact general medical journals by browsing through all the issues of the journals (a hand search). They then scored the methods and reporting of each study using a comprehensive item list based on recent recommendations for the conduct of prediction research (for example, the reporting recommendations for tumor marker prognostic studies—the REMARK guidelines). Of 71 retrieved studies, 51 were predictor finding studies, 14 were prediction model development studies, three externally validated an existing model, and three reported on a model's impact on participant outcome. Study design, participant selection, definitions of outcomes and predictors, and predictor selection were generally well reported, but other methodological and reporting aspects of the studies were suboptimal. For example, despite many recommendations, continuous predictors were often dichotomized. That is, rather than using the measured value of a variable in a prediction model (for example, blood pressure in a cardiovascular disease prediction model), measurements were frequently assigned to two broad categories. Similarly, many of the studies failed to adequately estimate the sample size needed to minimize bias in predictor effects, and few of the model development papers quantified and validated the proposed model's predictive performance.
What Do These Findings Mean?
These findings indicate that, in 2008, most of the prediction research published in high impact general medical journals failed to follow current guidelines for the conduct and reporting of clinical prediction studies. Because the studies examined here were published in high impact medical journals, they are likely to be representative of the higher quality studies published in 2008. However, reporting standards may have improved since 2008, and the conduct of prediction research may actually be better than this analysis suggests, because the length restrictions often applied to journal articles may account for some of the reporting omissions. Nevertheless, despite some encouraging findings, the researchers conclude that the poor reporting and poor methods they found in many published prediction studies are a cause for concern and are likely to limit the reliability and applicability of this type of clinical research.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001221.
The EQUATOR Network is an international initiative that seeks to improve the reliability and value of medical research literature by promoting transparent and accurate reporting of research studies; its website includes information on a wide range of reporting guidelines including the REMARK recommendations (in English and Spanish)
A video of a presentation by Doug Altman, one of the researchers of this study, on improving the reporting standards of the medical evidence base, is available
The Cochrane Prognosis Methods Group provides additional information on the methodology of prognostic research
doi:10.1371/journal.pmed.1001221
PMCID: PMC3358324  PMID: 22629234
16.  Access to Scientific Publications: The Scientist's Perspective 
PLoS ONE  2011;6(11):e27868.
Background
Scientific publishing is undergoing significant changes due to the growth of online publications, increases in the number of open access journals, and policies of funders and universities requiring authors to ensure that their publications become publicly accessible. Most studies of the impact of these changes have focused on the growth of articles available through open access or the number of open-access journals. Here, we investigated access to publications at a number of institutes and universities around the world, focusing on publications in HIV vaccine research – an area of biomedical research with special importance to the developing world.
Methods and Findings
We selected research papers in the HIV vaccine research field, creating: 1) a first set of the 50 most recently published papers with the keywords “HIV vaccine” and 2) a second set of 200 articles randomly selected from those cited in the first set. Access to the majority (80%) of the recently published articles required a subscription, while the cited literature was much more accessible (67% freely available online). Subscriptions at a number of institutions around the world were assessed for providing access to subscription-only articles from the two sets. Access levels varied widely among institutions, ranging from 20% to 90%. Through the WHO-supported HINARI program, institutes in low-income countries had access comparable to that of institutes in the North. Finally, we examined the response rates for reprint requests sent to corresponding authors, a method commonly used before internet access became widespread. Contacting corresponding authors with requests for electronic copies of articles by email resulted in a 55–60% success rate, although in some cases it took up to 1.5 months to get a response.
Conclusions
While research articles are increasingly available on the internet in open access format, institutional subscriptions continue to play an important role. However, subscriptions do not provide access to the full range of HIV vaccine research literature. Access to papers through subscriptions is complemented by a variety of other means, including emailing corresponding authors, joint affiliations, use of someone else's login information, and posting requests on message boards. This complex picture makes it difficult to assess the real ability of scientists to access literature, but the observed differences in access levels between institutions suggest an unlevel playing field, in which some researchers have to spend more effort than others to obtain the same information.
doi:10.1371/journal.pone.0027868
PMCID: PMC3219702  PMID: 22114716
17.  Reporting Guidelines for Survey Research: An Analysis of Published Guidance and Reporting Practices 
PLoS Medicine  2011;8(8):e1001069.
Carol Bennett and colleagues review the evidence and find that there is limited guidance and no consensus on the optimal reporting of survey research.
Background
Research needs to be reported transparently so readers can critically assess the strengths and weaknesses of the design, conduct, and analysis of studies. Reporting guidelines have been developed to inform reporting for a variety of study designs. The objective of this study was to identify whether there is a need to develop a reporting guideline for survey research.
Methods and Findings
We conducted a three-part project: (1) a systematic review of the literature (including “Instructions to Authors” from the top five journals of 33 medical specialties and top 15 general and internal medicine journals) to identify guidance for reporting survey research; (2) a systematic review of evidence on the quality of reporting of surveys; and (3) a review of reporting of key quality criteria for survey research in 117 recently published reports of self-administered surveys. Fewer than 7% of medical journals (n = 165) provided guidance to authors on survey research despite a majority having published survey-based studies in recent years. We identified four published checklists for conducting or reporting survey research, none of which were validated. We identified eight previous reviews of survey reporting quality, which focused on issues of non-response and accessibility of questionnaires. Our own review of 117 published survey studies revealed that many items were poorly reported: few studies provided the survey or core questions (35%), reported the validity or reliability of the instrument (19%), defined the response rate (25%), discussed the representativeness of the sample (11%), or identified how missing data were handled (11%).
Conclusions
There is limited guidance and no consensus regarding the optimal reporting of survey research. The majority of key reporting criteria are poorly reported in peer-reviewed survey research articles. Our findings highlight the need for clear and consistent reporting guidelines specific to survey research.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Surveys, or questionnaires, are an essential component of many types of research, including health research, and usually gather information by asking a sample of people questions on a specific topic and then generalizing the results to a larger population. Surveys are especially important when addressing topics that are difficult to assess using other approaches and usually rely on self-reporting, for example of behaviors such as eating habits, and of satisfaction, beliefs, knowledge, attitudes, and opinions. However, the methods used in conducting survey research can significantly affect the reliability, validity, and generalizability of study results, and without clear reporting of the methods used in surveys, it is difficult or impossible to assess these characteristics and therefore to have confidence in the findings.
Why Was This Study Done?
Such uncertainty in other forms of research has given rise to Reporting Guidelines—evidence-based, validated tools that aim to improve the reporting quality of health research. The STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) Statement covers cross-sectional studies, which often involve surveys. But not all surveys are epidemiological, and STROBE does not include reporting characteristics for methods and results that are unique to surveys. Therefore, the researchers conducted this study to help determine whether there is a need for a reporting guideline for health survey research.
What Did the Researchers Do and Find?
The researchers identified any previous relevant guidance for survey research, and any evidence on the quality of reporting of survey research, by: reviewing current guidance for reporting survey research in the “Instructions to Authors” of leading medical journals and in published literature; conducting a systematic review of evidence on the quality of reporting of surveys; identifying key quality criteria for the conduct of survey research; and finally, reviewing how these criteria are currently reported by conducting a review of recently published reports of self-administered surveys.
The researchers found that 154 of the 165 journals searched (93.3%) did not provide any guidance on survey reporting, even though the majority (81.8%) had published survey research. Only three of the 11 journals that provided some guidance gave more than one directive or statement. Five papers and one Internet site provided guidance on the reporting of survey research, but none used validated measures or explicit methods for development. The researchers identified eight papers that addressed the quality of reporting of some aspect of survey research: the reporting of response rates, the reporting of non-response analyses, and the degree to which authors make their survey instrument available to readers. In their review of 117 published survey studies, the researchers found that many items were poorly reported: few studies provided the survey or core questions (35%), reported the validity or reliability of the instrument (19%), discussed the representativeness of the sample (11%), or identified how missing data were handled (11%). Furthermore, three-quarters of the studies (88 [75%]) did not include any information on consent procedures for research participants, and one-third (40 [34%]) of papers did not report whether the study had received research ethics board review.
What Do These Findings Mean?
Overall, these results show that guidance is limited and consensus lacking about the optimal reporting of survey research, and they highlight the need for a well-developed reporting guideline specifically for survey research—possibly an extension of the guideline for observational studies in epidemiology (STROBE)—that will provide the structure to ensure more complete reporting and allow clearer review and interpretation of the results from surveys.
Additional Information
Please access these web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001069.
More than 100 reporting guidelines covering a broad spectrum of research types are indexed on the EQUATOR Network's web site
More information about STROBE is available on the STROBE Statement web site
doi:10.1371/journal.pmed.1001069
PMCID: PMC3149080  PMID: 21829330
18.  Factors Associated with Findings of Published Trials of Drug–Drug Comparisons: Why Some Statins Appear More Efficacious than Others 
PLoS Medicine  2007;4(6):e184.
Background
Published pharmaceutical industry–sponsored trials are more likely than non-industry-sponsored trials to report results and conclusions that favor drug over placebo. Little is known about potential biases in drug–drug comparisons. This study examined associations between research funding source, study design characteristics aimed at reducing bias, and other factors that potentially influence results and conclusions in randomized controlled trials (RCTs) of statin–drug comparisons.
Methods and Findings
This is a cross-sectional study of 192 published RCTs comparing a statin drug to another statin drug or non-statin drug. Data on concealment of allocation, selection bias, blinding, sample size, disclosed funding source, financial ties of authors, results for primary outcomes, and author conclusions were extracted by two coders (weighted kappa 0.80 to 0.97). Univariate and multivariate logistic regression identified associations between independent variables and favorable results and conclusions. Of the RCTs, 50% (95/192) were funded by industry, and 37% (70/192) did not disclose any funding source. Looking at the totality of available evidence, we found that almost all studies (98%, 189/192) used only surrogate outcome measures. Moreover, study design weaknesses common to published statin–drug comparisons included inadequate blinding, lack of concealment of allocation, poor follow-up, and lack of intention-to-treat analyses. In multivariate analysis of the full sample, trials with adequate blinding were less likely to report results favoring the test drug, and sample size was associated with favorable conclusions when controlling for other factors. In multivariate analysis of industry-funded RCTs, funding from the test drug company was associated with results (odds ratio = 20.16 [95% confidence interval 4.37–92.98], p < 0.001) and conclusions (odds ratio = 34.55 [95% confidence interval 7.09–168.4], p < 0.001) that favor the test drug when controlling for other factors. Studies with adequate blinding were less likely to report statistically significant results favoring the test drug.
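To show where odds ratios such as those reported above come from, here is a minimal Python sketch (not the authors' code) that converts a logistic regression coefficient and its standard error into an odds ratio with a Wald-type 95% confidence interval; the beta and standard error below are hypothetical values chosen only to land near the reported estimate.

    import math

    def odds_ratio_with_ci(beta, se, z=1.96):
        """Odds ratio and Wald 95% CI from a logistic regression coefficient."""
        return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

    # Hypothetical coefficient for "funded by the test drug's maker".
    or_, lo, hi = odds_ratio_with_ci(beta=3.0, se=0.78)
    print(f"OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # ~20.1 (4.3 to 92.7)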
Conclusions
RCTs of head-to-head comparisons of statins with other drugs are more likely to report results and conclusions favoring the sponsor's product compared to the comparator drug. This bias in drug–drug comparison trials should be considered when making decisions regarding drug choice.
Lisa Bero and colleagues found published trials comparing one statin with another were more likely to report results and conclusions favoring the sponsor's product than the comparison drug.
Editors' Summary
Background.
Randomized controlled trials are generally considered to be the most reliable type of experimental study for evaluating the effectiveness of different treatments. Randomization involves the assignment of participants in the trial to different treatment groups by the play of chance. Properly done, this procedure means that the different groups are comparable at the outset, reducing the chance that outside factors could be responsible for treatment effects seen in the trial. When done properly, randomization also ensures that the clinicians recruiting participants into the trial cannot know the treatment group to which a patient will end up being assigned. However, despite these advantages, a large number of factors can still allow bias to creep in. Bias comes about when the findings of research appear to differ in some systematic way from the true result. Other research studies have suggested that funding is a source of bias: studies sponsored by drug companies seem to favor the sponsor's drug more often than trials not sponsored by drug companies do.
Why Was This Study Done?
The researchers wanted to more precisely understand the impact of different possible sources of bias in the findings of randomized controlled trials. In particular, they wanted to study the outcomes of “head-to-head” drug comparison studies for one particular class of drugs, the statins. Drugs in this class are commonly prescribed to reduce the levels of cholesterol in blood among people who are at risk of heart disease and other conditions. This drug class is a good example for studying the role of bias in drug–drug comparison trials, because these trials are extensively used in decision making by health-policy makers.
What Did the Researchers Do and Find?
This research study was based on searching PubMed, a biomedical literature database, with the aim of finding all randomized controlled trials of statins carried out between January 1999 and May 2005 (reference lists also were searched). Only trials which compared one statin to another statin or one statin to another type of drug were included. The researchers extracted the following information from each article: the study's source of funding, aspects of study design, the overall results, and the authors' conclusions. The results were categorized to show whether the findings were favorable to the test drug (the newer statin), inconclusive, or not favorable to the test drug. Aspects of each study's design were also categorized in relation to various features, such as how well the randomization was done (in particular, the degree to which the processes used would have prevented physicians from knowing which treatment a patient was likely to receive on enrollment); whether all participants enrolled in the trial were eventually analyzed; and whether investigators or participants knew what treatment an individual was receiving.
One hundred and ninety-two trials were included in this study; of these, 95 declared drug company funding, 23 declared government or other nonprofit funding, and 74 did not declare funding or were not funded. Trials that were properly blinded (where participants and investigators did not know what treatment an individual received) were less likely to have conclusions favoring the test drug. However, large trials were more likely to favor the test drug than smaller trials. When looking specifically at the trials funded by drug companies, the researchers found various factors that predicted whether a result or conclusion favored the test drug. These included the impact of the journal publishing the results, the size of the trial, and whether funding came from the maker of the test drug. However, properly blinded trials were less likely to produce results favoring the test drug. Even once all other factors were accounted for, the funding source for the study was still linked with results and conclusions that favored the maker of the test drug.
What Do These Findings Mean?
This study shows that the type of sponsorship available for randomized controlled trials of statins was strongly linked to the results and conclusions of those studies, even when other factors were taken into account. However, it is not clear from this study why sponsorship has such a strong link to the overall findings. There are many possible reasons why this might be. Some people have suggested that drug companies may deliberately choose lower dosages for the comparison drug when they carry out “head-to-head” trials; this tactic is likely to result in the company's product doing better in the trial. Others have suggested that trials which produce unfavorable results are not published, or that unfavorable outcomes are suppressed. Whatever the reasons for these findings, the implications are important, and suggest that the evidence base relating to statins may be substantially biased.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040184.
The James Lind Library has been created to help people understand fair tests of treatments in health care by illustrating how fair tests have developed over the centuries
The International Committee of Medical Journal Editors has provided guidance regarding sponsorship, authorship, and accountability
The CONSORT statement is a research tool that provides an evidence-based approach for reporting the results of randomized controlled trials
Good Publication Practice guidelines provide standards for responsible publication of research sponsored by pharmaceutical companies
Information from Wikipedia on Statins. Wikipedia is an internet encyclopedia anyone can edit
doi:10.1371/journal.pmed.0040184
PMCID: PMC1885451  PMID: 17550302
19.  Social Networks in Education of Health Professionals in Bosnia and Herzegovina – the Role of Pubmed/Medline in Improvement of Medical Sciences 
Acta Informatica Medica  2011;19(4):196-202.
Introduction:
A social network is a social structure made up of individuals and organizations that represent “nodes” and that are connected by one or more types of interdependency, such as friendship, common interests, work, knowledge, prestige, and many other interests. Beginning in the late twentieth and early twenty-first centuries, the Internet became a significant additional tool in the education of teenagers. Later, it took on an increasingly significant role in educating students and professionals.
Goal:
The aim of this paper is to investigate to what extent, and how effectively, the Internet is used today. More specifically, the paper examines the implications of the well-known social networks for the education of students and health professionals in Bosnia and Herzegovina (B&H).
Material and methods:
We compared the use of MEDLINE, the largest biomedical database for disseminating medical information, as a basis for health education at the biomedical faculties of five universities in B&H.
Results and discussion:
According to data from the CRA (Communications Regulatory Agency) in B&H, in 2010 there were 522,364 internet access accounts and about 2 million Internet users, representing about 52% of the total population. Fast broadband access (e.g., xDSL) dominated, with 42.8% of users, while dial-up access still accounted for 25.2%. The results showed that only 11.6% of professors use the Facebook type of social network, 49.3% have a profile on the BiomedExperts scientific social network, and 79% have articles available in MEDLINE, the largest database of biomedical literature. Students are also frequent users of general social networks and of educational clips on YouTube, which they use considerably more than professionals do. Students rarely use professional social networks, because these contain mainly data and information needed for further, postgraduate professional education. In our research, we analyzed papers cited in the journal Medical Archives, the oldest medical journal in B&H (established in 1947), for a randomly selected sample of 151 full- and part-time professors from the five medical faculties in B&H and B&H authors who currently work in the EU and USA. ANOVA showed no significant difference in the number of articles published between the universities in Bosnia, but there was a significant difference in the number of articles indexed in MEDLINE between all faculties in B&H and the group of scientists working around the world. Student's t-tests showed a statistically significant difference in the average number of papers indexed in MEDLINE between part-time and full-time professors, but no statistically significant difference between professors of preclinical and clinical subjects.
Conclusion:
In B&H there are decent conditions for the use of online social networks in the education of health professionals. While students have enthusiastically embraced these opportunities, this is less the case for health care professionals in practice, and scientific health care workers have not shown great interest in using social networks, either for scientific research or for self-education and the training of students. Much more use could be made of the advantages offered by online social networks, both in education and in support of scientific research.
doi:10.5455/aim.2011.19.196-202
PMCID: PMC3564181  PMID: 23408513
Social networks; education; health professionals; students; Bosnia and Herzegovina.
20.  Conflicts of Interest at Medical Journals: The Influence of Industry-Supported Randomised Trials on Journal Impact Factors and Revenue – Cohort Study 
PLoS Medicine  2010;7(10):e1000354.
Andreas Lundh and colleagues investigated the effect of publication of large industry-supported trials on citations and journal income, through reprint sales, in six general medical journals.
Background
Transparency in reporting of conflict of interest is an increasingly important aspect of publication in medical journals. Publication of large industry-supported trials may generate many citations and journal income through reprint sales and thereby be a source of conflicts of interest for journals. We investigated industry-supported trials' influence on journal impact factors and revenue.
Methods and Findings
We sampled six major medical journals (Annals of Internal Medicine, Archives of Internal Medicine, BMJ, JAMA, The Lancet, and New England Journal of Medicine [NEJM]). For each journal, we identified randomised trials published in 1996–1997 and 2005–2006 using PubMed, and categorized the type of financial support. Using Web of Science, we investigated citations of industry-supported trials and their influence on journal impact factors over a ten-year period. We contacted journal editors and retrieved tax information on income from industry sources. The proportion of trials with sole industry support varied between journals, from 7% in BMJ to 32% in NEJM in 2005–2006. Industry-supported trials were more frequently cited than trials with other types of support, and omitting them from the impact factor calculation decreased journal impact factors. The decrease varied considerably between journals, from 1% for BMJ to 15% for NEJM in 2007. For the two journals disclosing data, income from the sale of reprints contributed 3% and 41% of total income for BMJ and The Lancet, respectively, in 2005–2006.
Conclusions
Publication of industry-supported trials was associated with an increase in journal impact factors. Sales of reprints may provide a substantial income. We suggest that journals disclose financial information in the same way that they require it from their authors, so that readers can assess the potential effect of different types of papers on journals' revenue and impact.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Medical journals publish many different types of papers that inform doctors about the latest research advances and the latest treatments for their patients. They publish articles that describe laboratory-based research into the causes of diseases and the identification of potential new drugs. They publish the results of early clinical trials in which a few patients are given a potential new drug to check its safety. Finally and most importantly, they publish the results of randomized controlled trials (RCTs). RCTs are studies in which large numbers of patients are randomly allocated to different treatments, without the patient or the clinician knowing the allocation, and the efficacy of the various treatments is compared. RCTs are the best way of determining whether a new drug is effective and have to be completed before a drug can be marketed. Because RCTs are very expensive, they are often supported by drug companies. That is, drug companies provide grants or drugs for the trial or assist with data analysis and/or article preparation.
Why Was This Study Done?
Whenever a medical journal publishes an article, the article's authors have to declare any conflicts of interest such as financial gain from the paper's publication. Conflict of interest statements help readers assess papers—an author who owns the patent for a drug, for example, might put an unduly positive spin on his/her results. The experts who review papers for journals before publication provide similar conflict of interest statements. But what about the journal editors who ultimately decide which papers get published? The International Committee of Medical Journal Editors (ICMJE), which produces medical publishing guidelines, states that: “Editors who make final decisions about manuscripts must have no personal, professional, or financial involvement in any of the issues that they might judge.” However, the publication of industry-supported RCTs might create “indirect” conflicts of interest for journals by boosting the journal's impact factor (a measure of a journal's importance based on how often its articles are cited) and its income through the sale of reprints to drug companies. In this study, the researchers investigate whether the publication of industry-supported RCTs influences the impact factors and finances of six major medical journals.
What Did the Researchers Do and Find?
The researchers determined which RCTs published in the New England Journal of Medicine (NEJM), the British Medical Journal (BMJ), The Lancet, and three other major medical journals in 1996–1997 and 2005–2006 were supported wholly, partly, or not at all by industry. They then used the online academic citation index Web of Science to calculate an approximate impact factor for each journal for 1998 and 2007 and calculated the effect of the published RCTs on the impact factor. The proportion of RCTs with sole industry support varied between journals. Thus, 32% of the RCTs published in the NEJM during both two-year periods had industry support whereas only 7% of the RCTs published in the BMJ in 2005–2006 had industry support. Industry-supported trials were more frequently cited than RCTs with other types of support and omitting industry-supported RCTs from impact factor calculations decreased all the approximate journal impact factors. For example, omitting all RCTs with industry or mixed support decreased the 2007 BMJ and NEJM impact factors by 1% and 15%, respectively. Finally, the researchers asked each journal's editor about their journal's income from industry sources. For the BMJ and The Lancet, the only journals that provided this information, income from reprint sales was 3% and 41%, respectively, of total income in 2005–2006.
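The arithmetic behind the approximate impact factor comparison described above can be sketched as follows; this is a hypothetical Python example, not the authors' code, and the citation and article counts are invented purely to show the shape of the calculation.

    def impact_factor(citations, citable_articles):
        """Citations in year Y to a journal's articles from years Y-1 and
        Y-2, divided by the number of such articles."""
        return citations / citable_articles

    # Hypothetical journal in 2007: industry-supported trials are a small
    # share of articles but draw a disproportionate share of citations.
    full_if = impact_factor(60_000, 1_500)
    reduced_if = impact_factor(60_000 - 9_500, 1_500 - 100)
    drop = 100 * (full_if - reduced_if) / full_if
    print(f"approximate IF {full_if:.1f}; without industry trials "
          f"{reduced_if:.1f} ({drop:.0f}% lower)")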
What Do These Findings Mean?
These findings show that the publication of industry-supported RCTs was associated with an increase in the approximate impact factors of these six major medical journals. Because these journals publish numerous RCTs, this result may not be generalizable to other journals. These findings also indicate that income from reprint sales can be a substantial proportion of a journal's total income. Importantly, these findings do not imply that the decisions of editors are affected by the possibility that the publication of an industry-supported trial might improve their journal's impact factor or income. Nevertheless, the researchers suggest, journals should live up to the same principles related to conflicts of interest as those that they require from their authors and should routinely disclose information on the source and amount of income that they receive.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000354.
This study is further discussed in a PLoS Medicine Perspective by Harvey Marcovitch
The International Committee of Medical Journal Editors provides information about the publication of medical research, including conflicts of interest
The World Association of Medical Editors also provides information on conflicts of interest in medical journals
Information about impact factors is provided by Thomson Reuters, a provider of intelligent information for businesses and professionals; Thomson Reuters also runs Web of Science
doi:10.1371/journal.pmed.1000354
PMCID: PMC2964336  PMID: 21048986
21.  Uptake of Home-Based Voluntary HIV Testing in Sub-Saharan Africa: A Systematic Review and Meta-Analysis 
PLoS Medicine  2012;9(12):e1001351.
Kalpana Sabapathy and colleagues conduct a systematic review and meta-analysis to assess the acceptability of home-based voluntary counseling and testing for HIV in sub-Saharan Africa with some encouraging results.
Introduction
Improving access to HIV testing is a key priority in scaling up HIV treatment and prevention services. Home-based voluntary counselling and testing (HBT) as an approach to delivering wide-scale HIV testing is explored here.
Methods and Findings
We conducted a systematic review and random-effects meta-analysis of studies published between 1 January 2000 and 24 September 2012 that reported on uptake of HBT in sub-Saharan Africa, to assess the proportion of individuals accepting HBT and receiving their test result.
Our initial search yielded 1,199 articles; 114 were reviewed as full-text articles, and 19 publications involving 21 studies (n = 524,867 individuals offered HBT) were included for final review and meta-analysis. The studies came from five countries: Uganda, Malawi, Kenya, South Africa, and Zambia.
The proportion of people who accepted HBT (n = 474,377) ranged from 58.1% to 99.8%, with a pooled proportion of 83.3% (95% CI: 80.4%–86.1%). Heterogeneity was high (τ² = 0.11). Sixteen studies reported on the number of people who received the result of HBT (n = 432,835). The proportion of individuals receiving their results out of all those offered testing ranged from 24.9% to 99.7%, with a pooled proportion of 76.7% (95% CI: 73.4%–80.0%) (τ² = 0.12). HIV prevalence ranged from 2.9% to 36.5%. New diagnosis of HIV following HBT ranged from 40% to 79% of those testing positive. Forty-eight percent of the individuals offered testing were men, and they were just as likely to accept HBT as women (pooled odds ratio = 0.84; 95% CI: 0.56–1.26) (τ² = 0.33). The proportion of individuals previously tested for HIV among those offered a test ranged from 5% to 66%. Studies in which <30% of individuals had been previously tested, local HIV prevalence was <10%, incentives were provided, or HBT was offered to household members of HIV-positive individuals showed higher uptake of testing. No evidence was reported of negative consequences of HBT.
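For readers unfamiliar with how pooled proportions and τ² values of this kind are obtained, here is a compact Python sketch of DerSimonian–Laird random-effects pooling on raw proportions; the study's actual model may differ (for example, it may pool logit-transformed proportions), and the three studies below are hypothetical.

    import math

    def pool_proportions(events, totals):
        """DerSimonian-Laird random-effects pooling of raw proportions."""
        y = [e / n for e, n in zip(events, totals)]        # study proportions
        v = [p * (1 - p) / n for p, n in zip(y, totals)]   # within-study variances
        w = [1 / vi for vi in v]                           # fixed-effect weights
        y_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, y))  # heterogeneity Q
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (len(y) - 1)) / c)            # between-study variance
        w_re = [1 / (vi + tau2) for vi in v]               # random-effects weights
        pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
        se = math.sqrt(1 / sum(w_re))
        return pooled, pooled - 1.96 * se, pooled + 1.96 * se, tau2

    # Three hypothetical HBT studies: acceptors out of individuals offered a test.
    pooled, lo, hi, tau2 = pool_proportions([581, 850, 997], [1000, 1000, 1000])
    print(f"pooled uptake {pooled:.3f} (95% CI {lo:.3f} to {hi:.3f}), tau2 {tau2:.4f}")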
Conclusions
HBT could substantially increase awareness of HIV status in previously undiagnosed individuals in sub-Saharan Africa, with over three-quarters of the studies in this review reporting >70% uptake. It could be a valuable tool for treatment and prevention efforts.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Knowledge of HIV status is crucial for both the prevention and treatment of HIV. However, according to the Joint United Nations Programme on HIV/AIDS (the UN agency responsible for HIV/AIDS), in low- and middle-income countries only ten percent of those who need voluntary counseling and testing, because they may have been exposed to HIV infection, have access to this service. Even in health care settings in which voluntary counseling and HIV testing is routinely offered, such as to pregnant women, the number of people who use these services is low. This situation is partly because of the stigma and discrimination associated with HIV, which make people reluctant to come forward voluntarily to be tested. To help overcome this problem, one important strategy is to offer people the opportunity to be counseled and tested at home—home-based voluntary counseling and testing (HBT). Using the HBT approach, people are visited in their home by health workers regardless of their perceived risk of HIV. HBT has obvious advantages and upholds the “3 Cs” principles of HIV testing: that testing is confidential, accompanied by counseling, and conducted only with informed consent.
Why Was This Study Done?
The HBT approach has received widespread international support, and the World Health Organization has recently published guidance for service providers and policy makers about the delivery of HBT. However, the acceptability of HBT, that is, whether those offered HBT actually take up the offer and are tested, remains unknown, especially in sub-Saharan Africa, the world region with the highest prevalence of HIV. So, in this study, the researchers systematically compiled all of the available studies on this topic from sub-Saharan Africa to determine the acceptability of HBT and to identify any factors associated with its uptake.
What Did the Researchers Do and Find?
The researchers searched several databases to identify suitable peer-reviewed studies from Africa published between January 2000 and September 2012. The researchers included studies that described any intervention to provide HIV testing at home and also reported the proportions of participants accepting HIV testing out of all individuals offered a home-based HIV test. Because different types of studies were included (such as randomized controlled trials, observational cohort studies, and cross-sectional surveys), the researchers tested the quality of included studies. Then they pooled all of the studies together to calculate the overall proportion of people who accepted HIV testing at home and the proportion who received their result.
Using these methods, the researchers included 21 studies from five African countries: Kenya, Malawi, South Africa, Uganda, and Zambia, comprising a total of 524,867 people. Overall, the proportion of people who accepted HBT ranged from 58.1% to 99.7%, with a pooled proportion of 83.3% accepting HBT (474,377 people). In the eight studies that separated data by gender, men were as likely as women to accept testing (78.5% versus 81.5%). Over three-quarters of everyone who accepted HBT received their result (77% in 16 studies reporting on this), and, importantly, the proportion of people with previously undiagnosed HIV was high (40%–79% of those diagnosed HIV-positive), emphasizing the value of HBT. The researchers also found that providing incentives, local HIV prevalence being less than 10%, and targeting HBT to household members of HIV-positive individuals may be factors associated with increased uptake of HBT, but further research is needed to verify the results of this subgroup analysis.
What Do These Findings Mean?
These findings suggest that voluntary counseling and testing for HIV at home is highly acceptable in five countries in sub-Saharan Africa, with the majority of those tested receiving their test result, highlighting the importance of this approach in the diagnosis of HIV. Therefore, by increasing uptake of testing, HBT may provide an effective tool for governments and health service providers to increase access to HIV treatment and prevention. However, testing is just the first step in the management of HIV, and this study does not address the follow-up of those who tested positive using the home-based approach, such as access to treatment, or repeated HBT for ongoing knowledge of HIV status. The option of self-testing was examined in only one of the studies included in this review, but the researchers identify self-testing at home with the support of HBT staff as an important area for future research. Overall, HBT has the potential to substantially increase awareness of HIV status in previously undiagnosed men and women in sub-Saharan Africa.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001351.
The World Health Organization provides extensive information on HIV testing and counseling, and the World Health Organization's guidance on home-based testing mentioned in this summary is also available
The Joint United Nations Programme on HIV/AIDS gives the latest facts and figures about the global status of HIV and about reducing stigma and discrimination around HIV
doi:10.1371/journal.pmed.1001351
PMCID: PMC3514284  PMID: 23226107
22.  Inadequate Dissemination of Phase I Trials: A Retrospective Cohort Study 
PLoS Medicine  2009;6(2):e1000034.
Background
Drug development is ideally a logical sequence in which information from small early studies (Phase I) is subsequently used to inform and plan larger, more definitive studies (Phases II–IV). Phase I trials are unique because they generally provide the first evaluation of new drugs in humans. The conduct and dissemination of Phase I trials have not previously been empirically evaluated. Our objective was to describe the initiation, completion, and publication of Phase I trials in comparison with Phase II–IV trials.
Methods and Findings
We reviewed a cohort of all protocols approved by a sample of ethics committees in France from January 1, 1994 to December 31, 1994. The comparison of 140 Phase I trials with 304 Phase II–IV trials showed that Phase I studies were more likely to be initiated (133/140 [95%] versus 269/304 [88%]), more likely to be completed (127/133 [95%] versus 218/269 [81%]), and more likely to produce confirmatory results (71/83 [86%] versus 125/175 [71%]) than Phase II–IV trials. Publication was less frequent for Phase I studies (21/127 [17%] versus 93/218 [43%]), even when accounting only for studies providing confirmatory results (18/71 [25%] versus 79/125 [63%]).
Conclusions
The initiation, completion, and publication of Phase I trials differ from those of other studies. The results of these trials should be published in order to ensure the integrity of the overall body of scientific knowledge, and ultimately the safety of future trial participants and patients.
François Chapuis and colleagues examine a cohort of clinical trial protocols approved by French ethics committees, and show that Phase I trials are less frequently published than other types of trials.
Editors' Summary
Background.
Before a new drug is used to treat patients, its benefits and harms have to be carefully investigated in clinical trials—studies that investigate the drug's effects on people. Because giving any new drug to people is potentially dangerous, drugs are first tested in a short “Phase I” trial in which a few people (usually healthy volunteers) are given doses of the drug likely to have a therapeutic effect. A Phase I trial evaluates the safety and tolerability of the drug and investigates how the human body handles the drug. It may also provide some information about the drug's efficacy that can guide the design of later trials. The next stage of clinical drug development is a Phase II trial in which the therapeutic efficacy of the drug is investigated by giving more patients and volunteers different doses of the drug. Finally, several large Phase III trials are undertaken to confirm the evidence collected in the Phase II trial about the drug's efficacy and safety. If the Phase III trials are successful, the drug will receive official marketing approval. In some cases, this approval requires Phase IV (postapproval) trials to be done to optimize the drug's use in clinical practice.
Why Was This Study Done?
In an ideal world, the results of all clinical trials on new drugs would be published in medical journals so that doctors and patients could make fully informed decisions about the treatments available to them. Unfortunately, this is not an ideal world and, for example, it is well known that the results of Phase III trials in which a new drug outperforms a standard treatment are more likely to be published than those in which the new drug performs badly or has unwanted side effects (an example of “publication bias”). But what about the results of Phase I trials? These need to be widely disseminated so that researchers can avoid unknowingly exposing people to potentially dangerous new drugs after similar drugs have caused adverse side effects. However, drug companies are often reluctant to disclose information on early phase trials. In this study, the researchers ask whether the dissemination of the results of Phase I trials is adequate.
What Did the Researchers Do and Find?
The researchers identified 667 drug trial protocols approved in 1994 by 25 French research ethics committees (independent panels of experts that ensure that the rights, safety, and well-being of trial participants are protected). In 2001, questionnaires were mailed to each trial's principal investigator asking whether the trial had been started and completed and whether its results had been published in a medical journal or otherwise disseminated (for example, by presentation at a scientific meeting). 140 questionnaires for Phase I trials and 304 for Phase II–IV trials were returned and analyzed by the investigators. They found that Phase I trials were more likely to have been started and to have been completed than Phase II–IV trials. The results of 86% of the Phase I studies matched the researchers' expectations, but the study hypothesis was confirmed in only 71% of the Phase II–IV trials. Finally, the results of 17% of the Phase I studies were published in scientific journals compared to 43% of the Phase II–IV studies. About half of the Phase I study results were not disseminated in any form.
What Do These Findings Mean?
These findings suggest that the fate of Phase I trials is different from that of other clinical trials and that there is inadequate dissemination of the results of these early trials. These findings may not be generalizable to other countries and may be affected by the poor questionnaire response rate. Nevertheless, they suggest that steps need to be taken to ensure that the results of Phase I studies are more widely disseminated. Recent calls by the World Health Organization and other bodies for mandatory preregistration in trial registries of all Phase I trials as well as all Phase II–IV trials should improve the situation by providing basic information about Phase I trials whose results are not published in full elsewhere.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000034.
Two recent research articles published in PLoS Medicine—by Ida Sim and colleagues (PLoS Med e191) and by Lisa Bero and colleagues (PLoS Med e217)—investigate publication bias in Phase III trials
The ClinicalTrials.gov Web site provides information about the US National Institutes of Health clinical trial registry, background information about clinical trials, and a fact sheet detailing the requirements of the US Food and Drug Administration (the body that approves drugs in the USA) Amendments Act 2007 for trial registration
The World Health Organization's International Clinical Trials Registry Platform is working toward setting international norms and standards for the reporting of clinical trials (in several languages)
doi:10.1371/journal.pmed.1000034
PMCID: PMC2642878  PMID: 19226185
23.  A Systematic Review and Meta-Analysis of Utility-Based Quality of Life in Chronic Kidney Disease Treatments 
PLoS Medicine  2012;9(9):e1001307.
Melanie Wyld and colleagues examined previously published studies to assess pooled utility-based quality of life of the various treatments for chronic kidney disease. They conclude that the highest utility was for kidney transplants, with home-based automated peritoneal dialysis being second.
Background
Chronic kidney disease (CKD) is a common and costly condition to treat. Economic evaluations of health care often incorporate patient preferences for health outcomes using utilities. The objective of this study was to determine pooled utility-based quality of life (the numerical value attached to the strength of an individual's preference for a specific health outcome) by CKD treatment modality.
Methods and Findings
We conducted a systematic review, meta-analysis, and meta-regression of peer-reviewed published articles and of PhD dissertations published through 1 December 2010 that reported utility-based quality of life (utility) for adults with late-stage CKD. Studies reporting utilities by proxy (e.g., reported by a patient's doctor or family member) were excluded.
In total, 190 studies reporting 326 utilities from over 56,000 patients were analysed. There were 25 utilities from pre-treatment CKD patients, 226 from dialysis patients (haemodialysis, n = 163; peritoneal dialysis, n = 44), 66 from kidney transplant patients, and three from patients treated with non-dialytic conservative care. Using time tradeoff as a referent instrument, kidney transplant recipients had a mean utility of 0.82 (95% CI: 0.74, 0.90). The mean utility was comparable in pre-treatment CKD patients (difference = −0.02; 95% CI: −0.09, 0.04), 0.11 lower in dialysis patients (95% CI: −0.15, −0.08), and 0.2 lower in conservative care patients (95% CI: −0.38, −0.01). Patients treated with automated peritoneal dialysis had a significantly higher mean utility (0.80) than those on continuous ambulatory peritoneal dialysis (0.72; p = 0.02). The mean utility of transplant patients increased over time, from 0.66 in the 1980s to 0.85 in the 2000s, an increase of 0.19 (95% CI: 0.11, 0.26). Utility varied by elicitation instrument, with standard gamble producing the highest estimates, and the SF-6D by Brazier et al., University of Sheffield, producing the lowest estimates. The main limitations of this study were that treatment assignments were not random, that only transplant had longitudinal data available, and that we calculated EuroQol Group EQ-5D scores from SF-36 and SF-12 health survey data, and therefore the algorithms may not reflect EQ-5D scores measured directly.
Conclusions
For patients with late-stage CKD, treatment with dialysis is associated with a significant decrement in quality of life compared to treatment with kidney transplantation. These findings provide evidence-based utility estimates to inform economic evaluations of kidney therapies, useful for policy makers and in individual treatment discussions with CKD patients.
Editors' Summary
Background
Ill health can adversely affect an individual's quality of life, particularly if caused by long-term (chronic) conditions such as chronic kidney disease—in the United States alone, 23 million people have chronic kidney disease, of whom 570,000 are treated with dialysis or kidney transplantation. In order to measure the cost-effectiveness of interventions to manage medical conditions, health economists use an objective measurement known as quality-adjusted life years. However, although useful, quality-adjusted life years are often criticized for not taking into account the views and preferences of the individuals with the medical conditions. A measurement called a utility solves this problem. A utility is a numerical value (on a 0 to 1 scale, where 0 represents death and 1 represents full health) of the strength of an individual's preference for a specified health-related outcome, as measured by “instruments” (questionnaires) that elicit direct comparisons or assess quality of life.
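As a simple worked illustration of how one such instrument yields a utility (hypothetical numbers, not from the study): in a time trade-off task, a respondent who is indifferent between t years in the health state and x years in full health implies a utility of x / t. A minimal Python sketch:

    def tto_utility(years_full_health, years_in_state):
        """Time trade-off utility: x years in full health judged equivalent
        to t years in the health state gives utility x / t."""
        return years_full_health / years_in_state

    # Hypothetical respondent: 7 years in full health is judged equivalent
    # to 10 years on dialysis.
    print(f"utility = {tto_utility(7, 10):.2f}")  # 0.70 on the 0 (death) to 1 (full health) scale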
Why Was This Study Done?
Previous studies have suggested that, in people with chronic kidney disease, quality of life (as measured by utility) is higher in those with a functioning kidney transplant than in those on dialysis. However, currently, it is unclear whether the type of dialysis affects quality of life: hemodialysis is a highly technical process that directly filters the blood, usually must be done 2–4 times a week, and can only be performed in a health facility; peritoneal dialysis, in which fluids are infused into the abdominal cavity, can be done nightly at home (automated peritoneal dialysis) or throughout the day (continuous ambulatory peritoneal dialysis). In this study, the researchers reviewed and assimilated all of the available evidence to investigate whether quality of life in people with chronic kidney disease (as measured by utility) differed according to treatment type.
What Did the Researchers Do and Find?
The researchers did a comprehensive search of 11 databases to identify all relevant studies that included people with severe (stage 3, 4, or 5) chronic kidney disease, their form of treatment, and information on utilities—either reported directly or derivable from quality-of-life instrument data (the SF-36) via a validated algorithm. The researchers also recorded the prevalence of diabetes among study participants. Then, using statistical models that adjusted for various factors, including treatment type and the method of measuring utilities, the researchers calculated the pooled utilities of each form of treatment for chronic kidney disease.
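The paper pools utilities with adjusted statistical models; as a simplified stand-in for that machinery, the sketch below shows the basic inverse-variance weighting idea behind pooling study-level estimates. The study means and standard errors are invented for illustration, and a fixed-effect pooling is used rather than the authors' adjusted models.

```python
import math

# Simplified sketch of inverse-variance pooling (fixed-effect form).
# Study means and standard errors below are invented; the paper itself
# used adjusted models, not this bare calculation.

studies = [  # (mean utility, standard error) per hypothetical study
    (0.80, 0.03),
    (0.84, 0.05),
    (0.78, 0.04),
]

weights = [1 / se ** 2 for _, se in studies]  # weight = 1 / variance
pooled = sum(w * m for (m, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled utility: {pooled:.3f} "
      f"(95% CI: {pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f})")
```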
The researchers included 190 studies, representing over 56,000 patients and yielding 326 utility estimates, in their analysis. The majority of these estimates (77%) were calculated from SF-36 questionnaire data rather than reported directly. Of the 326 utility estimates, 25 were from pre-dialysis patients, 226 were from dialysis patients (the majority of whom were receiving hemodialysis), 66 were from kidney transplant patients, and three were from conservative care patients. The researchers found that the highest average utility, 0.82, was for those who had received a kidney transplant, followed by the pre-dialysis group (0.80), dialysis patients (0.71), and, finally, patients receiving conservative care (0.62). When comparing types of dialysis, the researchers found little difference in utility between hemodialysis and peritoneal dialysis, but patients using automated peritoneal dialysis had, on average, a higher utility (0.80) than those treated with continuous ambulatory peritoneal dialysis (0.72). Finally, the researchers found that patient groups with diabetes had significantly lower utilities than those without diabetes.
What Do These Findings Mean?
These findings suggest that in people with chronic kidney disease, renal transplantation is the best treatment option to improve quality of life. For those on dialysis, home-based automated peritoneal dialysis may improve quality of life more than the other forms of dialysis: this finding is important, as this type of dialysis is not as widely used as other forms and is also cheaper than hemodialysis. Furthermore, these findings suggest that patients who choose conservative care have significantly lower quality of life than patients treated with dialysis, a finding that warrants further investigation. Overall, in addition to helping to inform economic evaluations of treatment options, the information from this analysis can help guide clinicians caring for patients with chronic kidney disease in their discussions about possible treatment options.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001307.
Information about chronic kidney disease is available from the National Kidney Foundation and MedlinePlus
Wikipedia gives information on utilities (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
doi:10.1371/journal.pmed.1001307
PMCID: PMC3439392  PMID: 22984353
24.  Conflict of Interest Reporting by Authors Involved in Promotion of Off-Label Drug Use: An Analysis of Journal Disclosures 
PLoS Medicine  2012;9(8):e1001280.
Aaron Kesselheim and colleagues investigate conflict of interest disclosures in articles authored by physicians and scientists identified in whistleblower complaints alleging illegal off-label marketing by pharmaceutical companies.
Background
Litigation documents reveal that pharmaceutical companies have paid physicians to promote off-label uses of their products through a number of different avenues. It is unknown whether physicians and scientists who have such conflicts of interest adequately disclose such relationships in the scientific publications they author.
Methods and Findings
We collected whistleblower complaints alleging illegal off-label marketing from the US Department of Justice and other publicly available sources (date range: 1996–2010). We identified physicians and scientists described in the complaints as having financial relationships with defendant manufacturers, then searched Medline for articles they authored in the subsequent three years. We assessed disclosures made in articles related to the off-label use in question, determined the frequency of adequate disclosure statements, and analyzed characteristics of the authors (specialty, author position) and articles (type, connection to off-label use, journal impact factor, citation count/year). We identified 39 conflicted individuals in whistleblower complaints. They published 404 articles related to the drugs at issue in the whistleblower complaints, only 62 (15%) of which contained an adequate disclosure statement. Most articles had no disclosure (43%) or did not mention the pharmaceutical company (40%). Adequate disclosure rates varied significantly by article type: commentaries were less likely to have adequate disclosure than articles reporting original studies or trials (adjusted odds ratio [OR] = 0.10, 95% CI = 0.02–0.67, p = 0.02). Over half of the authors (22/39, 56%) made no adequate disclosures in their articles, although four of the six authors with ≥25 articles disclosed adequately in about one-third of them (range: 10/36–8/25 [28%–32%]).
Conclusions
Only about one in seven journal articles authored by individuals identified in whistleblower complaints as involved in off-label marketing activities contained an adequate conflict of interest disclosure. This is a much lower rate of adequate disclosure than has been identified in previous studies. The non-disclosure patterns suggest shortcomings both in author candor and in the rigor of journal disclosure practices.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Off-label use of pharmaceuticals is the practice of prescribing a drug for a condition or age group, or in a dose or form of administration, that has not been specifically approved by a formal regulatory body, such as the US Food and Drug Administration (FDA). Off-label prescribing is common all over the world. In the US, although it is legal for doctors to prescribe drugs off-label and discuss such clinical uses with colleagues, it is illegal for pharmaceutical companies to directly promote off-label uses of any of their products. Revenue from off-label uses can be lucrative for drug companies and even surpass the income from approved uses. Therefore, many pharmaceutical companies have paid physicians and scientists to promote off-label use of their products as part of their marketing programs.
Why Was This Study Done?
Recently, a number of pharmaceutical companies have been investigated in the US for illegal marketing programs that promote off-label uses of their products and have had to pay billions of dollars in court settlements. As part of these investigations, doctors and scientists were identified who were paid by the companies to deliver lectures and conduct other activities to support off-label uses. When the same physicians and scientists also wrote articles about these drugs for medical journals, their financial relationships would have constituted clear conflicts of interest that should have been declared alongside the journal articles. So, in this study, the researchers identified such authors, examined their publications, and assessed the adequacy of conflict of interest disclosures made in these publications.
What Did the Researchers Do and Find?
The researchers used information disclosed by the US Department of Justice, media reports, and data from a non-governmental organization that tracks federal fraud actions to find whistleblower complaints alleging illegal off-label promotion. Then they identified the doctors and scientists described in the complaints as having financial relationships with the defendant drug companies and searched Medline for articles authored by these experts in the subsequent three years. Using a four-step approach, the researchers assessed the adequacy of conflict of interest disclosures made in articles relating to the off-label uses in question.
Using these methods, the researchers examined 26 complaints alleging illegal off-label promotion and identified 91 doctors and scientists recorded as being involved in this practice. The researchers found that 39 (43%) of these 91 experts had authored 404 related publications. In the complaints, these 39 experts were alleged to have engaged in 42 relationships with the relevant drug companies: the most common activity was acting as a paid speaker (n = 26, 62%); others included writing reviews or articles on behalf of the company (n = 7), acting as consultants or advisory board members (n = 3), and receiving gifts/honoraria (n = 3), research support funds (n = 2), or educational support funds (n = 1). However, only 62 (15%) of the 404 related articles had adequate disclosures: 43% (148) had no disclosure at all, 4% had statements denying any conflicts of interest, 40% had disclosures that did not mention the drug manufacturer, and 13% had disclosures that mentioned the manufacturer but inadequately conveyed the nature of the relationship between author and manufacturer reported in the complaint. The researchers also found that adequate disclosure rates varied significantly by article type, with commentaries significantly less likely to have adequate disclosure than articles reporting studies or trials.
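An adjusted odds ratio like the 0.10 reported for commentaries is typically estimated with a logistic regression over article-level covariates. The sketch below illustrates that general technique on synthetic data; the variable names, data, and effect sizes are invented for illustration and are not the authors' dataset or exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative (synthetic) adjusted odds ratio from logistic regression.
# All data and variable names are invented; the paper's actual model and
# covariates may differ.

rng = np.random.default_rng(0)
n = 404
df = pd.DataFrame({
    "commentary": rng.integers(0, 2, n),             # 1 = commentary article
    "impact_factor": rng.normal(5, 2, n).clip(0.1),  # journal impact factor
})
# Make commentaries less likely to carry an adequate disclosure.
logit_p = -1.0 - 1.5 * df["commentary"] + 0.05 * df["impact_factor"]
df["adequate"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("adequate ~ commentary + impact_factor", data=df).fit(disp=0)
print(np.exp(fit.params))      # adjusted odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals
```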
What Do These Findings Mean?
These findings show substantial deficiencies in the adequacy of conflict of interest disclosures made by authors who had been paid by pharmaceutical manufacturers as part of off-label marketing activities: adequate disclosure appeared in only about one in seven of their published articles. This low figure is troubling and suggests that approaches to controlling the effects of conflicts of interest that rely on author candor are inadequate; journal practices, too, are not robust enough and need to be improved. In the meantime, readers have no option but to interpret conflict of interest disclosures, particularly in relation to off-label uses, with caution.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001280.
The US FDA provides a guide on the use of off-label drugs
The US Agency for Healthcare Research and Quality offers a patient guide to off-label drugs
ProPublica offers a web-based tool to identify physicians who have financial relationships with certain pharmaceutical companies
Wikipedia has a good description of off-label drug use (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The Institute for Medicine as a Profession maintains a list of policies regulating physicians' financial relationships that are in place at US-based academic medical centers
doi:10.1371/journal.pmed.1001280
PMCID: PMC3413710  PMID: 22899894
25.  RE-AIM evaluation of the Veterans Health Administration’s MOVE! Weight Management Program 
ABSTRACT
Over one-third of patients treated in the Veterans Health Administration (VHA) are obese. VHA introduced the MOVE! Weight Management Program for Veterans in 2006 to provide comprehensive weight management services. An evolving, periodic evaluation using the RE-AIM framework (reach, effectiveness, adoption, implementation, and maintenance) has been conducted to gauge success and opportunities for improvement. Key metrics were identified in each RE-AIM dimension. Data were compiled over fiscal years (FY) 2006 through 2010 from a variety of sources, including VHA administrative and clinical databases, electronic medical record reviews, and an annual, structured VHA facility self-report.
REACH: Screening for obesity and offering weight management treatment to eligible patients increased from 66% to 95% over the past 3 years. MOVE! is currently provided at every VHA hospital facility and at over one-half of VHA community-based outpatient clinics. The percentage of eligible patients who participate in at least one weight management visit has doubled since implementation began but has stabilized at 10% to 12%.
EFFECTIVENESS: About 18.6% of the 31,854 patients with available weight data who participated in at least two treatment visits between Jul 1, 2008 and Sep 30, 2009 had at least a 5% body weight loss by 6 months, as did almost one-third of those who participated in more intense and sustained treatment. By contrast, only 12.5% of a comparison group of patients not treated with MOVE!, matched on age, gender, body mass index (BMI) class, and comorbidity status, had at least a 5% body weight loss.
ADOPTION: The median full-time staff equivalent providing weight management services at each facility has increased over time and was 1.76 in FY 2010.
IMPLEMENTATION: Staff from multiple disciplines typically provide MOVE!-related care, although not all disciplines are involved at every facility. Group-based treatment has become increasingly utilized; in FY 2010 it represented 72% of all MOVE!-related visits. Intensity of treatment has increased from an average of 3.6 visits per patient per year in FY 2007 to 4.6 in FY 2010, but more than half of patients have two visits or fewer. Almost all facilities now report the consistent use of key evidence-based behavioral strategies with patients.
MAINTENANCE: While patient participation in MOVE! continues to grow each year, facility self-reported staffing and space/equipment challenges are potential barriers to long-term program maintenance.
Evidence-based weight management treatment can be delivered at VHA medical centers and community-based outpatient clinics, but reach remains limited after several years of implementation. Intense and sustained treatment with MOVE! results in a modest positive impact on short-term weight loss outcomes, but a relatively small proportion of patients engage in this level of care. Increasing reach, improving effectiveness of care, and keeping patients engaged in treatment are areas for future policy, practice, and research.
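To make the effectiveness metric concrete, here is a minimal sketch of how the share of patients achieving at least a 5% body weight loss by six months might be computed. The weights and field names are invented for illustration; the evaluation's actual data came from VHA administrative and clinical databases.

```python
# Sketch of the MOVE! effectiveness metric: the share of patients losing
# at least 5% of baseline body weight by 6 months. Weights (pounds) and
# field names are invented for illustration.

patients = [
    {"baseline_lb": 240, "six_month_lb": 226},  # -5.8%: responder
    {"baseline_lb": 210, "six_month_lb": 205},  # -2.4%: non-responder
    {"baseline_lb": 265, "six_month_lb": 250},  # -5.7%: responder
]

def pct_loss(p):
    return (p["baseline_lb"] - p["six_month_lb"]) / p["baseline_lb"] * 100

responders = sum(pct_loss(p) >= 5.0 for p in patients)
print(f"{responders}/{len(patients)} patients "
      f"({responders / len(patients):.0%}) lost >= 5% of body weight")
```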
Electronic supplementary material
The online version of this article (doi:10.1007/s13142-011-0077-4) contains supplementary material, which is available to authorized users.
doi:10.1007/s13142-011-0077-4
PMCID: PMC3717682  PMID: 24073079
Obesity; Overweight; Weight management; Weight loss; Weight reduction; Body weight changes; Veterans; Veterans health; Program evaluation; Program effectiveness
