1.  Evidence for the Selective Reporting of Analyses and Discrepancies in Clinical Trials: A Systematic Review of Cohort Studies of Clinical Trials 
PLoS Medicine  2014;11(6):e1001666.
In a systematic review of cohort studies, Kerry Dwan and colleagues examine the evidence for selective reporting and discrepancies in analyses between journal publications and other documents for clinical trials.
Please see later in the article for the Editors' Summary
Background
Most publications about selective reporting in clinical trials have focussed on outcomes. However, selective reporting of analyses for a given outcome may also affect the validity of findings. If analyses are selected on the basis of the results, reporting bias may occur. The aims of this study were to review and summarise the evidence from empirical cohort studies that assessed discrepant or selective reporting of analyses in randomised controlled trials (RCTs).
Methods and Findings
A systematic review was conducted and included cohort studies that assessed any aspect of the reporting of analyses of RCTs by comparing different trial documents, e.g., protocol compared to trial report, or different sections within a trial publication. The Cochrane Methodology Register, Medline (Ovid), PsycInfo (Ovid), and PubMed were searched on 5 February 2014. Two authors independently selected studies, performed data extraction, and assessed the methodological quality of the eligible studies. Twenty-two studies (containing 3,140 RCTs) published between 2000 and 2013 were included; all of them reported on discrepancies between information given in different sources. Discrepancies were found in statistical analyses (eight studies), composite outcomes (one study), the handling of missing data (three studies), unadjusted versus adjusted analyses (three studies), handling of continuous data (three studies), and subgroup analyses (12 studies). Discrepancy rates varied, ranging from 7% (3/42) to 88% (7/8) in statistical analyses, 46% (36/79) to 82% (23/28) in adjusted versus unadjusted analyses, and 61% (11/18) to 100% (25/25) in subgroup analyses. This review is limited in that none of the included studies investigated the evidence for bias resulting from selective reporting of analyses. It was not possible to combine studies to provide overall summary estimates, and so the results of studies are discussed narratively.
Conclusions
Discrepancies in analyses between publications and other study documentation were common, but reasons for these discrepancies were not discussed in the trial reports. To ensure transparency, protocols and statistical analysis plans need to be published, and investigators should adhere to these or explain discrepancies.
Editors' Summary
Background
In the past, clinicians relied on their own experience when choosing the best treatment for their patients. Nowadays, they turn to evidence-based medicine—the systematic review and appraisal of trials, studies that investigate the benefits and harms of medical treatments in patients. However, evidence-based medicine can guide clinicians only if all the results from clinical trials are published in an unbiased and timely manner. Unfortunately, the results of trials in which a new drug performs better than existing drugs are more likely to be published than those in which the new drug performs badly or has unwanted side effects (publication bias). Moreover, trial outcomes that support the use of a new treatment are more likely to be published than those that do not support its use (outcome reporting bias). Recent initiatives—such as making registration of clinical trials in a trial registry (for example, ClinicalTrials.gov) a prerequisite for publication in medical journals—aim to prevent these biases, which pose a threat to informed medical decision-making.
Why Was This Study Done?
Selective reporting of analyses of outcomes may also affect the validity of clinical trial findings. Sometimes, for example, a trial publication will include a per protocol analysis (which considers only the outcomes of patients who received their assigned treatment) rather than a pre-planned intention-to-treat analysis (which considers the outcomes of all the patients regardless of whether they received their assigned treatment). If the decision to publish the per protocol analysis is based on the results of this analysis being more favorable than those of the intention-to-treat analysis (which more closely resembles “real” life), then “analysis reporting bias” has occurred. In this systematic review, the researchers investigate the selective reporting of analyses and discrepancies in randomized controlled trials (RCTs) by reviewing published studies that assessed selective reporting of analyses in groups (cohorts) of RCTs and discrepancies in analyses of RCTs between different sources (for example, between the protocol in a trial registry and the journal publication) or different sections of a source. A systematic review uses predefined criteria to identify all the research on a given topic.
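To make the contrast concrete, here is a toy numerical illustration in Python (all counts hypothetical, chosen only to show the mechanism): suppose 20 poor-prognosis patients in the treatment arm stop taking the drug, and the per protocol analysis drops them.

    # Hypothetical trial: 100 patients per arm; an "event" is a bad outcome.
    treat_events_adherent, n_adherent = 16, 80   # adherent treated patients
    treat_events_dropped, n_dropped = 10, 20     # non-adherent treated patients
    control_events, n_control = 30, 100

    itt_risk = (treat_events_adherent + treat_events_dropped) / (n_adherent + n_dropped)
    pp_risk = treat_events_adherent / n_adherent          # drops non-adherent patients
    control_risk = control_events / n_control

    print(f"intention-to-treat risk ratio: {itt_risk / control_risk:.2f}")  # 0.87
    print(f"per protocol risk ratio:       {pp_risk / control_risk:.2f}")   # 0.67

Reporting only the per protocol figure would make the treatment look markedly more effective than the pre-planned intention-to-treat analysis suggests, which is exactly the selection that analysis reporting bias describes.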
What Did the Researchers Do and Find?
The researchers identified 22 cohort studies (containing 3,140 RCTs) that were eligible for inclusion in their systematic review. All of these studies reported on discrepancies between the information provided by the RCTs in different places, but none investigated the evidence for analysis reporting bias. Several of the cohort studies reported, for example, that there were discrepancies in the statistical analyses included in the different documents associated with the RCTs included in their analysis. Other types of discrepancies reported by the cohort studies included discrepancies in the reporting of composite outcomes (an outcome in which multiple end points are combined) and in the reporting of subgroup analyses (investigations of outcomes in subgroups of patients that should be predefined in the trial protocol to avoid bias). Discrepancy rates varied among the RCTs according to the types of analyses and cohort studies considered. Thus, whereas in one cohort study discrepancies were present in the statistical test used for the analysis of the primary outcome in only 7% of the included studies, they were present in the subgroup analyses of all the included studies.
What Do These Findings Mean?
These findings indicate that discrepancies in analyses between publications and other study documents such as protocols in trial registries are common. The reasons for these discrepancies in analyses were not discussed in trial reports but may be the result of reporting bias, errors, or legitimate departures from a pre-specified protocol. For example, a statistical analysis that is not specified in the trial protocol may sometimes appear in a publication because the journal requested its inclusion as a condition of publication. The researchers suggest that it may be impossible for systematic reviewers to distinguish between these possibilities simply by looking at the source documentation. Instead, they suggest, it may be necessary for reviewers to contact the trial authors. However, to make selective reporting of analyses more easily detectable, they suggest that protocols and analysis plans should be published and that investigators should be required to stick to these plans or explain any discrepancies when they publish their trial results. Together with other initiatives, this approach should help improve the quality of evidence-based medicine and, as a result, the treatment of patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001666.
Wikipedia has pages on evidence-based medicine, on systematic reviews, and on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials
The Cochrane Collaboration is a global independent network of health practitioners, researchers, patient advocates, and others that aims to promote evidence-informed health decision-making by producing high-quality, relevant, accessible systematic reviews and other synthesized research evidence; the Cochrane Handbook for Systematic Reviews of Interventions describes the preparation of systematic reviews in detail
PLOS Medicine recently launched a Reporting Guidelines Collection, an open-access collection of reporting guidelines, commentary, and related research on guidelines from across PLOS journals that aims to help advance the efficiency, effectiveness, and equitability of the dissemination of biomedical information
doi:10.1371/journal.pmed.1001666
PMCID: PMC4068996  PMID: 24959719
2.  Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals 
PLoS Medicine  2013;10(12):e1001566.
Agnes Dechartres and colleagues searched ClinicalTrials.gov for completed drug RCTs with results reported and then searched for corresponding studies in PubMed to evaluate timeliness and completeness of reporting.
Please see later in the article for the Editors' Summary
Background
The US Food and Drug Administration Amendments Act requires results from clinical trials of Food and Drug Administration–approved drugs to be posted at ClinicalTrials.gov within 1 y after trial completion. We compared the timing and completeness of results of drug trials posted at ClinicalTrials.gov and published in journals.
Methods and Findings
We searched ClinicalTrials.gov on March 27, 2012, for randomized controlled trials of drugs with posted results. For a random sample of these trials, we searched PubMed for corresponding publications. Data were extracted independently from ClinicalTrials.gov and from the published articles for trials with results both posted and published. We assessed the time to first public posting or publishing of results and compared the completeness of results posted at ClinicalTrials.gov versus published in journal articles. Completeness was defined as the reporting of all key elements, according to three experts, for the flow of participants, efficacy results, adverse events, and serious adverse events (e.g., for adverse events, reporting of the number of adverse events per arm, without restriction to statistically significant differences between arms for all randomized patients or for those who received at least one treatment dose).
Of the 600 trials with results posted at ClinicalTrials.gov that we randomly sampled, 50% (n = 297) had no corresponding published article. For trials with both posted and published results (n = 202), the median time between primary completion date and first results publicly posted was 19 mo (first quartile = 14, third quartile = 30 mo), and the median time between primary completion date and journal publication was 21 mo (first quartile = 14, third quartile = 28 mo). Reporting was significantly more complete at ClinicalTrials.gov than in the published article for the flow of participants (64% versus 48% of trials, p<0.001), efficacy results (79% versus 69%, p = 0.02), adverse events (73% versus 45%, p<0.001), and serious adverse events (99% versus 63%, p<0.001).
The main study limitation was that we considered only the publication describing the results for the primary outcomes.
Conclusions
Our results highlight the need to search ClinicalTrials.gov for both unpublished and published trials. Trial results, especially serious adverse events, are more completely reported at ClinicalTrials.gov than in the published article.
Editors' Summary
Background
When patients consult a doctor, they expect to be recommended what their doctor believes is the most effective treatment with the fewest adverse effects. To determine which treatment to recommend, clinicians rely on sources that include research studies. Among studies, the best evidence is generally agreed to come from systematic reviews and randomized controlled clinical trials (RCTs), studies that test the efficacy and safety of medical interventions by comparing clinical outcomes in groups of patients randomly chosen to receive different interventions. Decision-making based on the best available evidence is called evidence-based medicine. However, evidence-based medicine can only guide clinicians if trial results are published in a timely and complete manner. Unfortunately, underreporting of trials is common. For example, an RCT in which a new drug performs better than existing drugs is more likely to be published than one in which the new drug performs badly or has unwanted adverse effects (publication bias). There can also be a delay in publishing the results of negative trials (time-lag bias) or a failure to publish complete results for all the prespecified outcomes of a trial (reporting bias). All three types of bias threaten informed medical decision-making and the health of patients.
Why Was This Study Done?
One initiative that aims to prevent these biases was included in the 2007 US Food and Drug Administration Amendments Act (FDAAA). The Food and Drug Administration (FDA) is responsible for approving drugs and devices that are marketed in the US. The FDAAA requires that results from clinical trials of FDA-approved drugs and devices conducted in the United States be made publicly available at ClinicalTrials.gov within one year of trial completion. ClinicalTrials.gov—a web-based registry that includes US and international clinical trials—was established in 2000 in response to the 1997 FDA Modernization Act, which required mandatory registration of trial titles and designs and of the conditions and interventions under study. The FDAAA expanded these mandatory requirements by requiring researchers studying FDA-approved drugs and devices to report additional information such as the baseline characteristics of the participants in each arm of the trial and the results of primary and secondary outcome measures (the effects of the intervention on predefined clinical measurements) and their statistical significance (an indication of whether differences in outcomes might have happened by chance). Researchers of other trials registered in ClinicalTrials.gov are welcome to post trial results as well. Here, the researchers compare the timing and completeness (i.e., whether all relevant information was fully reported) of results of drug trials posted at ClinicalTrials.gov with those published in medical journals.
What Did the Researchers Do and Find?
The researchers searched ClinicalTrials.gov for reports of completed phase III and IV (late-stage) RCTs of drugs with posted results. For a random sample of 600 eligible trials, they searched PubMed (a database of biomedical publications) for corresponding publications. Only 50% of trials with results posted at ClinicalTrials.gov had a matching published article. For 202 trials with both posted and published results, the researchers compared the timing and completeness of the results posted at ClinicalTrials.gov and of results reported in the corresponding journal publication. The median time between the study completion date and the first results being publicly posted at ClinicalTrials.gov was 19 months, whereas the time between completion and publication in a journal was 21 months. The flow of participants through trials was completely reported in 64% of the ClinicalTrials.gov postings but in only 48% of the corresponding publications. Results for the primary outcome measure were completely reported in 79% and 69% of the ClinicalTrials.gov postings and corresponding publications, respectively. Finally, adverse events were completely reported in 73% of the ClinicalTrials.gov postings but in only 45% of the corresponding publications, and serious adverse events were reported in 99% and 63% of the ClinicalTrials.gov postings and corresponding publications, respectively.
What Do These Findings Mean?
These findings suggest that the reporting of trial results is significantly more complete at ClinicalTrials.gov than in published journal articles reporting the main trial results. Certain aspects of this study may affect the accuracy of this conclusion. For example, the researchers compared the results posted at ClinicalTrials.gov only with the results in the publication that described the primary outcome of each trial, even though some trials had multiple publications. Importantly, these findings suggest that, to enable patients and physicians to make informed treatment decisions, experts undertaking assessments of drugs should consider seeking efficacy and safety data posted at ClinicalTrials.gov, both for trials whose results are not published yet and for trials whose results are published. Moreover, they suggest that the use of templates to guide standardized reporting of trial results in journals and broader mandatory posting of results may help to improve the reporting and transparency of clinical trials and, consequently, the evidence available to inform treatment of patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001566.
Wikipedia has pages on evidence-based medicine and on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The US Food and Drug Administration provides information about drug approval in the US for consumers and health-care professionals, plus detailed information on the 2007 Food and Drug Administration Amendments Act
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials, and a fact sheet detailing the requirements of the 2007 Food and Drug Administration Amendments Act
PLOS Medicine recently launched a Reporting Guidelines Collection, an open access collection of reporting guidelines, commentary, and related research on guidelines from across PLOS journals that aims to help advance the efficiency, effectiveness, and equitability of the dissemination of biomedical information; a 2008 PLOS Medicine editorial discusses the 2007 Food and Drug Administration Amendments Act
doi:10.1371/journal.pmed.1001566
PMCID: PMC3849189  PMID: 24311990
3.  Publication of Clinical Trials Supporting Successful New Drug Applications: A Literature Analysis 
PLoS Medicine  2008;5(9):e191.
Background
The United States (US) Food and Drug Administration (FDA) approves new drugs based on sponsor-submitted clinical trials. The publication status of these trials in the medical literature and factors associated with publication have not been evaluated. We sought to determine the proportion of trials submitted to the FDA in support of newly approved drugs that are published in biomedical journals that a typical clinician, consumer, or policy maker living in the US would reasonably search.
Methods and Findings
We conducted a cohort study of trials supporting new drugs approved between 1998 and 2000, as described in FDA medical and statistical review documents and the FDA approved drug label. We determined publication status and time from approval to full publication in the medical literature at 2 and 5 y by searching PubMed and other databases through 01 August 2006. We then evaluated trial characteristics associated with publication. We identified 909 trials supporting 90 approved drugs in the FDA reviews, of which 43% (394/909) were published. Among the subset of trials described in the FDA-approved drug label and classified as “pivotal trials” for our analysis, 76% (257/340) were published. In multivariable logistic regression for all trials 5 y postapproval, likelihood of publication correlated with statistically significant results (odds ratio [OR] 3.03, 95% confidence interval [CI] 1.78–5.17); larger sample sizes (OR 1.33 per 2-fold increase in sample size, 95% CI 1.17–1.52); and pivotal status (OR 5.31, 95% CI 3.30–8.55). In multivariable logistic regression for only the pivotal trials 5 y postapproval, likelihood of publication correlated with statistically significant results (OR 2.96, 95% CI 1.24–7.06) and larger sample sizes (OR 1.47 per 2-fold increase in sample size, 95% CI 1.15–1.88). Statistically significant results and larger sample sizes were also predictive of publication at 2 y postapproval and in multivariable Cox proportional models for all trials and the subset of pivotal trials.
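As an aside on the model form: “OR 1.33 per 2-fold increase in sample size” corresponds to entering log2(sample size) as a covariate in the logistic regression. The following Python sketch fits that form to simulated data; the data-generating odds ratios are taken from the abstract, but the dataset itself is hypothetical.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n_trials = 909
    significant = rng.random(n_trials) < 0.5      # statistically significant results?
    pivotal = rng.random(n_trials) < 0.37         # pivotal trial?
    size = np.exp(rng.normal(5, 1, n_trials))     # sample size, roughly log-normal

    # Simulate publication using the reported odds ratios as the true effects.
    logit = (-2.5 + np.log(3.03) * significant
                  + np.log(1.33) * np.log2(size)  # "per 2-fold increase in sample size"
                  + np.log(5.31) * pivotal)
    published = rng.random(n_trials) < 1 / (1 + np.exp(-logit))

    X = sm.add_constant(np.column_stack([significant, np.log2(size), pivotal]).astype(float))
    fit = sm.Logit(published.astype(float), X).fit(disp=0)
    print(np.exp(fit.params[1:]))                 # ORs near 3.03, 1.33, 5.31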
Conclusions
Over half of all supporting trials for FDA-approved drugs remained unpublished ≥ 5 y after approval. Pivotal trials and trials with statistically significant results and larger sample sizes are more likely to be published. Selective reporting of trial results exists for commonly marketed drugs. Our data provide a baseline for evaluating publication bias as the new FDA Amendments Act comes into force mandating basic results reporting of clinical trials.
Ida Sim and colleagues investigate the publication status and publication bias of trials submitted to the US Food and Drug Administration (FDA) for a wide variety of approved drugs.
Editors' Summary
Background.
Before a new drug becomes available for the treatment of a specific human disease, its benefits and harms are carefully studied, first in the laboratory and in animals, and then in several types of clinical trials. In the most important of these trials—so-called “pivotal” clinical trials—the efficacy and safety of the new drug and of a standard treatment are compared by giving groups of patients the different treatments and measuring several predefined “outcomes.” These outcomes indicate whether the new drug is more effective than the standard treatment and whether it has any other effects on the patients' health and daily life. All this information is then submitted by the sponsor of the new drug (usually a pharmaceutical company) to the government body responsible for drug approval—in the US, this is the Food and Drug Administration (FDA).
Why Was This Study Done?
After a drug receives FDA approval, information about the clinical trials supporting the FDA's decision is included in the FDA “Summary Basis of Approval” and/or on the drug label. In addition, some clinical trials are described in medical journals. Ideally, all the clinical information that leads to a drug's approval should be publicly available to help clinicians make informed decisions about how to treat their patients. A full-length publication in a medical journal is the primary way that clinical trial results are communicated to the scientific community and the public. Unfortunately, drug sponsors sometimes publish the results only of trials where their drug performed well; as a consequence, trials where the drug did no better than the standard treatment or where it had unwanted side effects remain unpublished. Publication bias like this provides an inaccurate picture of a drug's efficacy and safety relative to other therapies and may lead to excessive prescribing of newer, more expensive (but not necessarily more effective) treatments. In this study, the researchers investigate whether selective trial reporting is common by evaluating the publication status of trials submitted to the FDA for a wide variety of approved drugs. They also ask which factors affect a trial's chances of publication.
What Did the Researchers Do and Find?
The researchers identified 90 drugs approved by the FDA between 1998 and 2000 by searching the FDA's Center for Drug Evaluation and Research Web site. From the Summary Basis of Approval for each drug, they identified 909 clinical trials undertaken to support these approvals. They then searched the published medical literature up to mid-2006 to determine if and when the results of each trial were published. Although 76% of the pivotal trials had appeared in medical journals, usually within 3 years of FDA approval, only 43% of all of the submitted trials had been published. Among all the trials, those with statistically significant results were nearly twice as likely to have been published as those without statistically significant results, and, 5 years postapproval, pivotal trials were three times more likely to have been published than nonpivotal trials. In addition, a larger sample size increased the likelihood of publication. Having statistically significant results and larger sample sizes also increased the likelihood of publication of the pivotal trials.
What Do These Findings Mean?
Although the search methods used in this study may have missed some publications, these findings suggest that more than half the clinical trials undertaken to support drug approval remain unpublished 5 years or more after FDA approval. They also reveal selective reporting of results. For example, they show that a pivotal trial in which the new drug does no better than an old drug is less likely to be published than one where the new drug is more effective, a publication bias that could establish an inappropriately favorable record for the new drug in the medical literature. Importantly, these findings provide a baseline for monitoring the effects of the FDA Amendments Act 2007, which was introduced to improve the accuracy and completeness of drug trial reporting. Under this Act, all trials supporting FDA-approved drugs must be registered when they start, and the summary results of all the outcomes declared at trial registration as well as specific details about the trial protocol must be publicly posted within a year of drug approval on the US National Institutes of Health clinical trials site.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050191.
PLoS Medicine recently published an editorial discussing the FDA Amendment Act and what it means for medical journals: The PLoS Medicine Editors (2008) Next Stop, Don't Block the Doors: Opening Up Access to Clinical Trials Results. PLoS Med 5(7): e160
The US Food and Drug Administration provides information about drug approval in the US for consumers and for health care professionals; detailed information about the process by which drugs are approved is on the Web site of the FDA Center for Drug Evaluation and Research (in English and Spanish)
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, background information about clinical trials, and a fact sheet detailing the requirements of the FDA Amendments Act 2007 for trial registration
The World Health Organization's International Clinical Trials Registry Platform is working toward international norms and standards for reporting the findings of clinical trials
doi:10.1371/journal.pmed.0050191
PMCID: PMC2553819  PMID: 18816163
4.  Conflicts of Interest at Medical Journals: The Influence of Industry-Supported Randomised Trials on Journal Impact Factors and Revenue – Cohort Study 
PLoS Medicine  2010;7(10):e1000354.
Andreas Lundh and colleagues investigated the effect of publication of large industry-supported trials on citations and journal income, through reprint sales, in six general medical journals.
Background
Transparency in reporting of conflict of interest is an increasingly important aspect of publication in medical journals. Publication of large industry-supported trials may generate many citations and journal income through reprint sales and thereby be a source of conflicts of interest for journals. We investigated industry-supported trials' influence on journal impact factors and revenue.
Methods and Findings
We sampled six major medical journals (Annals of Internal Medicine, Archives of Internal Medicine, BMJ, JAMA, The Lancet, and New England Journal of Medicine [NEJM]). For each journal, we identified randomised trials published in 1996–1997 and 2005–2006 using PubMed, and categorized the type of financial support. Using Web of Science, we investigated citations of industry-supported trials and the influence on journal impact factors over a ten-year period. We contacted journal editors and retrieved tax information on income from industry sources. The proportion of trials with sole industry support varied between journals, from 7% in BMJ to 32% in NEJM in 2005–2006. Industry-supported trials were more frequently cited than trials with other types of support, and omitting them from the impact factor calculation decreased journal impact factors. The decrease varied considerably between journals, from 1% for BMJ to 15% for NEJM in 2007. For the two journals disclosing data, income from the sales of reprints accounted for 3% and 41% of the total income for BMJ and The Lancet, respectively, in 2005–2006.
Conclusions
Publication of industry-supported trials was associated with an increase in journal impact factors. Sales of reprints may provide a substantial income. We suggest that journals disclose financial information in the same way that they require such disclosures from their authors, so that readers can assess the potential effect of different types of papers on journals' revenue and impact.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Medical journals publish many different types of papers that inform doctors about the latest research advances and the latest treatments for their patients. They publish articles that describe laboratory-based research into the causes of diseases and the identification of potential new drugs. They publish the results of early clinical trials in which a few patients are given a potential new drug to check its safety. Finally and most importantly, they publish the results of randomized controlled trials (RCTs). RCTs are studies in which large numbers of patients are randomly allocated to different treatments, without the patient or the clinician knowing the allocation, and the efficacy of the various treatments is then compared. RCTs are the best way of determining whether a new drug is effective and have to be completed before a drug can be marketed. Because RCTs are very expensive, they are often supported by drug companies. That is, drug companies provide grants or drugs for the trial or assist with data analysis and/or article preparation.
Why Was This Study Done?
Whenever a medical journal publishes an article, the article's authors have to declare any conflicts of interest such as financial gain from the paper's publication. Conflict of interest statements help readers assess papers—an author who owns the patent for a drug, for example, might put an unduly positive spin on his/her results. The experts who review papers for journals before publication provide similar conflict of interest statements. But what about the journal editors who ultimately decide which papers get published? The International Committee of Medical Journal Editors (ICMJE), which produces medical publishing guidelines, states that: “Editors who make final decisions about manuscripts must have no personal, professional, or financial involvement in any of the issues that they might judge.” However, the publication of industry-supported RCTs might create “indirect” conflicts of interest for journals by boosting the journal's impact factor (a measure of a journal's importance based on how often its articles are cited) and its income through the sale of reprints to drug companies. In this study, the researchers investigate whether the publication of industry-supported RCTs influences the impact factors and finances of six major medical journals.
What Did the Researchers Do and Find?
The researchers determined which RCTs published in the New England Journal of Medicine (NEJM), the British Medical Journal (BMJ), The Lancet, and three other major medical journals in 1996–1997 and 2005–2006 were supported wholly, partly, or not at all by industry. They then used the online academic citation index Web of Science to calculate an approximate impact factor for each journal for 1998 and 2007 and calculated the effect of the published RCTs on the impact factor. The proportion of RCTs with sole industry support varied between journals. Thus, 32% of the RCTs published in the NEJM during both two-year periods had industry support whereas only 7% of the RCTs published in the BMJ in 2005–2006 had industry support. Industry-supported trials were more frequently cited than RCTs with other types of support and omitting industry-supported RCTs from impact factor calculations decreased all the approximate journal impact factors. For example, omitting all RCTs with industry or mixed support decreased the 2007 BMJ and NEJM impact factors by 1% and 15%, respectively. Finally, the researchers asked each journal's editor about their journal's income from industry sources. For the BMJ and The Lancet, the only journals that provided this information, income from reprint sales was 3% and 41%, respectively, of total income in 2005–2006.
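To see how omitting a set of highly cited papers moves the impact factor, here is a minimal sketch of the arithmetic with hypothetical counts (the approximate 2007 impact factor is citations received in 2007 to items published in 2005–2006, divided by the number of citable items from those years):

    def impact_factor(citations, citable_items):
        # Citations in year Y to items from years Y-1 and Y-2, divided by
        # the number of citable items published in those two years.
        return citations / citable_items

    # Hypothetical journal: 500 citable items in 2005-2006, of which 60 are
    # industry-supported RCTs that attract disproportionately many citations.
    total_citations, industry_rct_citations = 25_000, 4_000
    n_items, n_industry = 500, 60

    if_all = impact_factor(total_citations, n_items)
    if_without = impact_factor(total_citations - industry_rct_citations,
                               n_items - n_industry)
    print(f"IF with industry RCTs:    {if_all:.1f}")            # 50.0
    print(f"IF without industry RCTs: {if_without:.1f}")        # 47.7
    print(f"decrease: {100 * (1 - if_without / if_all):.0f}%")  # ~5%

Because the omitted trials are cited more per item than the journal average, removing them from both the numerator and the denominator lowers the impact factor.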
What Do These Findings Mean?
These findings show that the publication of industry-supported RCTs was associated with an increase in the approximate impact factors of these six major medical journals. Because these journals publish numerous RCTs, this result may not be generalizable to other journals. These findings also indicate that income from reprint sales can be a substantial proportion of a journal's total income. Importantly, these findings do not imply that the decisions of editors are affected by the possibility that the publication of an industry-supported trial might improve their journal's impact factor or income. Nevertheless, the researchers suggest, journals should live up to the same principles related to conflicts of interest as those that they require from their authors and should routinely disclose information on the source and amount of income that they receive.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000354.
This study is further discussed in a PLoS Medicine Perspective by Harvey Marcovitch
The International Committee of Medical Journal Editors provides information about the publication of medical research, including conflicts of interest
The World Association of Medical Editors also provides information on conflicts of interest in medical journals
Information about impact factors is provided by Thomson Reuters, a provider of intelligent information for businesses and professionals; Thomson Reuters also runs Web of Science
doi:10.1371/journal.pmed.1000354
PMCID: PMC2964336  PMID: 21048986
5.  Funding free and universal access to Journal of Neuroinflammation 
Journal of Neuroinflammation is an Open Access, online journal published by BioMed Central. Open Access publishing provides instant and universal availability of published work to any potential reader, worldwide, completely free of subscriptions, passwords, and charges. Further, authors retain copyright for their work, facilitating its dissemination. Open Access publishing is made possible by article-processing charges assessed "on the front end" to authors, their institutions, or their funding agencies. Beginning November 1, 2004, the Journal of Neuroinflammation will introduce article-processing charges of around US$525 for accepted articles. This charge will be waived for authors from institutions that are BioMed Central members, and in additional cases of genuine financial hardship. These article-processing charges pay for an electronic submission process that facilitates efficient and thorough peer review, for the publication costs involved in making the article freely and universally accessible in various formats online, and for the processes required for the article's inclusion in PubMed and its archiving in PubMed Central, e-Depot, Potsdam, and INIST. There is no remuneration of any kind provided to the Editors-in-Chief, to any members of the Editorial Board, or to peer reviewers, all of whose work is entirely voluntary. Our article-processing charge is less than the charges frequently levied by traditional journals: the Journal of Neuroinflammation does not levy any additional page or color charges on top of this fee, and there are no reprint costs, as publication-quality PDF files are provided, free, for distribution in lieu of reprints. Our article-processing charge will enable full, immediate, and continued Open Access for all work published in Journal of Neuroinflammation. The benefits from such Open Access will accrue to readers, through unrestricted access; to authors, through the widest possible dissemination of their work; and to science and society in general, through facilitation of information availability and scientific advancement.
doi:10.1186/1742-2094-1-19
PMCID: PMC528856  PMID: 15485579
6.  Status of open access in the biomedical field in 2005
Objectives:
This study was designed to document the state of open access (OA) in the biomedical field in 2005.
Methods:
PubMed was used to collect bibliographic data on target articles published in 2005. PubMed, Google Scholar, Google, and OAIster were then used to establish the availability of free full text online for these publications. Articles were analyzed by type of OA, country, type of article, impact factor, publisher, and publishing model to provide insight into the current state of OA.
Results:
Twenty-seven percent of all the articles were accessible as OA articles. More than 70% of the OA articles were provided through journal websites. Mid-rank commercial publishers often provided OA articles in OA journals, while society publishers tended to provide OA articles in the context of a traditional subscription model. The rate of OA articles available from the websites of individual authors or in institutional repositories was quite low.
Discussion/Conclusions:
In 2005, OA in the biomedical field was achieved under an umbrella of existing scholarly communication systems. Typically, OA articles were published as part of subscription journals published by scholarly societies. OA journals published by BioMed Central contributed to a small portion of all OA articles.
doi:10.3163/1536-5050.97.1.002
PMCID: PMC2605039  PMID: 19159007
7.  Retrieval of Publications Addressing Shared Decision Making: An Evaluation of Full-Text Searches on Medical Journal Websites 
JMIR Research Protocols  2015;4(2):e38.
Background
Full-text searches of articles increase recall, defined as the proportion of relevant publications that are retrieved. However, this method is rarely used in medical research due to resource constraints. For the purpose of a systematic review of publications addressing shared decision making, a full-text search method was required to retrieve publications in which shared decision making does not appear in the title or abstract.
Objective
The objective of our study was to assess the efficiency and reliability of full-text searches in major medical journals for identifying shared decision making publications.
Methods
A full-text search was performed on the websites of 15 high-impact journals in general internal medicine to look up publications of any type from 1996-2011 containing the phrase “shared decision making”. The search method was compared with a PubMed search of titles and abstracts only. The full-text search was further validated by requesting all publications from the same time period from the individual journal publishers and searching through the collected dataset.
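For orientation, the title/abstract arm of such a comparison can be reproduced against PubMed's public E-utilities API. The sketch below is an illustration rather than the authors' actual tooling; the field tags and date range are assumptions, and restricting to the 15 journals would require additional journal ([TA]) filters in the query term.

    import json
    import urllib.parse
    import urllib.request

    BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    term = '"shared decision making"[Title/Abstract] AND ("1996"[PDAT] : "2011"[PDAT])'
    url = BASE + "?" + urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "retmode": "json", "retmax": 0}
    )

    # retmax=0 asks only for the match count, not the record IDs.
    with urllib.request.urlopen(url) as resp:
        count = json.load(resp)["esearchresult"]["count"]
    print("PubMed title/abstract matches:", count)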
Results
The full-text search for “shared decision making” on journal websites identified 1286 publications in 15 journals, compared to 119 through the PubMed search. The search within the publisher-provided publications of 6 journals identified 613 publications, compared to 646 with the full-text search on the respective journal websites. The concordance rate between the two full-text searches was 94.3%.
Conclusions
Full-text searching on medical journal websites is an efficient and reliable way to identify relevant articles in the field of shared decision making for review or other purposes. With the collaboration of publishers and journals toward open-access data, it may be used more widely in other fields of biomedical research in the future.
doi:10.2196/resprot.3615
PMCID: PMC4405619  PMID: 25854180
information storage and retrieval; systematic reviews; PubMed; text mining; full-text search; decision making; shared decision making
8.  For 481 biomedical open access journals, articles are not searchable in the Directory of Open Access Journals nor in conventional biomedical databases 
PeerJ  2015;3:e972.
Background. Open access (OA) journals allow access to research papers free of charge to the reader. Traditionally, biomedical researchers use databases like MEDLINE and EMBASE to discover new advances. However, biomedical OA journals might not fulfill such databases’ criteria, hindering dissemination. The Directory of Open Access Journals (DOAJ) is a database exclusively listing OA journals. The aim of this study was to investigate DOAJ’s coverage of biomedical OA journals compared with the conventional biomedical databases.
Methods. Information on all journals listed in four conventional biomedical databases (MEDLINE, PubMed Central, EMBASE and SCOPUS) and DOAJ were gathered. Journals were included if they were (1) actively publishing, (2) full OA, (3) prospectively indexed in one or more database, and (4) of biomedical subject. Impact factor and journal language were also collected. DOAJ was compared with conventional databases regarding the proportion of journals covered, along with their impact factor and publishing language. The proportion of journals with articles indexed by DOAJ was determined.
Results. In total, 3,236 biomedical OA journals were included in the study. Of the included journals, 86.7% were listed in DOAJ. Combined, the conventional biomedical databases listed 75.0% of the journals: 18.7% in MEDLINE, 36.5% in PubMed Central, 51.5% in SCOPUS, and 50.6% in EMBASE. Of the journals in DOAJ, 88.7% published in English and 20.6% had received an impact factor for 2012, compared with 93.5% and 26.0%, respectively, for journals in the conventional biomedical databases. A subset of 51.1% and 48.5% of the journals in DOAJ had articles indexed from 2012 and 2013, respectively. Of journals exclusively listed in DOAJ, one journal had received an impact factor for 2012, and 59.6% of the journals had no content from 2013 indexed in DOAJ.
Conclusions. DOAJ is the most complete registry of biomedical OA journals compared with five conventional biomedical databases. However, DOAJ only indexes articles for half of the biomedical journals listed, making it an incomplete source for biomedical research papers in general.
doi:10.7717/peerj.972
PMCID: PMC4451041  PMID: 26038727
DOAJ; Directory of Open Access Journals; Databases; Literature; Open access; Biomedicine
9.  Selection in Reported Epidemiological Risks: An Empirical Assessment 
PLoS Medicine  2007;4(3):e79.
Background
Epidemiological studies may be subject to selective reporting, but empirical evidence thereof is limited. We empirically evaluated the extent of selection of significant results and large effect sizes in a large sample of recent articles.
Methods and Findings
We evaluated 389 articles of epidemiological studies that reported, in their respective abstracts, at least one relative risk for a continuous risk factor in contrasts based on median, tertile, quartile, or quintile categorizations. We examined the proportion and correlates of reporting statistically significant and nonsignificant results in the abstract and whether the magnitude of the relative risks presented (coined to be consistently ≥1.00) differs depending on the type of contrast used for the risk factor. In 342 articles (87.9%), ≥1 statistically significant relative risk was reported in the abstract, while only 169 articles (43.4%) reported ≥1 statistically nonsignificant relative risk in the abstract. Reporting of statistically significant results was more common with structured abstracts, and was less common in US-based studies and in cancer outcomes. Among 50 randomly selected articles in which the full text was examined, a median of nine (interquartile range 5–16) statistically significant and six (interquartile range 3–16) statistically nonsignificant relative risks were presented (p = 0.25). Paradoxically, the smallest presented relative risks were based on the contrasts of extreme quintiles; on average, the relative risk magnitude was 1.41-, 1.42-, and 1.36-fold larger in contrasts of extreme quartiles, extreme tertiles, and above-versus-below median values, respectively (p < 0.001).
Conclusions
Published epidemiological investigations almost universally highlight significant associations between risk factors and outcomes. For continuous risk factors, investigators selectively present contrasts between more extreme groups, when relative risks are inherently lower.
An evaluation of published articles reporting epidemiological studies found that they almost universally highlight significant associations between risk factors and outcomes.
Editors' Summary
Background.
Medical and scientific researchers use statistical tests to try to work out whether their observations—for example, seeing a difference in some characteristic between two groups of people—might have occurred as a result of chance alone. Statistical tests cannot determine this for sure, rather they can only give a probability that the observations would have arisen by chance. When researchers have many different hypotheses, and carry out many statistical tests on the same set of data, they run the risk of concluding that there are real differences where in fact there are none. At the same time, it has long been known that scientific and medical researchers tend to pick out the findings on which to report in their papers. Findings that are more interesting, impressive, or statistically significant are more likely to be published. This is termed “publication bias” or “selective reporting bias.” Therefore, some people are concerned that the published scientific literature might contain many false-positive findings, i.e., findings that are not true but are simply the result of chance variation in the data. This would have a serious impact on the accuracy of the published scientific literature and would tend to overestimate the strength and direction of relationships being studied.
Why Was This Study Done?
Selective reporting bias has already been studied in detail in the area of randomized trials (studies where participants are randomly allocated to receive an intervention, e.g., a new drug, versus an alternative intervention or “comparator,” in order to understand the benefits or safety of the new intervention). These studies have shown that very many of the findings of trials are never published, and that statistically significant findings are more likely to be included in published papers than nonsignificant findings. However, much medical research is carried out that does not use randomized trial methods, either because that method is not useful to answer the question at hand or is unethical. Epidemiological research is often concerned with looking at links between risk factors and the development of disease, and this type of research would generally use observation rather than experiment to uncover connections. The researchers here were concerned that selective reporting bias might be just as much of a problem in epidemiological research as in randomized trials research, and wanted to study this specifically.
What Did the Researchers Do and Find?
In this investigation, searches were carried out of PubMed, a database of biomedical research studies, to extract epidemiological studies that were published between January 2004 and October 2005. The researchers wanted to specifically look at studies reporting the effect of continuous risk factors and their effect on health or disease outcomes (a continuous risk factor is something like age or glucose concentration in the blood, is a number, and can have any value on a sliding scale). Three hundred and eighty-nine original research studies were found, and the researchers pulled out from the abstracts and full text of these papers the relative risks that were reported along with the results of statistical tests for them. (Relative risk is the chance of getting an outcome, say disease, in one group as compared to another group.) The researchers found that nearly 90% of these studies had one or more statistically significant risks reported in the abstract, but only 43% reported one or more risks that were not statistically significant. When looking at all of the findings reported anywhere in the full text for 50 of these studies, the researchers saw that papers overall reported more statistically significant risks than nonsignificant risks. Finally, it seemed that in the set of papers studied here, the way in which statistical analyses were done produced a bias towards more extreme findings: for datasets showing small relative risks, papers were more likely to report a comparison between extreme subsets of the data so as to report larger relative risks.
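The "paradox" in the last finding can be reproduced numerically: on the same underlying data, contrasting extreme quantile groups mechanically yields a larger relative risk than an above-versus-below-median split. A minimal Python sketch on simulated data (not the study's data):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    x = rng.normal(size=n)                    # continuous risk factor
    p = 1 / (1 + np.exp(-(-3 + 0.3 * x)))     # risk rises smoothly with x
    y = rng.random(n) < p                     # binary outcome

    def rr(hi, lo):                           # relative risk between two groups
        return y[hi].mean() / y[lo].mean()

    med = np.median(x)
    q20, q80 = np.quantile(x, [0.2, 0.8])
    print(f"above vs below median:  {rr(x > med, x <= med):.2f}")
    print(f"top vs bottom quintile: {rr(x > q80, x < q20):.2f}")

The extreme-quintile contrast reports the larger relative risk, so when the underlying association is weak, choosing that contrast makes the published number look more impressive.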
What Do These Findings Mean?
These findings suggest that there is a tendency among epidemiology researchers to highlight statistically significant findings and to avoid highlighting nonsignificant findings in their research papers. This behavior may be a problem, because many of these significant findings could in future turn out to be “false positives.” At present, registers exist for researchers to describe ongoing clinical trials, and to set out the outcomes that they plan to analyze for those trials. These registers will go some way towards addressing some of the problems described here, but only for clinical trials research. Registers do not yet exist for epidemiological studies, and therefore it is important that researchers and readers are aware of and cautious about the problem of selective reporting in epidemiological research.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040079.
Wikipedia entry on publication bias (note: Wikipedia is an internet encyclopedia that anyone can edit)
The International Committee of Medical Journal Editors gives guidelines for submitting manuscripts to its member journals, and includes comments about registration of ongoing studies and the obligation to publish negative studies
ClinicalTrials.gov and the ISRCTN register are two registries of ongoing clinical trials
doi:10.1371/journal.pmed.0040079
PMCID: PMC1808481  PMID: 17341129
10.  The Scientific Conferences Organized During War Time (1992-1995) in Sarajevo 
Materia Socio-Medica  2011;23(4):238-248.
The author of this paper spent 1,479 days under the siege of Sarajevo during the war in Bosnia and Herzegovina (B&H). This siege, lasting from 1992 to 1995 (the Dayton Peace Agreement was signed in November 1995), represents the longest siege in the history of the world. Besides his usual daily work as associate professor of health education, medical deontology, and medical informatics for students of the Faculty of Medicine, Faculty of Dental Medicine, Faculty of Pharmacy, and Nursing College of the University of Sarajevo, the author, together with contributors, organized 10 scientific conferences in besieged Sarajevo. All papers presented at those conferences were published in proceedings abstract books, as proof of continuing scientific work in Sarajevo and other cities in B&H. Additionally, during that time the author continued to publish the country's only PubMed/MEDLINE-indexed journal, Medical Archives (established in 1947), and in 1993 founded a new journal, “Acta Informatica Medica” (AIM), as the journal of the Bosnian Society of Medical Informatics. The Bosnian Society of Medical Informatics thus became, in 1994, the first scientific association from Bosnia and Herzegovina admitted to the European Federation for Medical Informatics (EFMI) and the International Medical Informatics Association (IMIA), a “miracle” from besieged Sarajevo achieved during the wartime aggression against Bosnia and Herzegovina. The importance of maintaining these academic gatherings under the circumstances of war was multifaceted. First of all, thanks to these meetings, the continuity of scientific meetings and activities in the besieged city of Sarajevo was not broken, nor was the continuity of scientific publication, which was crucial for retaining the teaching staff at the university and, finally, for spreading the “scientific truth” about what happened in Sarajevo and B&H in those difficult times. All of this was critical to the “survival” of B&H and its people. Some of the published articles, especially those in Medical Archives, which did not break the continuity of its publication even in difficult war conditions and was then the only indexed scientific journal in B&H, were consequently cited in the major biomedical databases of the world. Many scientists abroad had the opportunity to learn about some of the wonders of Sarajevo “war medicine” thanks to this journal. Finally, besides being another way of expressing resistance to the aggression against B&H, the symposia organized during the war represented the continuity of scientific research activities. In this way, Bosnia and Herzegovina and Sarajevo under siege kept in touch with the civilized world and modern achievements, despite being victims of medieval barbarism. In addition, these meetings sent a powerful message to the world about the willingness to record and systematize all the war experiences, especially those related to medicine and medical practice, of a kind Europe had not known since the Second World War. In this we partially succeeded. A total of 286 presentations were given at the seven wartime conferences, a quantitative and qualitative contribution to scientific activity despite the inhuman conditions in which these articles emerged. These presentations and conferences testify to the enthusiasm of the B&H community and of the academic institutions that collaborated with it.
Authors and co-authors presented “war” articles that deserve to be mentioned in the monograph “1479 days of the siege of Sarajevo”. Unfortunately, many of these brave authors are no longer alive and cannot read this. The task that remains for us is to remember them for the good they did. An old Persian proverb says: “An event that is not recorded is as if it had never happened.” Sapienti sat.
doi:10.5455/msm.2011.23.238-248
PMCID: PMC3633541  PMID: 23678305
Bosnia and Herzegovina; siege; scientific meetings during wartime
11.  ON-LINE BIOMEDICAL DATABASES–THE BEST SOURCE FOR QUICK SEARCH OF THE SCIENTIFIC INFORMATION IN THE BIOMEDICINE 
Acta Informatica Medica  2012;20(2):72-84.
Most medical journals now have an electronic version available over public networks. Although printed and electronic versions are published in parallel, the two forms need not appear simultaneously: the electronic version may be published a few weeks before the printed form, and its content need not be identical. The electronic form may include features that the printed form cannot, such as animation or 3D displays, and may offer the full text (mostly in PDF or XML format) or only the table of contents or a summary. Access to the full text is usually not free and is possible only if the institution (library or host) enters into an access agreement. Many medical journals, however, provide free access to some articles, or to the complete content after a certain time (6 months or a year). Such journals can be found through network archives such as HighWire Press and FreeMedicalJournals.com. Special mention goes to PubMed and PubMed Central, the first public digital archives to collect the available medical literature without restriction, which operate within the US National Library of Medicine in Bethesda. There are also so-called online medical journals published only in electronic form, which can be searched through online databases. In this paper, the authors briefly describe about 30 databases and give short instructions on how to access them and search the papers published in indexed medical journals.
doi:10.5455/aim.2012.20.72-84
PMCID: PMC3544328  PMID: 23322957
medical journals; on-line databases.
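As a practical complement to the database overview in the record above, the following is a minimal sketch, not taken from the paper itself, of querying PubMed programmatically through the NCBI E-utilities esearch endpoint. The query term and retmax value are illustrative assumptions.

```python
# Minimal sketch: searching PubMed via the NCBI E-utilities "esearch" endpoint.
# The query term and retmax below are illustrative; substitute your own search.
import json
import urllib.parse
import urllib.request

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_pubmed(term, retmax=20):
    """Return a list of PubMed IDs (PMIDs) matching the query term."""
    params = urllib.parse.urlencode({
        "db": "pubmed",     # search the PubMed database
        "term": term,       # query; may include field tags such as [Title/Abstract]
        "retmax": retmax,   # maximum number of IDs to return
        "retmode": "json",  # request a JSON response
    })
    with urllib.request.urlopen(f"{BASE}?{params}") as resp:
        data = json.load(resp)
    return data["esearchresult"]["idlist"]

if __name__ == "__main__":
    print(search_pubmed("medical informatics AND open access"))
```

The returned ID list can then be passed to the companion efetch endpoint to retrieve abstracts or full records.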
12.  Factors Associated with Findings of Published Trials of Drug–Drug Comparisons: Why Some Statins Appear More Efficacious than Others 
PLoS Medicine  2007;4(6):e184.
Background
Published pharmaceutical industry–sponsored trials are more likely than non-industry-sponsored trials to report results and conclusions that favor drug over placebo. Little is known about potential biases in drug–drug comparisons. This study examined associations between research funding source, study design characteristics aimed at reducing bias, and other factors that potentially influence results and conclusions in randomized controlled trials (RCTs) of statin–drug comparisons.
Methods and Findings
This is a cross-sectional study of 192 published RCTs comparing a statin drug to another statin drug or non-statin drug. Data on concealment of allocation, selection bias, blinding, sample size, disclosed funding source, financial ties of authors, results for primary outcomes, and author conclusions were extracted by two coders (weighted kappa 0.80 to 0.97). Univariate and multivariate logistic regression identified associations between independent variables and favorable results and conclusions. Of the RCTs, 50% (95/192) were funded by industry, and 37% (70/192) did not disclose any funding source. Looking at the totality of available evidence, we found that almost all studies (98%, 189/192) used only surrogate outcome measures. Moreover, study design weaknesses common to published statin–drug comparisons included inadequate blinding, lack of concealment of allocation, poor follow-up, and lack of intention-to-treat analyses. In multivariate analysis of the full sample, trials with adequate blinding were less likely to report results favoring the test drug, and sample size was associated with favorable conclusions when controlling for other factors. In multivariate analysis of industry-funded RCTs, funding from the test drug company was associated with results (odds ratio = 20.16 [95% confidence interval 4.37–92.98], p < 0.001) and conclusions (odds ratio = 34.55 [95% confidence interval 7.09–168.4], p < 0.001) that favor the test drug when controlling for other factors. Studies with adequate blinding were less likely to report statistically significant results favoring the test drug.
Conclusions
RCTs of head-to-head comparisons of statins with other drugs are more likely to report results and conclusions favoring the sponsor's product compared to the comparator drug. This bias in drug–drug comparison trials should be considered when making decisions regarding drug choice.
Lisa Bero and colleagues found published trials comparing one statin with another were more likely to report results and conclusions favoring the sponsor's product than the comparison drug.
Editors' Summary
Background.
Randomized controlled trials are generally considered to be the most reliable type of experimental study for evaluating the effectiveness of different treatments. Randomization involves the assignment of participants in the trial to different treatment groups by the play of chance. Properly done, this procedure means that the different groups are comparable at the outset, reducing the chance that outside factors could be responsible for treatment effects seen in the trial. When done properly, randomization also ensures that the clinicians recruiting participants into the trial cannot know the treatment group to which a patient will end up being assigned. However, despite these advantages, a large number of factors can still allow bias to creep in. Bias comes about when the findings of research appear to differ in some systematic way from the true result. Other research studies have suggested that funding is a source of bias; studies sponsored by drug companies seem to favor the sponsor's drug more often than trials not sponsored by drug companies.
Why Was This Study Done?
The researchers wanted to more precisely understand the impact of different possible sources of bias in the findings of randomized controlled trials. In particular, they wanted to study the outcomes of “head-to-head” drug comparison studies for one particular class of drugs, the statins. Drugs in this class are commonly prescribed to reduce the levels of cholesterol in blood amongst people who are at risk of heart and other types of disease. This drug class is a good example for studying the role of bias in drug–drug comparison trials, because these trials are extensively used in decision making by health-policy makers.
What Did the Researchers Do and Find?
This research study was based on searching PubMed, a biomedical literature database, with the aim of finding all randomized controlled trials of statins carried out between January 1999 and May 2005 (reference lists also were searched). Only trials which compared one statin to another statin or one statin to another type of drug were included. The researchers extracted the following information from each article: the study's source of funding, aspects of study design, the overall results, and the authors' conclusions. The results were categorized to show whether the findings were favorable to the test drug (the newer statin), inconclusive, or not favorable to the test drug. Aspects of each study's design were also categorized in relation to various features, such as how well the randomization was done (in particular, the degree to which the processes used would have prevented physicians from knowing which treatment a patient was likely to receive on enrollment); whether all participants enrolled in the trial were eventually analyzed; and whether investigators or participants knew what treatment an individual was receiving.
One hundred and ninety-two trials were included in this study, and of these, 95 declared drug company funding; 23 declared government or other nonprofit funding while 74 did not declare funding or were not funded. Trials that were properly blinded (where participants and investigators did not know what treatment an individual received) were less likely to have conclusions favoring the test drug. However, large trials were more likely to favor the test drug than smaller trials. When looking specifically at the trials funded by drug companies, the researchers found various factors that predicted whether a result or conclusion favored the test drug. These included the impact of the journal publishing the results; the size of the trial; and whether funding came from the maker of the test drug. However, properly blinded trials were less likely to produce results favoring the test drug. Even once all other factors were accounted for, the funding source for the study was still linked with results and conclusions that favored the maker of the test drug.
What Do These Findings Mean?
This study shows that the type of sponsorship available for randomized controlled trials of statins was strongly linked to the results and conclusions of those studies, even when other factors were taken into account. However, it is not clear from this study why sponsorship has such a strong link to the overall findings. There are many possible reasons why this might be. Some people have suggested that drug companies may deliberately choose lower dosages for the comparison drug when they carry out “head-to-head” trials; this tactic is likely to result in the company's product doing better in the trial. Others have suggested that trials which produce unfavorable results are not published, or that unfavorable outcomes are suppressed. Whatever the reasons for these findings, the implications are important, and suggest that the evidence base relating to statins may be substantially biased.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040184.
The James Lind Library has been created to help people understand fair tests of treatments in health care by illustrating how fair tests have developed over the centuries
The International Committee of Medical Journal Editors has provided guidance regarding sponsorship, authorship, and accountability
The CONSORT statement is a research tool that provides an evidence-based approach for reporting the results of randomized controlled trials
Good Publication Practice guidelines provide standards for responsible publication of research sponsored by pharmaceutical companies
Information from Wikipedia on Statins. Wikipedia is an internet encyclopedia anyone can edit
doi:10.1371/journal.pmed.0040184
PMCID: PMC1885451  PMID: 17550302
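The statin study above reports odds ratios with 95% confidence intervals from logistic regression. As a hedged illustration of the underlying arithmetic, an odds ratio and its Wald-type interval can be computed from a 2x2 table; the counts below are invented and are not the study's data.

```python
# Illustrative sketch: odds ratio and 95% Wald CI from a 2x2 table.
# The counts are hypothetical, NOT taken from the statin study above.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b = favorable/unfavorable results in exposed trials,
    c/d = favorable/unfavorable results in unexposed trials."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical: 40/50 sponsor-funded trials favorable vs 10/50 other trials.
print(odds_ratio_ci(40, 10, 10, 40))  # OR = 16.0, 95% CI roughly (6.0, 42.6)
```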
13.  The Relationship of Previous Training and Experience of Journal Peer Reviewers to Subsequent Review Quality 
PLoS Medicine  2007;4(1):e40.
Background
Peer review is considered crucial to the selection and publication of quality science, but very little is known about the previous experiences and training that might identify high-quality peer reviewers. The reviewer selection processes of most journals, and thus the qualifications of their reviewers, are ill defined. More objective selection of peer reviewers might improve the journal peer review process and thus the quality of published science.
Methods and Findings
306 experienced reviewers (71% of all those associated with a specialty journal) completed a survey of past training and experiences postulated to improve peer review skills. Reviewers performed 2,856 reviews of 1,484 separate manuscripts during a four-year study period, all prospectively rated on a standardized quality scale by editors. Multivariable analysis revealed that most variables, including academic rank, formal training in critical appraisal or statistics, or status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital versus other teaching environment and relative youth (under ten years of experience after finishing training). Being on an editorial board and doing formal grant (study section) review were each predictors for only one of our two comparisons. However, the predictive power of all variables was weak.
Conclusions
Our study confirms that there are no easily identifiable types of formal training or experience that predict reviewer performance. Skill in scientific peer review may be as ill defined and hard to impart as is “common sense.” Without a better understanding of those skills, it seems unlikely that journals and editors will be successful in systematically improving their selection of reviewers. This inability to predict performance makes it imperative that all but the smallest journals implement review rating systems to routinely monitor the quality of their reviews (and thus the quality of the science they publish).
A survey of experienced reviewers, asked about training they had received in peer review, found there are no easily identifiable types of formal training and experience that predict reviewer performance.
Editors' Summary
Background.
When medical researchers have concluded their research and written it up, the next step is to get it published as an article in a journal, so that the findings can be circulated widely. These published findings help determine subsequent research and clinical use. The editors of reputable journals, including PLoS Medicine, have to decide whether the articles sent to them are of good quality and accurate and whether they will be of interest to the readers of their journal. To do this they need to obtain specialist advice, so they contact experts in the topic of the research article and ask them to write reports. This is the process of scientific peer review, and the experts who write such reports are known as “peer reviewers.” Although the editors make the final decision, the advice and criticism of these peer reviewers to the editors is essential in making decisions on publication, and usually in requiring authors to make changes to their manuscript. The contribution that peer reviewers have made to the article by the time it is finally published may, therefore, be quite considerable.
Although peer review is accepted as a key part of the process for publishing medical research, many people have argued that there are flaws in the system. For example, there may be an element of luck involved; one author might find their paper being reviewed by a reviewer who is biased against the approach they have adopted or who is a very critical person by nature, while another author may have the good fortune to have their work considered by someone who is much more favorably disposed toward it. Some reviewers are more knowledgeable and thorough in their work than others. The editors of medical journals try to take into account such biases and quality factors in their choice of peer reviewers or when assessing the reviews. Some journals have run training courses for experts who review for them regularly to try to make the standard of peer review as high as possible.
Why Was This Study Done?
It is hard for journal editors to know who will make a good peer reviewer, and there is no proven system for choosing them. The authors of this study wanted to identify the previous experiences and training that make up the background of good peer reviewers and compare these with the quality of the reviews provided. This would help journal editors select good people for the task in the future and, as a result, improve the quality of the science they publish for readers, including other researchers.
What Did the Researchers Do and Find?
The authors contacted all the regular reviewers from one specialist journal (Annals of Emergency Medicine). A total of 306 of these experienced reviewers (71% of all those associated with the journal) completed a survey of past training and experiences that might be expected to improve peer review skills. These reviewers had done 2,856 reviews of 1,484 separate manuscripts during a four-year study period, and during this time the quality of the reviews had been rated by the journal's editors. Surprisingly, most variables, including academic rank, formal training in critical appraisal or statistics, or status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital versus other teaching environment and relative youth (under ten years of experience after finishing training), and even these were only weak predictors.
What Do These Findings Mean?
This study suggests that there are no easily identifiable types of formal training or experience that predict peer reviewer performance, although it is clear that some reviewers (and reviews) are better than others. The authors therefore suggest that it is essential for journals to routinely monitor the quality of the reviews submitted to them, to ensure they are getting good advice (a practice that is not universal).
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040040
• WAME is an association of editors from many countries who seek to foster international cooperation among editors of peer-reviewed medical journals
• The Fifth International Congress on Peer Review and Biomedical Publication is one of a series of conferences on peer review
• The PLoS Medicine guidelines for reviewers outline what we look for in a review
• The Council of Science Editors promotes ethical scientific publishing practices
• An editorial also published in this issue of PLoS Medicine discusses the peer review process further
doi:10.1371/journal.pmed.0040040
PMCID: PMC1796627  PMID: 17411314
14.  Journal publications by Australian chiropractic academics: are they enough? 
Purpose
To document the number of journal publications attributed to the academic faculty of Australian chiropractic tertiary institutions, to discuss the significance of this output, and to relate it to the difficulty the profession appears to be experiencing in the uptake of evidence-based healthcare outcomes and cultures.
Methods
The departmental websites for the three Australian chiropractic tertiary institutions were accessed and a list of academic faculty compiled. It was noted whether each academic held a chiropractic qualification or a research doctoral (not professional) degree. A review of the literature was conducted using the names of the academics and cross-referencing to publications listed independently in the PubMed and Index to Chiropractic Literature (ICL) databases (from inception to February 27, 2006). Publications were excluded that were duplicates, corrected reprints, conference abstracts/proceedings, books, monographs, letters to the editor/comments, or editorials. Using this information, an annual and a recent publication rate were constructed.
Results
For the 41 academics there was a total of 155 PubMed listed publications (mean 3.8, annual rate per academic 0.31) and 415 ICL listed publications (mean 10.1, annual rate 0.62). Over the last five years there have been 50 PubMed listed publications (mean 1.2, annual rate 0.24) and 97 ICL listed publications (mean 2.4, annual rate 0.47). Chiropractor academics (n = 31) had 29 PubMed listed publications (mean 2.5, annual rate 0.27) and 265 ICL listed publications (mean 8.5, annual rate 0.57). Academics with a doctoral degree (n = 13) had 134 PubMed listed publications (mean 10.3, annual rate 0.70) and 311 ICL listed publications (mean 23.9, annual rate 1.44). Academics without a Doctoral degree (n = 28) had 21 PubMed listed publications (mean 0.8, annual rate 0.13) and 104 ICL listed publications (mean 3.7, annual rate 0.24).
Conclusion
While several academics have compiled an impressive list of publications, overall there is a significant paucity of published research authored by the majority of academics, with a trend toward a falling recent publication rate; not holding a doctoral degree was a risk factor for poor publication productivity. It is suggested that there is an urgent need to facilitate the acquisition of research skills in academic staff, particularly in research methods and publication skills. Only when undergraduate students are exposed to an institutional environment conducive to and fostering research will the concepts of evidence-based healthcare really be appreciated and implemented by the profession.
doi:10.1186/1746-1340-14-13
PMCID: PMC1559708  PMID: 16872544
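The publication rates in the record above are simple ratios: the mean is total publications divided by the number of academics, and the annual rate divides that mean by the length of the publication window. A small sketch of the arithmetic follows; the 12.2-year window is back-calculated from the reported figures (3.8 / 0.31) rather than stated in the abstract, so treat it as an assumption.

```python
# Sketch of the publication-rate arithmetic in the record above.
# window_years = 12.2 is inferred from 3.8 / 0.31, not stated explicitly.
def publication_rates(total_pubs, n_academics, window_years):
    mean_per_academic = total_pubs / n_academics
    annual_rate = mean_per_academic / window_years
    return round(mean_per_academic, 1), round(annual_rate, 2)

print(publication_rates(155, 41, 12.2))  # -> (3.8, 0.31), matching the abstract
```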
15.  Public accessibility of biomedical articles from PubMed Central reduces journal readership—retrospective cohort analysis 
The FASEB Journal  2013;27(7):2536-2541.
Does PubMed Central—a government-run digital archive of biomedical articles—compete with scientific society journals? A longitudinal, retrospective cohort analysis of 13,223 articles (5999 treatment, 7224 control) published in 14 society-run biomedical research journals in nutrition, experimental biology, physiology, and radiology between February 2008 and January 2011 reveals a 21.4% reduction in full-text hypertext markup language (HTML) article downloads and a 13.8% reduction in portable document format (PDF) article downloads from the journals' websites when U.S. National Institutes of Health-sponsored articles (treatment) become freely available from the PubMed Central repository. In addition, the effect of PubMed Central on reducing PDF article downloads is increasing over time, growing at a rate of 1.6% per year. There was no longitudinal effect for full-text HTML downloads. While PubMed Central may be providing complementary access to readers traditionally underserved by scientific journals, the loss of article readership from the journal website may weaken the ability of the journal to build communities of interest around research papers, impede the communication of news and events to scientific society members and journal readers, and reduce the perceived value of the journal to institutional subscribers.—Davis, P. M. Public accessibility of biomedical articles from PubMed Central reduces journal readership—retrospective cohort analysis.
doi:10.1096/fj.13-229922
PMCID: PMC3688741  PMID: 23554455
digital repositories; downloads; open access; scientific publishing
16.  Open access publishing: a study of current practices in orthopaedic research 
International Orthopaedics  2014;38(6):1297-1302.
Background
Open access (OA) publications have changed the paradigm of dissemination of scientific research. Their benefits to low-income countries underline their value; however, critics question exorbitant publication fees as well as their effect on the peer review process and research quality.
Purpose
This study reports on the prevalence of OA publishing in orthopaedic research and compares benchmark citation indices as well as evidence quality derived from OA journals with conventional subscription based orthopaedic journals.
Methods
All 63 orthopaedic journals listed in ISI’s Web of Knowledge Journal Citation Report (JCR) were examined. Bibliometric data attributed to each journal for the year 2012 were acquired from the JCR. Studies that fulfilled the criteria of level I evidence were identified for each journal within PubMed. Individual journal websites were reviewed to identify their open access policy. A total of 38 (60.3%) journals did not offer any form of OA publishing; however, 20 (31.7%) hybrid journals were identified which offered authors the choice to publish their work as OA if a publication fee was paid. Only five (8%) journals published all their articles as OA. There was variability amongst the different publication fees for OA articles. Journals that published OA articles did not differ from subscription-based journals on the basis of 2012 impact factor, citation number, self-citation proportion, or the volume of level I evidence published (p > 0.05).
Conclusions
OA journals are present in orthopaedic research, though in small numbers. Over a third of orthopaedic journals catalogued in the ISI Web of Knowledge JCR® are hybrid journals that provide authors with the opportunity to publish their articles as OA after a publication fee is paid. This study suggests equivalent importance and quality of articles between OA and subscription-based orthopaedic journals, based on bibliometric data and the volume of level I evidence produced. Orthopaedic researchers must recognize the potential benefits of OA publishing and its emerging presence within the field. Further examination and consensus are required in orthopaedic research to generate an OA system that is robustly regulated and maintains research quality.
doi:10.1007/s00264-013-2250-5
PMCID: PMC4037500  PMID: 24384939
Open access journals; Scientific research
17.  On the persistence of supplementary resources in biomedical publications 
BMC Bioinformatics  2006;7:260.
Background
Providing for long-term and consistent public access to scientific data is a growing concern in biomedical research. One aspect of this problem can be demonstrated by evaluating the persistence of supplementary data associated with published biomedical papers.
Methods
We manually evaluated 655 supplementary data links extracted from PubMed abstracts published 1998–2005 (Method 1) as well as a further focused subset of 162 full-text manuscripts published within three representative high-impact biomedical journals between September and December 2004 (Method 2).
Results
For Method 1 we found that since 2001, only 71–92% of supplementary data were still accessible via the links provided, with 93% of the inaccessible links occurring where supplementary data were not stored with the publishing journal. Of the manuscripts evaluated in Method 2, only 83% of the links were still available approximately a year after publication, and 55% of the inaccessible links pointed to locations outside the journal of publication.
Conclusion
We conclude that if supplementary data are required to support a publication, journal policies must take on the responsibility of accepting and storing such data, or require that the data be maintained by a credible independent institution or under the terms of a strategic data storage plan specified by the authors. We further recommend that publishers provide automated systems to ensure that supplementary links remain persistent, and that granting bodies such as the NIH develop policies and funding mechanisms to maintain long-term persistent access to these data.
doi:10.1186/1471-2105-7-260
PMCID: PMC1481620  PMID: 16712726
18.  Anatomy of the Epidemiological Literature on the 2003 SARS Outbreaks in Hong Kong and Toronto: A Time-Stratified Review 
PLoS Medicine  2010;7(5):e1000272.
Weijia Xing and colleagues reviewed the published epidemiological literature on SARS and show that less than a quarter of papers were published during the epidemic itself, suggesting that the research published lagged substantially behind the need for it.
Background
Outbreaks of emerging infectious diseases, especially those of a global nature, require rapid epidemiological analysis and information dissemination. The final products of those activities usually comprise internal memoranda and briefs within public health authorities and original research published in peer-reviewed journals. Using the 2003 severe acute respiratory syndrome (SARS) epidemic as an example, we conducted a comprehensive time-stratified review of the published literature to describe the different types of epidemiological outputs.
Methods and Findings
We identified and analyzed all published articles on the epidemiology of the SARS outbreak in Hong Kong or Toronto. The analysis was stratified by study design, research domain, data collection, and analytical technique. We compared the SARS-case and matched-control non-SARS articles published according to the timeline of submission, acceptance, and publication. The impact factors of the publishing journals were examined according to the time of publication of SARS articles, and the numbers of citations received by SARS-case and matched-control articles submitted during and after the epidemic were compared. Descriptive, analytical, theoretical, and experimental epidemiology concerned, respectively, 54%, 30%, 11%, and 6% of the studies. Only 22% of the studies were submitted, 8% accepted, and 7% published during the epidemic. The submission-to-acceptance and acceptance-to-publication intervals of the SARS articles submitted during the epidemic period were significantly shorter than the corresponding intervals of matched-control non-SARS articles published in the same journal issues (p<0.001 and p<0.01, respectively). The differences of median submission-to-acceptance intervals and median acceptance-to-publication intervals between SARS articles and their corresponding control articles were 106.5 d (95% confidence interval [CI] 55.0–140.1) and 63.5 d (95% CI 18.0–94.1), respectively. The median numbers of citations of the SARS articles submitted during the epidemic and over the 2 y thereafter were 17 (interquartile range [IQR] 8.0–52.0) and 8 (IQR 3.2–21.8), respectively, significantly higher than the median numbers of control article citations (15, IQR 8.5–16.5, p<0.05, and 7, IQR 3.0–12.0, p<0.01, respectively).
Conclusions
A majority of the epidemiological articles on SARS were submitted after the epidemic had ended, although the corresponding studies had relevance to public health authorities during the epidemic. To minimize the lag between research and the exigency of public health practice in the future, researchers should consider adopting common, predefined protocols and ready-to-use instruments to improve timeliness, and thus, relevance, in addition to standardizing comparability across studies. To facilitate information dissemination, journal managers should reengineer their fast-track channels, which should be adapted to the purpose of an emerging outbreak, taking into account the requirement of high standards of quality for scientific journals and competition with other online resources.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Every now and then, a new infectious disease appears in a human population or an old disease becomes much more common or more geographically widespread. Recently, several such “emerging infectious diseases” have become major public health problems. For example, HIV/AIDS, hepatitis C, and severe acute respiratory syndrome (SARS) have all emerged in the past three decades and spread rapidly round the world. When an outbreak (epidemic) of an emerging infectious disease occurs, epidemiologists (scientists who study the causes, distribution, and control of diseases in populations) swing into action, collecting and analyzing data on the new threat to human health. Epidemiological studies are rapidly launched to identify the causative agent of the new disease, to investigate how the disease spreads, to define diagnostic criteria for the disease, to evaluate potential treatments, and to devise ways to control the disease's spread. Public health officials then use the results of these studies to bring the epidemic under control.
Why Was This Study Done?
Clearly, epidemics of emerging infectious diseases can only be controlled rapidly and effectively if the results of epidemiological studies are made widely available in a timely manner. Public health bulletins (for example, the Morbidity and Mortality Weekly Report from the US Centers for Disease Control and Prevention) are an important way of disseminating information, as is the publication of original research in peer-reviewed academic journals. But how timely is this second dissemination route? Submission, peer-review, revision, re-review, acceptance, and publication of a piece of academic research can be a long process, the speed of which is affected by the responses of both authors and journals. In this study, the researchers analyze how the results of academic epidemiological research are submitted and published in journals during and after an emerging infectious disease epidemic, using the 2003 SARS epidemic as an example. The first case of SARS was identified in Asia in February 2003, and the disease rapidly spread around the world. In total, 8,098 people became ill with SARS and 774 died before the epidemic was halted in July 2003.
What Did the Researchers Do and Find?
The researchers identified more than 300 journal articles covering epidemiological research into the SARS outbreak in Hong Kong, China, and Toronto, Canada (two cities strongly affected by the epidemic) that were published online or in print between January 1, 2003 and July 31, 2007. The researchers' analysis of these articles shows that more than half of them were descriptive epidemiological studies, investigations that focused on describing the distribution of SARS; a third were analytical epidemiological studies that tried to discover the cause of SARS. Overall, 22% of the journal articles were submitted for publication during the epidemic. Only 8% of the articles were accepted for publication and only 7% were actually published during the epidemic. The median (average) submission-to-acceptance and acceptance-to-publication intervals for SARS articles submitted during the epidemic were 55 and 77.5 days, respectively, much shorter intervals than those for non-SARS articles published in the same journal issues. After the epidemic was over, the submission-to-acceptance and acceptance-to-publication intervals for SARS articles were similar to those of non-SARS articles.
What Do These Findings Mean?
These findings show that, although the academic response to the SARS epidemic was rapid, most articles on the epidemiology of SARS were published after the epidemic was over even though SARS was a major threat to public health. Possible reasons for this publication delay include the time taken by authors to prepare and undertake their studies, to write and submit their papers, and, possibly, their tendency to first submit their results to high profile journals. The time then taken by journals to review the studies, make decisions about publication, and complete the publication process might also have delayed matters. To minimize future delays in the publication of epidemiological research on emerging infectious diseases, epidemiologists could adopt common, predefined protocols and ready-to-use instruments, which would improve timeliness and ensure comparability across studies, suggest the researchers. Journals, in turn, could improve their fast-track procedures and could consider setting up online sections that could be activated when an emerging infectious disease outbreak occurred. Finally, journals could consider altering their review system to speed up the publication process provided the quality of the final published articles was not compromised.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000272.
The US National Institute of Allergy and Infectious Diseases provides information on emerging infectious diseases
The US Centers for Disease Control and Prevention also provides information about emerging infectious diseases, including links to other resources, and information on SARS
Wikipedia has a page on epidemiology (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The World Health Organization has information on SARS (in several languages)
doi:10.1371/journal.pmed.1000272
PMCID: PMC2864302  PMID: 20454570
19.  Annals of General Psychiatry 
Our regular readers will notice that the title of our journal has changed from Annals of General Hospital Psychiatry (AGHP) to Annals of General Psychiatry (AGP) as of January 1st, 2005. This change was judged necessary in order to better serve the aims of the journal. Our initial thought was that including the term 'General Hospital' in the journal's title would help us launch a journal dedicated to the idea of Psychiatry as a medical specialty. That expectation was not borne out; so, now the Annals of General Psychiatry (AGP) is born! It is still an Open Access, peer-reviewed, online journal covering the wider field of Psychiatry, Neurosciences and Psychological Medicine, and it aims to publish articles on all aspects of psychiatry. Primary research articles are the journal's priority, and both basic and clinical neuroscience contributions are encouraged. The AGP strongly supports and follows the principles of evidence-based medicine. AGP's articles are archived in PubMed Central, the US National Library of Medicine's full-text repository of life science literature, and also in repositories at the University of Potsdam in Germany, at INIST in France, and in e-Depot, the National Library of the Netherlands' digital archive of all electronic publications. We hope that the change in the journal's name will resolve the confusion caused by its previous title and help achieve the journal's aims and scope, that is, to promote research and publishing in the mental health area worldwide.
doi:10.1186/1744-859X-4-3
PMCID: PMC1088009  PMID: 15845139
20.  How Complementary and Alternative Medicine Practitioners Use PubMed 
Background
PubMed is the largest bibliographic index in the life sciences. It is freely available online and is used by professionals and the public to learn more about medical research. While primarily intended to serve researchers, PubMed provides an array of tools and services that can help a wider readership in the location, comprehension, evaluation, and utilization of medical research.
Objective
This study sought to establish the potential contributions made by a range of PubMed tools and services to the use of the database by complementary and alternative medicine practitioners.
Methods
In this study, 10 chiropractors, 7 registered massage therapists, and a homeopath (N = 18), 11 with prior research training and 7 without, were taken through a 2-hour introductory session with PubMed. The 10 PubMed tools and services considered in this study can be divided into three functions: (1) information retrieval (Boolean Search, Limits, Related Articles, Author Links, MeSH), (2) information access (Publisher Link, LinkOut, Bookshelf), and (3) information management (History, Send To, Email Alert). Participants were introduced to between six and 10 of these tools and services. The participants were asked to provide feedback on the value of each tool or service in terms of their information needs, which was ranked as positive, positive with emphasis, negative, or indifferent.
Results
The participants in this study expressed an interest in the three types of PubMed tools and services (information retrieval, access, and management), with less well-regarded tools including MeSH Database and Bookshelf. In terms of their comprehension of the research, the tools and services led the participants to reflect on their understanding as well as their critical reading and use of the research. There was universal support among the participants for greater access to complete articles, beyond the approximately 15% that are currently open access. The abstracts provided by PubMed were felt to be necessary in selecting literature to read but entirely inadequate for both evaluating and learning from the research. Thus, the restrictions and fees the participants faced in accessing full-text articles were points of frustration.
Conclusions
The study found strong indications of PubMed’s potential value in the professional development of these complementary and alternative medicine practitioners in terms of engaging with and understanding research. It provides support for the various initiatives intended to increase access, including a recommendation that the National Library of Medicine tap into the published research that is being archived by authors in institutional archives and through other websites.
doi:10.2196/jmir.9.2.e19
PMCID: PMC1913941  PMID: 17613489
PubMed; research dissemination; complementary and alternative medicine; open access; professional development; information retrieval; information management; literacy
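The PubMed retrieval tools discussed above are driven by Boolean queries with field tags. A hedged illustration of such a query follows; the terms are examples chosen for this sketch, not search strings drawn from the study.

```python
# Example PubMed query combining Boolean operators with field tags.
# The specific terms are illustrative, not taken from the study above.
query = (
    '("complementary therapies"[MeSH Terms] OR chiropractic[Title/Abstract]) '
    'AND "randomized controlled trial"[Publication Type]'
)
print(query)  # paste into the PubMed search box, or pass to an E-utilities call
```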
21.  Self-correction in biomedical publications and the scientific impact 
Croatian Medical Journal  2014;55(1):61-72.
Aim
To analyze mistakes and misconduct in multidisciplinary and specialized biomedical journals.
Methods
We conducted searches through PubMed to retrieve errata, duplicate, and retracted publications (as of January 30, 2014). To analyze the publication activity and citation profiles of countries and of multidisciplinary and specialized biomedical journals, we referred to the latest data from the SCImago Journal & Country Rank database. The total number of indexed articles and the h-index values of the fifty most productive countries and multidisciplinary journals were recorded and linked to the number of duplicate and retracted publications in PubMed.
Results
Our analysis found 2597 correction items. A striking increase in the number of corrections appeared in 2013, which is mainly due to 871 (85.3%) corrections from PLOS One. The number of duplicate publications was 1086. Articles frequently published in duplicate were reviews (15.6%), original studies (12.6%), and case reports (7.6%), whereas top three retracted articles were original studies (10.1%), randomized trials (8.8%), and reviews (7%). A strong association existed between the total number of publications across countries and duplicate (rs = 0.86, P < 0.001) and retracted items (rs = 0.812, P < 0.001). A similar trend was found between country-based h-index values and duplicate and retracted publications.
Conclusion
The study suggests that the intensified self-correction in biomedicine is due to the attention of readers and authors, who spot errors in their hub of evidence-based information. Digitization and open access confound the staggering increase in correction notices and retractions.
doi:10.3325/cmj.2014.55.61
PMCID: PMC3944419  PMID: 24577829
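The association measures reported above (rs) are Spearman rank correlations. A minimal sketch of how such a coefficient is computed follows, using scipy on invented country-level data rather than the study's own.

```python
# Sketch: Spearman rank correlation (rs, P), as reported in the record above.
# The data points are invented for illustration only.
from scipy.stats import spearmanr

total_publications = [120_000, 80_000, 45_000, 20_000, 9_000, 4_000]
duplicate_items = [300, 150, 90, 95, 15, 8]

rho, p_value = spearmanr(total_publications, duplicate_items)
print(f"rs = {rho:.2f}, P = {p_value:.3f}")
```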
22.  Data reuse and the open data citation advantage 
PeerJ  2013;1:e175.
Background. Attribution to the original contributor upon reuse of published data is important both as a reward for data creators and to document the provenance of research findings. Previous studies have found that papers with publicly available datasets receive a higher number of citations than similar studies without available data. However, few previous analyses have had the statistical power to control for the many variables known to predict citation rate, which has led to uncertain estimates of the “citation benefit”. Furthermore, little is known about patterns in data reuse over time and across datasets.
Method and Results. Here, we look at citation rates while controlling for many known citation predictors and investigate the variability of data reuse. In a multivariate regression on 10,555 studies that created gene expression microarray data, we found that studies that made data available in a public repository received 9% (95% confidence interval: 5% to 13%) more citations than similar studies for which the data was not made available. Date of publication, journal impact factor, open access status, number of authors, first and last author publication history, corresponding author country, institution citation history, and study topic were included as covariates. The citation benefit varied with date of dataset deposition: a citation benefit was most clear for papers published in 2004 and 2005, at about 30%. Authors published most papers using their own datasets within two years of their first publication on the dataset, whereas data reuse papers published by third-party investigators continued to accumulate for at least six years. To study patterns of data reuse directly, we compiled 9,724 instances of third party data reuse via mention of GEO or ArrayExpress accession numbers in the full text of papers. The level of third-party data use was high: for 100 datasets deposited in year 0, we estimated that 40 papers in PubMed reused a dataset by year 2, 100 by year 4, and more than 150 data reuse papers had been published by year 5. Data reuse was distributed across a broad base of datasets: a very conservative estimate found that 20% of the datasets deposited between 2003 and 2007 had been reused at least once by third parties.
Conclusion. After accounting for other factors affecting citation rate, we find a robust citation benefit from open data, although a smaller one than previously reported. We conclude there is a direct effect of third-party data reuse that persists for years beyond the time when researchers have published most of the papers reusing their own data. Other factors that may also contribute to the citation benefit are considered. We further conclude that, at least for gene expression microarray data, a substantial fraction of archived datasets are reused, and that the intensity of dataset reuse has been steadily increasing since 2003.
doi:10.7717/peerj.175
PMCID: PMC3792178  PMID: 24109559
Data reuse; Data repositories; Gene expression microarray; Incentives; Data archiving; Open data; Bibliometrics; Information science
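A percentage citation benefit such as the 9% reported above is commonly derived from a regression on log-transformed citation counts, where the coefficient beta on the data-availability indicator maps to a multiplicative benefit of exp(beta) - 1. The sketch below only illustrates that conversion; the paper's exact model specification is not given in this abstract, and beta here is back-derived from the reported estimate.

```python
# Sketch: converting a log-scale regression coefficient into a percentage
# citation benefit. A log-linear model is a common choice for citation
# counts; the paper's exact specification may differ, and beta below is
# back-derived from the reported 9% rather than taken from the study.
import math

beta = math.log(1.09)          # coefficient on the "data available" indicator
benefit = math.exp(beta) - 1   # multiplicative effect minus the baseline
print(f"citation benefit ~ {benefit:.0%}")  # -> 9%
```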
23.  Promotional Tone in Reviews of Menopausal Hormone Therapy After the Women's Health Initiative: An Analysis of Published Articles 
PLoS Medicine  2011;8(3):e1000425.
Adriane Fugh-Berman and colleagues analyzed a selection of published opinion pieces on hormone therapy and show that there may be a connection between receiving industry funding for speaking, consulting, or research and the tone of such opinion pieces.
Background
Even after the Women's Health Initiative (WHI) found that the risks of menopausal hormone therapy (hormone therapy) outweighed benefit for asymptomatic women, about half of gynecologists in the United States continued to believe that hormones benefited women's health. The pharmaceutical industry has supported publication of articles in medical journals for marketing purposes. It is unknown whether author relationships with industry affect promotional tone in articles on hormone therapy. The goal of this study was to determine whether promotional tone could be identified in narrative review articles regarding menopausal hormone therapy and whether articles identified as promotional were more likely to have been authored by those with conflicts of interest with manufacturers of menopausal hormone therapy.
Methods and Findings
We analyzed tone in opinion pieces on hormone therapy published in the four years after the estrogen-progestin arm of the WHI was stopped. First, we identified the ten authors with four or more MEDLINE-indexed reviews, editorials, comments, or letters on hormone replacement therapy or menopausal hormone therapy published between July 2002 and June 2006. Next, we conducted an additional search using the names of these authors to identify other relevant articles. Finally, after author names and affiliations were removed, 50 articles were evaluated by three readers for scientific accuracy and for tone. Scientific accuracy was assessed based on whether or not the findings of the WHI were accurately reported using two criteria: (1) acknowledgment or lack of denial of the risk of breast cancer diagnosis associated with hormone therapy, and (2) acknowledgment that hormone therapy did not benefit cardiovascular disease endpoints. Determination of promotional tone was based on the assessment by each reader of whether the article appeared to promote hormone therapy. Analysis of inter-rater consistency found moderate agreement for scientific accuracy (κ = 0.57) and substantial agreement for promotional tone (κ = 0.65). After discussion, readers found 86% of the articles to be scientifically accurate and 64% to be promotional in tone. Themes that were common in articles considered promotional included attacks on the methodology of the WHI, arguments that clinical trial results should not guide treatment for individuals, and arguments that observational studies are as good as or better than randomized clinical trials for guiding clinical decisions. The promotional articles we identified also implied that the risks associated with hormone therapy have been exaggerated and that the benefits of hormone therapy have been or will be proven. Of the ten authors studied, eight were found to have declared payment for speaking or consulting on behalf of menopausal hormone manufacturers or for research support (seven of these eight were speakers or consultants). Thirty of 32 articles (94%) evaluated as promoting hormone therapy were authored by those with potential financial conflicts of interest, compared to 11 of 18 articles (61%) by those without such conflicts (p = 0.0025). Articles promoting the use of menopausal hormone therapy were 2.41 times (95% confidence interval 1.49–4.93) as likely to have been authored by authors with conflicts of interest as by authors without conflicts of interest. In articles from three authors with conflicts of interest, some of the same text was repeated word-for-word in different articles.
Conclusion
There may be a connection between receiving industry funding for speaking, consulting, or research and the publication of promotional opinion pieces on menopausal hormone therapy.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Over the past three decades, menopausal hormones have been heavily promoted for preventing disease in women. However, the Women's Health Initiative (WHI) study—which enrolled more than 26,000 women in the US and which was published in 2004—found that estrogen-progestin and estrogen-only formulations (often prescribed to women around the age of menopause) increased the risk of stroke, deep vein thrombosis, dementia, and incontinence. Furthermore, this study found that the estrogen-progestin therapy increased rates of breast cancer. In fact, the estrogen-progestin arm of the WHI study was stopped in 2002 due to harmful findings, and the estrogen-only arm was stopped in 2004, also because of harmful findings. In addition, the study also found that neither therapy reduced cardiovascular risk or markedly benefited health-related quality of life measures.
Despite these results, two years after the results of WHI study were published, a survey of over 700 practicing gynecologists—the specialists who prescribe the majority of menopausal hormone therapies—in the US found that almost half did not find the findings of the WHI study convincing and that 48% disagreed with the decision to stop the trial early. Furthermore, follow-up surveys found similar results.
Why Was This Study Done?
It is unclear why gynecologists and other physicians continue to prescribe menopausal hormone therapies despite the results of the WHI. Some academics argue that published industry-funded reviews and commentaries may be designed to convey specific, but subtle, marketing messages and several academic analyses have used internal industry documents disclosed in litigation cases. So this study was conducted to investigate whether hormone therapy–promoting tone could be identified in narrative review articles and if so, whether these articles were more likely to have been authored by people who had accepted funding from hormone manufacturers.
What Did the Researchers Do and Find?
The researchers conducted a comprehensive literature search that identified 340 relevant articles published between July 2002 and June 2006—the four years following the cessation of the estrogen-progestin arm of the women's health initiative study. Ten authors had published four to six articles, 47 authored two or three articles, and 371 authored one article each. The researchers focused on authors who had published four or more articles in the four-year period under study and, after author names and affiliations were removed, 50 articles were evaluated by three readers for scientific accuracy and for tone. After individually analyzing a batch of articles, the readers met to provide their initial assessments, to discuss them, and to reach consensus on tone and scientific accuracy. Then after the papers were evaluated, each author was identified and the researchers searched for authors' potential financial conflicts of interest, defined as publicly disclosed information that the authors had received payment for research, speaking, or consulting on behalf of a manufacturer of menopausal hormone therapy.
Common themes in the 50 articles included arguments that clinical trial results should not guide treatment for individuals and suggestions that the risks associated with hormone therapy have been exaggerated and that the benefits of hormone therapy have been or will be proven. Furthermore, of the ten authors studied, eight were found to have received payment for research, speaking, or consulting on behalf of menopause hormone manufacturers, and 30 of 32 articles evaluated as promoting hormone therapy were authored by those with potential financial conflicts of interest. Articles promoting the use of menopausal hormone therapy were more than twice as likely to have been written by authors with conflicts of interest as by authors without conflicts of interest. Furthermore, three authors who were identified as having financial conflicts of interest were authors on articles in which sections of their previously published articles were repeated word-for-word without citation.
What Do These Findings Mean?
The findings of this study suggest that there may be a link between receiving industry funding for speaking, consulting, or research and the publication of apparently promotional opinion pieces on menopausal hormone therapy. Furthermore, such publications may encourage physicians to continue prescribing these therapies to women of menopausal age. Therefore, physicians and other health care providers should interpret the content of review articles with caution. In addition, medical journals should follow the International Committee of Medical Journal Editors Uniform Requirements for Manuscripts, which require that all authors submit signed statements of their participation in authorship and full disclosure of any conflicts of interest.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000425.
The US National Heart, Lung, and Blood Institute has more information on the Women's Health Initiative
The US National Institutes of Health provide more information about the effects of menopausal hormone replacement therapy
The Office of Women's Health, U.S. Department of Health and Human Services provides information on menopausal hormone therapy
The International Committee of Medical Journal Editors Uniform Requirements for Manuscripts presents Uniform Requirements for Manuscripts published in biomedical journals
The National Women's Health Network, a consumer advocacy group that takes no industry money, has factsheets and articles about menopausal hormone therapy
PharmedOut, a Georgetown University Medical Center project, has many resources on pharmaceutical marketing practices
doi:10.1371/journal.pmed.1000425
PMCID: PMC3058057  PMID: 21423581
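The inter-rater agreement values above (κ) compare observed agreement with the agreement expected by chance: κ = (p_o − p_e) / (1 − p_e). A small sketch for two raters making a binary judgment follows, with invented ratings.

```python
# Sketch: Cohen's kappa for two raters making a binary judgment
# (e.g., "promotional" = 1, "not promotional" = 0). Ratings are invented.
def cohens_kappa(r1, r2):
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n        # observed agreement
    p_e = sum(                                           # chance agreement
        (r1.count(k) / n) * (r2.count(k) / n)
        for k in set(r1) | set(r2)
    )
    return (p_o - p_e) / (1 - p_e)

rater1 = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
rater2 = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
print(round(cohens_kappa(rater1, rater2), 2))  # -> 0.58, "moderate" agreement
```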
24.  Availability of renal literature in six bibliographic databases 
Clinical Kidney Journal  2012;5(6):610-617.
Background
When searching for renal literature, nephrologists must choose between several different bibliographic databases. We compared the availability of renal clinical studies in six major bibliographic databases.
Methods
We gathered 151 renal systematic reviews, which collectively contained 2195 unique citations referencing primary studies in the form of journal articles, meeting articles or meeting abstracts published between 1963 and 2008. We searched for each citation in three subscription-free bibliographic databases (PubMed, Google Scholar and Scirus) and three subscription-based databases (EMBASE, Ovid-MEDLINE and ISI Web of Knowledge). For the subscription-free databases, we determined which full-text journal articles were available free of charge via links to the article source.
Results
The proportion of journal articles contained within each of the six databases ranged from 96 to 97%; results were similar for meeting articles. Availability of meeting abstracts was poor, ranging from 0 to 37% (P < 0.01) with ISI Web of Knowledge containing the largest proportion [37%, 95% confidence interval (95% CI) 32–43%]. Among the subscription-free databases, free access to full-text articles was highest in Google Scholar (38% free, 95% CI 36–41%), and was only marginally higher (39%) when all subscription-free databases were searched. After 2000, free access to full-text articles increased to 49%.
Conclusions
Over 99% of renal clinical journal articles are available in at least one major bibliographic database. Subscription-free databases provide free full-text access to almost half of the articles published after the year 2000, which may be of particular interest to clinicians in settings with limited access to subscription-based resources.
doi:10.1093/ckj/sfs152
PMCID: PMC3506156  PMID: 23185693
bibliographic databases; content coverage; evidence-based medicine; information storage and retrieval; literature searching; renal informatics
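The coverage figures above pair each proportion with a 95% confidence interval. One common construction is the normal-approximation (Wald) interval, p ± 1.96·sqrt(p(1−p)/n); whether the paper used this exact method is not stated in the abstract, and the counts below are hypothetical.

```python
# Sketch: normal-approximation (Wald) 95% CI for a proportion, in the style
# of the "37% (95% CI 32-43%)" figures above. Counts are hypothetical, and
# the paper's exact interval method is not stated in this abstract.
import math

def wald_ci(successes, n, z=1.96):
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, (max(0.0, p - half), min(1.0, p + half))

# Hypothetical: 270 of 730 meeting abstracts found in one database.
p, (lo, hi) = wald_ci(270, 730)
print(f"{p:.0%} (95% CI {lo:.0%}-{hi:.0%})")  # -> 37% (95% CI 33%-40%)
```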
25.  Misrepresentation of Randomized Controlled Trials in Press Releases and News Coverage: A Cohort Study 
PLoS Medicine  2012;9(9):e1001308.
A study conducted by Amélie Yavchitz and colleagues examines the factors associated with “spin” (specific reporting strategies, intentional or unintentional, that emphasize the beneficial effect of treatments) in press releases of clinical trials.
Background
Previous studies indicate that in published reports, trial results can be distorted by the use of “spin” (specific reporting strategies, intentional or unintentional, emphasizing the beneficial effect of the experimental treatment). We aimed to (1) evaluate the presence of “spin” in press releases and associated media coverage; and (2) evaluate whether findings of randomized controlled trials (RCTs) based on press releases and media coverage are misinterpreted.
Methods and Findings
We systematically searched for all press releases indexed in the EurekAlert! database between December 2009 and March 2010. Of the 498 press releases retrieved and screened, we included press releases for all two-arm, parallel-group RCTs (n = 70). We obtained a copy of the scientific article to which the press release related and we systematically searched for related news items using Lexis Nexis.
“Spin,” defined as specific reporting strategies (intentional or unintentional) emphasizing the beneficial effect of the experimental treatment, was identified in 28 (40%) scientific article abstract conclusions and in 33 (47%) press releases. From bivariate and multivariable analyses assessing the journal type, funding source, sample size, type of treatment (drug or other), results of the primary outcomes (all nonstatistically significant versus other), author of the press release, and the presence of “spin” in the abstract conclusion, the only factor associated with “spin” in the press release was “spin” in the article abstract conclusions (relative risk [RR] 5.6 [95% CI 2.8–11.1], p<0.001). Findings of RCTs based on press releases were overestimated for 19 (27%) reports. News items were identified for 41 RCTs; 21 (51%) were reported with “spin,” mainly the same type of “spin” as identified in the press release and article abstract conclusion. Findings of RCTs based on news items were overestimated for ten (24%) reports.
Conclusion
“Spin” was identified in about half of press releases and media coverage. In multivariable analysis, the main factor associated with “spin” in press releases was the presence of “spin” in the article abstract conclusion.
Editors' Summary
Background
The mass media play an important role in disseminating the results of medical research. Every day, news items in newspapers and magazines and on television, radio, and the internet provide the general public with information about the latest clinical studies. Such news items are written by journalists and are often based on information in “press releases.” These short communications, which are posted on online databases such as EurekAlert! and sent directly to journalists, are prepared by researchers or, more often, by the drug companies, funding bodies, or institutions supporting the clinical research, and are designed to attract favorable media attention to newly published research results. Press releases provide journalists with the information they need to develop and publish a news story, including a link to the peer-reviewed journal (a scholarly periodical containing articles that have been judged by independent experts) in which the research results appear.
Why Was This Study Done?
In an ideal world, journal articles, press releases, and news stories would all accurately reflect the results of health research. Unfortunately, the findings of randomized controlled trials (RCTs—studies that compare the outcomes of patients randomly assigned to receive alternative interventions), which are the best way to evaluate new treatments, are sometimes distorted in peer-reviewed journals by the use of “spin”—reporting that emphasizes the beneficial effects of the experimental (new) treatment. For example, a journal article may interpret nonstatistically significant differences as showing the equivalence of two treatments, although such results actually indicate a lack of evidence for the superiority of either treatment. “Spin” can distort the translation of research into clinical practice and, when reproduced in the mass media, can give patients unrealistic expectations about new treatments. It is therefore important to know where “spin” occurs and to understand its effects. In this study, the researchers evaluate the presence of “spin” in press releases and associated media coverage and examine whether relying on press releases and associated news items alone could lead to misinterpretation of RCT results.
What Did the Researchers Do and Find?
The researchers identified 70 press releases indexed in EurekAlert! over a 4-month period that described two-arm, parallel-group RCTs. They used Lexis Nexis, a database of news reports from around the world, to identify associated news items for 41 of these press releases and then analyzed the press releases, news items, and abstracts of the scientific articles related to each press release for “spin”. Finally, they interpreted the results of the RCTs using each source of information independently. Nearly half the press releases and article abstract conclusions contained “spin” and, importantly, “spin” in the press releases was associated with “spin” in the article abstracts. The researchers overestimated the benefits of the experimental treatment from the press release as compared to the full-text peer-reviewed article for 27% of reports. Factors that were associated with this overestimation of treatment benefits included publication in a specialized journal and having “spin” in the press release. Of the news items related to press releases, half contained “spin”, usually of the same type as identified in the press release and article abstract. Finally, the researchers overestimated the benefit of the experimental treatment from the news item as compared to the full-text peer-reviewed article in 24% of cases.
What Do These Findings Mean?
These findings show that “spin” in press releases and news reports is related to the presence of “spin” in the abstract of peer-reviewed reports of RCTs and suggest that the interpretation of RCT results based solely on press releases or media coverage could distort the interpretation of research findings in a way that favors experimental treatments. This interpretation shift is probably related to the presence of “spin” in peer-reviewed article abstracts, press releases, and news items and may be partly responsible for a mismatch between the perceived and real beneficial effects of new treatments among the general public. Overall, these findings highlight the important role that journal reviewers and editors play in disseminating research findings. These individuals, the researchers conclude, have a responsibility to ensure that the conclusions reported in the abstracts of peer-reviewed articles are appropriate and do not over-interpret the results of clinical research.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001308.
The PLOS Hub for Clinical Trials, which collects PLOS journal articles relating to clinical trials, includes some other articles on “spin” in clinical trial reports
EurekAlert! is a free online database of science press releases
The UK National Health Service Choices website includes Beyond the Headlines, a resource that provides an unbiased and evidence-based analysis of health stories that make the news for both the public and health professionals
The US-based organization HealthNewsReview, a project supported by the Foundation for Informed Medical Decision Making, also provides expert reviews of news stories
doi:10.1371/journal.pmed.1001308
PMCID: PMC3439420  PMID: 22984354
