1.  Conflicts of Interest at Medical Journals: The Influence of Industry-Supported Randomised Trials on Journal Impact Factors and Revenue – Cohort Study 
PLoS Medicine  2010;7(10):e1000354.
Andreas Lundh and colleagues investigated the effect of publication of large industry-supported trials on citations and journal income, through reprint sales, in six general medical journals.
Background
Transparency in reporting of conflict of interest is an increasingly important aspect of publication in medical journals. Publication of large industry-supported trials may generate many citations and journal income through reprint sales and thereby be a source of conflicts of interest for journals. We investigated industry-supported trials' influence on journal impact factors and revenue.
Methods and Findings
We sampled six major medical journals (Annals of Internal Medicine, Archives of Internal Medicine, BMJ, JAMA, The Lancet, and New England Journal of Medicine [NEJM]). For each journal, we identified randomised trials published in 1996–1997 and 2005–2006 using PubMed, and categorized the type of financial support. Using Web of Science, we investigated citations of industry-supported trials and the influence on journal impact factors over a ten-year period. We contacted journal editors and retrieved tax information on income from industry sources. The proportion of trials with sole industry support varied between journals, from 7% in BMJ to 32% in NEJM in 2005–2006. Industry-supported trials were more frequently cited than trials with other types of support, and omitting them from the impact factor calculation decreased journal impact factors. The decrease varied considerably between journals, ranging from 1% for BMJ to 15% for NEJM in 2007. For the two journals disclosing data, income from the sales of reprints contributed 3% and 41% of the total income for BMJ and The Lancet, respectively, in 2005–2006.
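The impact factor arithmetic behind this recalculation is easy to illustrate. Below is a minimal Python sketch using invented citation counts (the study's actual counts came from Web of Science); an approximate impact factor is the number of citations in one year to items published in the two preceding years, divided by the number of those items.

```python
# Minimal sketch of the impact factor recalculation; all counts are
# hypothetical (the study used citation data from Web of Science).

def impact_factor(citations: int, citable_items: int) -> float:
    """Approximate impact factor: citations in year Y to items
    published in years Y-1 and Y-2, divided by the item count."""
    return citations / citable_items

# Hypothetical journal: 2005-2006 output, cited during 2007.
total_citations, total_items = 50_000, 1_000
# Hypothetical subset: industry-supported trials and their citations.
industry_citations, industry_items = 12_000, 150

with_trials = impact_factor(total_citations, total_items)
without_trials = impact_factor(total_citations - industry_citations,
                               total_items - industry_items)

print(f"approximate IF with industry trials:    {with_trials:.1f}")
print(f"approximate IF without industry trials: {without_trials:.1f}")
print(f"decrease: {100 * (1 - without_trials / with_trials):.0f}%")
```

Because heavily cited trials contribute many citations while counting as only one citable item each, removing them lowers the ratio, which is the per-journal effect the study quantified.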
Conclusions
Publication of industry-supported trials was associated with an increase in journal impact factors. Sales of reprints may provide a substantial income. We suggest that journals disclose financial information in the same way that they require it from their authors, so that readers can assess the potential effect of different types of papers on journals' revenue and impact.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Medical journals publish many different types of papers that inform doctors about the latest research advances and the latest treatments for their patients. They publish articles that describe laboratory-based research into the causes of diseases and the identification of potential new drugs. They publish the results of early clinical trials in which a few patients are given a potential new drug to check its safety. Finally and most importantly, they publish the results of randomized controlled trials (RCTs). RCTs are studies in which large numbers of patients are randomly allocated to different treatments, without the patient or the clinician knowing the allocation, and the efficacy of the various treatments is compared. RCTs are the best way of determining whether a new drug is effective and have to be completed before a drug can be marketed. Because RCTs are very expensive, they are often supported by drug companies. That is, drug companies provide grants or drugs for the trial or assist with data analysis and/or article preparation.
Why Was This Study Done?
Whenever a medical journal publishes an article, the article's authors have to declare any conflicts of interest such as financial gain from the paper's publication. Conflict of interest statements help readers assess papers—an author who owns the patent for a drug, for example, might put an unduly positive spin on his/her results. The experts who review papers for journals before publication provide similar conflict of interest statements. But what about the journal editors who ultimately decide which papers get published? The International Committee of Medical Journal Editors (ICMJE), which produces medical publishing guidelines, states that: “Editors who make final decisions about manuscripts must have no personal, professional, or financial involvement in any of the issues that they might judge.” However, the publication of industry-supported RCTs might create “indirect” conflicts of interest for journals by boosting the journal's impact factor (a measure of a journal's importance based on how often its articles are cited) and its income through the sale of reprints to drug companies. In this study, the researchers investigate whether the publication of industry-supported RCTs influences the impact factors and finances of six major medical journals.
What Did the Researchers Do and Find?
The researchers determined which RCTs published in the New England Journal of Medicine (NEJM), the British Medical Journal (BMJ), The Lancet, and three other major medical journals in 1996–1997 and 2005–2006 were supported wholly, partly, or not at all by industry. They then used the online academic citation index Web of Science to calculate an approximate impact factor for each journal for 1998 and 2007 and calculated the effect of the published RCTs on the impact factor. The proportion of RCTs with sole industry support varied between journals. Thus, 32% of the RCTs published in the NEJM during both two-year periods had industry support whereas only 7% of the RCTs published in the BMJ in 2005–2006 had industry support. Industry-supported trials were more frequently cited than RCTs with other types of support and omitting industry-supported RCTs from impact factor calculations decreased all the approximate journal impact factors. For example, omitting all RCTs with industry or mixed support decreased the 2007 BMJ and NEJM impact factors by 1% and 15%, respectively. Finally, the researchers asked each journal's editor about their journal's income from industry sources. For the BMJ and The Lancet, the only journals that provided this information, income from reprint sales was 3% and 41%, respectively, of total income in 2005–2006.
What Do These Findings Mean?
These findings show that the publication of industry-supported RCTs was associated with an increase in the approximate impact factors of these six major medical journals. Because these journals publish numerous RCTs, this result may not be generalizable to other journals. These findings also indicate that income from reprint sales can be a substantial proportion of a journal's total income. Importantly, these findings do not imply that the decisions of editors are affected by the possibility that the publication of an industry-supported trial might improve their journal's impact factor or income. Nevertheless, the researchers suggest, journals should live up to the same principles related to conflicts of interest as those that they require from their authors and should routinely disclose information on the source and amount of income that they receive.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000354.
This study is further discussed in a PLoS Medicine Perspective by Harvey Marcovitch
The International Committee of Medical Journal Editors provides information about the publication of medical research, including conflicts of interest
The World Association of Medical Editors also provides information on conflicts of interest in medical journals
Information about impact factors is provided by Thomson Reuters, a provider of intelligent information for businesses and professionals; Thomson Reuters also runs Web of Science
doi:10.1371/journal.pmed.1000354
PMCID: PMC2964336  PMID: 21048986
2.  Conflict of Interest Reporting by Authors Involved in Promotion of Off-Label Drug Use: An Analysis of Journal Disclosures 
PLoS Medicine  2012;9(8):e1001280.
Aaron Kesselheim and colleagues investigate conflict of interest disclosures in articles authored by physicians and scientists identified in whistleblower complaints alleging illegal off-label marketing by pharmaceutical companies.
Background
Litigation documents reveal that pharmaceutical companies have paid physicians to promote off-label uses of their products through a number of different avenues. It is unknown whether physicians and scientists who have such conflicts of interest adequately disclose such relationships in the scientific publications they author.
Methods and Findings
We collected whistleblower complaints alleging illegal off-label marketing from the US Department of Justice and other publicly available sources (date range: 1996–2010). We identified physicians and scientists described in the complaints as having financial relationships with defendant manufacturers, then searched Medline for articles they authored in the subsequent three years. We assessed disclosures made in articles related to the off-label use in question, determined the frequency of adequate disclosure statements, and analyzed characteristics of the authors (specialty, author position) and articles (type, connection to off-label use, journal impact factor, citation count/year). We identified 39 conflicted individuals in whistleblower complaints. They published 404 articles related to the drugs at issue in the whistleblower complaints, only 62 (15%) of which contained an adequate disclosure statement. Most articles had no disclosure (43%) or did not mention the pharmaceutical company (40%). Adequate disclosure rates varied significantly by article type, with commentaries less likely to have adequate disclosure compared to articles reporting original studies or trials (adjusted odds ratio [OR] = 0.10, 95%CI = 0.02–0.67, p = 0.02). Over half of the authors (22/39, 56%) made no adequate disclosures in their articles. However, four of six authors with ≥25 articles disclosed in about one-third of articles (range: 10/36–8/25 [28%–32%]).
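As a rough illustration of the disclosure-rate comparison, the sketch below computes an unadjusted odds ratio with a Woolf 95% confidence interval from a hypothetical 2×2 table; note that the paper reports an adjusted OR from a regression model, and all counts here are invented.

```python
import math

# Hypothetical 2x2 table of adequate disclosure by article type.
a, b = 5, 95    # commentaries: adequate / inadequate disclosure
c, d = 40, 160  # original studies or trials: adequate / inadequate

odds_ratio = (a * d) / (b * c)

# 95% CI on the log-odds scale (Woolf method).
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```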
Conclusions
Authors identified in whistleblower complaints as involved in off-label marketing activities adequately disclosed their conflict of interest in only one in seven subsequent journal publications. This is a much lower rate of adequate disclosure than has been identified in previous studies. The patterns of non-disclosure suggest shortcomings both on the part of authors and in the rigor of journal practices.
Please see later in the article for the Editors' Summary
Editor's Summary
Background
Off-label use of pharmaceuticals is the practice of prescribing a drug for a condition or age group, or in a dose or form of administration, that has not been specifically approved by a formal regulatory body, such as the US Food and Drug Administration (FDA). Off-label prescribing is common all over the world. In the US, although it is legal for doctors to prescribe drugs off-label and discuss such clinical uses with colleagues, it is illegal for pharmaceutical companies to directly promote off-label uses of any of their products. Revenue from off-label uses can be lucrative for drug companies and even surpass the income from approved uses. Therefore, many pharmaceutical companies have paid physicians and scientists to promote off-label use of their products as part of their marketing programs.
Why Was This Study Done?
Recently, a number of pharmaceutical companies have been investigated in the US for illegal marketing programs that promote off-label uses of their products and have had to pay billions of dollars in court settlements. As part of these investigations, doctors and scientists were identified who were paid by the companies to deliver lectures and conduct other activities to support off-label uses. When the same physicians and scientists also wrote articles about these drugs for medical journals, their financial relationships would have constituted clear conflicts of interest that should have been declared alongside the journal articles. So, in this study, the researchers identified such authors, examined their publications, and assessed the adequacy of conflict of interest disclosures made in these publications.
What Did the Researchers Do and Find?
The researchers used disclosed information from the US Department of Justice, media reports, and data from a non-governmental organization that tracks federal fraud actions to find whistleblower complaints alleging illegal off-label promotion. Then they identified the doctors and scientists described in the complaints as having financial relationships with the defendant drug companies and searched Medline for articles authored by these experts in the subsequent three years. Using a four-step approach, the researchers assessed the adequacy of conflict of interest disclosures made in articles relating to the off-label uses in question.
Using these methods, the researchers examined 26 complaints alleging illegal off-label promotion and identified the 91 doctors and scientists recorded as being involved in this practice. The researchers found that 39 (43%) of these 91 experts had authored 404 related publications. In the complaints, these 39 experts were alleged to have engaged in 42 relationships with the relevant drug company: the most common activity was acting as a paid speaker (n = 26, 62%); other activities included writing reviews or articles on behalf of the company (n = 7), acting as consultants or advisory board members (n = 3), and receiving gifts/honoraria (n = 3), research support funds (n = 2), and educational support funds (n = 1). However, the researchers found that only 62 (15%) of the 404 related articles had adequate disclosures: 43% (148) had no disclosure at all, 4% had statements denying any conflicts of interest, 40% had disclosures that did not mention the drug manufacturer, and 13% had disclosures that mentioned the manufacturer but inadequately conveyed the nature of the relationship between author and drug manufacturer reported in the complaint. The researchers also found that adequate disclosure rates varied significantly by article type, with commentaries significantly less likely to have adequate disclosure compared to articles reporting studies or trials.
What Do These Findings Mean?
These findings show substantial deficiencies in the adequacy of conflict-of-interest disclosures made by authors who had been paid by pharmaceutical manufacturers as part of off-label marketing activities: a conflict of interest was adequately disclosed in only one in seven of the relevant published articles. This low figure is troubling and suggests that approaches to controlling the effects of conflicts of interest that rely on author candidness are inadequate; furthermore, journal practices are not robust enough and need to be improved. In the meantime, readers have no option but to interpret conflict of interest disclosures, particularly in relation to off-label uses, with caution.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001280.
The US FDA provides a guide on the use of off-label drugs
The US Agency for Healthcare Research and Quality offers a patient guide to off-label drugs
ProPublica offers a web-based tool to identify physicians who have financial relationships with certain pharmaceutical companies
Wikipedia has a good description of off-label drug use (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The Institute for Medicine as a Profession maintains a list of policies regulating physicians' financial relationships that are in place at US-based academic medical centers
doi:10.1371/journal.pmed.1001280
PMCID: PMC3413710  PMID: 22899894
3.  Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals 
PLoS Medicine  2013;10(12):e1001566.
Agnes Dechartres and colleagues searched ClinicalTrials.gov for completed drug RCTs with results reported and then searched for corresponding studies in PubMed to evaluate timeliness and completeness of reporting.
Please see later in the article for the Editors' Summary
Background
The US Food and Drug Administration Amendments Act requires results from clinical trials of Food and Drug Administration–approved drugs to be posted at ClinicalTrials.gov within 1 y after trial completion. We compared the timing and completeness of results of drug trials posted at ClinicalTrials.gov and published in journals.
Methods and Findings
We searched ClinicalTrials.gov on March 27, 2012, for randomized controlled trials of drugs with posted results. For a random sample of these trials, we searched PubMed for corresponding publications. Data were extracted independently from ClinicalTrials.gov and from the published articles for trials with results both posted and published. We assessed the time to first public posting or publishing of results and compared the completeness of results posted at ClinicalTrials.gov versus published in journal articles. Completeness was defined as the reporting of all key elements, according to three experts, for the flow of participants, efficacy results, adverse events, and serious adverse events (e.g., for adverse events, reporting of the number of adverse events per arm, without restriction to statistically significant differences between arms for all randomized patients or for those who received at least one treatment dose).
Of the 600 trials with results posted at ClinicalTrials.gov that we randomly sampled, 50% (n = 297) had no corresponding published article. For trials with both posted and published results (n = 202), the median time between primary completion date and first results publicly posted was 19 mo (first quartile = 14, third quartile = 30 mo), and the median time between primary completion date and journal publication was 21 mo (first quartile = 14, third quartile = 28 mo). Reporting was significantly more complete at ClinicalTrials.gov than in the published article for the flow of participants (64% versus 48% of trials, p<0.001), efficacy results (79% versus 69%, p = 0.02), adverse events (73% versus 45%, p<0.001), and serious adverse events (99% versus 63%, p<0.001).
The main study limitation was that we considered only the publication describing the results for the primary outcomes.
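The timing analysis reduces to date arithmetic over paired records. Here is a minimal pandas sketch, assuming invented dates; in the study, the completion, posting, and publication dates were extracted from ClinicalTrials.gov and the matching PubMed publications.

```python
import pandas as pd

# Hypothetical trial records (the study extracted these dates from
# ClinicalTrials.gov and the corresponding journal publications).
trials = pd.DataFrame({
    "completion": pd.to_datetime(["2009-01-15", "2009-06-01", "2010-03-10"]),
    "posted":     pd.to_datetime(["2010-07-20", "2011-02-11", "2011-09-01"]),
    "published":  pd.to_datetime(["2010-11-02", "2011-04-30", "2012-01-15"]),
})

for source in ("posted", "published"):
    # Elapsed time in months (average month length of 30.44 days).
    months = (trials[source] - trials["completion"]).dt.days / 30.44
    q1, med, q3 = months.quantile([0.25, 0.5, 0.75])
    print(f"{source}: median {med:.0f} mo (Q1 {q1:.0f}, Q3 {q3:.0f})")
```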
Conclusions
Our results highlight the need to search ClinicalTrials.gov for both unpublished and published trials. Trial results, especially serious adverse events, are more completely reported at ClinicalTrials.gov than in the published article.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
When patients consult a doctor, they expect to be recommended what their doctor believes is the most effective treatment with the fewest adverse effects. To determine which treatment to recommend, clinicians rely on sources that include research studies. Among studies, the best evidence is generally agreed to come from systematic reviews and randomized controlled clinical trials (RCTs), studies that test the efficacy and safety of medical interventions by comparing clinical outcomes in groups of patients randomly chosen to receive different interventions. Decision-making based on the best available evidence is called evidence-based medicine. However, evidence-based medicine can only guide clinicians if trial results are published in a timely and complete manner. Unfortunately, underreporting of trials is common. For example, an RCT in which a new drug performs better than existing drugs is more likely to be published than one in which the new drug performs badly or has unwanted adverse effects (publication bias). There can also be a delay in publishing the results of negative trials (time-lag bias) or a failure to publish complete results for all the prespecified outcomes of a trial (reporting bias). All three types of bias threaten informed medical decision-making and the health of patients.
Why Was This Study Done?
One initiative that aims to prevent these biases was included in the 2007 US Food and Drug Administration Amendments Act (FDAAA). The Food and Drug Administration (FDA) is responsible for approving drugs and devices that are marketed in the US. The FDAAA requires that results from clinical trials of FDA-approved drugs and devices conducted in the United States be made publicly available at ClinicalTrials.gov within one year of trial completion. ClinicalTrials.gov—a web-based registry that includes US and international clinical trials—was established in 2000 in response to the 1997 FDA Modernization Act, which required mandatory registration of trial titles and designs and of the conditions and interventions under study. The FDAAA expanded these mandatory requirements by requiring researchers studying FDA-approved drugs and devices to report additional information such as the baseline characteristics of the participants in each arm of the trial and the results of primary and secondary outcome measures (the effects of the intervention on predefined clinical measurements) and their statistical significance (an indication of whether differences in outcomes might have happened by chance). Researchers of other trials registered in ClinicalTrials.gov are welcome to post trial results as well. Here, the researchers compare the timing and completeness (i.e., whether all relevant information was fully reported) of results of drug trials posted at ClinicalTrials.gov with those published in medical journals.
What Did the Researchers Do and Find?
The researchers searched ClinicalTrials.gov for reports of completed phase III and IV (late-stage) RCTs of drugs with posted results. For a random sample of 600 eligible trials, they searched PubMed (a database of biomedical publications) for corresponding publications. Only 50% of trials with results posted at ClinicalTrials.gov had a matching published article. For 202 trials with both posted and published results, the researchers compared the timing and completeness of the results posted at ClinicalTrials.gov and of results reported in the corresponding journal publication. The median time between the study completion date and the first results being publicly posted at ClinicalTrials.gov was 19 months, whereas the time between completion and publication in a journal was 21 months. The flow of participants through trials was completely reported in 64% of the ClinicalTrials.gov postings but in only 48% of the corresponding publications. Results for the primary outcome measure were completely reported in 79% and 69% of the ClinicalTrials.gov postings and corresponding publications, respectively. Finally, adverse events were completely reported in 73% of the ClinicalTrials.gov postings but in only 45% of the corresponding publications, and serious adverse events were reported in 99% and 63% of the ClinicalTrials.gov postings and corresponding publications, respectively.
What Do These Findings Mean?
These findings suggest that the reporting of trial results is significantly more complete at ClinicalTrials.gov than in published journal articles reporting the main trial results. Certain aspects of this study may affect the accuracy of this conclusion. For example, the researchers compared the results posted at ClinicalTrials.gov only with the results in the publication that described the primary outcome of each trial, even though some trials had multiple publications. Importantly, these findings suggest that, to enable patients and physicians to make informed treatment decisions, experts undertaking assessments of drugs should consider seeking efficacy and safety data posted at ClinicalTrials.gov, both for trials whose results are not published yet and for trials whose results are published. Moreover, they suggest that the use of templates to guide standardized reporting of trial results in journals and broader mandatory posting of results may help to improve the reporting and transparency of clinical trials and, consequently, the evidence available to inform treatment of patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001566.
Wikipedia has pages on evidence-based medicine and on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The US Food and Drug Administration provides information about drug approval in the US for consumers and health-care professionals, plus detailed information on the 2007 Food and Drug Administration Amendments Act
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials, and a fact sheet detailing the requirements of the 2007 Food and Drug Administration Amendments Act
PLOS Medicine recently launched a Reporting Guidelines Collection, an open access collection of reporting guidelines, commentary, and related research on guidelines from across PLOS journals that aims to help advance the efficiency, effectiveness, and equitability of the dissemination of biomedical information; a 2008 PLOS Medicine editorial discusses the 2007 Food and Drug Administration Amendments Act
doi:10.1371/journal.pmed.1001566
PMCID: PMC3849189  PMID: 24311990
4.  Conflict of Interest Disclosure Policies and Practices in Peer-reviewed Biomedical Journals 
Journal of General Internal Medicine  2006;21(12):1248-1252.
OBJECTIVE
We undertook this investigation to characterize conflict of interest (COI) policies of biomedical journals with respect to authors, peer-reviewers, and editors, and to ascertain what information about COI disclosures is publicly available.
METHODS
We performed a cross-sectional survey of a convenience sample of 135 editors of peer-reviewed biomedical journals that publish original research. We chose an international selection of general and specialty medical journals that publish in English. Selection was based on journal impact factor, and the recommendations of experts in the field. We developed and pilot tested a 3-part web-based survey. The survey included questions about the presence of specific policies for authors, peer-reviewers, and editors, specific restrictions on authors, peer-reviewers, and editors based on COI, and the public availability of these disclosures. Editors were contacted a minimum of 3 times.
RESULTS
The response rate for the survey was 91 (67%) of 135, and 85 (93%) of 91 journals reported having an author COI policy. Ten (11%) journals reported that they restrict author submissions based on COI (e.g., drug company authors' papers on their products are not accepted). While 77% report collecting COI information on all author submissions, only 57% publish all author disclosures. A minority of journals report having a specific policy on peer-reviewer COI (46%; 42/91) or editor COI (40%; 36/91); among these, 25% and 31% of journals, respectively, state that they require recusal of peer-reviewers and editors if they report a COI. Only 3% of respondents publish COI disclosures of peer-reviewers, and 12% publish editor COI disclosures, while 11% and 24%, respectively, reported that this information is available upon request.
CONCLUSION
Many more journals have a policy regarding COI for authors than they do for peer-reviewers or editors. Even author COI policies are variable, depending on the type of manuscript submitted. The COI information that is collected by journals is often not published; the extent to which such “secret disclosure” may impact the integrity of the journal or the published work is not known.
doi:10.1111/j.1525-1497.2006.00598.x
PMCID: PMC1924760  PMID: 17105524
conflict of interest; disclosure; peer-review; editorial policy
5.  Empirical Study of Data Sharing by Authors Publishing in PLoS Journals 
PLoS ONE  2009;4(9):e7078.
Background
Many journals now require that authors share their data with other investigators, either by depositing the data in a public repository or by making it freely available upon request. These policies are explicit, but remain largely untested. We sought to determine how well authors comply with such policies by requesting data from authors who had published in one of two journals with clear data sharing policies.
Methods and Findings
We requested data from ten investigators who had published in either PLoS Medicine or PLoS Clinical Trials. All responses were carefully documented. In the event that we were refused data, we reminded authors of the journal's data sharing guidelines. If we did not receive a response to our initial request, a second request was made. Following the ten requests for raw data, three investigators did not respond, four authors responded and refused to share their data, two email addresses were no longer valid, and one author requested further details. A reminder of PLoS's explicit requirement that authors share data did not change the reply from the four authors who initially refused. Only one author sent an original data set.
Conclusions
We received only one of ten raw data sets requested. This suggests that journal policies requiring data sharing do not lead to authors making their data sets available to independent investigators.
doi:10.1371/journal.pone.0007078
PMCID: PMC2739314  PMID: 19763261
6.  Are pediatric Open Access journals promoting good publication practice? An analysis of author instructions 
BMC Pediatrics  2011;11:27.
Background
Several studies analyzed whether conventional journals in general medicine or specialties such as pediatrics endorse recommendations aiming to improve publication practice. Despite evidence showing benefits of these recommendations, the proportion of endorsing journals has been moderate to low and varied considerably for different recommendations. About half of pediatric journals indexed in the Journal Citation Reports referred to the Uniform Requirements for Manuscripts of the International Committee of Medical Journal Editors (ICMJE) but only about a quarter recommended registration of trials. We aimed to investigate to what extent pediatric open-access (OA) journals endorse these recommendations. We hypothesized that a high proportion of these journals have adopted recommendations on good publication practice since OA electronic publishing has been associated with a number of editorial innovations aiming at improved access and transparency.
Methods
We identified 41 journals publishing original research in the subject category "Health Sciences, Medicine (General), Pediatrics" of the Directory of Open Access Journals (http://www.doaj.org). From the journals' online author instructions we extracted information regarding endorsement of four domains of editorial policy: the Uniform Requirements for Manuscripts, trial registration, disclosure of conflicts of interest, and five major reporting guidelines such as the CONSORT (Consolidated Standards of Reporting Trials) statement. Two investigators collected data independently.
Results
The Uniform Requirements were mentioned by 27 (66%) pediatric OA journals. Thirteen (32%) required or recommended trial registration prior to publication of a trial report. Conflict of interest policies were stated by 25 journals (61%). Advice about reporting guidelines was less frequent: CONSORT was referred to by 12 journals (29%) followed by other reporting guidelines (MOOSE, PRISMA or STARD) (8 journals, 20%) and STROBE (3 journals, 7%). The EQUATOR network, a platform of several guideline initiatives, was acknowledged by 4 journals (10%).
Journals published by OA publishing houses gave more guidance than journals published by professional societies or other publishers.
Conclusions
Pediatric OA journals mentioned certain recommendations such as the Uniform Requirements or trial registration more frequently than conventional journals; however, endorsement is still only moderate. Further research should confirm these exploratory findings in other medical fields and should clarify what the motivations and barriers are in implementing such policies.
doi:10.1186/1471-2431-11-27
PMCID: PMC3084157  PMID: 21477335
7.  Endorsement of the CONSORT Statement by high impact factor medical journals: a survey of journal editors and journal 'Instructions to Authors' 
Trials  2008;9:20.
Background
The CONSORT Statement provides recommendations for reporting randomized controlled trials. We assessed the extent to which leading medical journals that publish reports of randomized trials incorporate the CONSORT recommendations into their journal and editorial processes.
Methods
This article reports on two observational studies. Study 1: We examined the online version of 'Instructions to Authors' for 165 high impact factor medical journals and extracted all text mentioning the CONSORT Statement or CONSORT extension papers. Any mention of the International Committee of Medical Journal Editors (ICMJE) or clinical trial registration were also sought and extracted. Study 2: We surveyed the editor-in-chief, or editorial office, for each of the 165 journals about their journal's endorsement of CONSORT recommendations and its incorporation into their editorial and peer-review processes.
Results
Study 1: Thirty-eight percent (62/165) of journals mentioned the CONSORT Statement in their online 'Instructions to Authors'; of these, 37% (23/62) stated this was a requirement, while 63% (39/62) were less clear in their recommendations. Very few journals mentioned the CONSORT extension papers. Journals that referred to CONSORT were more likely to refer to ICMJE guidelines (RR 2.16; 95% CI 1.51 to 3.08) and clinical trial registration (RR 3.67; 95% CI 2.36 to 5.71) than journals that did not.
Study 2: Thirty-nine percent (64/165) of journals responded to the online survey; the majority of respondents were journal editors. Eighty-eight percent (50/57) of journals recommended authors comply with the CONSORT Statement; 62% (35/56) said they would require this. Forty-one percent (22/53) reported incorporating CONSORT into their peer-review process and 47% (25/53) into their editorial process. Eighty-one percent (47/58) reported including CONSORT in their 'Instructions to Authors', although there was some inconsistency when cross-checking information on the journal's website. Sixty-nine percent (31/45) of journals recommended authors comply with the CONSORT extension for cluster trials, 60% (27/45) for harms, and 42% (19/45) for non-inferiority and equivalence trials. Few journals mentioned these extensions in their 'Instructions to Authors'.
Conclusion
Journals should be more explicit in their recommendations and expectations of authors regarding the CONSORT Statement and the related CONSORT extension papers.
doi:10.1186/1745-6215-9-20
PMCID: PMC2359733  PMID: 18423021
8.  Misconduct Policies in High-Impact Biomedical Journals 
PLoS ONE  2012;7(12):e51928.
Background
It is not clear which research misconduct policies are adopted by biomedical journals. This study assessed the prevalence and content of the most influential biomedical journals' policies on misconduct and of their procedures for handling and responding to allegations of misconduct.
Methods
We conducted a cross-sectional study of misconduct policies of 399 high-impact biomedical journals in 27 biomedical categories of the Journal Citation Reports in December 2011. Journal websites were reviewed for information relevant to misconduct policies.
Results
Of 399 journals, 140 (35.1%) provided explicit definitions of misconduct. Falsification was explicitly mentioned by 113 (28.3%) journals, fabrication by 104 (26.1%), plagiarism by 224 (56.1%), duplication by 242 (60.7%), and image manipulation by 154 (38.6%). Procedures for responding to misconduct were described on 179 (44.9%) websites, including retraction (30.8%) and expression of concern (16.3%). Plagiarism-checking services were used by 112 (28.1%) journals. The prevalences of all types of misconduct policies were higher in journals that endorsed any policy from editors' associations, the Office of Research Integrity, or professional societies compared to those that did not state adherence to these policy-producing bodies. Elsevier and Wiley-Blackwell had the most journals included (22.6% and 14.8%, respectively), with Wiley journals having a greater prevalence of misconduct definitions and of policies on falsification, fabrication, and expression of concern, and Elsevier journals a greater prevalence of plagiarism-checking services.
Conclusions
Only a third of top-ranking peer-reviewed journals had publicly available definitions of misconduct, and less than half described procedures for handling allegations of misconduct. As endorsement of international policies from policy-producing bodies was positively associated with implementation of policies and procedures, journals and their publishers should standardize their policies globally in order to increase public trust in the integrity of the published record in biomedicine.
doi:10.1371/journal.pone.0051928
PMCID: PMC3526485  PMID: 23284820
9.  Surgical trials and trial registers: a cross-sectional study of randomized controlled trials published in journals requiring trial registration in the author instructions 
Trials  2013;14:407.
Background
Trial registration and the reporting of trial results are essential to increase transparency in clinical research. Although both have been strongly promoted in recent years, it remains unclear whether they have been successfully implemented in surgery and surgery-related disciplines. In this cross-sectional study, we assessed whether randomized controlled trials (RCTs) published in surgery journals requiring trial registration in their author instructions were indeed registered, and whether the study results of registered RCTs had been submitted to the trial register and were thus publicly available.
Methods
The ten highest ranked surgery journals requiring trial registration by impact factor (Journal Citation Reports, JCR, 2011) were chosen. We then searched MEDLINE (in PubMed) for RCTs published in the selected journals between 1 June 2012 and 31 December 2012. Any trials recruiting participants before 2004 were excluded because the International Committee of Medical Journal Editors (ICMJE) first proposed trial registration in 2004. We then searched the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) to assess whether the identified RCTs were indeed registered and whether the results of the registered RCTs were available in the register.
Results
The search retrieved 588 citations. Four hundred and sixty references were excluded in the first screening. A further 25 were excluded after full-text screening. A total of 103 RCTs were finally included. Eighty-five of these RCTs (83%) could be found via the ICTRP. For 7 of 59 (12%) RCTs, which were registered on ClinicalTrials.gov, summary study data had been posted in the results database.
Conclusions
Although still not fully implemented, trial registration in surgery has gained momentum. In general, however, the submission of summary study data to ClinicalTrials.gov remains poor.
doi:10.1186/1745-6215-14-407
PMCID: PMC4220812  PMID: 24289719
Trial registration; Randomized controlled trials; Surgery journals; Results reporting
10.  Relationship between Quality and Editorial Leadership of Biomedical Research Journals: A Comparative Study of Italian and UK Journals 
PLoS ONE  2008;3(7):e2512.
Background
The quality of biomedical reporting is guided by statements of several organizations. Although not all journals adhere to these guidelines, those that do demonstrate “editorial leadership” in their author community. To investigate a possible relationship between editorial leadership and journal quality, research journals from two European countries, one Anglophone and one non-Anglophone, were studied and compared. Quality was measured on a panel of bibliometric parameters while editorial leadership was evaluated from journals' instructions to authors.
Methodology/Principal Findings
The study considered all 76 Italian journals indexed in Medline and 76 randomly chosen UK journals; only journals both edited and published in these countries were studied. Compared to UK journals, Italian journals published fewer papers (median, 60 vs. 93; p = 0.006), less often had online archives (43 vs. 74; p<0.001) and had lower median values of impact factor (1.2 vs. 2.7, p<0.001) and SCImago journal rank (0.09 vs. 0.25, p<0.001). Regarding editorial leadership, Italian journals less frequently required manuscripts to specify competing interests (p<0.001), authors' contributions (p = 0.005), funding (p<0.001), informed consent (p<0.001), ethics committee review (p<0.001). No Italian journal adhered to COPE or the CONSORT and QUOROM statements nor required clinical trial registration, while these characteristics were observed in 15%–43% of UK journals (p<0.001). At multiple regression, editorial leadership predicted 37.1%–49.9% of the variance in journal quality defined by citation statistics (p<0.0001); confounding variables inherent to a cross-cultural comparison had a relatively small contribution, explaining an additional 6.2%–13.8% of the variance.
Conclusions/Significance
Journals from Italy scored worse for quality and editorial leadership than did their UK counterparts. Editorial leadership predicted quality for the entire set of journals. Greater appreciation of international initiatives to improve biomedical reporting may help low-quality journals achieve higher status.
doi:10.1371/journal.pone.0002512
PMCID: PMC2438474  PMID: 18596938
11.  Completeness of Reporting of Patient-Relevant Clinical Trial Outcomes: Comparison of Unpublished Clinical Study Reports with Publicly Available Data 
PLoS Medicine  2013;10(10):e1001526.
Beate Wieseler and colleagues compare the completeness of reporting of patient-relevant clinical trial outcomes between clinical study reports and publicly available data.
Please see later in the article for the Editors' Summary
Background
Access to unpublished clinical study reports (CSRs) is currently being discussed as a means to allow unbiased evaluation of clinical research. The Institute for Quality and Efficiency in Health Care (IQWiG) routinely requests CSRs from manufacturers for its drug assessments.
Our objective was to determine the information gain from CSRs compared to publicly available sources (journal publications and registry reports) for patient-relevant outcomes included in IQWiG health technology assessments (HTAs) of drugs.
Methods and Findings
We used a sample of 101 trials with full CSRs received for 16 HTAs of drugs completed by IQWiG between 15 January 2006 and 14 February 2011, and analyzed the CSRs and the publicly available sources of these trials. For each document type we assessed the completeness of information on all patient-relevant outcomes included in the HTAs (benefit outcomes, e.g., mortality, symptoms, and health-related quality of life; harm outcomes, e.g., adverse events). We dichotomized the outcomes as “completely reported” or “incompletely reported.” For each document type, we calculated the proportion of outcomes with complete information per outcome category and overall.
We analyzed 101 trials with CSRs; 86 had at least one publicly available source, 65 at least one journal publication, and 50 a registry report. The trials included 1,080 patient-relevant outcomes. The CSRs provided complete information on a considerably higher proportion of outcomes (86%) than the combined publicly available sources (39%). With the exception of health-related quality of life (57%), CSRs provided complete information on 78% to 100% of the various benefit outcomes (combined publicly available sources: 20% to 53%). CSRs also provided considerably more information on harms. The differences in completeness of information for patient-relevant outcomes between CSRs and journal publications or registry reports (or a combination of both) were statistically significant for all types of outcomes.
The main limitation of our study is that our sample is not representative because only CSRs provided voluntarily by pharmaceutical companies upon request could be assessed. In addition, the sample covered only a limited number of therapeutic areas and was restricted to randomized controlled trials investigating drugs.
Conclusions
In contrast to CSRs, publicly available sources provide insufficient information on patient-relevant outcomes of clinical trials. CSRs should therefore be made publicly available.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
People assume that, when they are ill, health care professionals will ensure that they get the best available treatment. In the past, clinicians used their own experience to make decisions about which treatments to offer their patients, but nowadays, they rely on evidence-based medicine—the systematic review and appraisal of clinical trials, studies that investigate the benefits and harms of drugs and other medical interventions in patients. Evidence-based medicine can guide clinicians, however, only if all the results of clinical research are available for evaluation. Unfortunately, the results of trials in which a new drug performs better than existing drugs are more likely to be published than those in which the new drug performs badly or has unwanted side effects (publication bias). Moreover, trial outcomes that support the use of a new treatment are more likely to be published than those that do not support its use (outcome reporting bias). Both types of bias pose a substantial threat to informed medical decision-making.
Why Was This Study Done?
Recent initiatives, such as making registration of clinical trials in a trial registry (for example, ClinicalTrials.gov) a precondition for publication in medical journals, aim to prevent these biases but are imperfect. Another way to facilitate the unbiased evaluation of clinical research might be to increase access to clinical study reports (CSRs)—detailed but generally unpublished accounts of clinical trials. Notably, information from CSRs was recently used to challenge conclusions based on published evidence about the efficacy and safety of the antiviral drug oseltamivir and the antidepressant reboxetine. In this study, the researchers compare the information available in CSRs and in publicly available sources (journal publications and registry reports) for the patient-relevant outcomes included in 16 health technology assessments (HTAs; analyses of the medical implications of the use of specific medical technologies) for drugs; the HTAs were prepared by the Institute for Quality and Efficiency in Health Care (IQWiG), Germany's main HTA agency.
What Did the Researchers Do and Find?
The researchers searched for published journal articles and registry reports for each of 101 trials for which the IQWiG had requested and received full CSRs from drug manufacturers during HTA preparation. They then assessed the completeness of information on the patient-relevant benefit and harm outcomes (for example symptom relief and adverse effects, respectively) included in each document type. Eighty-six of the included trials had at least one publicly available data source; the results of 15% of the trials were not available in either journals or registry reports. Overall, the CSRs provided complete information on 86% of the patient-related outcomes, whereas the combined publicly available sources provided complete information on only 39% of the outcomes. For individual outcomes, the CSRs provided complete information on 78%–100% of the benefit outcomes, with the exception of health-related quality of life (57%); combined publicly available sources provided complete information on 20%–53% of these outcomes. The CSRs also provided more information on patient-relevant harm outcomes than the publicly available sources.
What Do These Findings Mean?
These findings show that, for the clinical trials considered here, publicly available sources provide much less information on patient-relevant outcomes than CSRs. The generalizability of these findings may be limited, however, because the trials included in this study are not representative of all trials. Specifically, only CSRs that were voluntarily provided by drug companies were assessed, a limited number of therapeutic areas were covered by the trials, and the trials investigated only drugs. Nevertheless, these findings suggest that access to CSRs is important for the unbiased evaluation of clinical trials and for informed decision-making in health care. Notably, in June 2013, the European Medicines Agency released a draft policy calling for the proactive publication of complete clinical trial data (possibly including CSRs). In addition, the European Union and the European Commission are considering legal measures to improve the transparency of clinical trial data. Both these initiatives will probably only apply to drugs that are approved after January 2014, however, and not to drugs already in use. The researchers therefore call for CSRs to be made publicly available for both past and future trials, a recommendation also supported by the AllTrials initiative, which is campaigning for all clinical trials to be registered and fully reported.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001526.
Wikipedia has pages on evidence-based medicine, publication bias, and health technology assessment (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The ClinicalTrials.gov website is a searchable register of federally and privately supported clinical trials in the US; it provides information about all aspects of clinical trials
The European Medicines Agency (EMA) provides information about all aspects of the scientific evaluation and approval of new medicines in the European Union, and guidance on the preparation of clinical study reports; its draft policy on the release of data from clinical trials is available
Information about IQWiG is available (in English and German); Informed Health Online is a website provided by IQWiG that provides objective, independent, and evidence-based information for patients (also in English and German)
doi:10.1371/journal.pmed.1001526
PMCID: PMC3793003  PMID: 24115912
12.  Toward Reproducible Computational Research: An Empirical Analysis of Data and Code Policy Adoption by Journals 
PLoS ONE  2013;8(6):e67111.
Journal policy on research data and code availability is an important part of the ongoing shift toward publishing reproducible computational science. This article extends the literature by studying journal data sharing policies by year (for both 2011 and 2012) for a referent set of 170 journals. We make a further contribution by evaluating code sharing policies, supplemental materials policies, and open access status for these 170 journals for each of 2011 and 2012. We build a predictive model of open data and code policy adoption as a function of impact factor and publisher, and find that higher-impact journals are more likely to have open data and code policies and that scientific societies are more likely than commercial publishers to have them. We also find that open data policies tend to lead open code policies, and we find no relationship between open data and code policies and either supplemental materials policies or open access journal status. Of the journals in this study, 38% had a data policy, 22% had a code policy, and 66% had a supplemental materials policy as of June 2012. This reflects a striking one-year increase of 16% in the number of data policies, a 30% increase in code policies, and a 7% increase in the number of supplemental materials policies. We introduce a new dataset to the community that categorizes data and code sharing, supplemental materials, and open access policies in 2011 and 2012 for these 170 journals.
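As a rough sketch of such a predictive model, the Python code below fits a logistic regression of open-data-policy adoption on impact factor and publisher type. All records are invented, and the study's actual variables, coding, and estimator may well differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical journal-level records.
# Columns: [impact factor, published by a scientific society (1) or not (0)]
X = np.array([
    [12.3, 1], [2.1, 0], [8.7, 0], [15.0, 1], [1.4, 0], [3.2, 1],
    [9.9, 1], [2.8, 0], [6.5, 1], [1.9, 0], [11.2, 0], [4.4, 1],
])
# 1 = journal has an open data policy
y = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)
# exp(coefficient) approximates the odds ratio per unit of each predictor.
print("odds ratios [impact factor, society]:", np.exp(model.coef_[0]))
```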
doi:10.1371/journal.pone.0067111
PMCID: PMC3689732  PMID: 23805293
13.  Evaluating adherence to the International Committee of Medical Journal Editors’ policy of mandatory, timely clinical trial registration 
Objective
To determine whether two specific criteria in the Uniform Requirements for Manuscripts (URM) created by the International Committee of Medical Journal Editors (ICMJE), namely inclusion of the trial registration ID within manuscripts and timely registration of trials, are being followed.
Materials and methods
Observational study using computerized analysis of publicly available Medline article data and clinical trial registry data. We analyzed a purposive set of five ICMJE founding journals, looking at all trial articles published in those journals during 2010–2011, and data from the ClinicalTrials.gov (CTG) trial registry. We measured adherence to the trial ID inclusion policy as the percentage of trial journal articles that contained a valid trial ID within the article (journal-based sample). Adherence to timely registration was measured as the percentage of trials registered before enrollment of the first participant, allowing a 60-day grace period. We also examined timely registration rates by year for all phase II and higher interventional trials in CTG (registry-based sample).
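A minimal sketch of the timely-registration test, assuming invented dates (the study drew registration and first-enrollment dates from the ClinicalTrials.gov registry):

```python
from datetime import date, timedelta

GRACE = timedelta(days=60)  # grace period applied in the study

def timely(registration: date, first_enrollment: date) -> bool:
    """A trial counts as timely registered if registration occurred no
    more than 60 days after enrollment of the first participant."""
    return registration <= first_enrollment + GRACE

# Hypothetical trials: (registration date, first-enrollment date).
trials = [
    (date(2010, 1, 5),  date(2010, 2, 1)),   # registered before enrollment
    (date(2010, 6, 20), date(2010, 5, 1)),   # 50 days late: within grace
    (date(2011, 3, 1),  date(2010, 9, 1)),   # months late: not timely
]

rate = sum(timely(r, e) for r, e in trials) / len(trials)
print(f"timely registration rate: {rate:.0%}")
```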
Results
To determine trial ID inclusion, we analyzed 698 clinical trial articles in five journals. A total of 95.8% (661/690) of trial journal articles included the trial ID. In 88.3% the trial-article link is stored within a structured Medline field. To evaluate timely registration, we analyzed trials referenced by 451 articles from the selected five journals. A total of 60% (272/451) of articles were registered in a timely manner with an improving trend for trials initiated in later years (eg, 89% of trials that began in 2008 were registered in a timely manner). In the registry-based sample, the timely registration rates ranged from 56% for trials registered in 2006 to 72% for trials registered in 2011.
Discussion
Adherence to URM requirements for registration and trial ID inclusion increases the utility of PubMed and links it in an important way to clinical trial repositories. This new integrated knowledge source can facilitate research prioritization, clinical guidelines creation, and precision medicine.
Conclusions
The five selected journals adhere well to the policy of mandatory trial registration and also outperform the registry in adherence to timely registration. ICMJE's URM policy represents a unique international mandate that may be providing a powerful incentive for sponsors and investigators to document clinical trials and trial result publications and thus fulfill important obligations to trial participants and society.
doi:10.1136/amiajnl-2012-001501
PMCID: PMC3715364  PMID: 23396544
clinical trials as topic/legislation; registries; cross-sectional analysis; Databases; publication policy; trial registration
14.  Reporting Bias in Drug Trials Submitted to the Food and Drug Administration: Review of Publication and Presentation 
PLoS Medicine  2008;5(11):e217.
Background
Previous studies of drug trials submitted to regulatory authorities have documented selective reporting of both entire trials and favorable results. The objective of this study is to determine the publication rate of efficacy trials submitted to the Food and Drug Administration (FDA) in approved New Drug Applications (NDAs) and to compare the trial characteristics as reported by the FDA with those reported in publications.
Methods and Findings
This is an observational study of all efficacy trials found in approved NDAs for New Molecular Entities (NMEs) from 2001 to 2002 inclusive and all published clinical trials corresponding to the trials within the NDAs. For each trial included in the NDA, we assessed its publication status, primary outcome(s) reported and their statistical significance, and conclusions. Seventy-eight percent (128/164) of efficacy trials contained in FDA reviews of NDAs were published. In a multivariate model, trials with favorable primary outcomes (OR = 4.7, 95% confidence interval [CI] 1.33–17.1, p = 0.018) and active controls (OR = 3.4, 95% CI 1.02–11.2, p = 0.047) were more likely to be published. Forty-one primary outcomes from the NDAs were omitted from the papers. Papers included 155 outcomes that were in the NDAs, 15 additional outcomes that favored the test drug, and two other neutral or unknown additional outcomes. Excluding outcomes with unknown significance, there were 43 outcomes in the NDAs that did not favor the NDA drug. Of these, 20 (47%) were not included in the papers. The statistical significance of five of the remaining 23 outcomes (22%) changed between the NDA and the paper, with four changing to favor the test drug in the paper (p = 0.38). Excluding unknowns, 99 conclusions were provided in both NDAs and papers, nine conclusions (9%) changed from the FDA review of the NDA to the paper, and all nine did so to favor the test drug (100%, 95% CI 72%–100%, p = 0.0039).
Conclusions
Many trials were still not published 5 y after FDA approval. Discrepancies between the trial information reviewed by the FDA and information found in published trials tended to lead to more favorable presentations of the NDA drugs in the publications. Thus, the information that is readily available in the scientific literature to health care professionals is incomplete and potentially biased.
Lisa Bero and colleagues review the publication status of all efficacy trials carried out in support of new drugs approved in 2001 and 2002, and find that a quarter of trials remain unpublished.
Editors' Summary
Background.
All health-care professionals want their patients to have the best available clinical care—but how can they identify the optimum drug or intervention? In the past, clinicians used their own experience or advice from colleagues to make treatment decisions. Nowadays, they rely on evidence-based medicine—the systematic review and appraisal of clinical research findings. So, for example, before a new drug is approved for the treatment of a specific disease in the United States and becomes available for doctors to prescribe, the drug's sponsors (usually a pharmaceutical company) must submit a “New Drug Application” (NDA) to the US Food and Drug Administration (FDA). The NDA tells the story of the drug's development from laboratory and animal studies through to clinical trials, including “efficacy” trials in which the efficacy and safety of the new drug and of a standard drug for the disease are compared by giving groups of patients the different drugs and measuring several key (primary) “outcomes.” FDA reviewers use this evidence to decide whether to approve a drug.
Why Was This Study Done?
Although the information in NDAs is publicly available, clinicians and patients usually learn about new drugs from articles published in medical journals after drug approval. Unfortunately, drug sponsors sometimes publish the results only of the trials in which their drug performed well and in which statistical analyses indicate that the drug's improved performance was a real effect rather than a lucky coincidence. Trials in which a drug did not show a “statistically significant benefit” or where the drug was found to have unwanted side effects often remain unpublished. This “publication bias” means that the scientific literature can contain an inaccurate picture of a drug's efficacy and safety relative to other therapies. This may lead to clinicians preferentially prescribing newer, more expensive drugs that are not necessarily better than older drugs. In this study, the researchers test the hypothesis that not all the trial results in NDAs are published in medical journals. They also investigate whether there are any discrepancies between the trial data included in NDAs and in published articles.
What Did the Researchers Do and Find?
The researchers identified all the efficacy trials included in NDAs for totally new drugs that were approved by the FDA in 2001 and 2002 and searched the scientific literature for publications between July 2006 and June 2007 relating to these trials. Only three-quarters of the efficacy trials in the NDAs were published; trials with favorable outcomes were nearly five times as likely to be published as those without favorable outcomes. Although 155 primary outcomes were in both the papers and the NDAs, 41 outcomes were only in the NDAs. Conversely, 17 outcomes were only in the papers; 15 of these favored the test drug. Of the 43 primary outcomes reported in the NDAs that showed no statistically significant benefit for the test drug, only half were included in the papers; for five of the reported primary outcomes, the statistical significance differed between the NDA and the paper and generally favored the test drug in the papers. Finally, nine out of 99 conclusions differed between the NDAs and the papers; each time, the published conclusion favored the test drug.
What Do These Findings Mean?
These findings indicate that the results of many trials of new drugs are not published 5 years after FDA approval of the drug. Furthermore, unexplained discrepancies between the data and conclusions in NDAs and in medical journals are common and tend to paint a more favorable picture of the new drug in the scientific literature than in the NDAs. Overall, these findings suggest that the information on the efficacy of new drugs that is readily available to clinicians and patients through the published scientific literature is incomplete and potentially biased. The recent introduction in the US and elsewhere of mandatory registration of all clinical trials before they start and of mandatory publication in trial registers of the full results of all the predefined primary outcomes should reduce publication bias over the next few years and should allow clinicians and patients to make fully informed treatment decisions.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050217.
This study is further discussed in a PLoS Medicine Perspective by An-Wen Chan
PLoS Medicine recently published a related article by Ida Sim and colleagues: Lee K, Bacchetti P, Sim I (2008) Publication of clinical trials supporting successful new drug applications: A literature analysis. PLoS Med 5: e191. doi:10.1371/journal.pmed.0050191
The Food and Drug Administration provides information about drug approval in the US for consumers and for health-care professionals; detailed information about the process by which drugs are approved is on the Web site of the FDA Center for Drug Evaluation and Research (in English and Spanish)
NDAs for approved drugs can also be found on this Web site
The ClinicalTrials.gov Web site provides information about the US National Institutes of Health clinical trial registry, background information about clinical trials, and a fact sheet detailing the requirements of the FDA Amendments Act 2007 for trial registration
The World Health Organization's International Clinical Trials Registry Platform is working toward setting international norms and standards for the reporting of clinical trials (in several languages)
doi:10.1371/journal.pmed.0050217
PMCID: PMC2586350  PMID: 19067477
15.  Open Access to the Scientific Journal Literature: Situation 2009 
PLoS ONE  2010;5(6):e11273.
Background
The Internet has recently made possible the free global availability of scientific journal articles. Open Access (OA) can occur either via OA scientific journals or via authors posting manuscripts of articles published in subscription journals in open web repositories. So far there have been few systematic studies of how extensive OA is, in particular studies covering all fields of science.
Methodology/Principal Findings
The proportion of peer-reviewed scholarly journal articles that are openly available in full text on the web was studied using a random sample of 1837 titles and a web search engine. Of articles published in 2008, 8.5% were freely available at the publishers' sites. For an additional 11.9%, free manuscript versions could be found using search engines, making the overall OA percentage 20.4%. Chemistry (13%) had the lowest overall share of OA, Earth Sciences (33%) the highest. In medicine, biochemistry, and chemistry, publishing in OA journals was more common; in all other fields, author-posted manuscript copies dominated the picture.
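The overall OA share is simply the gold share (free at the publisher) plus the additional green share (author-posted manuscripts found via search engines). A minimal sketch of that arithmetic, with a Wilson confidence interval added via statsmodels; the article sample size below is hypothetical, introduced only for illustration.

    from statsmodels.stats.proportion import proportion_confint

    n = 1200                    # hypothetical number of sampled articles
    gold = round(0.085 * n)     # freely available at the publisher's site
    green = round(0.119 * n)    # additional free manuscript copies found
    overall = gold + green
    print(f"overall OA share: {overall / n:.1%}")           # ~20.4%
    print(proportion_confint(overall, n, method="wilson"))  # 95% CI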
Conclusions/Significance
The results show that OA already has a significant positive impact on the availability of the scientific journal literature and that there are big differences between scientific disciplines in the uptake. Given the lack of awareness of OA publishing among scientists in most fields outside physics, the results should be of general interest to all scholars. The results should also interest academic publishers, who need to take OA into account in their business strategies and copyright policies, as well as research funders, who, like the NIH, are starting to require OA availability of results from research projects they fund. The method and search tools developed also offer a good basis for more in-depth as well as longitudinal studies.
doi:10.1371/journal.pone.0011273
PMCID: PMC2890572  PMID: 20585653
16.  Reporting of interventions in randomised trials: an audit of journal Instructions to Authors 
Trials  2014;15:20.
Background
A complete description of the intervention in a published trial report is necessary for readers to be able to use the intervention, yet the completeness of intervention descriptions in trials is very poor. Low awareness of the issue among authors, reviewers, and editors is part of the cause; providing authors with specific instructions about intervention reporting and encouraging full sharing of intervention materials are therefore important. We assessed the extent to which: 1) journals’ Instructions to Authors provide instructions about how interventions that have been evaluated in a randomised controlled trial (RCT) should be reported in the paper; and 2) journals offer the option of authors providing online supplementary materials.
Methods
We examined the web-based Instructions to Authors of 106 journals (the six leading general medical journals, 50 randomly selected journals from the National Library of Medicine’s Core Clinical Journals, and 50 randomly selected journals from the remainder of the journal collection indexed by PubMed). To be eligible, each journal must have published at least one randomised trial involving human participants each year from 2008 to 2012. We extracted all information related to the reporting of interventions, reporting of randomised trials in general, and online supplementary materials.
Results
Of the 106 journals’ Instructions to Authors, only 15 (14%) specifically mentioned the reporting of interventions and most of these provided non-specific advice such as ‘describe essential features’. Just over half (62, 58%) of the journals mentioned the Consolidated Standards of Reporting Trials (CONSORT) statement in their author instructions. Seventy-eight (74%) of the journals’ instructions mentioned the option of providing supplementary content online as part of the paper; however, only four of these journals explicitly encouraged or mandated use of this option for providing intervention information or materials.
Conclusions
Most journals’ Instructions to Authors neither provide specific instructions on the reporting of interventions nor encourage authors to provide online supplementary materials to enhance intervention reporting. Journals can help to remedy incomplete intervention reporting by giving authors and peer reviewers specific instructions about intervention reporting and by requiring full intervention descriptions.
doi:10.1186/1745-6215-15-20
PMCID: PMC3896798  PMID: 24422788
Intervention reporting; Randomised controlled trial reporting; CONSORT
17.  Do urology journals enforce trial registration? A cross-sectional study of published trials 
BMJ Open  2011;1(2):e000430.
Objectives
(1) To assess endorsement of trial registration in author instructions of urology-related journals and (2) to assess whether randomised controlled trials (RCTs) in the field of urology were effectively registered.
Design
Cross-sectional study of author instructions and published trials.
Setting
Journals publishing in the field of urology.
Participants
First, the authors analysed the author instructions of 55 urology-related journals indexed in ‘Journal Citation Reports 2009’ (12/2010). The authors divided these journals into two groups: those requiring and those not mentioning trial registration as a precondition for publication. Second, the authors chose the five journals with the highest impact factor (IF) from each group.
Intervention
MEDLINE search to identify RCTs published in these 10 journals in 2009 (01/2011); search of the clinical trials meta-search interface of WHO (International Clinical Trials Registry Platform) for RCTs that lacked information about registration (01–03/2011). Two authors independently assessed the information.
Outcome measures
Proportion of journals providing advice about trial registration and proportion of trials registered.
Results
Of 55 journals analysed, 26 (47.3%) provided some editorial advice about trial registration. Journals with higher IFs were more likely to mention trial registration explicitly (p=0.015). Of 106 RCTs published in 2009, 63 (59.4%) were registered, with a tendency toward higher registration rates after 2005 (83.3%, p=0.035). Of the RCTs published in journals that mentioned and required registration, 71.4% (30/42) were registered, compared with 51.6% (33/64) of those published in journals that did not mention trial registration explicitly. This difference was statistically significant (p=0.04).
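The reported p=0.04 is consistent with an uncorrected chi-square test on the two registration proportions; the abstract does not state which test was used, so the sketch below, using scipy and the published counts, is an assumption about the method.

    from scipy.stats import chi2_contingency

    table = [[30, 42 - 30],   # journals requiring registration: registered / not
             [33, 64 - 33]]   # journals not mentioning it:      registered / not
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p is approximately 0.04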
Conclusions
The existence of a statement about trial registration in author instructions was associated with a higher proportion of registered RCTs in those journals. Journals with higher IFs were more likely to mention trial registration.
Article summary
Article focus
Trial registration can increase scientific transparency, but its implementation in specialty fields such as urology is unclear.
To assess the endorsement of trial registration in the author instructions of urology-related journals.
To assess whether randomised controlled trials in the field were effectively registered.
Key messages
A statement on trial registration in author instructions was associated with a higher proportion of registered randomised controlled trials.
Journals with high impact factors were more likely to mention trial registration.
We suggest, though, that ensuring trial registration is not the responsibility of editors alone. Medical scientists should realise that trial registration is necessary to contribute to transparency in research.
Strengths and limitations of this study
Two authors independently assessed information regarding editorial advice about trial registration and identified the randomised controlled trials.
Bias may have occurred if registered randomised controlled trials were reported without a registration number, as we could not then identify them in the meta-search interface of WHO (International Clinical Trials Registry Platform).
Results might not be representative of the uro-nephrological field as a whole and reported figures may overestimate compliance with trial registration.
doi:10.1136/bmjopen-2011-000430
PMCID: PMC3236819  PMID: 22146890
18.  Endorsement of the CONSORT Statement by High-Impact Medical Journals in China: A Survey of Instructions for Authors and Published Papers 
PLoS ONE  2012;7(2):e30683.
Background
The CONSORT Statement is a reporting guideline for authors when reporting randomized controlled trials (RCTs). It offers a standard way for authors to prepare RCT reports. It has been endorsed by many high-impact medical journals and by international editorial groups. This study was conducted to assess the endorsement of the CONSORT Statement by high-impact medical journals in China by reviewing their instructions for authors.
Methodology/Principal Findings
A total of 200 medical journals were selected according to the Chinese Science and Technology Journal Citation Reports, 195 of which publish clinical research papers. Their instructions for authors were reviewed, and all text mentioning the CONSORT Statement or CONSORT extension papers was extracted. Any mention of the Uniform Requirements for Manuscripts Submitted to Biomedical Journals (URM) developed by the International Committee of Medical Journal Editors (ICMJE) or of ‘clinical trial registration’ was also extracted. For journals endorsing the CONSORT Statement, the most recently published RCT reports were retrieved and evaluated to assess whether the journals had followed what the CONSORT Statement requires. Of the 195 medical journals publishing clinical research papers, only six (6/195, 3.08%) mentioned ‘CONSORT’ in their instructions for authors. Of the 200 medical journals surveyed, only 14 (14/200, 7.00%) mentioned ‘ICMJE’ or ‘URM’ in their instructions for authors, and another five journals stated that clinical trials should have trial registration numbers and that priority would be given to trials that had been registered. Among the 62 RCT reports published in the six journals endorsing the CONSORT Statement, 20 (20/62, 32.26%) contained flow diagrams and only three (3/62, 4.84%) provided trial registration information.
Conclusions/Significance
Medical journals in China endorsing either the CONSORT Statement or the ICMJE's URM constituted a small percentage of the total; all of these journals used ambiguous language regarding what was expected of authors.
doi:10.1371/journal.pone.0030683
PMCID: PMC3278410  PMID: 22348017
19.  Recall and bias of retrieving gene expression microarray datasets through PubMed identifiers 
Background
The ability to locate publicly available gene expression microarray datasets effectively and efficiently facilitates the reuse of these potentially valuable resources. Centralized biomedical databases allow users to query dataset metadata descriptions, but these annotations are often too sparse and diverse to allow complex and accurate queries. In this study we examined the ability of PubMed article identifiers to locate publicly available gene expression microarray datasets, and investigated whether the retrieved datasets were representative of publicly available datasets found through statements of data sharing in the associated research articles.
Results
In a recent article, Ochsner and colleagues identified 397 studies that had generated gene expression microarray data. Their search of the full text of each publication for statements of data sharing revealed 203 publicly available datasets, including 179 in the Gene Expression Omnibus (GEO) or ArrayExpress databases. Our scripted search of GEO and ArrayExpress for PubMed identifiers of the same 397 studies returned 160 datasets, including six not found by the original search for data sharing statements. As a proportion of datasets found by either method, the search for data sharing statements identified 91.4% of the 209 publicly available datasets, compared to 76.6% found by our search for PubMed identifiers. Searching GEO or ArrayExpress alone retrieved 63.2% and 46.9% of all available datasets, respectively. Studies retrieved through PubMed identifiers were representative of all datasets in terms of research theme, technology, size, and impact, though the recall was highest for datasets published by the highest-impact journals.
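To make the recall arithmetic above concrete: recall is the share of all known relevant items (here, the 209 datasets found by either method) that a given search retrieves. A minimal sketch using the counts reported in this abstract for the PubMed-identifier search:

    # Recall = retrieved relevant items / all known relevant items.
    # Counts are taken directly from the abstract above.
    def recall(retrieved_relevant, all_relevant):
        return retrieved_relevant / all_relevant

    union_of_methods = 209   # datasets found by either search method
    pmid_search_hits = 160   # datasets found via PubMed identifiers
    print(f"{recall(pmid_search_hits, union_of_methods):.1%}")  # -> 76.6%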
Conclusions
Searching database entries using PubMed identifiers can identify the majority of publicly available datasets. We urge authors of all datasets to complete the citation fields for their dataset submissions once publication details are known, thereby ensuring their work has maximum visibility and can contribute to subsequent studies.
PMCID: PMC2990274  PMID: 20349403
information retrieval; data sharing; databases; bioinformatics; PubMed; gene expression microarrays
20.  Publication trends of shared decision making in 15 high impact medical journals: a full-text review with bibliometric analysis 
Background
Shared Decision Making (SDM) is increasingly advocated as a model for medical decision making. However, SDM is still used infrequently in clinical practice. Journals with high impact factors might offer an efficient channel for its dissemination. We aimed to identify and characterize publication trends of SDM in 15 high-impact medical journals.
Methods
We selected the 15 general and internal medicine journals with the highest impact factors publishing original articles, letters, and editorials. We retrieved publications from 1996 to 2011 through the full-text search function on each journal's website and abstracted bibliometric data. We included publications of any type containing the phrase “shared decision making” or five other variants in their abstract or full text; these are referred to as SDM publications. A polynomial Poisson regression model with a logarithmic link function was used to assess the evolution of the number of SDM publications across the period, according to publication characteristics.
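A minimal sketch, not the authors' code, of a polynomial Poisson regression with a log link for yearly publication counts, using the yearly total of publications as an exposure offset. The data below are simulated purely for illustration, and statsmodels is assumed available.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    t = np.arange(16)                             # years 1996-2011 as 0..15
    totals = np.full(t.size, 14_000)              # hypothetical yearly totals
    counts = rng.poisson(46 * np.exp(0.08 * t))   # simulated SDM counts

    # Polynomial terms in time; log(totals) as exposure offset.
    X = sm.add_constant(np.column_stack([t, t ** 2]))
    fit = sm.GLM(counts, X, family=sm.families.Poisson(),
                 offset=np.log(totals)).fit()
    print(fit.summary())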
Results
We identified 1285 SDM publications out of 229,179 publications in the 15 journals from 1996 to 2011. The absolute number of SDM publications by journal ranged from 2 to 273 over the 16 years. SDM publications increased both in absolute and in relative numbers per year, from 46 (0.32% of all publications from the 15 journals) in 1996 to 165 (1.17%) in 2011. This growth was exponential (P < 0.01). We found fewer research publications (465, 36.2% of all SDM publications) than non-research publications, which included non-systematic reviews, letters, and editorials. The increase in research publications across time was linear. Full-text searching retrieved ten times more SDM publications than a similar PubMed search (1285 vs. 119).
Conclusion
This full-text review showed that SDM publications increased exponentially in major medical journals from 1996 to 2011. This growth might reflect an increased dissemination of the SDM concept to the medical community.
doi:10.1186/1472-6947-14-71
PMCID: PMC4136407  PMID: 25106844
Shared decision making; Bibliometric analysis; Decision making; Full text search; Review; Information storage and retrieval; PubMed; Text mining
21.  Insights into the Management of Emerging Infections: Regulating Variant Creutzfeldt-Jakob Disease Transfusion Risk in the UK and the US 
PLoS Medicine  2006;3(10):e342.
Background
Variant Creutzfeldt-Jakob disease (vCJD) is a human prion disease caused by infection with the agent of bovine spongiform encephalopathy. After the recognition of vCJD in the UK in 1996, many nations implemented policies intended to reduce the hypothetical risk of transfusion transmission of vCJD. This was despite the fact that no cases of transfusion transmission had yet been identified. In December 2003, however, the first case of vCJD in a recipient of blood from a vCJD-infected donor was announced. The aim of this study is to ascertain and compare the factors that influenced the motivation for and the design of regulations to prevent transfusion transmission of vCJD in the UK and US prior to the recognition of this case.
Methods and Findings
A document search was conducted to identify US and UK governmental policy statements and guidance, transcripts (or minutes when transcripts were not available) of scientific advisory committee meetings, research articles, and editorials published in medical and scientific journals on the topic of vCJD and blood transfusion transmission between March 1996 and December 2003. In addition, 40 interviews were conducted with individuals familiar with the decision-making process and/or the science involved. All documents and transcripts were coded and analyzed according to the methods and principles of grounded theory. Data showed that while resulting policies were based on the available science, social and historical factors played a major role in the motivation for and the design of regulations to protect against transfusion transmission of vCJD. First, recent experience with and collective guilt resulting from the transfusion-transmitted epidemics of HIV/AIDS in both countries served as a major, historically specific impetus for such policies. This history was brought to bear both by hemophilia activists and those charged with regulating blood products in the US and UK. Second, local specificities, such as the recall of blood products for possible vCJD contamination in the UK, contributed to a greater sense of urgency and a speedier implementation of regulations in that country. Third, while the results of scientific studies played a prominent role in the construction of regulations in both nations, this role was shaped by existing social and professional networks. In the UK, early focus on a European study implicating B-lymphocytes as the carrier of prion infectivity in blood led to the introduction of a policy that requires universal leukoreduction of blood components. In the US, early focus on an American study highlighting the ability of plasma to serve as a reservoir of prion infectivity led the FDA and its advisory panel to eschew similar measures.
Conclusions
The results of this study yield three important theoretical insights that pertain to the global management of emerging infectious diseases. First, because the perception and management of disease may be shaped by previous experience with disease, especially catastrophic experience, there is always the possibility for over-management of some possible routes of transmission and relative neglect of others. Second, local specificities within a given nation may influence the temporality of decision making, which in turn may influence the choice of disease management policies. Third, a preference for science-based risk management among nations will not necessarily lead to homogeneous policies. This is because the exposure to and interpretation of scientific results depends on the existing social and professional networks within a given nation. Together, these theoretical insights provide a framework for analyzing and anticipating potential conflicts in the international management of emerging infectious diseases. In addition, this study illustrates the utility of qualitative methods in investigating research questions that are difficult to assess through quantitative means.
A qualitative study of US and UK governmental policy statements on the topic of vCJD and blood transfusion transmission identified factors responsible for differences in the policies adopted.
Editors' Summary
Background.
In 1996 in the UK, a new type of human prion disease was seen for the first time. This is now known as variant Creutzfeldt-Jakob disease (vCJD). Prion diseases are rare brain diseases passed from individual to individual (or between animals) by a particular type of wrongly folded protein, and they are fatal. It was suspected that vCJD had passed to humans from cattle, and that the agent causing vCJD was the same as that causing bovine spongiform encephalopathy (or “mad cow disease”). Shortly after vCJD was recognized, authorities in many countries became concerned about the possibility that it could be transmitted from one person to another through contaminated blood supplies used for transfusion in hospitals. Even though there wasn't any evidence of actual transmission of the disease through blood before December 2003, authorities in the UK, US, and elsewhere set up regulations designed to reduce the chance of that happening. At this early stage in the epidemic, there was little in the way of scientific information about the transmission properties of the disease. Both the UK and US, however, sought to make decisions in a scientific manner. They made use of evidence as it was being produced, often before it had been published. Despite this, the UK and US decided on very different changes to their respective regulations on blood donation. Both countries chose to prevent certain people (who they thought would be at greater risk of having vCJD) from donating blood. In the UK, however, the decision was made to remove white blood cells from donated blood to reduce the risk of transmitting vCJD, while the US decided that such a step was not merited by the evidence.
Why Was This Study Done?
This researcher wanted to understand more clearly why the UK and US ended up with different policies: what role was played by science, and what role was played by non-scientific factors? She hoped that insights from this investigation would also be relevant to similar challenges in the future—for example, as many countries try to work out how to control the threat of avian flu.
What Did the Researcher Do and Find?
The researcher searched for all relevant official government documents from the US and UK, as well as scientific papers, published between the time vCJD was first identified (March 1996) and the first instance of vCJD carried through blood (December 2003). She also interviewed people who knew about vCJD management in the US and UK—for example, members of government agencies and the relevant advisory committees. From the documents and interviews, the researcher picked out and grouped shared ideas. Although these documents and interviews suggested that policy making was rooted in scientific evidence, many non-scientific factors were also important. The researcher found substantial uncertainty in the scientific evidence available at the time. The document search and interviews showed that policy makers felt guilty about a previous experience in which people had become infected with HIV/AIDS through contaminated blood and were concerned about repeating this experience. Finally, in the UK, the possibility of blood contamination was seen as a much more urgent problem than in the US, because BSE and vCJD were found there first and there were far more cases. This meant that when the UK made its decision about whether to remove white blood cells from donated blood, there was less scientific evidence available. In fact, the main study that was relied on at the time would later be questioned.
What Do These Findings Mean?
These findings show that for this particular case, science was not the only factor affecting government policies. Historical and social factors such as previous experience, sense of urgency, public pressure, and the relative importance of different scientific networks were also very important. The study predicts that in the future, infectious disease–related policy decisions are unlikely to be the same across different countries because the interpretation of scientific evidence depends, to a large extent, on social factors.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030342.
National Creutzfeldt-Jakob Disease Surveillance Unit, Edinburgh, UK
US Centers for Disease Control and Prevention pages about prion diseases
World Health Organization variant Creutzfeldt-Jakob disease fact sheet
US National Institute of Neurological Disorders and Stroke information about prion diseases
doi:10.1371/journal.pmed.0030342
PMCID: PMC1621089  PMID: 17076547
22.  Article processing charges, funding, and open access publishing at Journal of Experimental & Clinical Assisted Reproduction 
Journal of Experimental & Clinical Assisted Reproduction is an Open Access, online, electronic journal published by BioMed Central with full contents available to the scientific and medical community free of charge to all readers. Authors maintain the copyright to their own work, a policy facilitating dissemination of data to the widest possible audience without requiring permission from the publisher. This Open Access publishing model is subsidized by authors (or their institutions/funding agencies) in the form of a single £330 article processing charge (APC), due at the time of manuscript acceptance for publication. Payment of the APC is not a condition for formal peer review and does not apply to articles rejected after review. Additionally, this fee is waived for authors whose institutions are BioMed Central members or where genuine financial hardship exists. Considering ordinary publication fees related to page charges and reprints, the APC at Journal of Experimental & Clinical Assisted Reproduction is comparable to costs associated with publishing in some traditional print journals, and is less expensive than many. Implementation of the APC within this Open Access framework is envisioned as a modern research-friendly policy that supports networking among investigators, brings new research into reach rapidly, and empowers authors with greater control over their own scholarly publications.
doi:10.1186/1743-1050-2-1
PMCID: PMC546227  PMID: 15649322
23.  Emerging ethical issues in instructions to authors of high-impact biomedical journals 
Public interest in issues concerning the maintenance of high ethical standards in the conduct and publication of scientific research has been increasing. Some of the developments in these issues, as reflected in the medical literature, are traced here. This paper attempts to determine whether that public interest is reflected in the specific requirements for manuscript preparation stated in the “Instructions to Authors” of 124 high-impact journals. The instructions to authors of these journals were read on the Web for references to ethical standards or requirements. The ethical issues that the instructions most often covered were those specifically related to the individual journal's publication requirements. The results suggest that the editors and publishers of the biomedical literature are concerned with promoting and protecting the rights of the subjects of the experiments in the articles they publish; these concerns are not yet paramount, but they are evolving and growing.
PMCID: PMC209510  PMID: 14566375
24.  Is Qualitative Research Second Class Science? A Quantitative Longitudinal Examination of Qualitative Research in Medical Journals 
PLoS ONE  2011;6(2):e16937.
Background
Qualitative research appears to be gaining acceptability in medical journals. Yet little is actually known about the proportion of qualitative research and the factors affecting its publication. This study describes the proportion of qualitative research over a 10-year period and the correlates of its publication.
Design
A quantitative longitudinal examination of the proportion of original qualitative research in 67 journals of general medicine during a 10-year period (1998–2007). The proportion of qualitative research was determined by dividing original qualitative studies published (numerator) by all original research articles published (denominator). We used a generalized estimating equations approach to assess the longitudinal association between the proportion of qualitative studies and independent variables (i.e. journals' country of publication and impact factor; editorial/methodological papers discussing qualitative research; and specific journal guidelines pertaining to qualitative research).
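A minimal sketch of such a model, under the assumption that statsmodels' GEE implementation with an exchangeable working correlation approximates the authors' approach; the data frame below is simulated, with journals as clusters, and all variable names are illustrative.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "journal": np.repeat(np.arange(20), 10),   # 20 journals x 10 years
        "year": np.tile(np.arange(1998, 2008), 20),
        "guideline": rng.integers(0, 2, 200),      # mentions qualitative research?
    })
    # Simulated outcome: yearly % of qualitative research per journal.
    df["prop_qual"] = (0.3 * (df["year"] - 1998) + 6.8 * df["guideline"]
                       + rng.normal(0, 1, 200))

    fit = smf.gee("prop_qual ~ year + guideline", groups="journal", data=df,
                  family=sm.families.Gaussian(),
                  cov_struct=sm.cov_struct.Exchangeable()).fit()
    print(fit.summary())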
Findings
A 2.9% absolute increase and a 3.4-fold relative increase in qualitative research publications occurred over the 10-year period (1.2% in 1998 vs. 4.1% in 2007). The proportion of original qualitative research was independently and significantly associated with the publication of editorial/methodological papers in the journal (b = 3.688, P = 0.012) and with qualitative research being specifically mentioned in the guidelines for authors (b = 6.847, P<0.001). Additionally, a higher proportion of qualitative research was associated with journals published in the UK compared with other countries, though with borderline statistical significance (b = 1.776, P = 0.075). The journals' impact factor was not associated with the publication of qualitative research.
Conclusions
Despite an increase in the proportion of qualitative research in medical journals over the 10-year period, the proportion remains low. Journals' policies pertaining to qualitative research, as expressed by the appearance of specific guidelines and editorial/methodological papers on the subject, are independently associated with the publication of original qualitative research, irrespective of the journals' impact factor.
doi:10.1371/journal.pone.0016937
PMCID: PMC3044713  PMID: 21383987
25.  Differences in Reporting of Analyses in Internal Company Documents Versus Published Trial Reports: Comparisons in Industry-Sponsored Trials in Off-Label Uses of Gabapentin 
PLoS Medicine  2013;10(1):e1001378.
Using documents obtained through litigation, S. Swaroop Vedula and colleagues compared internal company documents regarding industry-sponsored trials of off-label uses of gabapentin with the published trial reports and find discrepancies in reporting of analyses.
Background
Details about the type of analysis (e.g., intent to treat [ITT]) and definitions (i.e., criteria for including participants in the analysis) are necessary for interpreting a clinical trial's findings. Our objective was to compare the description of types of analyses and criteria for including participants in the publication (i.e., what was reported) with descriptions in the corresponding internal company documents (i.e., what was planned and what was done). Trials were for off-label uses of gabapentin sponsored by Pfizer and Parke-Davis, and documents were obtained through litigation.
Methods and Findings
For each trial, we compared the internal company documents (protocols, statistical analysis plans, and research reports, all unpublished) with the publications. One author extracted data and another verified them, with a third person checking discordant items and a sample of the rest. Extracted data included the number of participants randomized and analyzed for efficacy, and the types of analyses for efficacy and safety and their definitions (i.e., the criteria for including participants in each type of analysis). We identified 21 trials; 11 of these were published randomized controlled trials that provided the documents needed for the planned comparisons. For three trials, the research report and the publication disagreed on the number of randomized participants. Seven types of efficacy analyses were described in the protocols, statistical analysis plans, and publications, including ITT and six others. The protocol or publication described ITT using six different definitions, resulting in frequent disagreements between the two documents (i.e., different numbers of participants were included in the analyses).
Conclusions
Descriptions of the analyses conducted did not agree between the internal company documents and what was publicly reported. Internal company documents provide extensive documentation of the methods planned and used and of the trial findings, and they should be publicly accessible. Reporting standards for randomized controlled trials should recommend transparent descriptions and definitions of the analyses performed, including which study participants were excluded.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
To be credible, published research must present an unbiased, transparent, and accurate description of the study methods and findings so that readers can assess all relevant information and make informed decisions about the impact of any conclusions. Research publications should therefore conform to universally adopted guidelines and checklists. Studies designed to establish whether a treatment is effective, termed randomized controlled trials (RCTs), are checked against a comprehensive set of guidelines: the robustness of trial protocols is assessed through the Standard Protocol Items for Randomized Trials (SPIRIT), and the Consolidated Standards of Reporting Trials (CONSORT) statement (constructed and agreed by a meeting of journal editors in 1996, and updated over the years) includes a 25-point checklist covering all of the key points in reporting RCTs.
Why Was This Study Done?
Although the CONSORT statement has helped improve transparency in the reporting of the methods and findings from RCTs, the statement does not define how certain types of analyses should be conducted and which patients should be included in the analyses, for example, in an intention-to-treat analysis (in which all participants are included in the data analysis of the group to which they were assigned, whether or not they completed the intervention given to the group). So in this study, the researchers used internal company documents released in the course of litigation against the pharmaceutical company Pfizer regarding the drug gabapentin, to compare between the internal and published reports the reporting of the numbers of participants, the description of the types of analyses, and the definitions of each type of analysis. The reports involved studies of gabapentin used for medical reasons not approved for marketing by the US Food and Drug Administration, known as “off-label” uses.
What Did the Researchers Do and Find?
The researchers identified trials sponsored by Pfizer relating to four off-label uses of gabapentin and examined the internal company protocols, statistical analysis plans, research reports, and the main publications related to each trial. The researchers then compared the numbers of participants randomized and analyzed for the main (primary) outcome and the type of analysis for efficacy and safety in both the internal research report and the trial publication. The researchers identified 21 trials, 11 of which were published RCTs that had the associated documents necessary for comparison.
The researchers found that in three out of ten trials there were differences between the internal research report and the main publication regarding the number of randomized participants. Furthermore, in six out of ten trials, the researchers were unable to compare the internal research report with the main publication for the number of participants analyzed for efficacy, because the research report either did not describe the primary outcome or did not describe the type of analysis. Overall, the researchers found that seven different types of efficacy analyses were described in the protocols, statistical analysis plans, and publications, including intention-to-treat analysis. However, the protocol or publication used six different descriptions for the intention-to-treat analysis, resulting in several important differences between the internal and published documents in the number of patients included in the analysis.
What Do These Findings Mean?
These findings from a sample of industry-sponsored trials on the off-label use of gabapentin suggest that when compared to the internal research reports, the trial publications did not always accurately reflect what was actually done in the trial. Therefore, the trial publication could not be considered to be an accurate and transparent record of the numbers of participants randomized and analyzed for efficacy. These findings support the need for further revisions of the CONSORT statement, such as including explicit statements about the criteria used to define each type of analysis and the numbers of participants excluded from each type of analysis. Further guidance is also needed to ensure consistent terminology for types of analysis. Of course, these revisions will improve reporting only if authors and journals adhere to them. These findings also highlight the need for all individual patient data to be made accessible to readers of the published article.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001378.
For more information, see the CONSORT statement website
The EQUATOR Network website is a resource center for the good reporting of health research studies and has more information about the SPIRIT initiative and the CONSORT statement
doi:10.1371/journal.pmed.1001378
PMCID: PMC3558476  PMID: 23382656
