1.  Funding free and universal access to Journal of Neuroinflammation 
Journal of Neuroinflammation is an Open Access, online journal published by BioMed Central. Open Access publishing provides instant and universal availability of published work to any potential reader, worldwide, completely free of subscriptions, passwords, and charges. Further, authors retain copyright for their work, facilitating its dissemination. Open Access publishing is made possible by article-processing charges assessed "on the front end" to authors, their institutions, or their funding agencies. Beginning November 1, 2004, the Journal of Neuroinflammation will introduce article-processing charges of around US$525 for accepted articles. This charge will be waived for authors from institutions that are BioMed Central members, and in additional cases for reasons of genuine financial hardship. These article-processing charges pay for an electronic submission process that facilitates efficient and thorough peer review, for publication costs involved in making the article freely and universally accessible in various formats online, and for the processes required for the article's inclusion in PubMed and its archiving in PubMed Central, e-Depot, Potsdam, and INIST. There is no remuneration of any kind provided to the Editors-in-Chief, to any members of the Editorial Board, or to peer reviewers, all of whose work is entirely voluntary. Our article-processing charge is less than the charges frequently levied by traditional journals: the Journal of Neuroinflammation does not levy any additional page or color charges on top of this fee, and there are no reprint costs because publication-quality PDF files are provided, free, for distribution in lieu of reprints. Our article-processing charge will enable full, immediate, and continued Open Access for all work published in Journal of Neuroinflammation. The benefits of such Open Access will accrue to readers, through unrestricted access; to authors, through the widest possible dissemination of their work; and to science and society in general, through facilitation of information availability and scientific advancement.
doi:10.1186/1742-2094-1-19
PMCID: PMC528856  PMID: 15485579
2.  Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals 
PLoS Medicine  2013;10(12):e1001566.
Agnes Dechartres and colleagues searched ClinicalTrials.gov for completed drug RCTs with results reported and then searched for corresponding studies in PubMed to evaluate timeliness and completeness of reporting.
Please see later in the article for the Editors' Summary
Background
The US Food and Drug Administration Amendments Act requires results from clinical trials of Food and Drug Administration–approved drugs to be posted at ClinicalTrials.gov within 1 y after trial completion. We compared the timing and completeness of results of drug trials posted at ClinicalTrials.gov and published in journals.
Methods and Findings
We searched ClinicalTrials.gov on March 27, 2012, for randomized controlled trials of drugs with posted results. For a random sample of these trials, we searched PubMed for corresponding publications. Data were extracted independently from ClinicalTrials.gov and from the published articles for trials with results both posted and published. We assessed the time to first public posting or publishing of results and compared the completeness of results posted at ClinicalTrials.gov versus published in journal articles. Completeness was defined as the reporting of all key elements, according to three experts, for the flow of participants, efficacy results, adverse events, and serious adverse events (e.g., for adverse events, reporting of the number of adverse events per arm, without restriction to statistically significant differences between arms for all randomized patients or for those who received at least one treatment dose).
Of the 600 trials with results posted at ClinicalTrials.gov that we randomly sampled, 50% (n = 297) had no corresponding published article. For trials with both posted and published results (n = 202), the median time between primary completion date and first public posting of results was 19 mo (first quartile = 14, third quartile = 30 mo), and the median time between primary completion date and journal publication was 21 mo (first quartile = 14, third quartile = 28 mo). Reporting was significantly more complete at ClinicalTrials.gov than in the published article for the flow of participants (64% versus 48% of trials, p<0.001), efficacy results (79% versus 69%, p = 0.02), adverse events (73% versus 45%, p<0.001), and serious adverse events (99% versus 63%, p<0.001).
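As a rough illustration of the summary statistics used here, the sketch below computes a median with quartiles and a paired completeness comparison in Python. The data are invented, and McNemar's test is an assumption as a standard choice for paired yes/no outcomes; this abstract does not name the authors' exact procedure.

```python
# Sketch only: invented per-trial data, not the study's dataset. McNemar's
# test is assumed here for the paired (same trials, two sources) comparison.
from statistics import quantiles
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical months from primary completion to first public posting
months_to_posting = [12, 14, 19, 22, 30, 17, 25, 14, 31, 19]
q1, q2, q3 = quantiles(months_to_posting, n=4)
print(f"median = {q2} mo (first quartile = {q1}, third quartile = {q3})")

# Invented 2x2 table over 202 trials: adverse-event reporting completeness,
# rows = complete at ClinicalTrials.gov (yes/no), cols = complete in article
table = [[85, 62],
         [6, 49]]
print(f"paired p-value = {mcnemar(table, exact=True).pvalue:.4f}")
```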
The main study limitation was that we considered only the publication describing the results for the primary outcomes.
Conclusions
Our results highlight the need to search ClinicalTrials.gov for both unpublished and published trials. Trial results, especially serious adverse events, are more completely reported at ClinicalTrials.gov than in the published article.
Editors' Summary
Background
When patients consult a doctor, they expect to be recommended what their doctor believes is the most effective treatment with the fewest adverse effects. To determine which treatment to recommend, clinicians rely on sources that include research studies. Among studies, the best evidence is generally agreed to come from systematic reviews and randomized controlled clinical trials (RCTs), studies that test the efficacy and safety of medical interventions by comparing clinical outcomes in groups of patients randomly chosen to receive different interventions. Decision-making based on the best available evidence is called evidence-based medicine. However, evidence-based medicine can only guide clinicians if trial results are published in a timely and complete manner. Unfortunately, underreporting of trials is common. For example, an RCT in which a new drug performs better than existing drugs is more likely to be published than one in which the new drug performs badly or has unwanted adverse effects (publication bias). There can also be a delay in publishing the results of negative trials (time-lag bias) or a failure to publish complete results for all the prespecified outcomes of a trial (reporting bias). All three types of bias threaten informed medical decision-making and the health of patients.
Why Was This Study Done?
One initiative that aims to prevent these biases was included in the 2007 US Food and Drug Administration Amendments Act (FDAAA). The Food and Drug Administration (FDA) is responsible for approving drugs and devices that are marketed in the US. The FDAAA requires that results from clinical trials of FDA-approved drugs and devices conducted in the United States be made publicly available at ClinicalTrials.gov within one year of trial completion. ClinicalTrials.gov—a web-based registry that includes US and international clinical trials—was established in 2000 in response to the 1997 FDA Modernization Act, which required mandatory registration of trial titles and designs and of the conditions and interventions under study. The FDAAA expanded these mandatory requirements by requiring researchers studying FDA-approved drugs and devices to report additional information such as the baseline characteristics of the participants in each arm of the trial and the results of primary and secondary outcome measures (the effects of the intervention on predefined clinical measurements) and their statistical significance (an indication of whether differences in outcomes might have happened by chance). Researchers of other trials registered in ClinicalTrials.gov are welcome to post trial results as well. Here, the researchers compare the timing and completeness (i.e., whether all relevant information was fully reported) of results of drug trials posted at ClinicalTrials.gov with those published in medical journals.
What Did the Researchers Do and Find?
The researchers searched ClinicalTrials.gov for reports of completed phase III and IV (late-stage) RCTs of drugs with posted results. For a random sample of 600 eligible trials, they searched PubMed (a database of biomedical publications) for corresponding publications. Only 50% of trials with results posted at ClinicalTrials.gov had a matching published article. For 202 trials with both posted and published results, the researchers compared the timing and completeness of the results posted at ClinicalTrials.gov and of results reported in the corresponding journal publication. The median time between the study completion date and the first results being publicly posted at ClinicalTrials.gov was 19 months, whereas the time between completion and publication in a journal was 21 months. The flow of participants through trials was completely reported in 64% of the ClinicalTrials.gov postings but in only 48% of the corresponding publications. Results for the primary outcome measure were completely reported in 79% and 69% of the ClinicalTrials.gov postings and corresponding publications, respectively. Finally, adverse events were completely reported in 73% of the ClinicalTrials.gov postings but in only 45% of the corresponding publications, and serious adverse events were reported in 99% and 63% of the ClinicalTrials.gov postings and corresponding publications, respectively.
What Do These Findings Mean?
These findings suggest that the reporting of trial results is significantly more complete at ClinicalTrials.gov than in published journal articles reporting the main trial results. Certain aspects of this study may affect the accuracy of this conclusion. For example, the researchers compared the results posted at ClinicalTrials.gov only with the results in the publication that described the primary outcome of each trial, even though some trials had multiple publications. Importantly, these findings suggest that, to enable patients and physicians to make informed treatment decisions, experts undertaking assessments of drugs should consider seeking efficacy and safety data posted at ClinicalTrials.gov, both for trials whose results are not published yet and for trials whose results are published. Moreover, they suggest that the use of templates to guide standardized reporting of trial results in journals and broader mandatory posting of results may help to improve the reporting and transparency of clinical trials and, consequently, the evidence available to inform treatment of patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001566.
Wikipedia has pages on evidence-based medicine and on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The US Food and Drug Administration provides information about drug approval in the US for consumers and health-care professionals, plus detailed information on the 2007 Food and Drug Administration Amendments Act
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials, and a fact sheet detailing the requirements of the 2007 Food and Drug Administration Amendments Act
PLOS Medicine recently launched a Reporting Guidelines Collection, an open access collection of reporting guidelines, commentary, and related research on guidelines from across PLOS journals that aims to help advance the efficiency, effectiveness, and equitability of the dissemination of biomedical information; a 2008 PLOS Medicine editorial discusses the 2007 Food and Drug Administration Amendments Act
doi:10.1371/journal.pmed.1001566
PMCID: PMC3849189  PMID: 24311990
3.  Journal publications by Australian chiropractic academics: are they enough? 
Purpose
To document the number of journal publications attributed to the academic faculty of Australian chiropractic tertiary institutions, to discuss the significance of this output, and to relate it to the difficulty the profession appears to be experiencing in the uptake of evidence-based healthcare outcomes and cultures.
Methods
The departmental websites for the three Australian chiropractic tertiary institutions were accessed and a list of academic faculty compiled. It was noted whether each academic held a chiropractic qualification or a research doctoral (not professional) degree. A review of the literature was conducted using the names of the academics and cross-referencing to publications listed independently in the PubMed and Index to Chiropractic Literature (ICL) databases (from inception to February 27, 2006). Publications were excluded that were duplicates, corrected reprints, conference abstracts/proceedings, books, monographs, letters to the editor/comments, or editorials. Using this information, an annual and recent publication rate was constructed.
Results
For the 41 academics there was a total of 155 PubMed listed publications (mean 3.8, annual rate per academic 0.31) and 415 ICL listed publications (mean 10.1, annual rate 0.62). Over the last five years there have been 50 PubMed listed publications (mean 1.2, annual rate 0.24) and 97 ICL listed publications (mean 2.4, annual rate 0.47). Chiropractor academics (n = 31) had 29 PubMed listed publications (mean 2.5, annual rate 0.27) and 265 ICL listed publications (mean 8.5, annual rate 0.57). Academics with a doctoral degree (n = 13) had 134 PubMed listed publications (mean 10.3, annual rate 0.70) and 311 ICL listed publications (mean 23.9, annual rate 1.44). Academics without a doctoral degree (n = 28) had 21 PubMed listed publications (mean 0.8, annual rate 0.13) and 104 ICL listed publications (mean 3.7, annual rate 0.24).
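To make the rate construction concrete, here is a minimal sketch of the arithmetic behind the headline figures. Only 155 publications across 41 academics comes from the text; the 12-year span is an assumed illustration, since the abstract does not state the span used per academic.

```python
# Arithmetic sketch of the publication-rate construction; assumed_years is
# invented for illustration, not taken from the paper.
academics, pubmed_pubs, assumed_years = 41, 155, 12

mean_per_academic = pubmed_pubs / academics        # ~3.8, as reported
annual_rate = mean_per_academic / assumed_years    # ~0.31, as reported
print(f"mean {mean_per_academic:.1f}, annual rate {annual_rate:.2f}")
```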
Conclusion
While several academics have compiled an impressive list of publications, overall there is a significant paucity of published research authored by the majority of academics, with a trend toward a falling recent publication rate; not holding a doctoral degree was a risk factor for poor publication productivity. It is suggested that there is an urgent necessity to facilitate the acquisition of research skills in academic staff, particularly in research methods and publication skills. Only when undergraduate students are exposed to an institutional environment conducive to and fostering research will concepts of evidence-based healthcare really be appreciated and implemented by the profession.
doi:10.1186/1746-1340-14-13
PMCID: PMC1559708  PMID: 16872544
4.  Conflicts of Interest at Medical Journals: The Influence of Industry-Supported Randomised Trials on Journal Impact Factors and Revenue – Cohort Study 
PLoS Medicine  2010;7(10):e1000354.
Andreas Lundh and colleagues investigated the effect of publication of large industry-supported trials on citations and journal income, through reprint sales, in six general medical journals
Background
Transparency in reporting of conflict of interest is an increasingly important aspect of publication in medical journals. Publication of large industry-supported trials may generate many citations and journal income through reprint sales and thereby be a source of conflicts of interest for journals. We investigated industry-supported trials' influence on journal impact factors and revenue.
Methods and Findings
We sampled six major medical journals (Annals of Internal Medicine, Archives of Internal Medicine, BMJ, JAMA, The Lancet, and New England Journal of Medicine [NEJM]). For each journal, we identified randomised trials published in 1996–1997 and 2005–2006 using PubMed, and categorized the type of financial support. Using Web of Science, we investigated citations of industry-supported trials and their influence on journal impact factors over a ten-year period. We contacted journal editors and retrieved tax information on income from industry sources. The proportion of trials with sole industry support varied between journals, from 7% in BMJ to 32% in NEJM in 2005–2006. Industry-supported trials were more frequently cited than trials with other types of support, and omitting them from the impact factor calculation decreased journal impact factors. The decrease varied considerably between journals, ranging from 1% for BMJ to 15% for NEJM in 2007. For the two journals disclosing data, income from the sales of reprints contributed 3% and 41% of the total income for BMJ and The Lancet, respectively, in 2005–2006.
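As a hedged illustration of the impact factor arithmetic this implies, the sketch below recomputes an approximate impact factor after dropping industry-supported trials from both the citation numerator and the citable-item denominator. All counts are invented, not the study's data.

```python
# Invented counts, for illustration only: an approximate 2007 impact factor
# is citations in 2007 to items published in 2005-2006 divided by the number
# of those items, here recomputed with industry-supported trials omitted.
citable_items = 800          # items published 2005-2006
citations_all = 40_000       # 2007 citations to those items
industry_items = 60          # industry-supported RCTs among them
industry_citations = 7_000   # 2007 citations to those RCTs

if_all = citations_all / citable_items
if_omit = (citations_all - industry_citations) / (citable_items - industry_items)
print(f"IF {if_all:.1f} -> {if_omit:.1f} ({1 - if_omit / if_all:.0%} lower)")
```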
Conclusions
Publication of industry-supported trials was associated with an increase in journal impact factors. Sales of reprints may provide a substantial income. We suggest that journals disclose financial information in the same way that they require it from their authors, so that readers can assess the potential effect of different types of papers on journals' revenue and impact.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Medical journals publish many different types of papers that inform doctors about the latest research advances and the latest treatments for their patients. They publish articles that describe laboratory-based research into the causes of diseases and the identification of potential new drugs. They publish the results of early clinical trials in which a few patients are given a potential new drug to check its safety. Finally and most importantly, they publish the results of randomized controlled trials (RCTs). RCTs are studies in which large numbers of patients are randomly allocated to different treatments, without the patient or the clinician knowing the allocation, and the efficacy of the various treatments is compared. RCTs are the best way of determining whether a new drug is effective and have to be completed before a drug can be marketed. Because RCTs are very expensive, they are often supported by drug companies. That is, drug companies provide grants or drugs for the trial or assist with data analysis and/or article preparation.
Why Was This Study Done?
Whenever a medical journal publishes an article, the article's authors have to declare any conflicts of interest such as financial gain from the paper's publication. Conflict of interest statements help readers assess papers—an author who owns the patent for a drug, for example, might put an unduly positive spin on his/her results. The experts who review papers for journals before publication provide similar conflict of interest statements. But what about the journal editors who ultimately decide which papers get published? The International Committee of Medical Journal Editors (ICMJE), which produces medical publishing guidelines, states that: “Editors who make final decisions about manuscripts must have no personal, professional, or financial involvement in any of the issues that they might judge.” However, the publication of industry-supported RCTs might create “indirect” conflicts of interest for journals by boosting the journal's impact factor (a measure of a journal's importance based on how often its articles are cited) and its income through the sale of reprints to drug companies. In this study, the researchers investigate whether the publication of industry-supported RCTs influences the impact factors and finances of six major medical journals.
What Did the Researchers Do and Find?
The researchers determined which RCTs published in the New England Journal of Medicine (NEJM), the British Medical Journal (BMJ), The Lancet, and three other major medical journals in 1996–1997 and 2005–2006 were supported wholly, partly, or not at all by industry. They then used the online academic citation index Web of Science to calculate an approximate impact factor for each journal for 1998 and 2007 and calculated the effect of the published RCTs on the impact factor. The proportion of RCTs with sole industry support varied between journals. Thus, 32% of the RCTs published in the NEJM during both two-year periods had industry support whereas only 7% of the RCTs published in the BMJ in 2005–2006 had industry support. Industry-supported trials were more frequently cited than RCTs with other types of support and omitting industry-supported RCTs from impact factor calculations decreased all the approximate journal impact factors. For example, omitting all RCTs with industry or mixed support decreased the 2007 BMJ and NEJM impact factors by 1% and 15%, respectively. Finally, the researchers asked each journal's editor about their journal's income from industry sources. For the BMJ and The Lancet, the only journals that provided this information, income from reprint sales was 3% and 41%, respectively, of total income in 2005–2006.
What Do These Findings Mean?
These findings show that the publication of industry-supported RCTs was associated with an increase in the approximate impact factors of these six major medical journals. Because these journals publish numerous RCTs, this result may not be generalizable to other journals. These findings also indicate that income from reprint sales can be a substantial proportion of a journal's total income. Importantly, these findings do not imply that the decisions of editors are affected by the possibility that the publication of an industry-supported trial might improve their journal's impact factor or income. Nevertheless, the researchers suggest, journals should live up to the same principles related to conflicts of interest as those that they require from their authors and should routinely disclose information on the source and amount of income that they receive.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000354.
This study is further discussed in a PLoS Medicine Perspective by Harvey Marcovitch
The International Committee of Medical Journal Editors provides information about the publication of medical research, including conflicts of interest
The World Association of Medical Editors also provides information on conflicts of interest in medical journals
Information about impact factors is provided by Thomson Reuters, a provider of intelligent information for businesses and professionals; Thomson Reuters also runs Web of Science
doi:10.1371/journal.pmed.1000354
PMCID: PMC2964336  PMID: 21048986
5.  Evidence for the Selective Reporting of Analyses and Discrepancies in Clinical Trials: A Systematic Review of Cohort Studies of Clinical Trials 
PLoS Medicine  2014;11(6):e1001666.
In a systematic review of cohort studies, Kerry Dwan and colleagues examine the evidence for selective reporting and discrepancies in analyses between journal publications and other documents for clinical trials.
Please see later in the article for the Editors' Summary
Background
Most publications about selective reporting in clinical trials have focussed on outcomes. However, selective reporting of analyses for a given outcome may also affect the validity of findings. If analyses are selected on the basis of the results, reporting bias may occur. The aims of this study were to review and summarise the evidence from empirical cohort studies that assessed discrepant or selective reporting of analyses in randomised controlled trials (RCTs).
Methods and Findings
A systematic review was conducted and included cohort studies that assessed any aspect of the reporting of analyses of RCTs by comparing different trial documents, e.g., protocol compared to trial report, or different sections within a trial publication. The Cochrane Methodology Register, Medline (Ovid), PsycInfo (Ovid), and PubMed were searched on 5 February 2014. Two authors independently selected studies, performed data extraction, and assessed the methodological quality of the eligible studies. Twenty-two studies (containing 3,140 RCTs) published between 2000 and 2013 were included, all of which reported on discrepancies between information given in different sources. Discrepancies were found in statistical analyses (eight studies), composite outcomes (one study), the handling of missing data (three studies), unadjusted versus adjusted analyses (three studies), handling of continuous data (three studies), and subgroup analyses (12 studies). Discrepancy rates varied, ranging from 7% (3/42) to 88% (7/8) in statistical analyses, 46% (36/79) to 82% (23/28) in adjusted versus unadjusted analyses, and 61% (11/18) to 100% (25/25) in subgroup analyses. This review is limited in that none of the included studies investigated the evidence for bias resulting from selective reporting of analyses. It was not possible to combine studies to provide overall summary estimates, and so the results of the studies are discussed narratively.
Conclusions
Discrepancies in analyses between publications and other study documentation were common, but reasons for these discrepancies were not discussed in the trial reports. To ensure transparency, protocols and statistical analysis plans need to be published, and investigators should adhere to these or explain discrepancies.
Editors' Summary
Background
In the past, clinicians relied on their own experience when choosing the best treatment for their patients. Nowadays, they turn to evidence-based medicine—the systematic review and appraisal of trials, studies that investigate the benefits and harms of medical treatments in patients. However, evidence-based medicine can guide clinicians only if all the results from clinical trials are published in an unbiased and timely manner. Unfortunately, the results of trials in which a new drug performs better than existing drugs are more likely to be published than those in which the new drug performs badly or has unwanted side effects (publication bias). Moreover, trial outcomes that support the use of a new treatment are more likely to be published than those that do not support its use (outcome reporting bias). Recent initiatives—such as making registration of clinical trials in a trial registry (for example, ClinicalTrials.gov) a prerequisite for publication in medical journals—aim to prevent these biases, which pose a threat to informed medical decision-making.
Why Was This Study Done?
Selective reporting of analyses of outcomes may also affect the validity of clinical trial findings. Sometimes, for example, a trial publication will include a per protocol analysis (which considers only the outcomes of patients who received their assigned treatment) rather than a pre-planned intention-to-treat analysis (which considers the outcomes of all the patients regardless of whether they received their assigned treatment). If the decision to publish the per protocol analysis is based on the results of this analysis being more favorable than those of the intention-to-treat analysis (which more closely resembles “real” life), then “analysis reporting bias” has occurred. In this systematic review, the researchers investigate the selective reporting of analyses and discrepancies in randomized controlled trials (RCTs) by reviewing published studies that assessed selective reporting of analyses in groups (cohorts) of RCTs and discrepancies in analyses of RCTs between different sources (for example, between the protocol in a trial registry and the journal publication) or different sections of a source. A systematic review uses predefined criteria to identify all the research on a given topic.
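A toy calculation can make the per protocol versus intention-to-treat contrast concrete; the counts below are invented solely for illustration.

```python
# Toy numbers, invented for illustration. ITT analyzes all patients as
# randomized; per protocol drops the 20 non-adherent treated patients, whose
# outcomes here are worse, which flatters the treatment.
treated_events, treated_n = 30, 100        # events among all randomized to drug
control_events, control_n = 40, 100
nonadh_events, nonadh_n = 12, 20           # non-adherent subset of treated arm

itt_rr = (treated_events / treated_n) / (control_events / control_n)
pp_rr = ((treated_events - nonadh_events) / (treated_n - nonadh_n)) \
        / (control_events / control_n)
print(f"risk ratio: ITT {itt_rr:.2f} vs per protocol {pp_rr:.2f}")
```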
What Did the Researchers Do and Find?
The researchers identified 22 cohort studies (containing 3,140 RCTs) that were eligible for inclusion in their systematic review. All of these studies reported on discrepancies between the information provided by the RCTs in different places, but none investigated the evidence for analysis reporting bias. Several of the cohort studies reported, for example, that there were discrepancies in the statistical analyses included in the different documents associated with the RCTs included in their analysis. Other types of discrepancies reported by the cohort studies included discrepancies in the reporting of composite outcomes (an outcome in which multiple end points are combined) and in the reporting of subgroup analyses (investigations of outcomes in subgroups of patients that should be predefined in the trial protocol to avoid bias). Discrepancy rates varied among the RCTs according to the types of analyses and cohort studies considered. Thus, whereas in one cohort study discrepancies were present in the statistical test used for the analysis of the primary outcome in only 7% of the included studies, they were present in the subgroup analyses of all the included studies.
What Do These Findings Mean?
These findings indicate that discrepancies in analyses between publications and other study documents such as protocols in trial registries are common. The reasons for these discrepancies in analyses were not discussed in trial reports but may be the result of reporting bias, errors, or legitimate departures from a pre-specified protocol. For example, a statistical analysis that is not specified in the trial protocol may sometimes appear in a publication because the journal requested its inclusion as a condition of publication. The researchers suggest that it may be impossible for systematic reviewers to distinguish between these possibilities simply by looking at the source documentation. Instead, they suggest, it may be necessary for reviewers to contact the trial authors. However, to make selective reporting of analyses more easily detectable, they suggest that protocols and analysis plans should be published and that investigators should be required to stick to these plans or explain any discrepancies when they publish their trial results. Together with other initiatives, this approach should help improve the quality of evidence-based medicine and, as a result, the treatment of patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001666.
Wikipedia has pages on evidence-based medicine, on systematic reviews, and on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials
The Cochrane Collaboration is a global independent network of health practitioners, researchers, patient advocates, and others that aims to promote evidence-informed health decision-making by producing high-quality, relevant, accessible systematic reviews and other synthesized research evidence; the Cochrane Handbook for Systematic Reviews of Interventions describes the preparation of systematic reviews in detail
PLOS Medicine recently launched a Reporting Guidelines Collection, an open-access collection of reporting guidelines, commentary, and related research on guidelines from across PLOS journals that aims to help advance the efficiency, effectiveness, and equitability of the dissemination of biomedical information
doi:10.1371/journal.pmed.1001666
PMCID: PMC4068996  PMID: 24959719
6.  Public accessibility of biomedical articles from PubMed Central reduces journal readership—retrospective cohort analysis 
The FASEB Journal  2013;27(7):2536-2541.
Does PubMed Central—a government-run digital archive of biomedical articles—compete with scientific society journals? A longitudinal, retrospective cohort analysis of 13,223 articles (5999 treatment, 7224 control) published in 14 society-run biomedical research journals in nutrition, experimental biology, physiology, and radiology between February 2008 and January 2011 reveals a 21.4% reduction in full-text hypertext markup language (HTML) article downloads and a 13.8% reduction in portable document format (PDF) article downloads from the journals' websites when U.S. National Institutes of Health-sponsored articles (treatment) become freely available from the PubMed Central repository. In addition, the effect of PubMed Central on reducing PDF article downloads is increasing over time, growing at a rate of 1.6% per year. There was no longitudinal effect for full-text HTML downloads. While PubMed Central may be providing complementary access to readers traditionally underserved by scientific journals, the loss of article readership from the journal website may weaken the ability of the journal to build communities of interest around research papers, impede the communication of news and events to scientific society members and journal readers, and reduce the perceived value of the journal to institutional subscribers.—Davis, P. M. Public accessibility of biomedical articles from PubMed Central reduces journal readership—retrospective cohort analysis.
doi:10.1096/fj.13-229922
PMCID: PMC3688741  PMID: 23554455
digital repositories; downloads; open access; scientific publishing
7.  A retrospective analysis of submissions, acceptance rate, open peer review operations, and prepublication bias of the multidisciplinary open access journal Head & Face Medicine 
Head & Face Medicine  2007;3:27.
Background
Head & Face Medicine (HFM) was launched in August 2005 to provide multidisciplinary science in the field of head and face disorders with an open access and open peer review publication platform. The objective of this study is to evaluate the characteristics of submissions, the effectiveness of open peer reviewing, and factors biasing the acceptance or rejection of submitted manuscripts.
Methods
A 1-year period of submissions and all concomitant journal operations were retrospectively analyzed. The analysis included submission rate, reviewer rate, acceptance rate, article type, and differences in duration for peer reviewing, final decision, publishing, and PubMed inclusion. Statistical analysis included Mann-Whitney U test, Chi-square test, regression analysis, and binary logistic regression.
Results
HFM received 126 articles (10.5 articles/month) for consideration in the first year. Submissions have been increasing, but not significantly over time. Peer reviewing was completed for 82 articles and resulted in an acceptance rate of 48.8%. In total, 431 peer reviewers were invited (5.3/manuscript), of which 40.4% agreed to review. The mean peer review time was 37.8 days. The mean time between submission and acceptance (including time for revision) was 95.9 days. Accepted papers were published on average 99.3 days after submission. The mean time between manuscript submission and PubMed inclusion was 101.3 days. The main article types submitted to HFM were original research, reviews, and case reports. The article type had no influence on rejection or acceptance. The variable 'number of invited reviewers' was the only significant (p < 0.05) predictor for rejection of manuscripts.
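For readers curious what the binary logistic regression named above looks like in practice, here is a minimal sketch with fabricated data; it is not the journal's actual analysis, and the fitted values mean nothing.

```python
# Fabricated data, not the journal's records; shows only the shape of a
# binary logistic regression with rejection as the outcome and the number
# of invited reviewers as the predictor.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
invited = rng.integers(2, 12, size=82).astype(float)   # reviewers invited
p_reject = 1 / (1 + np.exp(-0.5 * (invited - 5)))      # synthetic relationship
rejected = (rng.random(82) < p_reject).astype(int)

fit = sm.Logit(rejected, sm.add_constant(invited)).fit(disp=False)
print(fit.params[1], fit.pvalues[1])  # positive slope: more invitations, higher rejection odds
```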
Conclusion
The positive trend in submissions confirms the need for publication platforms for multidisciplinary science. HFM's peer review time is shorter than the six-week turnaround time the Editors set themselves as a maximum. Rejection of manuscripts was associated with the number of invited reviewers. None of the other parameters tested had any effect on the final decision. Thus, HFM's ethical policy, which is based on Open Access, open peer review, and transparency of journal operations, is free of 'editorial bias' in accepting manuscripts.
Original data
Provided as a downloadable tab-delimited text file (URL and variable code available under section 'additional files').
doi:10.1186/1746-160X-3-27
PMCID: PMC1913501  PMID: 17562003
8.  The Relationship of Previous Training and Experience of Journal Peer Reviewers to Subsequent Review Quality 
PLoS Medicine  2007;4(1):e40.
Background
Peer review is considered crucial to the selection and publication of quality science, but very little is known about the previous experiences and training that might identify high-quality peer reviewers. The reviewer selection processes of most journals, and thus the qualifications of their reviewers, are ill defined. More objective selection of peer reviewers might improve the journal peer review process and thus the quality of published science.
Methods and Findings
306 experienced reviewers (71% of all those associated with a specialty journal) completed a survey of past training and experiences postulated to improve peer review skills. Reviewers performed 2,856 reviews of 1,484 separate manuscripts during a four-year study period, all prospectively rated on a standardized quality scale by editors. Multivariable analysis revealed that most variables, including academic rank, formal training in critical appraisal or statistics, or status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital versus other teaching environment and relative youth (under ten years of experience after finishing training). Being on an editorial board and doing formal grant (study section) review were each predictors for only one of our two comparisons. However, the predictive power of all variables was weak.
Conclusions
Our study confirms that there are no easily identifiable types of formal training or experience that predict reviewer performance. Skill in scientific peer review may be as ill defined and hard to impart as is "common sense." Without a better understanding of those skills, it seems unlikely that journals and editors will be successful in systematically improving their selection of reviewers. This inability to predict performance makes it imperative that all but the smallest journals implement review rating systems to routinely monitor the quality of their reviews (and thus the quality of the science they publish).
A survey of experienced reviewers, asked about training they had received in peer review, found there are no easily identifiable types of formal training and experience that predict reviewer performance.
Editors' Summary
Background.
When medical researchers have concluded their research and written it up, the next step is to get it published as an article in a journal, so that the findings can be circulated widely. These published findings help determine subsequent research and clinical use. The editors of reputable journals, including PLoS Medicine, have to decide whether the articles sent to them are of good quality and accurate and whether they will be of interest to the readers of their journal. To do this they need to obtain specialist advice, so they contact experts in the topic of the research article and ask them to write reports. This is the process of scientific peer review, and the experts who write such reports are known as “peer reviewers.” Although the editors make the final decision, the advice and criticism of these peer reviewers to the editors is essential in making decisions on publication, and usually in requiring authors to make changes to their manuscript. The contribution that peer reviewers have made to the article by the time it is finally published may, therefore, be quite considerable.
Although peer review is accepted as a key part of the process for the publishing of medical research, many people have argued that there are flaws in the system. For example, there may be an element of luck involved; one author might find their paper being reviewed by a reviewer who is biased against the approach they have adopted or who is a very critical person by nature, and another author may have the good fortune to have their work considered by someone who is much more favorably disposed toward their work. Some reviewers are more knowledgeable and thorough in their work than others. The editors of medical journals try to take in account such biases and quality factors in their choice of peer reviewers or when assessing the reviews. Some journals have run training courses for experts who review for them regularly to try to make the standard of peer review as high as possible.
Why Was This Study Done?
It is hard for journal editors to know who will make a good peer reviewer, and there is no proven system for choosing them. The authors of this study wanted to identify the previous experiences and training that make up the background of good peer reviewers and compare them with the quality of the reviews provided. This would help journal editors select good people for the task in future, and as a result will affect the quality of science they publish for readers, including other researchers.
What Did the Researchers Do and Find?
The authors contacted all the regular reviewers from one specialist journal (Annals of Emergency Medicine). A total of 306 of these experienced reviewers (71% of all those associated with the journal) completed a survey of past training and experiences that might be expected to improve peer review skills. These reviewers had done 2,856 reviews of 1,484 separate manuscripts during a four-year study period, and during this time the quality of the reviews had been rated by the journal's editors. Surprisingly, most variables, including academic rank, formal training in critical appraisal or statistics, or status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital versus other teaching environment and relative youth (under ten years of experience after finishing training), and even these were only weak predictors.
What Do These Findings Mean?
This study suggests that there are no easily identifiable types of formal training or experience that predict peer reviewer performance, although it is clear that some reviewers (and reviews) are better than others. The authors therefore suggest that it is essential that journals routinely monitor the quality of reviews submitted to them to ensure they are getting good advice (a practice that is not universal).
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040040
• WAME is an association of editors from many countries who seek to foster international cooperation among editors of peer-reviewed medical journals
• The Fifth International Congress on Peer Review and Biomedical Publication is one of a series of conferences on peer review
• The PLoS Medicine guidelines for reviewers outline what we look for in a review
• The Council of Science Editors promotes ethical scientific publishing practices
• An editorial also published in this issue of PLoS Medicine discusses the peer review process further
doi:10.1371/journal.pmed.0040040
PMCID: PMC1796627  PMID: 17411314
9.  Publication of Clinical Trials Supporting Successful New Drug Applications: A Literature Analysis 
PLoS Medicine  2008;5(9):e191.
Background
The United States (US) Food and Drug Administration (FDA) approves new drugs based on sponsor-submitted clinical trials. The publication status of these trials in the medical literature and factors associated with publication have not been evaluated. We sought to determine the proportion of trials submitted to the FDA in support of newly approved drugs that are published in biomedical journals that a typical clinician, consumer, or policy maker living in the US would reasonably search.
Methods and Findings
We conducted a cohort study of trials supporting new drugs approved between 1998 and 2000, as described in FDA medical and statistical review documents and the FDA approved drug label. We determined publication status and time from approval to full publication in the medical literature at 2 and 5 y by searching PubMed and other databases through 01 August 2006. We then evaluated trial characteristics associated with publication. We identified 909 trials supporting 90 approved drugs in the FDA reviews, of which 43% (394/909) were published. Among the subset of trials described in the FDA-approved drug label and classified as “pivotal trials” for our analysis, 76% (257/340) were published. In multivariable logistic regression for all trials 5 y postapproval, likelihood of publication correlated with statistically significant results (odds ratio [OR] 3.03, 95% confidence interval [CI] 1.78–5.17); larger sample sizes (OR 1.33 per 2-fold increase in sample size, 95% CI 1.17–1.52); and pivotal status (OR 5.31, 95% CI 3.30–8.55). In multivariable logistic regression for only the pivotal trials 5 y postapproval, likelihood of publication correlated with statistically significant results (OR 2.96, 95% CI 1.24–7.06) and larger sample sizes (OR 1.47 per 2-fold increase in sample size, 95% CI 1.15–1.88). Statistically significant results and larger sample sizes were also predictive of publication at 2 y postapproval and in multivariable Cox proportional models for all trials and the subset of pivotal trials.
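The phrase "OR 1.33 per 2-fold increase in sample size" corresponds to entering log2(sample size) as the regression predictor, so that exp(coefficient) is the odds ratio for each doubling. A sketch with fabricated data (not the authors' code):

```python
# Fabricated data; illustrates only how an "OR per 2-fold increase" arises
# from using log2(sample size) as the predictor in a logistic regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
sample_size = rng.integers(20, 2000, size=909).astype(float)
published = (rng.random(909) < 0.43).astype(int)       # placeholder outcome

fit = sm.Logit(published, sm.add_constant(np.log2(sample_size))).fit(disp=False)
print(f"OR per doubling of sample size: {np.exp(fit.params[1]):.2f}")
```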
Conclusions
Over half of all supporting trials for FDA-approved drugs remained unpublished ≥ 5 y after approval. Pivotal trials and trials with statistically significant results and larger sample sizes are more likely to be published. Selective reporting of trial results exists for commonly marketed drugs. Our data provide a baseline for evaluating publication bias as the new FDA Amendments Act comes into force mandating basic results reporting of clinical trials.
Ida Sim and colleagues investigate the publication status and publication bias of trials submitted to the US Food and Drug Administration (FDA) for a wide variety of approved drugs.
Editors' Summary
Background.
Before a new drug becomes available for the treatment of a specific human disease, its benefits and harms are carefully studied, first in the laboratory and in animals, and then in several types of clinical trials. In the most important of these trials—so-called “pivotal” clinical trials—the efficacy and safety of the new drug and of a standard treatment are compared by giving groups of patients the different treatments and measuring several predefined “outcomes.” These outcomes indicate whether the new drug is more effective than the standard treatment and whether it has any other effects on the patients' health and daily life. All this information is then submitted by the sponsor of the new drug (usually a pharmaceutical company) to the government body responsible for drug approval—in the US, this is the Food and Drug Administration (FDA).
Why Was This Study Done?
After a drug receives FDA approval, information about the clinical trials supporting the FDA's decision are included in the FDA “Summary Basis of Approval” and/or on the drug label. In addition, some clinical trials are described in medical journals. Ideally, all the clinical information that leads to a drug's approval should be publicly available to help clinicians make informed decisions about how to treat their patients. A full-length publication in a medical journal is the primary way that clinical trial results are communicated to the scientific community and the public. Unfortunately, drug sponsors sometimes publish the results only of trials where their drug performed well; as a consequence, trials where the drug did no better than the standard treatment or where it had unwanted side effects remain unpublished. Publication bias like this provides an inaccurate picture of a drug's efficacy and safety relative to other therapies and may lead to excessive prescribing of newer, more expensive (but not necessarily more effective) treatments. In this study, the researchers investigate whether selective trial reporting is common by evaluating the publication status of trials submitted to the FDA for a wide variety of approved drugs. They also ask which factors affect a trial's chances of publication.
What Did the Researchers Do and Find?
The researchers identified 90 drugs approved by the FDA between 1998 and 2000 by searching the FDA's Center for Drug Evaluation and Research Web site. From the Summary Basis of Approval for each drug, they identified 909 clinical trials undertaken to support these approvals. They then searched the published medical literature up to mid-2006 to determine if and when the results of each trial were published. Although 76% of the pivotal trials had appeared in medical journals, usually within 3 years of FDA approval, only 43% of all of the submitted trials had been published. Among all the trials, those with statistically significant results were nearly twice as likely to have been published as those without statistically significant results, and pivotal trials were three times more likely to have been published than nonpivotal trials, 5 years postapproval. In addition, a larger sample size increased the likelihood of publication. Having statistically significant results and larger sample sizes also increased the likelihood of publication of the pivotal trials.
What Do These Findings Mean?
Although the search methods used in this study may have missed some publications, these findings suggest that more than half the clinical trials undertaken to support drug approval remain unpublished 5 years or more after FDA approval. They also reveal selective reporting of results. For example, they show that a pivotal trial in which the new drug does no better than an old drug is less likely to be published than one where the new drug is more effective, a publication bias that could establish an inappropriately favorable record for the new drug in the medical literature. Importantly, these findings provide a baseline for monitoring the effects of the FDA Amendments Act 2007, which was introduced to improve the accuracy and completeness of drug trial reporting. Under this Act, all trials supporting FDA-approved drugs must be registered when they start, and the summary results of all the outcomes declared at trial registration as well as specific details about the trial protocol must be publicly posted within a year of drug approval on the US National Institutes of Health clinical trials site.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050191.
PLoS Medicine recently published an editorial discussing the FDA Amendment Act and what it means for medical journals: The PLoS Medicine Editors (2008) Next Stop, Don't Block the Doors: Opening Up Access to Clinical Trials Results. PLoS Med 5(7): e160
The US Food and Drug Administration provides information about drug approval in the US for consumers and for health care professionals; detailed information about the process by which drugs are approved is on the Web site of the FDA Center for Drug Evaluation and Research (in English and Spanish)
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, background information about clinical trials, and a fact sheet detailing the requirements of the FDA Amendments Act 2007 for trial registration
The World Health Organization's International Clinical Trials Registry Platform is working toward international norms and standards for reporting the findings of clinical trials
doi:10.1371/journal.pmed.0050191
PMCID: PMC2553819  PMID: 18816163
10.  Published Articles in PubMed-indexed Journals from Tabriz University of Medical Sciences Faculty of Dentistry 
Background and aims
This survey was conducted to provide statistical data regarding publications in PubMed-indexed journals from Tabriz University of Medical Sciences Faculty of Dentistry.
Materials and methods
The database used for this study was PubMed. The search was conducted using keywords including the names of the heads of the departments. Papers published between January 1, 2005 and April 30, 2012 were considered. The retrieved abstracts were reviewed and unrelated articles were excluded. Data were transferred to Microsoft Excel software for descriptive statistical analyses.
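Such a search can also be reproduced programmatically through NCBI's standard E-utilities interface; the sketch below is illustrative only, with a placeholder author name, and is not the procedure the authors describe.

```python
# Illustrative only: placeholder author name; the esearch endpoint and its
# parameters are the standard NCBI E-utilities interface.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

params = urlencode({
    "db": "pubmed",
    "term": 'Example-Name[Author] AND ("2005/01/01"[PDAT] : "2012/04/30"[PDAT])',
    "retmax": 100,
    "retmode": "json",
})
url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?{params}"
with urlopen(url) as resp:
    result = json.load(resp)["esearchresult"]
print(result["count"], result["idlist"][:5])  # hit count and first PMIDs
```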
Results
A total of 158 papers matched the inclusion criteria, with the majority from the Department of Endodontics (49 articles). The highest proportion (48.3%) of papers was related to in vitro studies, followed by clinical trials, in vivo studies, and case reports. The number of publications showed a considerable increase over the studied period.
Conclusion
PubMed-indexed publications from different departments have increased steadily, suggesting that research has become an essential component in the evaluated institute.
doi:10.5681/joddd.2012.033
PMCID: PMC3529932  PMID: 23277865
Dental; faculty; medical; scientific publication; university
11.  How Complementary and Alternative Medicine Practitioners Use PubMed 
Background
PubMed is the largest bibliographic index in the life sciences. It is freely available online and is used by professionals and the public to learn more about medical research. While primarily intended to serve researchers, PubMed provides an array of tools and services that can help a wider readership in the location, comprehension, evaluation, and utilization of medical research.
Objective
This study sought to establish the potential contributions made by a range of PubMed tools and services to the use of the database by complementary and alternative medicine practitioners.
Methods
In this study, 10 chiropractors, 7 registered massage therapists, and a homeopath (N = 18), 11 with prior research training and 7 without, were taken through a 2-hour introductory session with PubMed. The 10 PubMed tools and services considered in this study can be divided into three functions: (1) information retrieval (Boolean Search, Limits, Related Articles, Author Links, MeSH), (2) information access (Publisher Link, LinkOut, Bookshelf), and (3) information management (History, Send To, Email Alert). Participants were introduced to between six and 10 of these tools and services. The participants were asked to provide feedback on the value of each tool or service in terms of their information needs, which was ranked as positive, positive with emphasis, negative, or indifferent.
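Two of the retrieval tools named above, Boolean search and Related Articles, are also exposed through NCBI's E-utilities (esearch and elink). The following sketch assumes the standard E-utilities JSON layout; the query term and PMID handling are arbitrary illustrations, not part of the study.

```python
# Arbitrary query against standard E-utilities endpoints; assumes the usual
# JSON response layout. Not part of the study's protocol.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

# Boolean search (esearch): a MeSH heading combined with a free-text term
q = urlencode({"db": "pubmed", "retmode": "json",
               "term": "complementary therapies[MeSH Terms] AND massage"})
with urlopen(f"{BASE}/esearch.fcgi?{q}") as resp:
    pmids = json.load(resp)["esearchresult"]["idlist"]

# Related Articles (elink): PubMed's precomputed similar-article links
q = urlencode({"dbfrom": "pubmed", "db": "pubmed", "cmd": "neighbor",
               "retmode": "json", "id": pmids[0]})
with urlopen(f"{BASE}/elink.fcgi?{q}") as resp:
    links = json.load(resp)["linksets"][0]["linksetdbs"][0]["links"]
print(links[:5])  # PMIDs of the most closely related articles
```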
Results
The participants in this study expressed an interest in the three types of PubMed tools and services (information retrieval, access, and management), with less well-regarded tools including MeSH Database and Bookshelf. In terms of their comprehension of the research, the tools and services led the participants to reflect on their understanding as well as their critical reading and use of the research. There was universal support among the participants for greater access to complete articles, beyond the approximately 15% that are currently open access. The abstracts provided by PubMed were felt to be necessary in selecting literature to read but entirely inadequate for both evaluating and learning from the research. Thus, the restrictions and fees the participants faced in accessing full-text articles were points of frustration.
Conclusions
The study found strong indications of PubMed’s potential value in the professional development of these complementary and alternative medicine practitioners in terms of engaging with and understanding research. It provides support for the various initiatives intended to increase access, including a recommendation that the National Library of Medicine tap into the published research that is being archived by authors in institutional archives and through other websites.
doi:10.2196/jmir.9.2.e19
PMCID: PMC1913941  PMID: 17613489
PubMed; research dissemination; complementary and alternative medicine; open access; professional development; information retrieval; information management; literacy
12.  Status of open access in the biomedical field in 2005 
Objectives:
This study was designed to document the state of open access (OA) in the biomedical field in 2005.
Methods:
PubMed was used to collect bibliographic data on target articles published in 2005. PubMed, Google Scholar, Google, and OAIster were then used to establish the availability of free full text online for these publications. Articles were analyzed by type of OA, country, type of article, impact factor, publisher, and publishing model to provide insight into the current state of OA.
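One of these availability checks can be approximated with PubMed's own "free full text" subset filter via E-utilities; this sketch is a simplification of the authors' multi-source method (which also used Google Scholar, Google, and OAIster), with an arbitrary query.

```python
# A simplification of the multi-source check described above, using only
# PubMed's "free full text" subset filter; the query is arbitrary.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def pubmed_count(term: str) -> int:
    q = urlencode({"db": "pubmed", "term": term,
                   "retmode": "json", "rettype": "count"})
    url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?{q}"
    with urlopen(url) as resp:
        return int(json.load(resp)["esearchresult"]["count"])

base_query = '"2005"[PDAT] AND medline[sb]'
total = pubmed_count(base_query)
free = pubmed_count(base_query + ' AND "free full text"[sb]')
print(f"{free}/{total} = {free / total:.1%} freely accessible")
```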
Results:
Twenty-seven percent of all the articles were accessible as OA articles. More than 70% of the OA articles were provided through journal websites. Mid-rank commercial publishers often provided OA articles in OA journals, while society publishers tended to provide OA articles in the context of a traditional subscription model. The rate of OA articles available from the websites of individual authors or in institutional repositories was quite low.
Discussion/Conclusions:
In 2005, OA in the biomedical field was achieved under an umbrella of existing scholarly communication systems. Typically, OA articles were published as part of subscription journals published by scholarly societies. OA journals published by BioMed Central contributed a small portion of all OA articles.
doi:10.3163/1536-5050.97.1.002
PMCID: PMC2605039  PMID: 19159007
13.  ScienceCentral: open access full-text archive of scientific journals based on Journal Article Tag Suite regardless of their languages 
Biochemia Medica  2013;23(3):235-236.
ScienceCentral, a free or open access, full-text archive of scientific journal literature hosted by the Korean Federation of Science and Technology Societies, was under test in September 2013. Because it is a Journal Article Tag Suite-based full-text database, XML files in any language can be presented, encoded in UTF-8 (Unicode Transformation Format 8-bit). It is comparable to PubMed Central; however, there are two distinct differences: first, its scope comprises all fields of science; second, it accepts journals in any language. Launching ScienceCentral is a first step toward bringing free- or open-access academic scientific journals of all languages, including scientific journals from Croatia, to the world.
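A minimal sketch of the encoding point made above: any script survives an XML round trip when the file is written as UTF-8. The element names are simplified stand-ins for the JATS tag set, and the sample strings are invented.

```python
# Minimal sketch of ScienceCentral's language handling: non-Latin scripts
# round-trip through XML when the file is UTF-8 encoded. Element names are
# simplified stand-ins, not the full JATS tag set.
import xml.etree.ElementTree as ET

article = ET.Element("article")
title = ET.SubElement(article, "article-title")
title.text = "지방간의 진단"                     # Korean title, valid UTF-8
abstract = ET.SubElement(article, "abstract")
abstract.text = "Sažetak na hrvatskom jeziku"   # Croatian abstract text

ET.ElementTree(article).write("article.xml", encoding="utf-8",
                              xml_declaration=True)

# Reading the file back preserves the non-Latin text exactly.
reloaded = ET.parse("article.xml").getroot()
assert reloaded.find("article-title").text == "지방간의 진단"
```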
doi:10.11613/BM.2013.029
PMCID: PMC3900078  PMID: 24266292
periodicals as topic; language; database; science
14.  Article processing charges, funding, and open access publishing at Journal of Experimental & Clinical Assisted Reproduction 
Journal of Experimental & Clinical Assisted Reproduction is an Open Access, online, electronic journal published by BioMed Central with full contents available to the scientific and medical community free of charge to all readers. Authors maintain the copyright to their own work, a policy facilitating dissemination of data to the widest possible audience without requiring permission from the publisher. This Open Access publishing model is subsidized by authors (or their institutions/funding agencies) in the form of a single £330 article processing charge (APC), due at the time of manuscript acceptance for publication. Payment of the APC is not a condition for formal peer review and does not apply to articles rejected after review. Additionally, this fee is waived for authors whose institutions are BioMed Central members or where genuine financial hardship exists. Considering ordinary publication fees related to page charges and reprints, the APC at Journal of Experimental & Clinical Assisted Reproduction is comparable to costs associated with publishing in some traditional print journals, and is less expensive than many. Implementation of the APC within this Open Access framework is envisioned as a modern research-friendly policy that supports networking among investigators, brings new research into reach rapidly, and empowers authors with greater control over their own scholarly publications.
doi:10.1186/1743-1050-2-1
PMCID: PMC546227  PMID: 15649322
15.  Launching the "Journal of Biomedical Discovery and Collaboration" 
The Journal of Biomedical Discovery and Collaboration was created to provide, for the first time, a unified forum to consider all factors that affect scientific practice and scientific discovery – with an emphasis on the changing face of contemporary biomedical science. In this endeavor we are bringing together three different groups of scholars: a) laboratory investigators, who make the discoveries that are the currency of the scientific enterprise; b) computer science and informatics investigators, who devise tools for data analysis, mining, visualization and integration; and c) social scientists, including sociologists, historians, and philosophers, who study scientific practice, collaboration, and information needs. We will publish original research articles, case studies, focus pieces, reviews, and software articles. All articles in the Journal of Biomedical Discovery and Collaboration will be peer reviewed, published immediately upon acceptance, freely available online via open access, and archived in PubMed Central and other international full-text repositories.
doi:10.1186/1747-5333-1-1
PMCID: PMC1440304
16.  The Journal of Inflammation 
Welcome to the Journal of Inflammation, the first open-access, peer-reviewed, online journal to focus on all aspects of the study of inflammation and inflammatory conditions. While research into inflammation resulted in great progress during the latter half of the 20th century, the rate of progress is now rapidly accelerating, so there is a need for a vehicle through which this very diverse research can be made readily available to the scientific community. The Journal of Inflammation, a peer-reviewed journal, provides the ideal vehicle for such rapid dissemination of information. The Journal of Inflammation covers the full range of underlying cellular and molecular mechanisms involved not only in the production of inflammatory responses but also, more importantly in clinical terms, in the healing process. This includes molecular, cellular, animal, and clinical studies related to the study of inflammatory conditions and responses, and all related aspects of pharmacology, such as anti-inflammatory drug development, trials, and therapeutic developments. All articles published in the Journal of Inflammation are immediately listed in PubMed, and access to published articles is universal and free through the internet.
doi:10.1186/1476-9255-1-1
PMCID: PMC1074343  PMID: 15813979
17.  Selection in Reported Epidemiological Risks: An Empirical Assessment 
PLoS Medicine  2007;4(3):e79.
Background
Epidemiological studies may be subject to selective reporting, but empirical evidence thereof is limited. We empirically evaluated the extent of selection of significant results and large effect sizes in a large sample of recent articles.
Methods and Findings
We evaluated 389 articles of epidemiological studies that reported, in their respective abstracts, at least one relative risk for a continuous risk factor in contrasts based on median, tertile, quartile, or quintile categorizations. We examined the proportion and correlates of reporting statistically significant and nonsignificant results in the abstract and whether the magnitude of the relative risks presented (coined to be consistently ≥1.00) differs depending on the type of contrast used for the risk factor. In 342 articles (87.9%), ≥1 statistically significant relative risk was reported in the abstract, while only 169 articles (43.4%) reported ≥1 statistically nonsignificant relative risk in the abstract. Reporting of statistically significant results was more common with structured abstracts, and was less common in US-based studies and in cancer outcomes. Among 50 randomly selected articles in which the full text was examined, a median of nine (interquartile range 5–16) statistically significant and six (interquartile range 3–16) statistically nonsignificant relative risks were presented (p = 0.25). Paradoxically, the smallest presented relative risks were based on the contrasts of extreme quintiles; on average, the relative risk magnitude was 1.41-, 1.42-, and 1.36-fold larger in contrasts of extreme quartiles, extreme tertiles, and above-versus-below median values, respectively (p < 0.001).
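To see why extreme-quantile contrasts mechanically enlarge relative risks, consider a small simulation (invented data, not the paper's): with risk rising smoothly in a continuous factor, the top-versus-bottom-quintile contrast yields a larger relative risk than an above-versus-below-median split.

```python
# Illustrative simulation: extreme-quintile contrasts of a continuous risk
# factor produce larger relative risks than a median split.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)                 # continuous risk factor
p = 1 / (1 + np.exp(-(-3 + 0.4 * x)))        # risk rises smoothly with x
y = rng.random(100_000) < p                  # binary outcome

def risk(mask):
    """Observed outcome rate within a subgroup."""
    return y[mask].mean()

q20, q50, q80 = np.quantile(x, [0.2, 0.5, 0.8])

rr_median   = risk(x >= q50) / risk(x < q50)    # above vs. below median
rr_quintile = risk(x >= q80) / risk(x <= q20)   # top vs. bottom quintile

print(f"RR, median split:      {rr_median:.2f}")
print(f"RR, extreme quintiles: {rr_quintile:.2f}")  # noticeably larger
```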
Conclusions
Published epidemiological investigations almost universally highlight significant associations between risk factors and outcomes. For continuous risk factors, when relative risks are inherently lower, investigators selectively present contrasts between more extreme groups.
An evaluation of published articles reporting epidemiological studies found that they almost universally highlight significant associations between risk factors and outcomes.
Editors' Summary
Background.
Medical and scientific researchers use statistical tests to try to work out whether their observations—for example, seeing a difference in some characteristic between two groups of people—might have occurred as a result of chance alone. Statistical tests cannot determine this for sure; rather, they can only give a probability that the observations would have arisen by chance. When researchers have many different hypotheses, and carry out many statistical tests on the same set of data, they run the risk of concluding that there are real differences where in fact there are none. At the same time, it has long been known that scientific and medical researchers tend to pick out which findings to report in their papers. Findings that are more interesting, impressive, or statistically significant are more likely to be published. This is termed “publication bias” or “selective reporting bias.” Therefore, some people are concerned that the published scientific literature might contain many false-positive findings, i.e., findings that are not true but are simply the result of chance variation in the data. This would have a serious impact on the accuracy of the published scientific literature and would tend to overestimate the strength and direction of relationships being studied.
Why Was This Study Done?
Selective reporting bias has already been studied in detail in the area of randomized trials (studies where participants are randomly allocated to receive an intervention, e.g., a new drug, versus an alternative intervention or “comparator,” in order to understand the benefits or safety of the new intervention). These studies have shown that very many of the findings of trials are never published, and that statistically significant findings are more likely to be included in published papers than nonsignificant findings. However, much medical research is carried out that does not use randomized trial methods, either because that method is not useful to answer the question at hand or is unethical. Epidemiological research is often concerned with looking at links between risk factors and the development of disease, and this type of research would generally use observation rather than experiment to uncover connections. The researchers here were concerned that selective reporting bias might be just as much of a problem in epidemiological research as in randomized trials research, and wanted to study this specifically.
What Did the Researchers Do and Find?
In this investigation, searches were carried out of PubMed, a database of biomedical research studies, to extract epidemiological studies that were published between January 2004 and October 2005. The researchers wanted to specifically look at studies reporting the effect of continuous risk factors and their effect on health or disease outcomes (a continuous risk factor is something like age or glucose concentration in the blood, is a number, and can have any value on a sliding scale). Three hundred and eighty-nine original research studies were found, and the researchers pulled out from the abstracts and full text of these papers the relative risks that were reported along with the results of statistical tests for them. (Relative risk is the chance of getting an outcome, say disease, in one group as compared to another group.) The researchers found that nearly 90% of these studies had one or more statistically significant risks reported in the abstract, but only 43% reported one or more risks that were not statistically significant. When looking at all of the findings reported anywhere in the full text for 50 of these studies, the researchers saw that papers overall reported more statistically significant risks than nonsignificant risks. Finally, it seemed that in the set of papers studied here, the way in which statistical analyses were done produced a bias towards more extreme findings: for datasets showing small relative risks, papers were more likely to report a comparison between extreme subsets of the data so as to report larger relative risks.
What Do These Findings Mean?
These findings suggest that there is a tendency among epidemiology researchers to highlight statistically significant findings and to avoid highlighting nonsignificant findings in their research papers. This behavior may be a problem, because many of these significant findings could in future turn out to be “false positives.” At present, registers exist for researchers to describe ongoing clinical trials, and to set out the outcomes that they plan to analyze for those trials. These registers will go some way towards addressing some of the problems described here, but only for clinical trials research. Registers do not yet exist for epidemiological studies, and therefore it is important that researchers and readers are aware of and cautious about the problem of selective reporting in epidemiological research.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040079.
Wikipedia entry on publication bias (note: Wikipedia is an internet encyclopedia that anyone can edit)
The International Committee of Medical Journal Editors gives guidelines for submitting manuscripts to its member journals, and includes comments about registration of ongoing studies and the obligation to publish negative studies
ClinicalTrials.gov and the ISRCTN register are two registries of ongoing clinical trials
doi:10.1371/journal.pmed.0040079
PMCID: PMC1808481  PMID: 17341129
18.  Publication trends of shared decision making in 15 high impact medical journals: a full-text review with bibliometric analysis 
Background
Shared Decision Making (SDM) is increasingly advocated as a model for medical decision making. However, there is still low use of SDM in clinical practice. High impact factor journals might represent an efficient way for its dissemination. We aimed to identify and characterize publication trends of SDM in 15 high impact medical journals.
Methods
We selected the 15 general and internal medicine journals with the highest impact factor publishing original articles, letters and editorials. We retrieved publications from 1996 to 2011 through the full-text search function on each journal website and abstracted bibliometric data. We included publications of any type containing the phrase “shared decision making” or five other variants in their abstract or full text. These were referred to as SDM publications. A polynomial Poisson regression model with logarithmic link function was used to assess the evolution across the period of the number of SDM publications according to publication characteristics.
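As an illustration of the model just described, the sketch below fits a Poisson regression with the default log link and linear plus quadratic year terms, using statsmodels; only the 1996 and 2011 counts come from the abstract, and the intermediate counts are invented for the example.

```python
# Minimal sketch of a polynomial Poisson regression with a log link, fit to
# annual publication counts. Only the first and last counts come from the
# abstract; the rest are hypothetical filler for the example.
import numpy as np
import statsmodels.api as sm

years = np.arange(1996, 2012)
counts = np.array([46, 50, 55, 62, 68, 75, 82, 90,
                   98, 105, 115, 124, 135, 146, 158, 165])

t = years - years.min()                          # center time at 0
X = sm.add_constant(np.column_stack([t, t**2]))  # linear + quadratic terms

# The log link is the default for the Poisson family in statsmodels.
model = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(model.summary())
```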
Results
We identified 1285 SDM publications out of 229,179 publications in 15 journals from 1996 to 2011. The absolute number of SDM publications by journal ranged from 2 to 273 over 16 years. SDM publications increased both in absolute and relative numbers per year, from 46 (0.32% relative to all publications from the 15 journals) in 1996 to 165 (1.17%) in 2011. This growth was exponential (P < 0.01). We found fewer research publications (465, 36.2% of all SDM publications) than non-research publications, which included non-systematic reviews, letters, and editorials. The increase of research publications across time was linear. Full-text search retrieved ten times more SDM publications than a similar PubMed search (1285 vs. 119, respectively).
Conclusion
This full-text review showed that SDM publications increased exponentially in major medical journals from 1996 to 2011. This growth might reflect increasing dissemination of the SDM concept within the medical community.
doi:10.1186/1472-6947-14-71
PMCID: PMC4136407  PMID: 25106844
Shared decision making; Bibliometric analysis; Decision making; Full text search; Review; Information storage and retrieval; PubMed; Text mining
19.  Online journals' impact on the citation patterns of medical faculty 
Purpose: The purpose was to determine the impact of online journals on the citation patterns of medical faculty. This study looked at whether researchers were more likely to limit the resources they consulted and cited to those journals available online rather than those only in print.
Setting: Faculty publications from the college of medicine at a large urban university, as well as from a regional medical college of the same university, were examined for this study. The number of online journals available to faculty, staff, and students at this institution increased from an initial core of 15 online journals in 1998 to over 11,000 online journals in 2004.
Methodology: Searches by author affiliation were performed in the Web of Science to find all articles written by faculty members in the college of medicine at the selected institution. Searches were conducted for the following years: 1993, 1996, 1999, and 2002. Cited references from each faculty-authored article were recorded, and the corresponding cited journals were coded into four categories based on their availability at the institution in this study: print only, print and online, online only, and not owned. Results were analyzed using SPSS.
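The coding step amounts to mapping each cited journal to one of the four availability categories and tallying; a toy sketch (with invented journal names and holdings) follows.

```python
# Toy sketch of the coding step: tallying cited journals into the four
# availability categories. Journal names and holdings data are invented.
from collections import Counter

holdings = {                       # hypothetical local holdings records
    "J Clin Invest": "print and online",
    "Ann Intern Med": "print only",
    "J Med Internet Res": "online only",
}

cited = ["J Clin Invest", "Ann Intern Med", "Lancet Oncol",
         "J Clin Invest", "J Med Internet Res"]

# Journals absent from the holdings records fall into "not owned".
tally = Counter(holdings.get(j, "not owned") for j in cited)
print(tally)
```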
Results: The number of journals cited per year continued to increase from 1993 to 2002. The results did not indicate that researchers were more likely to cite online journals or were less likely to cite journals only in print. At the regional location where the number of print-only journals was minimal, use of the print-only journals did decrease in 2002, although not significantly.
Conclusion/Discussion: It is possible that electronic access to information (i.e., online databases) has had a positive impact on the number of articles faculty will cite. Results of this study suggest, at this point, that faculty are still accessing the print-only collection, at least for research purposes, and are therefore not sacrificing quality for convenience.
PMCID: PMC1082939  PMID: 15858625
20.  Open by default: a proposed copyright license and waiver agreement for open access research and data in peer-reviewed journals 
BMC Research Notes  2012;5:494.
Copyright and licensing of scientific data, internationally, are complex and present legal barriers to data sharing, integration and reuse, and therefore restrict the most efficient transfer and discovery of scientific knowledge. Much data are included within scientific journal articles, their published tables, additional files (supplementary material) and reference lists. However, these data are usually published under licenses which are not appropriate for data. Creative Commons CC0 is an appropriate and increasingly accepted method for dedicating data to the public domain, to enable data reuse with the minimum of restrictions. BioMed Central is committed to working towards implementation of open data-compliant licensing in its publications. Here we detail a protocol for implementing a combined Creative Commons Attribution license (for copyrightable material) and Creative Commons CC0 waiver (for data) agreement for content published in peer-reviewed open access journals. We explain the differences between legal requirements for attribution in copyright, and cultural requirements in scholarship for giving individuals credit for their work through citation. We argue that publishing data in scientific journals under CC0 will have numerous benefits for individuals and society, and yet will have minimal implications for authors and minimal impact on current publishing and research workflows. We provide practical examples and definitions of data types, such as XML and tabular data, and specific secondary use cases for published data, including text mining, reproducible research, and open bibliography. We believe this proposed change to the current copyright and licensing structure in science publishing will help clarify what users – people and machines – of the published literature can do, legally, with journal articles and make research using the published literature more efficient. We further believe this model could be adopted across multiple publishers, and invite comment on this article from all stakeholders in scientific research.
doi:10.1186/1756-0500-5-494
PMCID: PMC3465200  PMID: 22958225
21.  Welcome to the Journal of Neuroinflammation! 
Welcome to the Journal of Neuroinflammation, an open-access, peer-reviewed, online journal that focuses on innate immunological responses of the central nervous system, involving microglia, astrocytes, cytokines, chemokines, and related molecular processes. 'Neuroinflammation' is an encapsulization of the idea that microglial and astrocytic responses and actions in the central nervous system have a fundamentally inflammation-like character, and that these responses are central to the pathogenesis and progression of a wide variety of neurological disorders. This concept has its roots in the discoveries of inflammatory cytokines and proteins in the plaques of Alzheimer disease, and these ideas have been extended to other neurodegenerative diseases, to ischemic/toxic diseases, to tumor biology and even to normal brain development. The Journal of Neuroinflammation, published by BioMed Central, will bring together work focusing on microglia, astrocytes, cytokines, chemokines, and related molecular processes in the central nervous system. All articles published in the Journal of Neuroinflammation will be immediately listed in PubMed, and access to published articles will be universal and free through the internet.
doi:10.1186/1742-2094-1-1
PMCID: PMC483051  PMID: 15285806
22.  Measuring use patterns of online journals and databases 
Purpose: This research sought to determine use of online biomedical journals and databases and to assess current user characteristics associated with the use of online resources in an academic health sciences center.
Setting: The Library of the Health Sciences–Peoria is a regional site of the University of Illinois at Chicago (UIC) Library with 350 print journals, more than 4,000 online journals, and multiple online databases.
Methodology: A survey was designed to assess online journal use, print journal use, database use, computer literacy levels, and other library user characteristics. A survey was sent through campus mail to all (471) UIC Peoria faculty, residents, and students.
Results: Forty-one percent (188) of the surveys were returned. Ninety-eight percent of the students, faculty, and residents reported having convenient access to a computer connected to the Internet. While 53% of the users indicated they searched MEDLINE at least once a week, other databases showed much lower usage. Overall, 71% of respondents indicated a preference for online over print journals when possible.
Conclusions: Users prefer online resources to print, and many choose to access these online resources remotely. Convenience and full-text availability appear to play roles in selecting online resources. The findings of this study suggest that databases without links to full text and online journal collections without links from bibliographic databases will have lower use. These findings have implications for collection development, promotion of library resources, and end-user training.
PMCID: PMC153164  PMID: 12883574
23.  A new era for Italian Journal of Pediatrics 
On behalf of the Editorial Board, welcome to the new Italian Journal of Pediatrics, the official journal of the ISP/SIP (Italian Society of Pediatrics/Società Italiana di Pediatria), now publishing on BioMed Central's open access publishing platform. The move to BioMed Central will benefit authors by having their manuscripts published faster with rapid global dissemination. Readers will also benefit from free online access to the journal via the website and a range of full text archives.
doi:10.1186/1824-7288-34-1
PMCID: PMC2687536  PMID: 19490657
24.  Factors Associated with Findings of Published Trials of Drug–Drug Comparisons: Why Some Statins Appear More Efficacious than Others 
PLoS Medicine  2007;4(6):e184.
Background
Published pharmaceutical industry–sponsored trials are more likely than non-industry-sponsored trials to report results and conclusions that favor drug over placebo. Little is known about potential biases in drug–drug comparisons. This study examined associations between research funding source, study design characteristics aimed at reducing bias, and other factors that potentially influence results and conclusions in randomized controlled trials (RCTs) of statin–drug comparisons.
Methods and Findings
This is a cross-sectional study of 192 published RCTs comparing a statin drug to another statin drug or non-statin drug. Data on concealment of allocation, selection bias, blinding, sample size, disclosed funding source, financial ties of authors, results for primary outcomes, and author conclusions were extracted by two coders (weighted kappa 0.80 to 0.97). Univariate and multivariate logistic regression identified associations between independent variables and favorable results and conclusions. Of the RCTs, 50% (95/192) were funded by industry, and 37% (70/192) did not disclose any funding source. Looking at the totality of available evidence, we found that almost all studies (98%, 189/192) used only surrogate outcome measures. Moreover, study design weaknesses common to published statin–drug comparisons included inadequate blinding, lack of concealment of allocation, poor follow-up, and lack of intention-to-treat analyses. In multivariate analysis of the full sample, trials with adequate blinding were less likely to report results favoring the test drug, and sample size was associated with favorable conclusions when controlling for other factors. In multivariate analysis of industry-funded RCTs, funding from the test drug company was associated with results (odds ratio = 20.16 [95% confidence interval 4.37–92.98], p < 0.001) and conclusions (odds ratio = 34.55 [95% confidence interval 7.09–168.4], p < 0.001) that favor the test drug when controlling for other factors. Studies with adequate blinding were less likely to report statistically significant results favoring the test drug.
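For readers unfamiliar with the reported statistics, the sketch below computes an odds ratio and its Wald 95% confidence interval from an invented 2×2 table; the paper's own estimates come from multivariate logistic regression, which additionally adjusts for other factors.

```python
# Sketch: odds ratio and Wald 95% confidence interval from a 2x2 table.
# Counts are invented; the paper's ORs are adjusted regression estimates.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = favorable/unfavorable results with test-drug-company funding;
    c, d = favorable/unfavorable results without it."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)       # SE of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

print(odds_ratio_ci(40, 10, 15, 25))  # hypothetical counts
```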
Conclusions
RCTs of head-to-head comparisons of statins with other drugs are more likely to report results and conclusions favoring the sponsor's product compared to the comparator drug. This bias in drug–drug comparison trials should be considered when making decisions regarding drug choice.
Lisa Bero and colleagues found published trials comparing one statin with another were more likely to report results and conclusions favoring the sponsor's product than the comparison drug.
Editors' Summary
Background.
Randomized controlled trials are generally considered to be the most reliable type of experimental study for evaluating the effectiveness of different treatments. Randomization involves the assignment of participants in the trial to different treatment groups by the play of chance. Properly done, this procedure means that the different groups are comparable at the outset, reducing the chance that outside factors could be responsible for treatment effects seen in the trial. Randomization, done properly, also ensures that the clinicians recruiting participants into the trial cannot know the treatment group to which a patient will end up being assigned. However, despite these advantages, a large number of factors can still result in bias creeping in. Bias comes about when the findings of research appear to differ in some systematic way from the true result. Other research studies have suggested that funding is a source of bias: studies sponsored by drug companies seem to favor the sponsor's drug more often than trials not sponsored by drug companies.
Why Was This Study Done?
The researchers wanted to more precisely understand the impact of different possible sources of bias in the findings of randomized controlled trials. In particular, they wanted to study the outcomes of “head-to-head” drug comparison studies for one particular class of drugs, the statins. Drugs in this class are commonly prescribed to reduce the levels of cholesterol in blood amongst people who are at risk of heart and other types of disease. This drug class is a good example for studying the role of bias in drug–drug comparison trials, because these trials are extensively used in decision making by health-policy makers.
What Did the Researchers Do and Find?
This research study was based on searching PubMed, a biomedical literature database, with the aim of finding all randomized controlled trials of statins carried out between January 1999 and May 2005 (reference lists also were searched). Only trials which compared one statin to another statin or one statin to another type of drug were included. The researchers extracted the following information from each article: the study's source of funding, aspects of study design, the overall results, and the authors' conclusions. The results were categorized to show whether the findings were favorable to the test drug (the newer statin), inconclusive, or not favorable to the test drug. Aspects of each study's design were also categorized in relation to various features, such as how well the randomization was done (in particular, the degree to which the processes used would have prevented physicians from knowing which treatment a patient was likely to receive on enrollment); whether all participants enrolled in the trial were eventually analyzed; and whether investigators or participants knew what treatment an individual was receiving.
One hundred and ninety-two trials were included in this study; of these, 95 declared drug company funding, 23 declared government or other nonprofit funding, and 74 did not declare funding or were not funded. Trials that were properly blinded (where participants and investigators did not know what treatment an individual received) were less likely to have conclusions favoring the test drug. However, large trials were more likely to favor the test drug than smaller trials. When looking specifically at the trials funded by drug companies, the researchers found various factors that predicted whether a result or conclusion favored the test drug. These included the impact of the journal publishing the results; the size of the trial; and whether funding came from the maker of the test drug. However, properly blinded trials were less likely to produce results favoring the test drug. Even once all other factors were accounted for, the funding source for the study was still linked with results and conclusions that favored the maker of the test drug.
What Do These Findings Mean?
This study shows that the type of sponsorship available for randomized controlled trials of statins was strongly linked to the results and conclusions of those studies, even when other factors were taken into account. However, it is not clear from this study why sponsorship has such a strong link to the overall findings. There are many possible reasons why this might be. Some people have suggested that drug companies may deliberately choose lower dosages for the comparison drug when they carry out “head-to-head” trials; this tactic is likely to result in the company's product doing better in the trial. Others have suggested that trials which produce unfavorable results are not published, or that unfavorable outcomes are suppressed. Whatever the reasons for these findings, the implications are important, and suggest that the evidence base relating to statins may be substantially biased.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040184.
The James Lind Library has been created to help people understand fair tests of treatments in health care by illustrating how fair tests have developed over the centuries
The International Committee of Medical Journal Editors has provided guidance regarding sponsorship, authorship, and accountability
The CONSORT statement is a research tool that provides an evidence-based approach for reporting the results of randomized controlled trials
Good Publication Practice guidelines provide standards for responsible publication of research sponsored by pharmaceutical companies
Information from Wikipedia on Statins. Wikipedia is an internet encyclopedia anyone can edit
doi:10.1371/journal.pmed.0040184
PMCID: PMC1885451  PMID: 17550302
25.  New journal: Algorithms for Molecular Biology 
This editorial announces Algorithms for Molecular Biology, a new online open access journal published by BioMed Central. By launching the first open access journal on algorithmic bioinformatics, we provide a forum for fast publication of high-quality research articles in this rapidly evolving field. Our journal will publish thoroughly peer-reviewed papers without length limitations covering all aspects of algorithmic data analysis in computational biology. Publications in Algorithms for Molecular Biology are easy to find, highly visible, and indexed by services such as PubMed. An established online submission system makes a fast reviewing procedure possible and enables us to publish accepted papers without delay. All articles published in our journal are permanently archived by PubMed Central and other scientific archives. We are looking forward to receiving your contributions.
doi:10.1186/1748-7188-1-1
PMCID: PMC1435992  PMID: 16722576
