1.  The Relationship of Previous Training and Experience of Journal Peer Reviewers to Subsequent Review Quality 
PLoS Medicine  2007;4(1):e40.
Peer review is considered crucial to the selection and publication of quality science, but very little is known about the previous experiences and training that might identify high-quality peer reviewers. The reviewer selection processes of most journals, and thus the qualifications of their reviewers, are ill defined. More objective selection of peer reviewers might improve the journal peer review process and thus the quality of published science.
Methods and Findings
306 experienced reviewers (71% of all those associated with a specialty journal) completed a survey of past training and experiences postulated to improve peer review skills. Reviewers performed 2,856 reviews of 1,484 separate manuscripts during a four-year study period, all prospectively rated on a standardized quality scale by editors. Multivariable analysis revealed that most variables, including academic rank, formal training in critical appraisal or statistics, or status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital versus other teaching environment and relative youth (under ten years of experience after finishing training). Being on an editorial board and doing formal grant (study section) review were each predictors for only one of our two comparisons. However, the predictive power of all variables was weak.
Our study confirms that there are no easily identifiable types of formal training or experience that predict reviewer performance. Skill in scientific peer review may be as ill defined and hard to impart as is “common sense.” Without a better understanding of those skills, it seems unlikely journals and editors will be successful in systematically improving their selection of reviewers. This inability to predict performance makes it imperative that all but the smallest journals implement review rating systems to routinely monitor the quality of their reviews (and thus the quality of the science they publish).
A survey of experienced reviewers, asked about training they had received in peer review, found there are no easily identifiable types of formal training and experience that predict reviewer performance.
Editors' Summary
When medical researchers have concluded their research and written it up, the next step is to get it published as an article in a journal, so that the findings can be circulated widely. These published findings help determine subsequent research and clinical use. The editors of reputable journals, including PLoS Medicine, have to decide whether the articles sent to them are of good quality and accurate and whether they will be of interest to the readers of their journal. To do this they need to obtain specialist advice, so they contact experts in the topic of the research article and ask them to write reports. This is the process of scientific peer review, and the experts who write such reports are known as “peer reviewers.” Although the editors make the final decision, the advice and criticism of these peer reviewers to the editors is essential in making decisions on publication, and usually in requiring authors to make changes to their manuscript. The contribution that peer reviewers have made to the article by the time it is finally published may, therefore, be quite considerable.
Although peer review is accepted as a key part of the process for the publishing of medical research, many people have argued that there are flaws in the system. For example, there may be an element of luck involved; one author might find their paper being reviewed by a reviewer who is biased against the approach they have adopted or who is a very critical person by nature, and another author may have the good fortune to have their work considered by someone who is much more favorably disposed toward their work. Some reviewers are more knowledgeable and thorough in their work than others. The editors of medical journals try to take into account such biases and quality factors in their choice of peer reviewers or when assessing the reviews. Some journals have run training courses for experts who review for them regularly to try to make the standard of peer review as high as possible.
Why Was This Study Done?
It is hard for journal editors to know who will make a good peer reviewer, and there is no proven system for choosing them. The authors of this study wanted to identify the previous experiences and training that make up the background of good peer reviewers and compare them with the quality of the reviews provided. This would help journal editors select good people for the task in the future and, as a result, would improve the quality of the science they publish for readers, including other researchers.
What Did the Researchers Do and Find?
The authors contacted all the regular reviewers from one specialist journal (Annals of Emergency Medicine). A total of 306 of these experienced reviewers (71% of all those associated with the journal) completed a survey of past training and experiences that might be expected to improve peer review skills. These reviewers had done 2,856 reviews of 1,484 separate manuscripts during a four-year study period, and during this time the quality of the reviews had been rated by the journal's editors. Surprisingly, most variables, including academic rank, formal training in critical appraisal or statistics, or status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital versus other teaching environment and relative youth (under ten years of experience after finishing training), and even these were only weak predictors.
What Do These Findings Mean?
This study suggests that there are no easily identifiable types of formal training or experience that predict peer reviewer performance, although it is clear that some reviewers (and reviews) are better than others. The authors suggest that it is essential, therefore, that journals routinely monitor the quality of the reviews submitted to them to ensure they are getting good advice (a practice that is not universal).
Additional Information
Please access these Web sites via the online version of this summary at
• WAME is an association of editors from many countries who seek to foster international cooperation among editors of peer-reviewed medical journals
• The Fifth International Congress on Peer Review and Biomedical Publication is one of a series of conferences on peer review
• The PLoS Medicine guidelines for reviewers outline what we look for in a review
• The Council of Science Editors promotes ethical scientific publishing practices
• An editorial also published in this issue of PLoS Medicine discusses the peer review process further
PMCID: PMC1796627  PMID: 17411314
2.  Anatomy of the Epidemiological Literature on the 2003 SARS Outbreaks in Hong Kong and Toronto: A Time-Stratified Review 
PLoS Medicine  2010;7(5):e1000272.
Weijia Xing and colleagues reviewed the published epidemiological literature on SARS and show that less than a quarter of papers were published during the epidemic itself, suggesting that the research published lagged substantially behind the need for it.
Outbreaks of emerging infectious diseases, especially those of a global nature, require rapid epidemiological analysis and information dissemination. The final products of those activities usually comprise internal memoranda and briefs within public health authorities and original research published in peer-reviewed journals. Using the 2003 severe acute respiratory syndrome (SARS) epidemic as an example, we conducted a comprehensive time-stratified review of the published literature to describe the different types of epidemiological outputs.
Methods and Findings
We identified and analyzed all published articles on the epidemiology of the SARS outbreak in Hong Kong or Toronto. The analysis was stratified by study design, research domain, data collection, and analytical technique. We compared the SARS-case and matched-control non-SARS articles published according to the timeline of submission, acceptance, and publication. The impact factors of the publishing journals were examined according to the time of publication of SARS articles, and the numbers of citations received by SARS-case and matched-control articles submitted during and after the epidemic were compared. Descriptive, analytical, theoretical, and experimental epidemiology concerned, respectively, 54%, 30%, 11%, and 6% of the studies. Only 22% of the studies were submitted, 8% accepted, and 7% published during the epidemic. The submission-to-acceptance and acceptance-to-publication intervals of the SARS articles submitted during the epidemic period were significantly shorter than the corresponding intervals of matched-control non-SARS articles published in the same journal issues (p<0.001 and p<0.01, respectively). The differences of median submission-to-acceptance intervals and median acceptance-to-publication intervals between SARS articles and their corresponding control articles were 106.5 d (95% confidence interval [CI] 55.0–140.1) and 63.5 d (95% CI 18.0–94.1), respectively. The median numbers of citations of the SARS articles submitted during the epidemic and over the 2 y thereafter were 17 (interquartile range [IQR] 8.0–52.0) and 8 (IQR 3.2–21.8), respectively, significantly higher than the median numbers of control article citations (15, IQR 8.5–16.5, p<0.05, and 7, IQR 3.0–12.0, p<0.01, respectively).
A majority of the epidemiological articles on SARS were submitted after the epidemic had ended, although the corresponding studies had relevance to public health authorities during the epidemic. To minimize the lag between research and the exigency of public health practice in the future, researchers should consider adopting common, predefined protocols and ready-to-use instruments to improve timeliness, and thus, relevance, in addition to standardizing comparability across studies. To facilitate information dissemination, journal managers should reengineer their fast-track channels, which should be adapted to the purpose of an emerging outbreak, taking into account the requirement of high standards of quality for scientific journals and competition with other online resources.
Please see later in the article for the Editors' Summary
Editors' Summary
Every now and then, a new infectious disease appears in a human population or an old disease becomes much more common or more geographically widespread. Recently, several such “emerging infectious diseases” have become major public health problems. For example, HIV/AIDS, hepatitis C, and severe acute respiratory syndrome (SARS) have all emerged in the past three decades and spread rapidly round the world. When an outbreak (epidemic) of an emerging infectious disease occurs, epidemiologists (scientists who study the causes, distribution, and control of diseases in populations) swing into action, collecting and analyzing data on the new threat to human health. Epidemiological studies are rapidly launched to identify the causative agent of the new disease, to investigate how the disease spreads, to define diagnostic criteria for the disease, to evaluate potential treatments, and to devise ways to control the disease's spread. Public health officials then use the results of these studies to bring the epidemic under control.
Why Was This Study Done?
Clearly, epidemics of emerging infectious diseases can only be controlled rapidly and effectively if the results of epidemiological studies are made widely available in a timely manner. Public health bulletins (for example, the Morbidity and Mortality Weekly Report from the US Centers for Disease Control and Prevention) are an important way of disseminating information, as is the publication of original research in peer-reviewed academic journals. But how timely is this second dissemination route? Submission, peer-review, revision, re-review, acceptance, and publication of a piece of academic research can be a long process, the speed of which is affected by the responses of both authors and journals. In this study, the researchers analyze how the results of academic epidemiological research are submitted and published in journals during and after an emerging infectious disease epidemic, using the 2003 SARS epidemic as an example. The first case of SARS was identified in Asia in February 2003, and the disease rapidly spread around the world. 8,098 people became ill with SARS and 774 died before the epidemic was halted in July 2003.
What Did the Researchers Do and Find?
The researchers identified more than 300 journal articles covering epidemiological research into the SARS outbreak in Hong Kong, China, and Toronto, Canada (two cities strongly affected by the epidemic) that were published online or in print between January 1, 2003 and July 31, 2007. The researchers' analysis of these articles shows that more than half of them were descriptive epidemiological studies, investigations that focused on describing the distribution of SARS; a third were analytical epidemiological studies that tried to discover the cause of SARS. Overall, 22% of the journal articles were submitted for publication during the epidemic. Only 8% of the articles were accepted for publication and only 7% were actually published during the epidemic. The median (middle) submission-to-acceptance and acceptance-to-publication intervals for SARS articles submitted during the epidemic were 55 and 77.5 days, respectively, much shorter intervals than those for non-SARS articles published in the same journal issues. After the epidemic was over, the submission-to-acceptance and acceptance-to-publication intervals for SARS articles were similar to those of non-SARS articles.
What Do These Findings Mean?
These findings show that, although the academic response to the SARS epidemic was rapid, most articles on the epidemiology of SARS were published after the epidemic was over even though SARS was a major threat to public health. Possible reasons for this publication delay include the time taken by authors to prepare and undertake their studies, to write and submit their papers, and, possibly, their tendency to first submit their results to high profile journals. The time then taken by journals to review the studies, make decisions about publication, and complete the publication process might also have delayed matters. To minimize future delays in the publication of epidemiological research on emerging infectious diseases, epidemiologists could adopt common, predefined protocols and ready-to-use instruments, which would improve timeliness and ensure comparability across studies, suggest the researchers. Journals, in turn, could improve their fast-track procedures and could consider setting up online sections that could be activated when an emerging infectious disease outbreak occurred. Finally, journals could consider altering their review system to speed up the publication process provided the quality of the final published articles was not compromised.
Additional Information
Please access these Web sites via the online version of this summary at
The US National Institute of Allergy and Infectious Diseases provides information on emerging infectious diseases
The US Centers for Disease Control and Prevention also provides information about emerging infectious diseases, including links to other resources, and information on SARS
Wikipedia has a page on epidemiology (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The World Health Organization has information on SARS (in several languages)
PMCID: PMC2864302  PMID: 20454570
3.  Ghost Authorship in Industry-Initiated Randomised Trials 
PLoS Medicine  2007;4(1):e19.
Ghost authorship, the failure to name, as an author, an individual who has made substantial contributions to an article, may result in lack of accountability. The prevalence and nature of ghost authorship in industry-initiated randomised trials is not known.
Methods and Findings
We conducted a cohort study comparing protocols and corresponding publications for industry-initiated trials approved by the Scientific-Ethical Committees for Copenhagen and Frederiksberg in 1994–1995. We defined ghost authorship as present if individuals who wrote the trial protocol, performed the statistical analyses, or wrote the manuscript, were not listed as authors of the publication, or as members of a study group or writing committee, or in an acknowledgment. We identified 44 industry-initiated trials. We did not find any trial protocol or publication that stated explicitly that the clinical study report or the manuscript was to be written or was written by the clinical investigators, and none of the protocols stated that clinical investigators were to be involved with data analysis. We found evidence of ghost authorship for 33 trials (75%; 95% confidence interval 60%–87%). The prevalence of ghost authorship was increased to 91% (40 of 44 articles; 95% confidence interval 78%–98%) when we included cases where a person qualifying for authorship was acknowledged rather than appearing as an author. In 31 trials, the ghost authors we identified were statisticians. It is likely that we have overlooked some ghost authors, as we had very limited information to identify the possible omission of other individuals who would have qualified as authors.
Ghost authorship in industry-initiated trials is very common. Its prevalence could be considerably reduced, and transparency improved, if existing guidelines were followed, and if protocols were publicly available.
Of 44 industry-initiated trials, there was evidence of ghost authorship in 33, increasing to 40 when a person qualifying for authorship was acknowledged rather than appearing as an author.
Editors' Summary
Original scientific findings are usually published in the form of a “paper”, whether it is actually distributed on paper, or circulated via the internet, as this one is. Papers are normally prepared by the group of researchers who did the research, whose names are then listed at the top of the article. These authors therefore take responsibility for the integrity of the results and their interpretation. However, many people are worried that sometimes the author list on the paper does not tell the true story of who was involved. In particular, for clinical research, case histories and previous research have suggested that “ghost authorship” is commonplace. Ghost authors are people who were involved in some way in the research study, or writing the paper, but who have been left off the final author list. This might happen because the study “looks” more credible if the true authors (for example, company employees or freelance medical writers) are not revealed. This practice might hide competing interests that readers should be aware of, and has therefore been condemned by academics, groups of editors, and some pharmaceutical companies.
Why Was This Study Done?
This group of researchers wanted to get an idea of how often ghost authorship happened in medical research done by companies. Previous studies looking into this used surveys, whereby the researchers would write to one author on each of a group of papers to ask whether anyone not listed on the paper had been involved in the work. These sorts of studies typically underestimate the rate of ghost authorship, because the main author might not want to admit what had been going on. However, the researchers here managed to get access to trial protocols (documents setting out the plans for future research studies), which gave them a way to investigate ghost authorship.
What Did the Researchers Do and Find?
In order to investigate the frequency and type of ghost authorship, these researchers identified every trial that was approved between 1994 and 1995 by the ethics committees of Copenhagen and Frederiksberg in Denmark. Then they winnowed this group down to include only the trials that were sponsored by industry (pharmaceutical companies and others), and only those trials that were finished and published. The protocols for each trial were obtained from the ethics committees and the researchers then matched up each protocol with its corresponding paper. Then, they compared names that appeared in the protocol against names appearing on the eventual paper, either on the author list or acknowledged elsewhere in the paper as being involved. The researchers ended up studying 44 trials. For 33 of these (75% of them) they found some evidence of ghost authorship, in that people were identified as having written the protocol, performed the statistical analyses, or written the manuscript, but did not end up listed in the manuscript. If the definition of ghost authorship was broadened to include people qualifying for authorship who were mentioned in the acknowledgements but not the author list, the researchers' estimate went up to 91%, that is, 40 of the 44 trials. For most of the trials with missing authors, the ghost was a statistician (the person who analyzes the trial data).
What Do These Findings Mean?
In this study, the researchers found that ghost authorship was very common in papers published in medical journals (this study covered a broad range of peer-reviewed journals in many medical disciplines). The method used in this paper seems more reliable than using surveys to work out how often ghost authorship happens. The researchers aimed to define authorship using the policies set out by a group called the International Committee of Medical Journal Editors (ICMJE), and the findings here suggest that the ICMJE's standards for authorship are very often ignored. This means that people who read the published paper cannot always accurately judge or trust the information presented within it, and competing interests may be hidden. The researchers here suggest that protocols should be made publicly available so that everyone can see what trials are planned and who is involved in conducting them. The findings also suggest that journals should not only list the authors of each paper but describe what each author has done, so that the published information accurately reflects what has been carried out.
Additional Information
Please access these Web sites via the online version of this summary at
Read the Perspective by Liz Wager, which discusses these findings in more depth
The International Committee of Medical Journal Editors (ICMJE) is a group of general medical journal editors who have produced general guidelines for biomedical manuscripts; their definition of authorship is also described
The Committee on Publication Ethics is a forum for editors of peer-reviewed journals to discuss issues related to the integrity of the scientific record; the Web site lists anonymized problems and the committee's advice, not just regarding authorship, but other types of problems as well
Good Publication Practice for Pharmaceutical Companies outlines common standards for publication of industry-sponsored medical research, and some pharmaceutical companies have agreed to these
PMCID: PMC1769411  PMID: 17227134
4.  Reviewing Manuscripts for Biomedical Journals 
The Permanente journal  2010;14(1):32-40.
Writing for publication is a complex task. For many professionals, producing a well-executed manuscript conveying one's research, ideas, or educational wisdom is challenging. Authors have varying emotions related to the process of writing for scientific publication. Although not studied, a relationship between an author's enjoyment of the writing process and the product's outcome is highly likely. As with any skill, practice generally results in improvement. Literature focused on preparing manuscripts for publication and the art of reviewing submissions exists. Most journals guard their reviewers' anonymity with respect to the manuscript review process. This is meant to protect them from direct or indirect author demands, which may occur during the review process or in the future. It is generally accepted that author identities are masked in the peer-review process. However, the concept of anonymity for reviewers has been debated recently; many editors consider it problematic that reviewers are not held accountable to the public for their decisions. The review process is often arduous and underappreciated, one reason why biomedical journals acknowledge editors and frequently recognize reviewers who donate their time and expertise in the name of science. This article describes the essential elements of a submitted manuscript, with the hope of improving scientific writing. It also discusses the review process within the biomedical literature, the importance of reviewers to the scientific process, the responsibilities of reviewers, and the qualities of a good review and reviewer. In addition, it offers useful insights for individuals who read and interpret the medical literature.
PMCID: PMC2912703  PMID: 20740129
5.  Incorporating Scientific Publishing into an Undergraduate Neuroscience Course: A Case Study Using IMPULSE 
The journal IMPULSE offers undergraduates worldwide the opportunity to publish research and serve as peer reviewers for the submissions of others. Undergraduate faculty have recognized the journal’s value in engaging students working in their labs in the publication process. However, integration of scientific publication into an undergraduate laboratory classroom setting has been lacking. We report here on a course at Ursinus College where 20 students taking Molecular Neurobiology were required to submit manuscripts to IMPULSE. The syllabus allowed the laboratory research to coincide with the background research and writing of the manuscript. Students completed their projects on the impact of drugs on the Daphnia magna nervous system while producing manuscripts ready for submission by week 7 of the course. Findings from a survey completed by the students, and the perceptions of the faculty member teaching the course, indicated that students spent much more time writing, were more focused on completing the assays, completed the assays with larger data sets, were more engaged in learning the scientific concepts, and were more thorough with their revisions of the paper knowing that it might be published. Further, the professor found she was more thorough in critiquing students’ papers knowing they would be externally reviewed. Incorporating journal submission into the course stimulated an in-depth writing experience and allowed for a deeper exploration of the topic than students would have experienced otherwise. This case study provides evidence that IMPULSE can be successfully used as a means of incorporating scientific publication into an undergraduate laboratory science course. This approach to teaching undergraduate neuroscience allows a larger number of students to have hands-on research and scientific publishing experience than would be possible with the current model of a few students in a faculty member’s laboratory.
This report illustrates that IMPULSE can be incorporated as an integral part of an academic curriculum with positive outcomes on student engagement and performance.
PMCID: PMC3592724  PMID: 23494013
teaching; writing; research; peer review
6.  Manuscript Architect: a Web application for scientific writing in virtual interdisciplinary groups 
Although scientific writing plays a central role in the communication of clinical research findings and consumes a significant amount of time from clinical researchers, few Web applications have been designed to systematically improve the writing process.
This application had as its main objective the separation of the multiple tasks associated with scientific writing into smaller components. It was also aimed at providing a mechanism whereby sections of the manuscript (text blocks) could be assigned to different specialists. Manuscript Architect was built using the Java language in conjunction with the classic lifecycle development method. The interface was designed for simplicity and economy of movement. Manuscripts are divided into multiple text blocks that can be assigned to different co-authors by the first author. Each text block contains notes to guide co-authors regarding the central focus of each text block, previous examples, and an additional field for translation when the initial text is written in a language different from the one used by the target journal. Usability was evaluated using formal usability tests and field observations.
The application presented excellent usability and integration with the regular writing habits of experienced researchers. Workshops were developed to train novice researchers, presenting an accelerated learning curve. The application has been used in over 20 different scientific articles and grant proposals.
The current version of Manuscript Architect has proven to be very useful in the writing of multiple scientific texts, suggesting that virtual writing by interdisciplinary groups is an effective manner of scientific writing when interdisciplinary work is required.
PMCID: PMC1180829  PMID: 15960855
7.  Evaluation Criteria for Publishing in Top-Tier Journals in Environmental Health Sciences and Toxicology 
Environmental Health Perspectives  2011;119(7):896-899.
Background: Trying to publish a paper in a top-rated peer-reviewed journal can be a difficult and frustrating experience for authors. It is important that authors understand the general review process before submitting manuscripts for publication.
Objectives: Editors-in-chief and associate editors from top-tier journals such as Environmental Health Perspectives (EHP), Toxicological Sciences, Journal of Pharmacology and Experimental Therapeutics, and Chemical Research in Toxicology were asked to provide guidance concerning the writing and submission of papers to their journals.
Discussion: The editors reviewed the manuscript review process for their journals, elaborated on the evaluation criteria for reviewing papers, and provided advice for future authors in preparing their papers.
Conclusions: The manuscript submission process was similar for all of the journals with the exception of EHP, which includes an initial screening in which about two-thirds of submitted papers are returned to the authors without review. The evaluation criteria used by the journals were also similar. Papers that are relevant to the scope of the journal, are innovative, significantly advance the field, are well written, and adhere to the instructions to authors have a higher likelihood of being accepted. The editors advised potential authors to ensure that the topic of the paper is within the scope of the journal and represents an important problem, to prepare the paper carefully according to the instructions to authors, and to seek editorial assistance if English is not the authors' primary language.
PMCID: PMC3222983  PMID: 21414890
environmental health sciences; evaluation criteria; peer review; top-tier journals; toxicology
8.  Representation and Misrepresentation of Scientific Evidence in Contemporary Tobacco Regulation: A Review of Tobacco Industry Submissions to the UK Government Consultation on Standardised Packaging 
PLoS Medicine  2014;11(3):e1001629.
Selda Ulucanlar and colleagues analyze submissions by two tobacco companies to the UK government consultation on standardized packaging.
Please see later in the article for the Editors' Summary
Standardised packaging (SP) of tobacco products is an innovative tobacco control measure opposed by transnational tobacco companies (TTCs) whose responses to the UK government's public consultation on SP argued that evidence was inadequate to support implementing the measure. The government's initial decision, announced 11 months after the consultation closed, was to wait for ‘more evidence’, but four months later a second ‘independent review’ was launched. In view of the centrality of evidence to debates over SP and TTCs' history of denying harms and manufacturing uncertainty about scientific evidence, we analysed their submissions to examine how they used evidence to oppose SP.
Methods and Findings
We purposively selected and analysed two TTC submissions using a verification-oriented cross-documentary method to ascertain how published studies were used and interpretive analysis with a constructivist grounded theory approach to examine the conceptual significance of TTC critiques. The companies' overall argument was that the SP evidence base was seriously flawed and did not warrant the introduction of SP. However, this argument was underpinned by three complementary techniques that misrepresented the evidence base. First, published studies were repeatedly misquoted, distorting the main messages. Second, ‘mimicked scientific critique’ was used to undermine evidence; this form of critique insisted on methodological perfection, rejected methodological pluralism, adopted a litigation (not scientific) model, and was not rigorous. Third, TTCs engaged in ‘evidential landscaping’, promoting a parallel evidence base to deflect attention from SP and excluding company-held evidence relevant to SP. The study's sample was limited to sub-sections of two out of four submissions, but leaked industry documents suggest at least one other company used a similar approach.
The TTCs' claim that SP will not lead to public health benefits is largely without foundation. The tools of Better Regulation, particularly stakeholder consultation, provide an opportunity for highly resourced corporations to slow, weaken, or prevent public health policies.
Editors' Summary
Every year, about 6 million people die from tobacco-related diseases and, if current trends continue, annual tobacco-related deaths will increase to more than 8 million by 2030. To reduce this loss of life, national and international bodies have drawn up various conventions and directives designed to implement tobacco control measures such as the adoption of taxation policies aimed at reducing tobacco consumption and bans on tobacco advertising, promotion, and sponsorship. One innovative but largely unused tobacco control measure is standardised packaging of tobacco products. Standardised packaging aims to prevent the use of packaging as a marketing tool by removing all brand imagery and text (other than name) and by introducing packs of a standard shape and colour that include prominent pictorial health warnings. Standardised packaging was first suggested as a tobacco control measure in 1986 but has been consistently opposed by the tobacco industry.
Why Was This Study Done?
The UK is currently considering standardised packaging of tobacco products. In the UK, Better Regulation guidance obliges officials to seek the views of stakeholders, including corporations, on the government's cost and benefit estimates of regulatory measures such as standardised packaging and on the evidence underlying these estimates. In response to a public consultation about standardised packaging in July 2013, which considered submissions from several transnational tobacco companies (TTCs), the UK government announced that it would wait for the results of the standardised packaging legislation that Australia adopted in December 2012 before making its final decision about this tobacco control measure. Parliamentary debates and media statements have suggested that doubt over the adequacy of the evidence was the main reason for this ‘wait and see’ decision. Notably, TTCs have a history of manufacturing uncertainty about the scientific evidence related to the harms of tobacco. Given the centrality of evidence to the debate about standardised packaging, in this study, the researchers analyse submissions made by two TTCs, British American Tobacco (BAT) and Japan Tobacco International (JTI), to the first UK consultation on standardised packaging (a second review is currently underway and will report shortly) to examine how TTCs used evidence to oppose standardised packaging.
What Did the Researchers Do and Find?
The researchers analysed sub-sections of two of the four TTC submissions (those submitted by BAT and JTI) made to the public consultation using verification-oriented cross-documentary analysis, which compared references made to published sources with the original sources to ascertain how these sources had been used, and interpretative analysis to examine the conceptual significance of TTC critiques of the evidence on standardised packaging. The researchers report that the companies' overall argument was that the evidence base in support of standardised packaging was seriously flawed and did not warrant the introduction of such packaging. The researchers identified three ways in which the TTC reports misrepresented the evidence base. First, the TTCs misquoted published studies, thereby distorting the main messages of these studies. For example, the TTCs sometimes omitted important qualifying information when quoting from published studies. Second, the TTCs undermined evidence by employing experts to review published studies for methodological rigor and value in ways that did not conform to normal scientific critique approaches (‘mimicked scientific critique’). So, for example, the experts considered each piece of evidence in isolation for its ability to support standardised packaging rather than considering the cumulative weight of the evidence. Finally, the TTCs engaged in ‘evidential landscaping’. That is, they promoted research that deflected attention from standardised packaging (for example, research into social explanations of smoking behaviour) and omitted internal industry research on the role of packaging in marketing.
What Do These Findings Mean?
These findings suggest that the TTC critique of the evidence in favour of standardised packaging that was presented to the UK public consultation on this tobacco control measure is highly misleading. However, because the researchers' analysis only considered subsections of the submissions from two TTCs, these findings may not be applicable to the other submissions or to other TTCs. Moreover, their analysis only considered the efforts made by TTCs to influence public health policy and not the effectiveness of these efforts. Nevertheless, these findings suggest that the claim of TTCs that standardised packaging will not lead to public health benefits is largely without foundation. More generally, these findings highlight the possibility that the tools of Better Regulation, particularly stakeholder consultation, provide an opportunity for wealthy corporations to slow, weaken, or prevent the implementation of public health policies.
Additional Information
Please access these websites via the online version of this summary at
The World Health Organization provides information about the dangers of tobacco (in several languages) and an article about first experiences with Australia's tobacco plain packaging law; for information about the tobacco industry's influence on policy, see the 2009 World Health Organization report ‘Tobacco industry interference with tobacco control’
A UK parliamentary briefing on standardised packaging of tobacco products, a press release about the consultation, and a summary report of the consultation are available; the ideas behind the UK's Better Regulation guidance are described in a leaflet produced by the Better Regulation Task Force
Cancer Research UK (CRUK) has a web page with information on standardised packaging and includes videos
Wikipedia has a page on standardised packaging of tobacco products (note: Wikipedia is a free online encyclopaedia that anyone can edit; available in several languages)
The UK Centre for Tobacco Control Studies is a network of UK universities that undertakes original research, policy development, advocacy, and teaching and training in the field of tobacco control; an online resource managed by the University of Bath provides up-to-date information on the tobacco industry and the tactics it uses to influence tobacco regulation
SmokeFree, a website provided by the UK National Health Service, offers advice on quitting smoking and includes personal stories from people who have stopped smoking; a resource from the US National Cancer Institute offers online tools and resources to help people quit smoking
PMCID: PMC3965396  PMID: 24667150
9.  The Usefulness of Peer Review for Selecting Manuscripts for Publication: A Utility Analysis Taking as an Example a High-Impact Journal 
PLoS ONE  2010;5(6):e11344.
High predictive validity – that is, a strong association between the outcome of peer review (usually, reviewers' ratings) and the scientific quality of a manuscript submitted to a journal (measured as citations of the later published paper) – does not as a rule suffice to demonstrate the usefulness of peer review for the selection of manuscripts. To assess usefulness, it is important to include in addition the base rate (proportion of submissions that are fundamentally suitable for publication) and the selection rate (the proportion of submissions accepted).
Methodology/Principal Findings
Taking the example of the high-impact journal Angewandte Chemie International Edition (AC-IE), we present a general approach for determining the usefulness of peer review for the selection of manuscripts for publication. The results of our study show that peer review is useful: 78% of the submissions accepted by AC-IE are correctly accepted for publication when the editor's decision is based on one review, 69% when it is based on two reviews, and 65% when it is based on three reviews.
The paper also shows what changes in the selection rate, base rate, or validity coefficient would yield a higher success rate (utility) in the AC-IE selection process.
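The interplay of validity, base rate, and selection rate described above can be illustrated with a small Monte Carlo sketch. This is a simplified simulation under assumed bivariate-normal quality/rating scores, not the utility-analysis method of the paper; all parameter values in the example call are hypothetical.

```python
import random

def review_utility(validity, base_rate, selection_rate, n=200_000, seed=1):
    """Fraction of accepted manuscripts that are truly publishable,
    given the rating/quality correlation (validity), the share of
    fundamentally suitable submissions (base_rate), and the share
    accepted (selection_rate). Simplified Monte Carlo sketch."""
    random.seed(seed)
    papers = []
    for _ in range(n):
        # True quality and reviewer rating as correlated standard normals.
        quality = random.gauss(0, 1)
        noise = random.gauss(0, 1)
        rating = validity * quality + (1 - validity**2) ** 0.5 * noise
        papers.append((rating, quality))
    # Accept the top `selection_rate` fraction by reviewer rating.
    papers.sort(reverse=True)
    accepted = papers[: int(n * selection_rate)]
    # A paper counts as publishable if its quality is in the top `base_rate`.
    threshold = sorted(q for _, q in papers)[int(n * (1 - base_rate))]
    hits = sum(1 for _, q in accepted if q >= threshold)
    return hits / len(accepted)

# Hypothetical numbers: modest validity, generous base rate, strict selection.
print(review_utility(validity=0.4, base_rate=0.7, selection_rate=0.2))
```

Even with modest validity, a strict selection rate combined with a high base rate yields a large share of correct accepts, which is the pattern the utility analysis exploits.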
PMCID: PMC2893207  PMID: 20596540
10.  Systematic Differences in Impact across Publication Tracks at PNAS 
PLoS ONE  2009;4(12):e8092.
Citation data can be used to evaluate the editorial policies and procedures of scientific journals. Here we investigate citation counts for the three different publication tracks of the Proceedings of the National Academy of Sciences of the United States of America (PNAS). This analysis explores the consequences of differences in editor and referee selection, while controlling for the prestige of the journal in which the papers appear.
Methodology/Principal Findings
We find that papers authored and “Contributed” by NAS members (Track III) are on average cited less often than papers that are “Communicated” for others by NAS members (Track I) or submitted directly via the standard peer review process (Track II). However, we also find that the variance in the citation count of Contributed papers, and to a lesser extent Communicated papers, is larger than for direct submissions. Therefore, when examining the 10% most-cited papers from each track, Contributed papers receive the most citations, followed by Communicated papers, while Direct submissions receive the fewest citations.
Our findings suggest that PNAS “Contributed” papers, in which NAS–member authors select their own reviewers, balance an overall lower impact with an increased probability of publishing exceptional papers. This analysis demonstrates that different editorial procedures are associated with different levels of impact, even within the same prominent journal, and raises interesting questions about the most appropriate metrics for judging an editorial policy's success.
PMCID: PMC2778996  PMID: 19956649
11.  Imbalance in Individual Researcher's Peer Review Activities Quantified for Four British Ecological Society Journals, 2003-2010 
PLoS ONE  2014;9(3):e92896.
Researchers contribute to the scientific peer review system by providing reviews, and “withdraw” from it by submitting manuscripts that are subsequently reviewed. So far as we are aware, there has been no quantification of the balance of individuals' contributions and withdrawals. We compared the number of reviews provided by individual researchers (i.e., their contribution) to the number required by their submissions (i.e., their withdrawals) in a large and anonymised database provided by the British Ecological Society. The database covered the Journal of Ecology, Journal of Animal Ecology, Journal of Applied Ecology, and Functional Ecology from 2003–2010. The majority of researchers (64%) did not have balanced contributions and withdrawals. Depending on assumptions, 12% to 44% contributed more than twice as much as required; 20% to 52% contributed less than half as much as required. Balance, or lack thereof, varied little in relation to the number of years a researcher had been active (reviewing or submitting). Researchers who contributed less than required did not lack the opportunity to review. Researchers who submitted more were more likely to accept invitations to review. These findings suggest overall that peer review of the four analysed journals is not in crisis, but only due to the favourable balance of over- and under-contributing researchers. These findings are limited to the four journals analysed, and therefore cannot include researchers' other peer review activities, which if included might change the proportions reported. Relatively low effort was required to assemble, check, and analyse the data. Broader analyses of individual researchers' peer review activities would contribute to greater quality, efficiency, and fairness in the peer review system.
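The contribution/withdrawal accounting above can be sketched as a simple ratio. This is a minimal illustration, not the paper's exact accounting rules: the average number of reviews each submission consumes (`reviews_per_ms`) is a hypothetical parameter, and the over/under thresholds mirror the "more than twice" and "less than half" cut-offs quoted in the abstract.

```python
def review_balance(reviews_provided, manuscripts_submitted, reviews_per_ms=2.5):
    """Ratio of reviews contributed to reviews 'withdrawn' by one's own
    submissions. reviews_per_ms is a hypothetical average per manuscript."""
    required = manuscripts_submitted * reviews_per_ms
    if required == 0:
        # No submissions: any reviewing is pure contribution.
        return float("inf") if reviews_provided else 1.0
    return reviews_provided / required

def classify(ratio):
    if ratio > 2:
        return "over-contributor"   # more than twice as much as required
    if ratio < 0.5:
        return "under-contributor"  # less than half as much as required
    return "balanced"

# 10 reviews against 2 submissions (requiring ~5 reviews) is balanced.
print(classify(review_balance(reviews_provided=10, manuscripts_submitted=2)))
```

Varying `reviews_per_ms` shifts individual classifications, which is one reason the paper reports ranges (12% to 44%, 20% to 52%) depending on assumptions.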
PMCID: PMC3962470  PMID: 24658631
12.  Streamlined research funding using short proposals and accelerated peer review: an observational study 
Despite the widely recognised importance of sustainable health care systems, health services research remains generally underfunded in Australia. The Australian Centre for Health Services Innovation (AusHSI) is funding health services research in the state of Queensland. AusHSI has developed a streamlined protocol for applying and awarding funding using a short proposal and accelerated peer review.
An observational study of proposals for four health services research funding rounds from May 2012 to November 2013. A short proposal of less than 1,200 words was submitted using a secure web-based portal. The primary outcome measures are: time spent preparing proposals; a simplified scoring of grant proposals (reject, revise or accept for interview) by a scientific review committee; and progressing from submission to funding outcomes within eight weeks. Proposals outside of health services research were deemed ineligible.
There were 228 eligible proposals across 4 funding rounds: from 29% to 79% were shortlisted and 9% to 32% were accepted for interview. Success rates increased from 6% (in 2012) to 16% (in 2013) of eligible proposals. Applicants were notified of the outcomes within two weeks of the interview, which was a maximum of eight weeks after the submission deadline. Applicants spent 7 days on average preparing their proposal. Applicants with a ranking of reject or revise received written feedback and suggested improvements for their proposals, and resubmissions comprised one third of the 2013 rounds.
The AusHSI funding scheme is a streamlined application process that has simplified the process of allocating health services research funding for both applicants and peer reviewers. The AusHSI process has minimised the time from submission to notification of funding outcomes.
PMCID: PMC4324047
13.  Processes to manage analyses and publications in a phase III multicenter randomized clinical trial 
Trials  2014;15:159.
The timely publication of findings in peer-reviewed journals is a primary goal of clinical research. In clinical trials, the processes leading to publication can be complex from choice and prioritization of analytic topics through to journal submission and revisions. As little literature exists on the publication process for multicenter trials, we describe the development, implementation, and effectiveness of such a process in a multicenter trial.
The Hepatitis C Antiviral Long-Term Treatment against Cirrhosis (HALT-C) trial included a data coordinating center (DCC) and clinical centers that recruited and followed more than 1,000 patients. Publication guidelines were approved by the steering committee, and the publications committee monitored the publication process from selection of topics to publication.
A total of 73 manuscripts were published in 23 peer-reviewed journals. When manuscripts were closely tracked, the median time for analyses and drafting of manuscripts was 8 months. The median time for data analyses was 5 months and the median time for manuscript drafting was 3 months. The median time for publications committee review, submission, and journal acceptance was 7 months, and the median time from analytic start to journal acceptance was 18 months.
Effective publication guidelines must be comprehensive, implemented early in a trial, and require active management by study investigators. Successful collaboration, such as in the HALT-C trial, can serve as a model for others involved in multidisciplinary and multicenter research programs.
Trial registration
The HALT-C Trial was registered with ClinicalTrials.gov (NCT00006164).
PMCID: PMC4040510  PMID: 24886378
Publication guidelines; Publication processes; Publication management; HALT-C trial; Authorship assignment; Authorship allocation
14.  Content and communication: How can peer review provide helpful feedback about the writing? 
Peer review is assumed to improve the quality of research reports as tools for scientific communication, yet strong evidence that this outcome is obtained consistently has been elusive. Failure to distinguish between aspects of discipline-specific content and aspects of the writing or use of language may account for some deficiencies in current peer review processes.
The process and outcomes of peer review may be analyzed along two dimensions: 1) identifying scientific or technical content that is useful to other researchers (i.e., its "screening" function), and 2) improving research articles as tools for communication (i.e., its "improving" function). However, editors and reviewers do not always distinguish clearly between content criteria and writing criteria. When peer reviewers confuse content and writing, their feedback can be misunderstood by authors, who may modify texts in ways that do not make the readers' job easier. When researchers in peer review confuse the two dimensions, this can lead to content validity problems that foil attempts to define informative variables and outcome measures, and thus prevent clear trends from emerging. Research on writing, revising and editing suggests some reasons why peer review is not always as effective as it might be in improving what is written.
Peer review could be improved if stakeholders were more aware of variations in gatekeepers' (reviewers' and editors') ability to provide feedback about the content or the writing. Gatekeepers, academic literacy researchers, and wordface professionals (author's editors, medical writers and translators) could work together to discover the types of feedback authors find most useful. I offer suggestions to help editologists design better studies of peer review which could make the process an even stronger tool for manuscript improvement than it is now.
PMCID: PMC2268697  PMID: 18237378
15.  Statistical Reviewers Improve Reporting in Biomedical Articles: A Randomized Trial 
PLoS ONE  2007;2(3):e332.
Although peer review is widely considered to be the most credible way of selecting manuscripts and improving the quality of accepted papers in scientific journals, there is little evidence to support its use. Our aim was to estimate the effects on manuscript quality of either adding a statistical peer reviewer or suggesting the use of checklists such as CONSORT or STARD to clinical reviewers or both.
Methodology and Principal Findings
Interventions were defined as 1) the addition of a statistical reviewer to the clinical peer review process, and 2) suggesting reporting guidelines to reviewers; with “no statistical expert” and “no checklist” as controls. The two interventions were crossed in a 2×2 balanced factorial design including original research articles consecutively selected, between May 2004 and March 2005, by the Medicina Clinica (Barc) editorial committee. We randomized manuscripts to minimize differences in terms of baseline quality and type of study (intervention, longitudinal, cross-sectional, others). Sample-size calculations indicated that 100 papers provide an 80% power to test a 55% standardized difference. We specified the main outcome as the increment in quality of papers as measured on the Goodman Scale. Two blinded evaluators rated the quality of manuscripts at initial submission and in the final post-peer-review version. Of the 327 manuscripts submitted to the journal, 131 were accepted for further review, and 129 were randomized. Of those, 14 that were lost to follow-up showed no differences in initial quality from the followed-up papers. Hence, 115 were included in the main analysis, with 16 rejected for publication after peer review. 21 (18.3%) of the 115 included papers were interventions, 46 (40.0%) were longitudinal designs, 28 (24.3%) cross-sectional and 20 (17.4%) others. The 16 (13.9%) rejected papers had a significantly lower initial score on the overall Goodman scale than accepted papers (difference 15.0, 95% CI: 4.6 to 24.4). Suggesting a guideline to the reviewers had no effect on change in overall quality as measured by the Goodman scale (0.9, 95% CI: −0.3 to +2.1). The estimated effect of adding a statistical reviewer was 5.5 (95% CI: 4.3 to 6.7), a significant improvement in quality.
Conclusions and Significance
This prospective randomized study shows the positive effect of adding a statistical reviewer to the field-expert peers in improving manuscript quality. We did not find a statistically significant positive effect by suggesting reviewers use reporting guidelines.
PMCID: PMC1824709  PMID: 17389922
16.  How to write an article: Preparing a publishable manuscript! 
CytoJournal  2012;9:1.
Most of the scientific work presented as abstracts (platforms and posters) at various conferences has the potential to be published as articles in peer-reviewed journals. This DIY (Do It Yourself) article on how to achieve that goal is an extension of the symposium presented at the 36th European Congress of Cytology, Istanbul, Turkey (presentation available online). The criteria for manuscript authorship should be based on the ICMJE (International Committee of Medical Journal Editors) Uniform Requirements for Manuscripts. The next step is to choose the appropriate journal to which to submit the manuscript and to review the ‘Instructions to the authors’ for that journal. Although it may initially appear an insurmountable task, diligent organizational discipline, with a little patience and perseverance and input from mentors, should lead to the preparation of a nearly perfect publishable manuscript, even by a novice. Ultimately, the published article is an excellent record of academic productivity that contributes to the public good by encouraging the exchange of experience and innovation. It is a highly rewarding conduit to personal success and growth, leading to the collective achievement of continued scientific progress. The recent emergence of journals and publishers offering a platform to publish under an open access charter gives authors the opportunity to protect their copyright from being lost to conventional publishers. Publishing your work on this open platform is a rewarding mission and is the recommended option in the current era.
[This open access article can be linked (copy-paste link from HTML version of this article) or reproduced FREELY if original reference details are prominently identifiable].
PMCID: PMC3280045  PMID: 22363390
Author; cytopathology; manuscript; publish; research; reviewer
Acta Informatica Medica  2012;20(3):141-148.
In this paper the author discusses preparing and submitting manuscripts (scientific, research, and professional papers, reviews, and case reports) from the editor's perspective, with particular attention to the ethical aspects of authorship, conflict of interest, copyright, plagiarism, and duplicate publication, drawing on his experience as Editor-in-Chief of several biomedical journals, Chief of the Task Force of European Federation of Medical Informatics journals, and member of the Task Force of European Cardiology Society journals. The scientific process relies on trust and credibility. The scientific community demands high ethical standards for conducting biomedical research and publishing scientific content. During the last decade, disclosure of conflicts of interest (COI), also called competing loyalties, competing interests, or dual commitments, has been considered a key element in guaranteeing the credibility of the scientific process. Biases in the design, analysis, and interpretation of studies may arise when authors or sponsors have vested interests. Therefore, COI should be made clear to readers to facilitate their own judgment of their relevance and potential implications.
Results and Discussion:
Authors are responsible for fully disclosing potential COI. In October 2009 the ICMJE proposed an electronic “uniform” format for COI disclosure. Four main areas were addressed: authors' associations with entities that supported the submitted manuscript (indefinite time frame), associations with commercial entities with potential interest in the general area of the manuscript (time frame 36 months), financial associations of their spouse and children and, finally, non-financial associations potentially relevant to the submitted manuscript. Consumers of medical scholarship expect a reliable system of disclosure in which journals and authors make disclosures appropriately and consistently. There is a stigma surrounding the reporting of COI that should be progressively overcome. Further actions are required to increase awareness of the importance of COI disclosure and to promote policies aimed at enhancing transparency in biomedical research. In this article the author discusses important ethical dilemmas in preparing, writing, and publishing scientific manuscripts in biomedical journals.
PMCID: PMC3508847  PMID: 23322969
medical science; biomedical journals; ethics; authorship; acknowledgement; conflict of interest; copyright; plagiarism; duplicate publication.
18.  Our silent enemy: ashes in our libraries 
Scholars, scientists, physicians, other health professionals, and librarians face a crucial decision today: shall we nourish the biomedical archives as a viable and indispensable source of information, or shall we bury their ashes and lose a century or more of consequential scientific history? Biomedical books and journals published since the 1850s on self-destructing acidic paper are silently and insidiously scorching on our shelves. The associated risks for scientists and physicians are serious—incomplete assessment of past knowledge; unnecessary repetition of studies that have already led to conclusive results; delay in scientific advances when important concepts, techniques, instruments, and procedures are overlooked; faulty comparative analyses; or improper assignment of priority.
The archives also disclose the nature of biomedical research, which builds on past knowledge, advances incrementally, and is strewn with missteps, frustrations, detours, inconsistencies, enigmas, and contradictions. The public's familiarity with the scientific process will avoid unrealistic expectations and will encourage support for research in health. But a proper historical perspective requires access to the biomedical archives. Since journals will apparently continue to be published on paper, it is folly to persist in the use of acidic paper and thus magnify for future librarians and preservationists the already Sisyphean and costly task of deacidifying their collections.
Our plea for conversion to acid-free paper is accompanied by an equally strong appeal for more rigorous criteria for journal publication. The glut of journal articles—many superficial, redundant, mediocre, or otherwise flawed and some even fraudulent—has overloaded our databases, complicated bibliographic research, and exacerbated the preservation problem. Before accepting articles, journal editors should ask: If it is not worth preserving, is it worth publishing?
It is our responsibility to protect the integrity of our biomedical records against all threats. Authors should consider submitting manuscripts to journals that use acid-free paper, especially if they think, as most authors do, that they are writing for posterity. Librarians can refuse to purchase journals published on acidic paper, which they know will need restoration within a few decades and will thus help deplete their budgets. All of us can urge our government to devise a coordinated national conservation policy that will halt the destruction of a century of our historical record. The battle will not be easy, but the challenge beckons urgently. The choice is ours: we can answer the call, or we can deny scientists, physicians, and historians the records they need to expand human knowledge and improve health care.
PMCID: PMC227429  PMID: 2758179
19.  A retrospective analysis of submissions, acceptance rate, open peer review operations, and prepublication bias of the multidisciplinary open access journal Head & Face Medicine 
Head & Face Medicine  2007;3:27.
Head & Face Medicine (HFM) was launched in August 2005 to provide multidisciplinary science in the field of head and face disorders with an open access and open peer review publication platform. The objective of this study is to evaluate the characteristics of submissions, the effectiveness of open peer reviewing, and factors biasing the acceptance or rejection of submitted manuscripts.
A 1-year period of submissions and all concomitant journal operations were retrospectively analyzed. The analysis included submission rate, reviewer rate, acceptance rate, article type, and differences in duration for peer reviewing, final decision, publishing, and PubMed inclusion. Statistical analysis included Mann-Whitney U test, Chi-square test, regression analysis, and binary logistic regression.
HFM received 126 articles (10.5 articles/month) for consideration in the first year. Submissions have been increasing, but not significantly over time. Peer reviewing was completed for 82 articles and resulted in an acceptance rate of 48.8%. In total, 431 peer reviewers were invited (5.3/manuscript), of which 40.4% agreed to review. The mean peer review time was 37.8 days. The mean time between submission and acceptance (including time for revision) was 95.9 days. Accepted papers were published on average 99.3 days after submission. The mean time between manuscript submission and PubMed inclusion was 101.3 days. The main article types submitted to HFM were original research, reviews, and case reports. The article type had no influence on rejection or acceptance. The variable 'number of invited reviewers' was the only significant (p < 0.05) predictor for rejection of manuscripts.
The positive trend in submissions confirms the need for publication platforms for multidisciplinary science. HFM's peer review time comes in shorter than the 6-weeks turnaround time the Editors set themselves as the maximum. Rejection of manuscripts was associated with the number of invited reviewers. None of the other parameters tested had any effect on the final decision. Thus, HFM's ethical policy, which is based on Open Access, Open Peer, and transparency of journal operations, is free of 'editorial bias' in accepting manuscripts.
Original data
Provided as a downloadable tab-delimited text file (URL and variable code available under section 'additional files').
PMCID: PMC1913501  PMID: 17562003
20.  Electronic Submission of Academic Works: A Survey of Current Editorial Practices of Radiologic Journals  
Journal of Digital Imaging  2001;14(2):107-110.
Computers are nearly ubiquitous in academic medicine, and authors create and compile much of their work in the electronic environment, yet the process of manuscript submission often fails to utilize the advantages of electronic communication. The purpose of this report is to review the submission policies of major academic journals in the field of radiology and assess current editorial practices relating to electronic submission of academic works. The authors surveyed 16 radiologic journals that are indexed in the Index Medicus and available in our medical center library. They compared the manuscript submission policies of these journals as outlined in recent issues of the journals and the corresponding World Wide Web sites. The authors compared the journals on the following criteria: web site access to instructions; electronic submission of text, both at initial submission and at final submission of the approved document; text hardcopy requirements; word processing software restrictions; electronic submission of figures; figure hardcopy requirements; figure file format restrictions; and electronic submission media. Although the trend seems to be toward electronic submission, there currently is no clear-cut standard of practice. Because all of the journals that accept electronic documents also require a hardcopy, many of the advantages gained through electronic submission are nullified. In addition, many publishers only utilize electronic documents after a manuscript has been accepted, thus applying the benefits of digital information to the printing process but not to the actual submission and peer-review process.
PMCID: PMC3452756  PMID: 11440253
21.  The society for computer applications in radiology. Has DICOM become a victim of its own success?  
Journal of Digital Imaging  2001;14(3):163-164.
PMCID: PMC3607474  PMID: 11720339
22.  How to Write Articles that Get Published 
Publications are essential for sharing knowledge and for career advancement, yet writing a research paper is a challenge. Most graduate programmes in medicine do not offer hands-on training in writing and publishing in scientific journals, and beginners find the art and science of scientific writing a daunting task. 'How do I write a scientific paper?' and 'Is there a sure way to successful publication?' are frequently asked questions. This paper aims to answer these questions and to guide a beginner through the process of planning, writing, and correcting manuscripts that attract readers and satisfy peer reviewers. A well-structured paper in lucid and correct language that is easy to read and edit, and that strictly follows the journal's instructions to authors, finds favour with readers and avoids outright rejection. Making the right choice of journal is a decision critical to acceptance. Perseverance through the peer review process is the road to successful publication.
PMCID: PMC4225960  PMID: 25386508
Medical writing; Publication in biomedical journal; Preparation of manuscript
23.  A comparison of authors publishing in two groups of U.S. medical journals. 
This study compared the editorial peer review experiences of authors who published in two groups of indexed U.S. medical journals. The study tested the hypothesis that after one journal rejects a manuscript an author selects a less well-known journal for submission. Group One journals were defined as those indexed in 1992 MEDLINE that satisfied several additional qualitative measures; Group Two journals were indexed in the 1992 MEDLINE only. Surveys were sent to the first authors of 616 randomly selected articles, and 479 surveys were returned, for a response rate of 78.1%. A total of 20.8% of Group One and 15.7% of Group Two articles previously had been rejected. Group One authors were more likely to select a journal for its prestige and article quality, while Group Two authors were more likely to have been invited to submit the manuscript. More than 60% of both groups felt the peer review had offered constructive suggestions, but that it had changed article conclusions less than 3% of the time. Both groups thought the review process only marginally improved content, organization, or statistical analysis, or clarified conclusions. Between 3% and 15% of all authors received considerable conflicting advice from different reviewers. Authors from both groups differed as to their reasons for journal selection, their connections with the publishing journal, and patterns of resubmission after rejection.
PMCID: PMC226156  PMID: 8883984
24.  The positive impact of a facilitated peer mentoring program on academic skills of women faculty 
BMC Medical Education  2012;12:14.
In academic medicine, women physicians lag behind their male counterparts in advancement and promotion to leadership positions. Lack of mentoring, among other factors, has been reported to contribute to this disparity. Peer mentoring has been reported as a successful alternative to the dyadic mentoring model for women interested in improving their academic productivity. We describe a facilitated peer mentoring program in our institution's department of medicine.
Nineteen women enrolled in the program were divided into 5 groups. Each group had an assigned facilitator. Members of the respective groups met together with their facilitators at regular intervals during the 12 months of the project. A pre- and post-program evaluation consisting of a 25-item self-assessment of academic skills, self-efficacy, and academic career satisfaction was administered to each participant.
At the end of 12 months, a total of 9 manuscripts were submitted to peer-reviewed journals, 6 of which are in press or have been published, and another 2 of which have been invited to be revised and resubmitted. At the end of the program, participants reported an increase in their satisfaction with academic achievement (mean score increase, 2.32 to 3.63; P = 0.0001), improvement in skills necessary to effectively search the medical literature (mean score increase, 3.32 to 4.05; P = 0.0009), an improvement in their ability to write a comprehensive review article (mean score increase, 2.89 to 3.63; P = 0.0017), and an improvement in their ability to critically evaluate the medical literature (mean score increase, 3.11 to 3.89; P = 0.0008).
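Pre/post score comparisons like these are commonly analysed with a paired t test. The sketch below computes the paired t statistic from scratch on hypothetical 5-point self-ratings; the numbers are illustrative only and are not the study's data, and the study's own test choice is not stated in the abstract.

```python
import math

def paired_t(pre, post):
    """Paired t statistic: t = mean(d) / (sd(d) / sqrt(n)), d = post - pre."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of the differences
    return mean / math.sqrt(var / n)

# Hypothetical 5-point self-ratings before and after a mentoring program
pre  = [2, 3, 2, 3, 2, 3, 3, 2, 2, 3]
post = [4, 4, 3, 4, 3, 4, 4, 3, 3, 4]
t = paired_t(pre, post)
```

A large positive `t` (here 11.0 with 9 degrees of freedom) corresponds to a very small P value, the same pattern as the improvements reported above.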
This facilitated peer mentoring program demonstrated a positive impact on the academic skills and manuscript writing for junior women faculty. This 1-year program required minimal institutional resources, and suggests a need for further study of this and other mentoring programs for women faculty.
PMCID: PMC3325854  PMID: 22439908
25.  Bursaries, writing grants and fellowships: a strategy to develop research capacity in primary health care 
BMC Family Practice  2007;8:19.
General practitioners and other primary health care professionals are often the first point of contact for patients requiring health care. Identifying, understanding and linking current evidence to best practice can be challenging and requires at least a basic understanding of research principles and methodologies. However, not all primary health care professionals are trained in research or have research experience. With the aim of enhancing research skills and developing a research culture in primary health care, University Departments of General Practice and Rural Health have been supported since 2000 by the Australian Government funded 'Primary Health Care Research Evaluation and Development (PHCRED) Strategy'.
A small grant funding scheme to support primary health care practitioners was implemented through the PHCRED program at Flinders University in South Australia between 2002 and 2005. The scheme incorporated academic mentors and three types of funding support: bursaries, writing grants and research fellowships. This article describes outcomes of the funding scheme and contributes to the debate surrounding the effectiveness of funding schemes as a means of building research capacity.
Funding recipients who had completed their research were invited to participate in a semi-structured 40-minute telephone interview. Feedback was sought on acquisition of research skills, publication outcomes, development of research capacity, confidence and interest in research, and perception of research. Data were also collected on demographics, research topics, and time needed to complete planned activities.
The funding scheme supported 24 bursaries, 11 writing grants, and three research fellows. Nearly half (47%) of all grant recipients were allied health professionals, followed by general practitioners (21%). The majority (70%) were novice and early career researchers.
Eighty-nine percent of the grant recipients were interviewed. Capacity, confidence, and level of research skills in ten core areas were generally considered to have improved as a result of the award. More than half (53%) had presented their research and 32% had published or submitted an article in a peer-reviewed journal.
A small grant and mentoring scheme through a University Department can effectively enhance research skills, confidence, output, and interest in research of primary health care practitioners.
PMCID: PMC1854903  PMID: 17408497
