1.  Ghost Authorship in Industry-Initiated Randomised Trials 
PLoS Medicine  2007;4(1):e19.
Background
Ghost authorship, the failure to name, as an author, an individual who has made substantial contributions to an article, may result in lack of accountability. The prevalence and nature of ghost authorship in industry-initiated randomised trials is not known.
Methods and Findings
We conducted a cohort study comparing protocols and corresponding publications for industry-initiated trials approved by the Scientific-Ethical Committees for Copenhagen and Frederiksberg in 1994–1995. We defined ghost authorship as present if individuals who wrote the trial protocol, performed the statistical analyses, or wrote the manuscript, were not listed as authors of the publication, or as members of a study group or writing committee, or in an acknowledgment. We identified 44 industry-initiated trials. We did not find any trial protocol or publication that stated explicitly that the clinical study report or the manuscript was to be written or was written by the clinical investigators, and none of the protocols stated that clinical investigators were to be involved with data analysis. We found evidence of ghost authorship for 33 trials (75%; 95% confidence interval 60%–87%). The prevalence of ghost authorship was increased to 91% (40 of 44 articles; 95% confidence interval 78%–98%) when we included cases where a person qualifying for authorship was acknowledged rather than appearing as an author. In 31 trials, the ghost authors we identified were statisticians. It is likely that we have overlooked some ghost authors, as we had very limited information to identify the possible omission of other individuals who would have qualified as authors.
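The confidence intervals reported above (60%–87% for 33 of 44 trials, 78%–98% for 40 of 44) are consistent with a standard exact (Clopper-Pearson) binomial interval. A minimal sketch of that calculation in Python, assuming SciPy is available (the function name is illustrative, not from the paper):

```python
from scipy.stats import beta

def clopper_pearson(x, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion x/n."""
    lo = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lo, hi

# 33 of 44 trials with evidence of ghost authorship
lo, hi = clopper_pearson(33, 44)
print(f"{33/44:.0%} (95% CI {lo:.0%}-{hi:.0%})")

# 40 of 44 when acknowledged qualifying contributors are included
lo2, hi2 = clopper_pearson(40, 44)
print(f"{40/44:.0%} (95% CI {lo2:.0%}-{hi2:.0%})")
```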
Conclusions
Ghost authorship in industry-initiated trials is very common. Its prevalence could be considerably reduced, and transparency improved, if existing guidelines were followed, and if protocols were publicly available.
Of 44 industry-initiated trials, there was evidence of ghost authorship in 33, increasing to 40 when a person qualifying for authorship was acknowledged rather than appearing as an author.
Editors' Summary
Background.
Original scientific findings are usually published in the form of a “paper”, whether it is actually distributed on paper or circulated via the internet, as this one is. Papers are normally prepared by the group of researchers who did the research, and these researchers are listed at the top of the article as its authors. The authors thereby take responsibility for the integrity of the results and their interpretation. However, many people worry that the author list on a paper does not always tell the true story of who was involved. In particular, for clinical research, case histories and previous research have suggested that “ghost authorship” is commonplace. Ghost authors are people who were involved in some way in the research study or in writing the paper, but who have been left off the final author list. This might happen because the study “looks” more credible if the true authors (for example, company employees or freelance medical writers) are not revealed. This practice can hide competing interests that readers should be aware of, and has therefore been condemned by academics, groups of editors, and some pharmaceutical companies.
Why Was This Study Done?
This group of researchers wanted to get an idea of how often ghost authorship happens in medical research done by companies. Previous studies looking into this used surveys, whereby the researchers would write to one author of each of a group of papers to ask whether anyone else had been involved in the work without being listed on the paper. Such studies typically underestimate the rate of ghost authorship, because the main author might not want to admit what had been going on. The researchers here, however, gained access to trial protocols (documents setting out the plans for future research studies), which gave them an independent way to investigate ghost authorship.
What Did the Researchers Do and Find?
In order to investigate the frequency and type of ghost authorship, these researchers identified every trial approved between 1994 and 1995 by the ethics committees of Copenhagen and Frederiksberg in Denmark. They then winnowed this group down to include only the trials that were sponsored by industry (pharmaceutical companies and others), and only those trials that were finished and published. The protocols for each trial were obtained from the ethics committees, and the researchers matched up each protocol with its corresponding paper. They then compared the names appearing in the protocol against the names appearing on the eventual paper, either on the author list or acknowledged elsewhere in the paper as being involved. The researchers ended up studying 44 trials. For 33 of these (75%) they found some evidence of ghost authorship, in that people were identified as having written the protocol, performed the statistical analyses, or written the manuscript, but did not end up listed in the publication. If the definition of ghost authorship was broadened to include people qualifying for authorship who were mentioned in the acknowledgements but not on the author list, the researchers' estimate went up to 91%, that is, 40 of the 44 trials. For most of the trials with missing authors, the ghost was a statistician (the person who analyzes the trial data).
What Do These Findings Mean?
In this study, the researchers found that ghost authorship was very common in papers published in medical journals (this study covered a broad range of peer-reviewed journals in many medical disciplines). The method used in this paper seems more reliable than using surveys to work out how often ghost authorship happens. The researchers aimed to define authorship using the policies set out by a group called the International Committee of Medical Journal Editors (ICMJE), and the findings here suggest that the ICMJE's standards for authorship are very often ignored. This means that people who read the published paper cannot always accurately judge or trust the information presented within it, and competing interests may be hidden. The researchers here suggest that protocols should be made publicly available so that everyone can see what trials are planned and who is involved in conducting them. The findings also suggest that journals should not only list the authors of each paper but describe what each author has done, so that the published information accurately reflects what has been carried out.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040019.
Read the Perspective by Liz Wager, which discusses these findings in more depth
The International Committee of Medical Journal Editors (ICMJE) is a group of general medical journal editors who have produced general guidelines for biomedical manuscripts; their definition of authorship is also described
The Committee on Publication Ethics is a forum for editors of peer-reviewed journals to discuss issues related to the integrity of the scientific record; the Web site lists anonymized problems and the committee's advice, not just regarding authorship, but other types of problems as well
Good Publication Practice for Pharmaceutical Companies outlines common standards for publication of industry-sponsored medical research, and some pharmaceutical companies have agreed to these
doi:10.1371/journal.pmed.0040019
PMCID: PMC1769411  PMID: 17227134
2.  Electronic Submission of Academic Works: A Survey of Current Editorial Practices of Radiologic Journals  
Journal of Digital Imaging  2001;14(2):107-110.
Computers are nearly ubiquitous in academic medicine, and authors create and compile much of their work in the electronic environment, yet the process of manuscript submission often fails to utilize the advantages of electronic communication. The purpose of this report is to review the submission policies of major academic journals in the field of radiology and to assess current editorial practices relating to the electronic submission of academic works. The authors surveyed 16 radiologic journals that are indexed in the Index Medicus and were available in their medical center library. They compared the manuscript submission policies of these journals as outlined in recent issues of the journals and on the corresponding Web sites. The authors compared the journals on the following criteria: Web site access to instructions; electronic submission of text, with regard to both initial submission and final submission of the approved document; text hardcopy requirements; word-processing software restrictions; electronic submission of figures; figure hardcopy requirements; figure file format restrictions; and electronic submission media. Although the trend seems to be toward electronic submission, there currently is no clear-cut standard of practice. Because all of the journals that accept electronic documents also require a hardcopy, many of the advantages gained through electronic submission are nullified. In addition, many publishers only utilize electronic documents after a manuscript has been accepted, thus exploiting the benefits of digital information in the printing process but not in the actual submission and peer-review process.
doi:10.1007/s10278-001-0008-x
PMCID: PMC3452756  PMID: 11440253
3.  News at Biochemia Medica: Research integrity corner, updated Guidelines to authors, revised Author statement form and adopted ICMJE Conflict-of-interest form 
Biochemia Medica  2013;23(1):5-6.
From issue 23(1) we have implemented several major changes in our editorial policies and procedures. We hope that these changes will raise awareness among our potential authors and reviewers of research and publication integrity issues, as well as improve the quality of our submissions and published articles. Among the changes is the launch of a special journal section called the Research Integrity Corner, in which we aim to publish educational articles dealing with various research and publication misconduct issues. Moreover, we have comprehensively revised our Instructions to authors. Whereas the former Instructions to authors were mostly concerned with recommendations for manuscript preparation and submission, the revised document additionally describes the editorial procedure for all submitted articles and sets out the journal's policies on research integrity, authorship, copyright, and conflict of interest. By putting these Guidelines into action, we hope that our main ethical policies and requirements are now visible and available to all our potential authors. We have also revised the former Authorship and copyright form, now called the Author statement form, which contains statements on authorship, originality of work, research ethics, patient privacy and confidentiality, and copyright transfer. Finally, the journal has adopted the ICMJE Form for Disclosure of Potential Conflicts of Interest. From this issue onward, the authors of each submitted article are requested to fill out the “ICMJE Form for Disclosure of Potential Conflicts of Interest” as well as the Author statement form and to upload both forms during the online manuscript submission process. We sincerely believe that our authors and readers will appreciate these efforts. In this Editorial we briefly explain the background and nature of these recent major editorial changes.
doi:10.11613/BM.2013.001
PMCID: PMC3900092  PMID: 23457759
scientific misconduct; plagiarism; editorial policies
4.  A retrospective analysis of submissions, acceptance rate, open peer review operations, and prepublication bias of the multidisciplinary open access journal Head & Face Medicine 
Head & Face Medicine  2007;3:27.
Background
Head & Face Medicine (HFM) was launched in August 2005 to provide multidisciplinary science in the field of head and face disorders with an open access and open peer review publication platform. The objective of this study is to evaluate the characteristics of submissions, the effectiveness of open peer reviewing, and factors biasing the acceptance or rejection of submitted manuscripts.
Methods
A 1-year period of submissions and all concomitant journal operations were retrospectively analyzed. The analysis included submission rate, reviewer rate, acceptance rate, article type, and differences in duration for peer reviewing, final decision, publishing, and PubMed inclusion. Statistical analysis included Mann-Whitney U test, Chi-square test, regression analysis, and binary logistic regression.
Results
HFM received 126 articles (10.5 articles/month) for consideration in the first year. Submissions have been increasing, but not significantly over time. Peer reviewing was completed for 82 articles and resulted in an acceptance rate of 48.8%. In total, 431 peer reviewers were invited (5.3/manuscript), of which 40.4% agreed to review. The mean peer review time was 37.8 days. The mean time between submission and acceptance (including time for revision) was 95.9 days. Accepted papers were published on average 99.3 days after submission. The mean time between manuscript submission and PubMed inclusion was 101.3 days. The main article types submitted to HFM were original research, reviews, and case reports. The article type had no influence on rejection or acceptance. The variable 'number of invited reviewers' was the only significant (p < 0.05) predictor for rejection of manuscripts.
Conclusion
The positive trend in submissions confirms the need for publication platforms for multidisciplinary science. HFM's peer review time is shorter than the six-week turnaround the Editors set themselves as the maximum. Rejection of manuscripts was associated with the number of invited reviewers. None of the other parameters tested had any effect on the final decision. Thus, HFM's ethical policy, which is based on open access, open peer review, and transparency of journal operations, is free of 'editorial bias' in the acceptance of manuscripts.
Original data
Provided as a downloadable tab-delimited text file (URL and variable code available under section 'additional files').
doi:10.1186/1746-160X-3-27
PMCID: PMC1913501  PMID: 17562003
5.  The Relationship of Previous Training and Experience of Journal Peer Reviewers to Subsequent Review Quality 
PLoS Medicine  2007;4(1):e40.
Background
Peer review is considered crucial to the selection and publication of quality science, but very little is known about the previous experiences and training that might identify high-quality peer reviewers. The reviewer selection processes of most journals, and thus the qualifications of their reviewers, are ill defined. More objective selection of peer reviewers might improve the journal peer review process and thus the quality of published science.
Methods and Findings
306 experienced reviewers (71% of all those associated with a specialty journal) completed a survey of past training and experiences postulated to improve peer review skills. Reviewers performed 2,856 reviews of 1,484 separate manuscripts during a four-year study period, all prospectively rated on a standardized quality scale by editors. Multivariable analysis revealed that most variables, including academic rank, formal training in critical appraisal or statistics, or status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital versus other teaching environment and relative youth (under ten years of experience after finishing training). Being on an editorial board and doing formal grant (study section) review were each predictors for only one of our two comparisons. However, the predictive power of all variables was weak.
Conclusions
Our study confirms that there are no easily identifiable types of formal training or experience that predict reviewer performance. Skill in scientific peer review may be as ill defined and hard to impart as is “common sense.” Without a better understanding of those skills, it seems unlikely journals and editors will be successful in systematically improving their selection of reviewers. This inability to predict performance makes it imperative that all but the smallest journals implement routine review ratings systems to routinely monitor the quality of their reviews (and thus the quality of the science they publish).
A survey of experienced reviewers, asked about training they had received in peer review, found there are no easily identifiable types of formal training and experience that predict reviewer performance.
Editors' Summary
Background.
When medical researchers have concluded their research and written it up, the next step is to get it published as an article in a journal, so that the findings can be circulated widely. These published findings help determine subsequent research and clinical use. The editors of reputable journals, including PLoS Medicine, have to decide whether the articles sent to them are of good quality and accurate, and whether they will be of interest to the readers of their journal. To do this they need to obtain specialist advice, so they contact experts in the topic of the research article and ask them to write reports. This is the process of scientific peer review, and the experts who write such reports are known as “peer reviewers.” Although the editors make the final decision, the advice and criticism that these peer reviewers provide to the editors are essential in making decisions on publication, and usually in requiring authors to make changes to their manuscript. The contribution that peer reviewers have made to an article by the time it is finally published may, therefore, be quite considerable.
Although peer review is accepted as a key part of the process for the publishing of medical research, many people have argued that there are flaws in the system. For example, there may be an element of luck involved; one author might find their paper being reviewed by a reviewer who is biased against the approach they have adopted or who is a very critical person by nature, and another author may have the good fortune to have their work considered by someone who is much more favorably disposed toward their work. Some reviewers are more knowledgeable and thorough in their work than others. The editors of medical journals try to take into account such biases and quality factors in their choice of peer reviewers or when assessing the reviews. Some journals have run training courses for experts who review for them regularly to try to make the standard of peer review as high as possible.
Why Was This Study Done?
It is hard for journal editors to know who will make a good peer reviewer, and there is no proven system for choosing them. The authors of this study wanted to identify the previous experiences and training that make up the background of good peer reviewers and compare them with the quality of the reviews provided. This would help journal editors select good people for the task in future, and as a result will affect the quality of science they publish for readers, including other researchers.
What Did the Researchers Do and Find?
The authors contacted all the regular reviewers from one specialist journal (Annals of Emergency Medicine). A total of 306 of these experienced reviewers (71% of all those associated with the journal) completed a survey of past training and experiences that might be expected to improve peer review skills. These reviewers had done 2,856 reviews of 1,484 separate manuscripts during a four-year study period, and during this time the quality of the reviews had been rated by the journal's editors. Surprisingly, most variables, including academic rank, formal training in critical appraisal or statistics, or status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital versus other teaching environment and relative youth (under ten years of experience after finishing training), and even these were only weak predictors.
What Do These Findings Mean?
This study suggests that there are no easily identifiable types of formal training or experience that predict peer reviewer performance, although it is clear that some reviewers (and reviews) are better than others. The authors therefore suggest that it is essential that journals routinely monitor the quality of the reviews submitted to them to ensure they are getting good advice (a practice that is not universal).
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040040
• WAME is an association of editors from many countries who seek to foster international cooperation among editors of peer-reviewed medical journals
• The Fifth International Congress on Peer Review and Biomedical Publication is one of a series of conferences on peer review
• The PLoS Medicine guidelines for reviewers outline what we look for in a review
• The Council of Science Editors promotes ethical scientific publishing practices
• An editorial also published in this issue of PLoS Medicine discusses the peer review process further
doi:10.1371/journal.pmed.0040040
PMCID: PMC1796627  PMID: 17411314
6.  Comments on the process and product of the health impacts assessment component of the national assessment of the potential consequences of climate variability and change for the United States. 
Environmental Health Perspectives  2001;109(Suppl 2):177-184.
In 1990 Congress formed the U.S. Global Change Research Program and required it to conduct a periodic national assessment of the potential impacts of climate variability and change on all regions and select economic/resource sectors of the United States. Between 1998 and 2000, a team of experts collaborated on a health impacts assessment that formed the basis for the first National Assessment's analysis of the potential impacts of climate on human health. The health impacts assessment was integrated across a number of health disciplines and involved a search for and qualitative expert judgment review of data on the potential links between climate events and population health. Accomplishments included identification of vulnerable populations, adaptation strategies, research needs, and data gaps. Experts, stakeholders, and the public were involved. The assessment is reported in five articles in this issue; a summary was published in the April 2000 issue of Environmental Health Perspectives. The assessment report will enhance understanding of ways human health might be affected by various climate-associated stresses and of the need for further empirical and predictive research. Improved understanding and communication of the significance and inevitability of uncertainties in such an assessment are critical to further research and policy development.
PMCID: PMC1240664  PMID: 11359684
7.  In Search of Integrated Specificity: Comment on Denson, Spanovic, and Miller (2009) 
Psychological bulletin  2009;135(6):854-856.
Psychologists have long been interested in the integrated specificity hypothesis, which maintains that stressors elicit fairly distinct behavioral, emotional, and biological responses, molded by selective pressures to meet specific demands from the environment. This issue of Psychological Bulletin features a meta-analytic review of the evidence for this proposition by Denson, Spanovic, and Miller (2009). It concludes that the meta-analytic findings support the “core concept behind the integrated specificity model (p. XX)” and reveal that “within the context of a stressful event, organisms produce an integrated and coordinated response at multiple levels (i.e., cognitive, emotional, physiological; p. XX).” In this commentary I argue that conclusions like this are unwarranted given the data. Aside from some effects for cortisol, in fact, there was little evidence of specificity, and most of the significant findings reported would be expected by chance alone. I also contend that Denson et al. fail to consider some important sources of evidence bearing on the specificity hypothesis, particularly how appraisals and emotions couple with autonomic nervous system endpoints and functional indices of immune response. If selective pressures did give rise to an integrated stress response, such pathways almost certainly would have been involved. By omitting such outcomes from the meta-analysis, the authors have overlooked what are probably the most definitive tests of the specificity hypothesis. As a result, the field is back where it started: with a lot of affection for the concept of integrated specificity, but little in the way of definitive evidence to refute or accept it.
doi:10.1037/a0017440
PMCID: PMC2774222  PMID: 19883138
8.  Neurologic adverse events associated with smallpox vaccination in the United States – response and comment on reporting of headaches as adverse events after smallpox vaccination among military and civilian personnel 
BMC Medicine  2006;4:27.
Background
Accurate reporting of adverse events occurring after vaccination is an important component of determining risk-benefit ratios for vaccinations. Controversy has developed over alleged underreporting of adverse events within U.S. military samples. This report examines the accuracy of adverse event rates recently published for headaches, and examines the issue of underreporting of headaches as a function of civilian or military sources and as a function of passive versus active surveillance.
Methods
A report by Sejvar et al was examined closely for accuracy with respect to the reporting of neurologic adverse events associated with smallpox vaccination in the United States. Rates for headaches were reported by several scholarly sources, in addition to Sejvar et al, permitting a comparison of reporting rates as a function of source and type of surveillance.
Results
Several major errors or omissions were identified in Sejvar et al. The count of civilian subjects vaccinated and the totals of both civilians and military personnel vaccinated were reported incorrectly by Sejvar et al. Counts of headaches reported in VAERS were lower (n = 95) for Sejvar et al than for Casey et al (n = 111) even though the former allegedly used 665,000 subjects while the latter used fewer than 40,000 subjects, with both using approximately the same civilian sources. Consequently, rates of nearly 20 neurologic adverse events reported by Sejvar et al were also incorrectly calculated. Underreporting of headaches after smallpox vaccination appears to increase for military samples and for passive adverse event reporting systems.
Conclusion
Until revised or corrected, the rates of neurologic adverse events after smallpox vaccination reported by Sejvar et al must be deemed invalid. The approach of determining overall rates of adverse events by combining small civilian samples with large military samples also appears to be invalid. Headaches as adverse events after smallpox vaccination appear to have been reported much less frequently through passive surveillance systems and by members of the U.S. military compared with civilians, especially those employed in healthcare occupations. Such concerns affect the risk-benefit ratios associated with vaccines and weigh against making vaccinations mandatory, without informed consent, even among military members. Because of the issues raised here, adverse event rates derived solely or primarily from U.S. Department of Defense reporting systems, especially passive surveillance systems, should not be used for making public health policy decisions when better alternatives exist.
doi:10.1186/1741-7015-4-27
PMCID: PMC1647285  PMID: 17096855
9.  A comment to the paper by Waltman et al., Scientometrics, 87, 467–481, 2011 
Scientometrics  2011;88(3):1011-1016.
In reaction to a previous critique (Opthof and Leydesdorff, J Informetr 4(3):423–430, 2010), the Center for Science and Technology Studies (CWTS) in Leiden proposed to change their old “crown” indicator in citation analysis into a new one. Waltman et al. (Scientometrics 87:467–481, 2011a) argue that this change does not affect rankings at various aggregated levels. However, CWTS data are not publicly available for testing and criticism. Therefore, we comment by using previously published data from Van Raan (Scientometrics 67(3):491–502, 2006) to address the pivotal issue of how the results of citation analysis correlate with the results of peer review. A quality parameter based on peer review was neither significantly correlated with the two parameters developed by the CWTS in the past, citations per paper/mean journal citation score (CPP/JCSm) and CPP/FCSm (citations per paper/mean field citation score), nor with the more recently proposed h-index (Hirsch, Proc Natl Acad Sci USA 102(46):16569–16572, 2005). Given the high correlations between the old and new “crown” indicators, one can expect that the lack of correlation with the peer-review based quality indicator applies equally to the newly developed ones.
doi:10.1007/s11192-011-0424-8
PMCID: PMC3153660  PMID: 21949453
Citation; Indicator; h-index; Quality; Excellence; Selection
11.  The future of JABA: A comment 
doi:10.1901/jaba.1987.20-329
PMCID: PMC1286072  PMID: 16795704
12.  "Any other comments?" Open questions on questionnaires – a bane or a bonus to research? 
Background
The habitual "any other comments" general open question at the end of structured questionnaires has the potential to increase response rates, elaborate responses to closed questions, and allow respondents to identify new issues not captured in the closed questions. However, we believe that many researchers have collected such data and failed to analyse or present it.
Discussion
General open questions at the end of structured questionnaires can present a problem because of their uncomfortable status of being strictly neither qualitative nor quantitative data, the consequent lack of clarity around how to analyse and report them, and the time and expertise needed to do so. We suggest that the value of these questions can be optimised if researchers start with a clear understanding of the type of data they wish to generate from such a question, and employ an appropriate strategy when designing the study. The intention can be to generate depth data or 'stories' from purposively defined groups of respondents for qualitative analysis, or to produce quantifiable data, representative of the population sampled, as a 'safety net' to identify issues which might complement the closed questions.
Summary
We encourage researchers to consider developing a more strategic use of general open questions at the end of structured questionnaires. This may optimise the quality of the data and the analysis, reduce dilemmas regarding whether and how to analyse such data, and result in a more ethical approach to making best use of the data which respondents kindly provide.
doi:10.1186/1471-2288-4-25
PMCID: PMC533875  PMID: 15533249
13.  Common Errors in Manuscripts Submitted to Medical Science Journals 
Background:
Many manuscripts submitted to biomedical journals are rejected for reasons that include low-quality of the manuscripts.
Aim:
The aim of this study is to identify and characterize the common errors in manuscripts submitted to medical journals based in Africa and Asia.
Materials and Methods:
Reviewers’ reports on 42 manuscripts were analyzed qualitatively using deductive coding, and quantitatively to determine the errors by sections of the manuscripts. The study included only reviews on full length original research articles.
Results:
Results showed that 66.7% (28/42) of the manuscripts had flaws in the introduction, 85.7% (36/42) in materials and methods, 66.7% (28/42) in the results, 71.4% (30/42) in the discussion, 69.0% (29/42) in the references, and 81.0% (34/42) in the general sections. Qualitative analysis of the reviews revealed 22 themes. The most common flaws identified were an inadequate review of the literature, insufficiently detailed methodology, unsystematic or illogical presentation of results, and unsupported conclusions. Others were inconsistent or nonconforming citations and poor grammar.
Conclusions:
The results show that many of the manuscripts had notable errors, demonstrating the need for attention to detail in study design and manuscript preparation, and for further training of medical scientists in the techniques of manuscript writing for journal publication.
doi:10.4103/2141-9248.117957
PMCID: PMC3793443  PMID: 24116317
Academic writing; Africa; Asia; Manuscript preparation; Peer review
14.  Editorial 
The Editorial office of GSE and the EDP Sciences publishing company are pleased to inform you that, since October 10, 2005, submission and management of GSE manuscripts have been administered with the help of the Manuscript Management System (MMS) software. MMS is a specialised database that provides information on the manuscripts submitted for publication to GSE. This system permits simple, reliable and efficient management of the manuscripts during the whole process of reviewing, editing and publication.
Authors are kindly requested to register on the MMS database and to follow instructions for the electronic submission of their manuscripts. After completion of the registration, authors can follow the status of their manuscript directly through MMS.
MMS is accessible at:
doi:10.1186/1297-9686-37-7-585
PMCID: PMC2697238
15.  Less Work, Less Respect: Authors' Perceived Importance of Research Contributions and Their Declared Contributions to Research Articles 
PLoS ONE  2011;6(6):e20206.
Background
Attitudes towards authorship are associated with authors' research experience and with knowledge of the authorship criteria of the International Committee of Medical Journal Editors (ICMJE). The objective of this study was to assess the association between the importance authors ascribe to contributions as authorship qualifications and their declared participation in manuscripts submitted to a journal.
Methods
Authors (n = 1181) of 265 manuscripts submitted to the Croatian Medical Journal were asked to identify and rate their contribution in the preparation of the submitted manuscript (0 – none to 4 – full for 11 listed contributions) and the importance of these contributions as authorship qualifications (0 – none to 4 – full). They were randomly allocated into 3 groups: the first (n = 90 manuscripts, n = 404 authors) received the contribution disclosure form first and then the contribution importance-rating questionnaire; the second (n = 88 manuscripts, n = 382 authors) received the rating questionnaire first and then the contribution disclosure form; and the third group (n = 87 manuscripts, n = 395 authors) received both questionnaires at the same time. We compared authors' perception of the importance of contribution categories.
Results
1014 (85.9%) authors of 235 manuscripts responded. Authors who declared a contribution to a specific category rated it as more important for authorship than authors who did not contribute to that category (P<0.005 for all contribution categories, Mann–Whitney test). Authors qualifying for ICMJE authorship rated all contribution categories higher than non-qualifying authors. For all contributions, associations between the perceived importance of a contribution and the author's actual contribution were statistically significant.
Conclusions
Authorship seems to be not only a normative issue subject to categorization into criteria, but also a very personal view of the importance and value of one's contributions.
doi:10.1371/journal.pone.0020206
PMCID: PMC3119662  PMID: 21713036
16.  Predictive Validity Evidence for Medical Education Research Study Quality Instrument Scores: Quality of Submissions to JGIM’s Medical Education Special Issue 
Background
Deficiencies in medical education research quality are widely acknowledged. Content, internal structure, and criterion validity evidence support the use of the Medical Education Research Study Quality Instrument (MERSQI) to measure education research quality, but predictive validity evidence has not been explored.
Objective
To describe the quality of manuscripts submitted to the 2008 Journal of General Internal Medicine (JGIM) medical education issue and determine whether MERSQI scores predict editorial decisions.
Design and Participants
Cross-sectional study of original, quantitative research studies submitted for publication.
Measurements
Study quality measured by MERSQI scores (possible range 5–18).
Results
Of 131 submitted manuscripts, 100 met inclusion criteria. The mean (SD) total MERSQI score was 9.6 (2.6), range 5–15.5. Most studies used single-group cross-sectional (54%) or pre-post designs (32%), were conducted at one institution (78%), and reported satisfaction or opinion outcomes (56%). Few (36%) reported validity evidence for evaluation instruments. A one-point increase in MERSQI score was associated with editorial decisions to send manuscripts for peer review versus reject without review (OR 1.31, 95%CI 1.07–1.61, p = 0.009) and to invite revisions after review versus reject after review (OR 1.29, 95%CI 1.05–1.58, p = 0.02). MERSQI scores predicted final acceptance versus rejection (OR 1.32; 95% CI 1.10–1.58, p = 0.003). The mean total MERSQI score of accepted manuscripts was significantly higher than rejected manuscripts (10.7 [2.5] versus 9.0 [2.4], p = 0.003).
Conclusions
MERSQI scores predicted editorial decisions and identified areas of methodological strengths and weaknesses in submitted manuscripts. Researchers, reviewers, and editors might use this instrument as a measure of methodological quality.
Electronic supplementary material
The online version of this article (doi:10.1007/s11606-008-0664-3) contains supplementary material, which is available to authorized users.
doi:10.1007/s11606-008-0664-3
PMCID: PMC2517948  PMID: 18612715
medical education research; research quality; research methods
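The odds ratios above describe how a one-point MERSQI increase shifts the odds of a favourable editorial decision. A minimal sketch of that arithmetic, using the reported OR of 1.31 for being sent to peer review; the 40% baseline probability is invented for illustration, not taken from the study:

```python
import math

# hypothetical log-odds coefficient: OR = exp(b) = 1.31 per MERSQI point
b = math.log(1.31)

def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

p0 = 0.40                    # assumed baseline probability of being sent for review
o1 = odds(p0) * math.exp(b)  # odds after a one-point MERSQI increase
p1 = o1 / (1 + o1)           # back to a probability
print(round(p1, 3))          # → 0.466
```

Note that an odds ratio multiplies odds, not probabilities: a one-point increase moves the assumed 40% chance to about 46.6%, not to 40% × 1.31.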
17.  Do Editorial Policies Support Ethical Research? A Thematic Text Analysis of Author Instructions in Psychiatry Journals 
PLoS ONE  2014;9(6):e97492.
Introduction
According to the Declaration of Helsinki and other guidelines, clinical studies should be approved by a research ethics committee and seek valid informed consent from the participants. Editors of medical journals are encouraged by the ICMJE and COPE to include requirements for these principles in the journal’s instructions for authors. This study assessed the editorial policies of psychiatry journals regarding ethics review and informed consent.
Methods and Findings
The information given on ethics review and informed consent, and any mention of the ICMJE and COPE recommendations, were assessed within the authors' instructions and online submission procedures of all 123 eligible psychiatry journals. While 54% and 58% of editorial policies required ethics review and informed consent, respectively, only 14% and 19% demanded the reporting of these issues in the manuscript. The top 10 psychiatry journals (ranked by impact factor) performed similarly in this regard.
Conclusions
Only every second psychiatry journal adheres to the ICMJE’s recommendation to inform authors about requirements for informed consent and ethics review. Furthermore, we argue that even the ICMJE’s recommendations in this regard are insufficient, at least for ethically challenging clinical trials. At the same time, ideal scientific design sometimes even needs to be compromised for ethical reasons. We suggest that features of clinical studies that make them morally controversial, but not necessarily unethical, are analogous to methodological limitations and should thus be reported explicitly. Editorial policies as well as reporting guidelines such as CONSORT should be extended to support a meaningful reporting of ethical research.
doi:10.1371/journal.pone.0097492
PMCID: PMC4046953  PMID: 24901366
18.  Why do you think you should be the author on this manuscript? Analysis of open-ended responses of authors in a general medical journal 
Background
To assess how authors would describe their contribution to the submitted manuscript without reference to or requirement to satisfy authorship criteria of the International Committee of Medical Journal Editors (ICMJE), we analyzed responses of authors to an open-ended question “Why do you think you should be the author on this manuscript?”.
Methods
Responses of authors (n=1425) who submitted their manuscripts (n=345) to the Croatian Medical Journal, an international general medical journal, from March 2009 until July 2010 were transcribed and matched to ICMJE criteria. Statements that could not be matched were separately categorized. Responses according to the number of authors or their byline position on the manuscript were analyzed using Mann–Whitney U test and Moses test of extreme reactions.
Results
The number of authors per manuscript ranged from 1 to 26 (median=4, IQR=3-6), with a median of 2 contributions per author (IQR=2-3). Authors’ responses could be matched to the ICMJE criteria in 1116 (87.0%) cases. Among these, only 15.6% clearly declared contributions from all 3 ICMJE criteria; however, if signing of the authorship form was taken as fulfillment of the third ICMJE criterion, the overall fraction of deserving authorship was 54.2%. Non-ICMJE contributions were declared by 98 (7.6%) authors whose other contributions could be matched to ICMJE criteria, and by 116 (13.0%) authors whose contributions could not be matched to ICMJE criteria. The most frequently reported non-ICMJE contribution was literature review. Authors on manuscripts with more than 8 authors declared more contributions than those on manuscripts with 8 or fewer authors: median 2, IQR 1–4, vs. median 2, IQR 1–3, respectively (Mann–Whitney U test, p=0.001; Moses test of extreme reactions, p<0.001). Almost a third of single authors (n=9; 31.0%) reported contributions that could not be matched to any ICMJE criterion.
Conclusions
For multi-author collaborative efforts, but not for manuscripts with fewer authors, open-ended authorship declaration without instructions on ICMJE criteria elicited responses similar to those given when ICMJE criteria were explicitly required. Current authorship criteria and the practice of contribution declaration should be revised in order to capture deserving authorship in biomedical research.
doi:10.1186/1471-2288-12-189
PMCID: PMC3552823  PMID: 23256648
Authorship; Guideline adherence; Contribution disclosure form; International Committee of Medical Journal Editors (ICMJE); Editorial policies; Croatia
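This study and the one above both compare contribution counts between groups with the Mann–Whitney U test. A minimal stdlib sketch of the U statistic, with invented contribution counts standing in for the real data:

```python
def mann_whitney_u(x, y):
    """U statistic: number of (x_i, y_j) pairs with x_i > y_j, ties counted as 1/2."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# hypothetical contributions per author on large vs. small author lists
large = [2, 4, 1, 3, 2]
small = [2, 1, 2, 1]
print(mann_whitney_u(large, small))  # → 15.0
```

Because the test ranks values rather than averaging them, two groups can share the same median (as in the abstract above, median 2 vs. median 2) and still differ significantly in the distribution of ranks. A useful sanity check is that the two directional statistics sum to the number of pairs: `mann_whitney_u(small, large)` here is 5.0, and 15.0 + 5.0 = 5 × 4.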
20.  Editorial Peer Reviewers' Recommendations at a General Medical Journal: Are They Reliable and Do Editors Care? 
PLoS ONE  2010;5(4):e10072.
Background
Editorial peer review is universally used but little studied. We examined the relationship between external reviewers' recommendations and the editorial outcome of manuscripts undergoing external peer-review at the Journal of General Internal Medicine (JGIM).
Methodology/Principal Findings
We examined reviewer recommendations and editors' decisions at JGIM between 2004 and 2008. For manuscripts undergoing peer review, we calculated chance-corrected agreement among reviewers on recommendations to reject versus accept or revise. Using mixed effects logistic regression models, we estimated intra-class correlation coefficients (ICC) at the reviewer and manuscript level. Finally, we examined the probability of rejection in relation to reviewer agreement and disagreement. The 2264 manuscripts sent for external review during the study period received 5881 reviews provided by 2916 reviewers; 28% of reviews recommended rejection. Chance corrected agreement (kappa statistic) on rejection among reviewers was 0.11 (p<.01). In mixed effects models adjusting for study year and manuscript type, the reviewer-level ICC was 0.23 (95% confidence interval [CI], 0.19–0.29) and the manuscript-level ICC was 0.17 (95% CI, 0.12–0.22). The editors' overall rejection rate was 48%: 88% when all reviewers for a manuscript agreed on rejection (7% of manuscripts) and 20% when all reviewers agreed that the manuscript should not be rejected (48% of manuscripts) (p<0.01).
Conclusions/Significance
Reviewers at JGIM agreed on recommendations to reject vs. accept/revise at levels barely beyond chance, yet editors placed considerable weight on reviewers' recommendations. Efforts are needed to improve the reliability of the peer-review process while helping editors understand the limitations of reviewers' recommendations.
doi:10.1371/journal.pone.0010072
PMCID: PMC2851650  PMID: 20386704
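The chance-corrected agreement reported above is the kappa statistic. For the two-rater case it reduces to Cohen's kappa, sketched below with invented reviewer recommendations standing in for the JGIM data (1 = reject, 0 = accept/revise):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n  # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    # expected agreement if raters labelled independently at their marginal rates
    pe = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / (n * n)
    return (po - pe) / (1 - pe)

# hypothetical recommendations from two reviewers on eight manuscripts
rev_a = [1, 0, 0, 1, 0, 0, 1, 0]
rev_b = [1, 0, 1, 0, 0, 0, 0, 0]
print(round(cohens_kappa(rev_a, rev_b), 2))  # → 0.14
```

Here the raters agree on 5 of 8 manuscripts (62.5%), but because most recommendations are "do not reject", chance alone predicts 56.25% agreement, leaving a kappa near zero, which is the same pattern the JGIM study reports at scale.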
21.  Introduction to European comments on “Medullary Thyroid Cancer: management guidelines of the American Thyroid Association” 
Thyroid Research  2013;6(Suppl 1):S1.
The Guest Editors of the Thyroid Research supplement devoted to medullary thyroid cancer recount how the discussion about “Medullary Thyroid Cancer: management guidelines of the American Thyroid Association” was initiated and subsequently widely commented on, before and during the European Thyroid Association – Cancer Research Network Meeting in Lisbon. They explain why it was decided to publish the manuscripts within the supplement: to document voices from the discussion and popularize them.
doi:10.1186/1756-6614-6-S1-S1
PMCID: PMC3599712  PMID: 23514266
22.  Determinants of physical activity and exercise in healthy older adults: A systematic review 
Background
The health benefits of regular physical activity and exercise have been widely acknowledged. Unfortunately, a decline in physical activity is observed in older adults. Knowledge of the determinants of physical activity (unstructured activity incorporated in daily life) and exercise (structured, planned and repetitive activities) is needed to effectively promote an active lifestyle. Our aim was to systematically review determinants of physical activity and exercise participation among healthy older adults, considering the methodological quality of the included studies.
Methods
Literature searches were conducted in PubMed/Medline and PsycINFO/OVID for peer reviewed manuscripts published in English from 1990 onwards. We included manuscripts that met the following criteria: 1) population: community dwelling healthy older adults, aged 55 and over; 2) reporting determinants of physical activity or exercise. The outcome measure was qualified as physical activity, exercise, or combination of the two, measured objectively or using self-report. The methodological quality of the selected studies was examined and a best evidence synthesis was applied to assess the association of the determinants with physical activity or exercise.
Results
Thirty-four manuscripts reporting on 30 studies met the inclusion criteria, of which two were of high methodological quality. Physical activity was reported in four manuscripts, exercise was reported in sixteen and a combination of the two was reported in fourteen manuscripts. Three manuscripts used objective measures, twenty-two manuscripts used self-report measures and nine manuscripts combined a self-report measure with an objective measure. Due to lack of high quality studies and often only one manuscript reporting on a particular determinant, we concluded "insufficient evidence" for most associations between determinants and physical activity or exercise.
Conclusions
Because physical activity was reported in four manuscripts only, the determinants of physical activity particularly need further study. Recommendations for future research include the use of objective measures of physical activity or exercise as well as valid and reliable measures of determinants.
doi:10.1186/1479-5868-8-142
PMCID: PMC3320564  PMID: 22204444
Prevention; Behaviour change; Determinants; Active lifestyle; Aged
23.  Editorial Comments, 1974-1986: The Case For and Against the Use of Computer-Assisted Decision Making 
Journal editorials are an important medium for communicating information about medical innovations. Evaluative statements contained in editorials pertain to the innovation's technical merits, as well as its probable economic, social and political, and ethical consequences. This information will either promote or impede the subsequent diffusion of innovations. This paper analyzes the evaluative information contained in thirty editorials that pertain to the topic of computer-assisted decision making (CDM). Most editorials agree that CDM technology is effective and economical in performing routine clinical tasks; controversy surrounds the use of more sophisticated CDM systems for complex problem solving. A few editorials argue that the innovation should play an integral role in transforming the established health care system. Most, however, maintain that it can or should be accommodated within the existing health care framework. Finally, while few editorials discuss the ethical ramifications of CDM technology, those that do suggest that it will contribute to more humane health care. The editorial analysis suggests that CDM technology aimed at routine clinical tasks will experience rapid diffusion. In contrast, the diffusion of more sophisticated CDM systems will, in the foreseeable future, likely be sporadic at best.
PMCID: PMC2245139
24.  Biochemia Medica has started using the CrossCheck plagiarism detection software powered by iThenticate 
Biochemia Medica  2013;23(2):139-140.
In February 2013, Biochemia Medica joined CrossRef, which enabled us to implement the CrossCheck plagiarism detection service. All manuscripts submitted to Biochemia Medica are therefore now first assigned to the Research Integrity Editor (RIE) before being sent for peer review. The RIE submits the text to CrossCheck analysis and is responsible for reviewing the results of the text-similarity analysis. Based on the CrossCheck results, the RIE then recommends to the Editor-in-Chief (EIC) whether the manuscript should be forwarded to peer review, corrected in the suspect parts prior to peer review, or immediately rejected. The final decision on the manuscript, however, rests with the EIC. We hope that our new policy and manuscript-processing algorithm will help us further increase the overall quality of our Journal.
doi:10.11613/BM.2013.016
PMCID: PMC3900059  PMID: 23894858
plagiarism; editorial policy; scientific misconduct
25.  The society for computer applications in radiology. Has DICOM become a victim of its own success?  
Journal of Digital Imaging  2001;14(3):163-164.
Computers are nearly ubiquitous in academic medicine, and authors create and compile much of their work in the electronic environment, yet the process of manuscript submission often fails to utilize the advantages of electronic communication. The purpose of this report is to review the submission policies of major academic journals in the field of radiology and assess current editorial practices relating to electronic submission of academic works. The authors surveyed 16 radiologic journals that are indexed in the Index Medicus and available in our medical center library. They compared the manuscript submission policies of these journals as outlined in recent issues of the journals and the corresponding World Wide Web sites. The authors compared the journals on the following criteria: web site access to instructions; electronic submission of text, both with regard to initial submission and final submission of the approved document; text hardcopy requirements; word processing software restrictions; electronic submission of figures; figure hardcopy requirements; figure file format restrictions; and electronic submission media. Although the trend seems to be toward electronic submission, there currently is no clear-cut standard of practice. Because all of the journals that accept electronic documents also require a hardcopy, many of the advantages gained through electronic submission are nullified. In addition, many publishers only utilize electronic documents after a manuscript has been accepted, thus utilizing the benefits of digital information in the printing process but not in the actual submission and peer-review process.
doi:10.1007/s10278-001-0016-x
PMCID: PMC3607474  PMID: 11720339
