1.  Differences in Reporting of Analyses in Internal Company Documents Versus Published Trial Reports: Comparisons in Industry-Sponsored Trials in Off-Label Uses of Gabapentin 
PLoS Medicine  2013;10(1):e1001378.
Using documents obtained through litigation, S. Swaroop Vedula and colleagues compare internal company documents regarding industry-sponsored trials of off-label uses of gabapentin with the published trial reports and find discrepancies in the reporting of analyses.
Background
Details about the type of analysis (e.g., intent to treat [ITT]) and definitions (i.e., criteria for including participants in the analysis) are necessary for interpreting a clinical trial's findings. Our objective was to compare the description of types of analyses and criteria for including participants in the publication (i.e., what was reported) with descriptions in the corresponding internal company documents (i.e., what was planned and what was done). Trials were for off-label uses of gabapentin sponsored by Pfizer and Parke-Davis, and documents were obtained through litigation.
Methods and Findings
For each trial, we compared internal company documents (protocols, statistical analysis plans, and research reports, all unpublished) with publications. One author extracted data and a second verified the extraction; a third person verified discordant items and a sample of the rest. Extracted data included the number of participants randomized and analyzed for efficacy, and the types of analyses for efficacy and safety and their definitions (i.e., criteria for including participants in each type of analysis). We identified 21 trials; 11 were published randomized controlled trials that provided the documents needed for the planned comparisons. For three trials, the research report and the publication disagreed on the number of randomized participants. Seven types of efficacy analyses were described in the protocols, statistical analysis plans, and publications, including ITT and six others. The protocol or publication described ITT using six different definitions, resulting in frequent disagreements between the two documents (i.e., different numbers of participants were included in the analyses).
Conclusions
Descriptions of analyses conducted did not agree between internal company documents and what was publicly reported. Internal company documents provide extensive documentation of methods planned and used, and trial findings, and should be publicly accessible. Reporting standards for randomized controlled trials should recommend transparent descriptions and definitions of analyses performed and which study participants are excluded.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
To be credible, published research must present an unbiased, transparent, and accurate description of the study methods and findings so that readers can assess all relevant information and make informed decisions about the impact of any conclusions. Therefore, research publications should conform to universally adopted guidelines and checklists. Studies that establish whether a treatment is effective, termed randomized controlled trials (RCTs), are checked against a comprehensive set of guidelines: the robustness of trial protocols is assessed through the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT), and the Consolidated Standards of Reporting Trials (CONSORT) statement (constructed and agreed on by a meeting of journal editors in 1996 and updated over the years) includes a 25-point checklist covering all of the key points in reporting RCTs.
Why Was This Study Done?
Although the CONSORT statement has helped improve transparency in the reporting of the methods and findings of RCTs, it does not define how certain types of analyses should be conducted or which patients should be included in an analysis, for example, in an intention-to-treat analysis (in which all participants are included in the data analysis of the group to which they were assigned, whether or not they completed the intervention). So in this study, the researchers used internal company documents, released in the course of litigation against the pharmaceutical company Pfizer regarding the drug gabapentin, to compare the internal and published reports with respect to the numbers of participants, the descriptions of the types of analyses, and the definitions of each type of analysis. The reports involved studies of gabapentin used for medical reasons not approved for marketing by the US Food and Drug Administration, known as “off-label” uses.
What Did the Researchers Do and Find?
The researchers identified trials sponsored by Pfizer relating to four off-label uses of gabapentin and examined the internal company protocols, statistical analysis plans, research reports, and the main publications related to each trial. The researchers then compared the numbers of participants randomized and analyzed for the main (primary) outcome and the type of analysis for efficacy and safety in both the internal research report and the trial publication. The researchers identified 21 trials, 11 of which were published RCTs that had the associated documents necessary for comparison.
The researchers found that in three out of ten trials there were differences in the internal research report and the main publication regarding the number of randomized participants. Furthermore, in six out of ten trials, the researchers were unable to compare the internal research report with the main publication for the number of participants analyzed for efficacy, because the research report either did not describe the primary outcome or did not describe the type of analysis. Overall, the researchers found that seven different types of efficacy analyses were described in the protocols, statistical analysis plans, and publications, including intention-to-treat analysis. However, the protocol or publication used six different descriptions for the intention-to-treat analysis, resulting in several important differences between the internal and published documents about the number of patients included in the analysis.
What Do These Findings Mean?
These findings from a sample of industry-sponsored trials on the off-label use of gabapentin suggest that when compared to the internal research reports, the trial publications did not always accurately reflect what was actually done in the trial. Therefore, the trial publication could not be considered to be an accurate and transparent record of the numbers of participants randomized and analyzed for efficacy. These findings support the need for further revisions of the CONSORT statement, such as including explicit statements about the criteria used to define each type of analysis and the numbers of participants excluded from each type of analysis. Further guidance is also needed to ensure consistent terminology for types of analysis. Of course, these revisions will improve reporting only if authors and journals adhere to them. These findings also highlight the need for all individual patient data to be made accessible to readers of the published article.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001378.
For more information, see the CONSORT statement website
The EQUATOR Network website is a resource center for the good reporting of health research studies and has more information about the SPIRIT initiative and the CONSORT statement
doi:10.1371/journal.pmed.1001378
PMCID: PMC3558476  PMID: 23382656
2.  Ghost Authorship in Industry-Initiated Randomised Trials 
PLoS Medicine  2007;4(1):e19.
Background
Ghost authorship, the failure to name as an author an individual who has made substantial contributions to an article, may result in a lack of accountability. The prevalence and nature of ghost authorship in industry-initiated randomised trials are not known.
Methods and Findings
We conducted a cohort study comparing protocols and corresponding publications for industry-initiated trials approved by the Scientific-Ethical Committees for Copenhagen and Frederiksberg in 1994–1995. We defined ghost authorship as present if individuals who wrote the trial protocol, performed the statistical analyses, or wrote the manuscript, were not listed as authors of the publication, or as members of a study group or writing committee, or in an acknowledgment. We identified 44 industry-initiated trials. We did not find any trial protocol or publication that stated explicitly that the clinical study report or the manuscript was to be written or was written by the clinical investigators, and none of the protocols stated that clinical investigators were to be involved with data analysis. We found evidence of ghost authorship for 33 trials (75%; 95% confidence interval 60%–87%). The prevalence of ghost authorship was increased to 91% (40 of 44 articles; 95% confidence interval 78%–98%) when we included cases where a person qualifying for authorship was acknowledged rather than appearing as an author. In 31 trials, the ghost authors we identified were statisticians. It is likely that we have overlooked some ghost authors, as we had very limited information to identify the possible omission of other individuals who would have qualified as authors.
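The 95% confidence intervals quoted above (60%–87% for 33 of 44 trials; 78%–98% for 40 of 44) are standard binomial proportion intervals. The paper does not state which interval method was used, but as a rough, non-authoritative check, the common Clopper-Pearson exact interval approximately reproduces both ranges. A minimal Python sketch (scipy assumed; the function name is ours, not the paper's):

    from scipy.stats import beta

    def clopper_pearson(k, n, alpha=0.05):
        """Exact (Clopper-Pearson) confidence interval for a binomial proportion k/n."""
        lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
        upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
        return lower, upper

    print(clopper_pearson(33, 44))  # roughly (0.60, 0.87), as reported for ghost authorship
    print(clopper_pearson(40, 44))  # roughly (0.78, 0.98), as reported for the broader definition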
Conclusions
Ghost authorship in industry-initiated trials is very common. Its prevalence could be considerably reduced, and transparency improved, if existing guidelines were followed, and if protocols were publicly available.
Of 44 industry-initiated trials, there was evidence of ghost authorship in 33, increasing to 40 when a person qualifying for authorship was acknowledged rather than appearing as an author.
Editors' Summary
Background.
Original scientific findings are usually published in the form of a “paper”, whether it is actually distributed on paper or circulated via the internet, as this one is. Papers are normally prepared by the group of researchers who did the research, and these researchers are then listed as authors at the top of the article. The authors thereby take responsibility for the integrity of the results and their interpretation. However, many people are worried that the author list on a paper does not always tell the true story of who was involved. In particular, for clinical research, case histories and previous research have suggested that “ghost authorship” is commonplace. Ghost authors are people who were involved in some way in the research study or in writing the paper but who have been left off the final author list. This might happen because the study “looks” more credible if the true authors (for example, company employees or freelance medical writers) are not revealed. This practice can hide competing interests that readers should be aware of, and it has therefore been condemned by academics, groups of editors, and some pharmaceutical companies.
Why Was This Study Done?
This group of researchers wanted to get an idea of how often ghost authorship happened in medical research done by companies. Previous studies looking into this used surveys, whereby the researchers would write to one author on each of a group of papers to ask whether anyone else had been involved in the work but not listed on the paper. These sorts of studies typically underestimate the rate of ghost authorship, because the main author might not want to admit what had been going on. However, the researchers here managed to get access to trial protocols (documents setting out the plans for future research studies), which gave them a way to investigate ghost authorship directly.
What Did the Researchers Do and Find?
In order to investigate the frequency and type of ghost authorship, these researchers identified every trial approved between 1994 and 1995 by the ethics committees of Copenhagen and Frederiksberg in Denmark. They then winnowed this group down to the trials that were sponsored by industry (pharmaceutical companies and others) and that were finished and published. The protocols for each trial were obtained from the ethics committees, and the researchers matched up each protocol with its corresponding paper. They then compared the names appearing in the protocol against the names appearing on the eventual paper, either on the author list or acknowledged elsewhere in the paper as being involved. The researchers ended up studying 44 trials. For 33 of these (75% of them) they found some evidence of ghost authorship: people identified as having written the protocol, performed the statistical analyses, or written the manuscript did not end up listed in the publication. When the definition of ghost authorship was broadened to include people qualifying for authorship who were mentioned in the acknowledgements but not the author list, the researchers' estimate went up to 91%, that is, 40 of the 44 trials. For most of the trials with missing authors, the ghost was a statistician (the person who analyzes the trial data).
What Do These Findings Mean?
In this study, the researchers found that ghost authorship was very common in papers published in medical journals (this study covered a broad range of peer-reviewed journals in many medical disciplines). The method used in this paper seems more reliable than using surveys to work out how often ghost authorship happens. The researchers aimed to define authorship using the policies set out by a group called the International Committee of Medical Journal Editors (ICMJE), and the findings here suggest that the ICMJE's standards for authorship are very often ignored. This means that people who read the published paper cannot always accurately judge or trust the information presented within it, and competing interests may be hidden. The researchers here suggest that protocols should be made publicly available so that everyone can see what trials are planned and who is involved in conducting them. The findings also suggest that journals should not only list the authors of each paper but describe what each author has done, so that the published information accurately reflects what has been carried out.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040019.
Read the Perspective by Liz Wager, which discusses these findings in more depth
The International Committee of Medical Journal Editors (ICMJE) is a group of general medical journal editors who have produced general guidelines for biomedical manuscripts; their definition of authorship is also described
The Committee on Publication Ethics is a forum for editors of peer-reviewed journals to discuss issues related to the integrity of the scientific record; the Web site lists anonymized problems and the committee's advice, not just regarding authorship, but other types of problems as well
Good Publication Practice for Pharmaceutical Companies outlines common standards for publication of industry-sponsored medical research, and some pharmaceutical companies have agreed to these
doi:10.1371/journal.pmed.0040019
PMCID: PMC1769411  PMID: 17227134
3.  Reporting Guidelines for Survey Research: An Analysis of Published Guidance and Reporting Practices 
PLoS Medicine  2011;8(8):e1001069.
Carol Bennett and colleagues review the evidence and find that there is limited guidance and no consensus on the optimal reporting of survey research.
Background
Research needs to be reported transparently so readers can critically assess the strengths and weaknesses of the design, conduct, and analysis of studies. Reporting guidelines have been developed to inform reporting for a variety of study designs. The objective of this study was to identify whether there is a need to develop a reporting guideline for survey research.
Methods and Findings
We conducted a three-part project: (1) a systematic review of the literature (including “Instructions to Authors” from the top five journals of 33 medical specialties and top 15 general and internal medicine journals) to identify guidance for reporting survey research; (2) a systematic review of evidence on the quality of reporting of surveys; and (3) a review of reporting of key quality criteria for survey research in 117 recently published reports of self-administered surveys. Fewer than 7% of medical journals (n = 165) provided guidance to authors on survey research despite a majority having published survey-based studies in recent years. We identified four published checklists for conducting or reporting survey research, none of which were validated. We identified eight previous reviews of survey reporting quality, which focused on issues of non-response and accessibility of questionnaires. Our own review of 117 published survey studies revealed that many items were poorly reported: few studies provided the survey or core questions (35%), reported the validity or reliability of the instrument (19%), defined the response rate (25%), discussed the representativeness of the sample (11%), or identified how missing data were handled (11%).
Conclusions
There is limited guidance and no consensus regarding the optimal reporting of survey research. The majority of key reporting criteria are poorly reported in peer-reviewed survey research articles. Our findings highlight the need for clear and consistent reporting guidelines specific to survey research.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Surveys, or questionnaires, are an essential component of many types of research, including health research. They usually gather information by asking a sample of people questions on a specific topic and then generalizing the results to a larger population. Surveys are especially important when addressing topics that are difficult to assess using other approaches, and they usually rely on self-reporting, for example of behaviors such as eating habits, and of satisfaction, beliefs, knowledge, attitudes, and opinions. However, the methods used in conducting survey research can significantly affect the reliability, validity, and generalizability of study results, and without clear reporting of the methods used in surveys, it is difficult or impossible to assess these characteristics and therefore to have confidence in the findings.
Why Was This Study Done?
Similar uncertainty in other forms of research has given rise to reporting guidelines: evidence-based, validated tools that aim to improve the reporting quality of health research. The STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) Statement covers cross-sectional studies, which often involve surveys. But not all surveys are epidemiological, and STROBE does not include the methods and results reporting characteristics that are unique to surveys. Therefore, the researchers conducted this study to help determine whether there is a need for a reporting guideline for health survey research.
What Did the Researchers Do and Find?
The researchers identified any previous relevant guidance for survey research, and any evidence on the quality of reporting of survey research, by: reviewing current guidance for reporting survey research in the “Instructions to Authors” of leading medical journals and in published literature; conducting a systematic review of evidence on the quality of reporting of surveys; identifying key quality criteria for the conduct of survey research; and finally, reviewing how these criteria are currently reported by conducting a review of recently published reports of self-administered surveys.
The researchers found that 154 of the 165 journals searched (93.3%) did not provide any guidance on survey reporting, even though the majority (81.8%) had published survey research. Only three of the 11 journals that provided some guidance gave more than one directive or statement. Five papers and one Internet site provided guidance on the reporting of survey research, but none used validated measures or explicit methods of development. The researchers identified eight papers that addressed the quality of reporting of some aspect of survey research: the reporting of response rates, the reporting of non-response analyses, and the degree to which authors make their survey instruments available to readers. In their review of 117 published survey studies, the researchers found that many items were poorly reported: few studies provided the survey or core questions (35%), reported the validity or reliability of the instrument (19%), discussed the representativeness of the sample (11%), or identified how missing data were handled (11%). Furthermore, most studies (88 [75%]) did not include any information on consent procedures for research participants, and one-third (40 [34%]) of papers did not report whether the study had received research ethics board review.
What Do These Findings Mean?
Overall, these results show that guidance is limited and consensus lacking on the optimal reporting of survey research. They highlight the need for a well-developed reporting guideline specifically for survey research, possibly an extension of the guideline for observational studies in epidemiology (STROBE), that would provide the structure needed to ensure more complete reporting and allow clearer review and interpretation of the results from surveys.
Additional Information
Please access these web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001069.
More than 100 reporting guidelines covering a broad spectrum of research types are indexed on the EQUATOR Networks web site
More information about STROBE is available on the STROBE Statement web site
doi:10.1371/journal.pmed.1001069
PMCID: PMC3149080  PMID: 21829330
4.  Does journal endorsement of reporting guidelines influence the completeness of reporting of health research? A systematic review protocol 
Systematic Reviews  2012;1:24.
Background
Reporting of health research is often inadequate and incomplete. Complete and transparent reporting is imperative to enable readers to assess the validity of research findings for use in healthcare and policy decision-making. To this end, many guidelines aimed at improving the quality of health research reports have been developed for a variety of research types. Despite these efforts, many reporting guidelines are underused. To increase their uptake, evidence of their effectiveness is needed; such evidence would provide authors, peer reviewers, and editors with an important resource for the use and implementation of pertinent guidance. The objective of this study was to assess whether endorsement of reporting guidelines by journals influences the completeness of reporting of health studies.
Methods
Guidelines that provide a minimum set of items to guide authors in reporting a specific type of research, that were developed with explicit methodology, and that used a consensus process will be identified from an earlier systematic review and from the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network's reporting guidelines library. MEDLINE, EMBASE, the Cochrane Methodology Register, and Scopus will be searched for evaluations of those reporting guidelines; relevant evaluations from the recently conducted CONSORT systematic review will also be included. Data will be extracted by a single reviewer, with verification of 10% of study characteristics, 20% of outcomes, and all aspects of study validity. We will include evaluations of reporting guidelines that assess the completeness of reporting (1) before and after journal endorsement, and/or (2) between endorsing and non-endorsing journals. For a given guideline, analyses will be conducted for individual items and for the total sum of items. When possible, standard pooled effects with 99% confidence intervals will be calculated using random-effects models.
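The protocol specifies random-effects pooling with 99% confidence intervals but does not name an estimator. Purely as an illustrative sketch, the widely used DerSimonian-Laird random-effects method could look like the following in Python (numpy and scipy assumed; the function and variable names are ours, not the protocol's):

    import numpy as np
    from scipy.stats import norm

    def dersimonian_laird(effects, variances, ci=0.99):
        """Pool per-study effect estimates with the DerSimonian-Laird random-effects model."""
        effects = np.asarray(effects, dtype=float)
        variances = np.asarray(variances, dtype=float)
        w = 1.0 / variances                            # fixed-effect weights
        theta_f = np.sum(w * effects) / np.sum(w)      # fixed-effect pooled estimate
        q = np.sum(w * (effects - theta_f) ** 2)       # Cochran's Q heterogeneity statistic
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
        w_r = 1.0 / (variances + tau2)                 # random-effects weights
        pooled = np.sum(w_r * effects) / np.sum(w_r)
        se = np.sqrt(1.0 / np.sum(w_r))
        z = norm.ppf(0.5 + ci / 2.0)                   # 2.576 for a 99% interval
        return pooled, (pooled - z * se, pooled + z * se)

    # Example with made-up effect sizes (e.g., differences in completeness scores):
    print(dersimonian_laird([0.20, 0.35, 0.10], [0.010, 0.020, 0.015]))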
Discussion
Evidence on which guidelines have been evaluated and which are associated with improved completeness of reporting is important for various stakeholders, including editors who consider which guidelines to endorse in their journal editorial policies.
doi:10.1186/2046-4053-1-24
PMCID: PMC3482392  PMID: 22626029
Reporting guidelines; Evaluation; Systematic review; Completeness of reporting
5.  Reporting Guidelines: Optimal Use in Preventive Medicine and Public Health 
Numerous reporting guidelines are available to help authors write higher quality manuscripts more efficiently. Almost 200 are listed on the EQUATOR (Enhancing the Quality and Transparency of Health Research) Network’s website and they vary in authority, usability, and breadth, making it difficult to decide which one(s) to use. This paper provides consistent information about guidelines for preventive medicine and public health and a framework and sequential approach for selecting them.
EQUATOR guidelines were reviewed for relevance to target audiences; selected guidelines were classified as “core” (frequently recommended) or specialized, and the latter were grouped by their focus. Core and specialized guidelines were coded for indicators of authority (simultaneous publication in multiple journals, rationale, scientific background supporting each element, expertise of designers, permanent website/named group), usability (presence of checklists and examples of good reporting), and breadth (manuscript sections covered). Discrepancies were resolved by consensus. Selected guidelines are presented in four tables arranged to facilitate selection: core guidelines, all of which pertain to major research designs; guidelines for additional study designs; topical guidelines; and guidelines for particular manuscript sections. A flow diagram provides an overview. The framework and sequential approach will enable authors, as well as editors, peer reviewers, researchers, and systematic reviewers, to make optimal use of available guidelines to improve the transparency, clarity, and rigor of manuscripts and research protocols and the efficiency of conducting systematic reviews and meta-analyses.
doi:10.1016/j.amepre.2012.06.031
PMCID: PMC3475417  PMID: 22992369
6.  Guidelines, Editors, Pharma And The Biological Paradigm Shift 
Mens Sana Monographs  2007;5(1):27-30.
Private investment in biomedical research has increased over the last few decades. At most places it has been welcomed as the next best thing to technology itself. Much of the intellectual talent from academic institutions is getting absorbed in lucrative positions in industry. Applied research finds willing collaborators in venture capital funded industry, so a symbiotic growth is ensured for both.
There are significant costs involved too. As academia interacts with industry, major areas of conflict of interest, especially applicable to biomedical research, have arisen. They are related to disputes over patents and royalties, hostile encounters between academia and industry, as also between public and private enterprise, legal tangles, research misconduct of various types, an antagonistic press and patient-advocate lobbies, and a general atmosphere in which commercial interests get precedence over patient welfare.
Pharma's image stinks because of a number of errors of omission and commission. A recent example is the suppression of negative findings about Bayer's Trasylol (aprotinin) and the marketing maneuvers of Eli Lilly's Xigris (rhAPC). Whenever there is a conflict between patient vulnerability and profit motives, pharma often tends to tilt towards the latter. Moreover, there are documents that bring to light how companies frequently cross the line between patient welfare and profit-seeking behaviour.
A voluntary moratorium on pharma spending to pamper drug prescribers is necessary. A code of conduct adopted recently by the OPPI in India to limit pharma company expenses on junkets and trinkets is a welcome step.
Clinical practice guidelines (CPG) are considered important as they guide the diagnostic/therapeutic regimens of a large number of medical professionals and hospitals and provide recommendations on drugs, their dosages, and criteria for selection. Along with clinical trials, they are another area of growing influence by the pharmaceutical industry. For example, a 2002 survey found that about 60% of 192 authors of clinical practice guidelines reported financial connections with the companies whose drugs were under consideration. There is a strong case for basing CPGs not just on effectiveness but on cost-effectiveness. The various ramifications of this need to be spelt out. The work of bodies like the Appraisal of Guidelines Research and Evaluation (AGREE) Collaboration and the Guidelines Advisory Committee (GAC) is also worth a close look.
Even the actions of foundations that work for disease amelioration have come under scrutiny. The process of setting up ‘Best Practices’ guidelines for interactions between the pharmaceutical industry and clinicians has already begun and can have important consequences for patient care. Similarly, Good Publication Practice (GPP) guidelines for pharmaceutical companies have been set up, aimed at improving the behaviour of drug companies when reporting drug trials.
The rapidly increasing trend toward influence and control by industry has become a concern for many. It is of such importance that the Association of American Medical Colleges has issued two relatively new documents - one, in 2001, on how to deal with individual conflicts of interest; and the other, in 2002, on how to deal with institutional conflicts of interest in the conduct of clinical research. Academic Medical Centers (AMCs), as also medical education and research institutions at other places, have to adopt means that minimize their conflicts of interest.
Both medical associations and research journal editors are getting concerned with individual and institutional conflicts of interest in the conduct of clinical research and documents are now available which address these issues. The 2001 ICMJE revision calls for full disclosure of the sponsor's role in research, as well as assurance that the investigators are independent of the sponsor, are fully accountable for the design and conduct of the trial, have independent access to all trial data and control all editorial and publication decisions. However the findings of a 2002 study suggest that academic institutions routinely participate in clinical research that does not adhere to ICMJE standards of accountability, access to data and control of publication.
There is an inevitable slant towards producing not necessarily useful but marketable products that ensure the profitability of industry and the outflow of research grants to academia. Industry supports new, not traditional, therapies, irrespective of what is effective. Whatever traditional therapy is supported is most probably supported because the company concerned has a big stake in a product there, one which has remained a ‘gold standard’ or which that player thinks still has some ‘juice’ left.
Industry sponsorship is mainly for potential medications, not for trying to determine whether non-pharmacological interventions may be equally good, if not better. In the paradigm shift towards biological psychiatry, the role of industry sponsorship is not overt but is probably more pervasive than many have realised, or than right thinking would consider good, for the long-term health of the branch.
An issue of major concern is protection of the interests of research subjects. Patients agree to become research subjects not only for personal medical benefit but, as an extension, to benefit the rest of the patient population and also advance medical research.
We all accept that industry profits have to be made, and investment in research and development by the pharma industry is massive. However, we must also accept there is a fundamental difference between marketing strategies for other entities and those for drugs.
The ultimate barometer is patient welfare, and no drug that compromises it can stand the test of time. So, how does it make even commercial sense in the long term to market substandard products? The greatest mistake long-term players in industry can make is to try to adopt the shady techniques of the upstart new entrant. Secrecy about marketing and sales tactics, the process of manufacture, other strategies and plans for business expansion, and ways to tackle competition is fine business practice. But it is critical that secrecy as a tactic not extend to the reporting of research findings, especially those contrary to one's own product.
Pharma has no option but to make a quality product, do comprehensive adverse reaction profiles, and market it only if it passes both tests.
Why does pharma adopt questionable tactics? The reasons are essentially two:
1. What with all the constraints, a drug comes to the pharmacy after huge investments. There are crippling overheads and infrastructure costs to be recovered, and massive profit margins to be maintained. If these were dependent only on genuine drug discoveries, that would be taking too great a risk.
2. Industry players have to strike the right balance between profit making and credibility. In profit making, the marketing champions play their role. In credibility ratings, researchers and paid spokespersons play theirs. All is hunky dory till marketing is based on credibility. When there is nothing available to establish credibility, something is projected as credible and marketed, in the calculated hope that profits can accrue, since profit making must continue endlessly. That is what makes pharma adopt even questionable means to make profits.
Essentially, there are four types of drugs. First, drugs that work and have minimal side-effects; second, drugs which work but have serious side-effects; third, drugs that do not work and have minimal side-effects; and fourth, drugs which work minimally but have serious side-effects. It is the second and fourth types that create major hassles for industry. Often, industry may try to project the fourth type as the second to escape censure.
The major cat and mouse game being played by conscientious researchers is in exposing the third and fourth for what they are and not allowing industry to palm them off as the first and second type respectively. The other major game is in preventing the second type from being projected as the first. The third type are essentially harmless, so they attract censure all right and some merriment at the antics to market them. But they escape anything more than a light rap on the knuckles, except when they are projected as the first type.
What is necessary for industry captains and long-term players is to realise:
1. Their major propelling force can only be producing the first type.
2. They accept the second type only till they can lay their hands on the first.
3. The third type can occasionally be played around with to shore up profits, but never by projecting it as the first type.
4. The fourth type are the laggards, a real threat to credibility, and therefore deserve no market hype or promotion.
In finding out why most pharma indulges in questionable tactics, we are led to some interesting solutions that can prevent such tactics with the least amount of hassle for all concerned, even as both profits and credibility are kept intact.
doi:10.4103/0973-1229.32176
PMCID: PMC3192391  PMID: 22058616
Academia; Pharmaceutical Industry; Clinical Practice Guidelines; Best Practice Guidelines; Academic Medical Centers; Medical Associations; Research Journals; Clinical Research; Public Welfare; Pharma Image; Corporate Welfare; Biological Psychiatry; Law Suits Against Industry
7.  Corporate Social Responsibility and Access to Policy Élites: An Analysis of Tobacco Industry Documents 
PLoS Medicine  2011;8(8):e1001076.
Gary Fooks and colleagues undertake a review of tobacco industry documents and show that policies on corporate social responsibility can enable access to and dialogue with policymakers at the highest level.
Background
Recent attempts by large tobacco companies to represent themselves as socially responsible have been widely dismissed as image management. Existing research supports such claims by pointing to the failings and misleading nature of corporate social responsibility (CSR) initiatives. However, few studies have focused in depth on what tobacco companies hoped to achieve through CSR or reflected on the extent to which these ambitions have been realised.
Methods and Findings
Iterative searching relating to CSR strategies was undertaken of internal British American Tobacco (BAT) documents released through litigation in the US. Relevant documents (764) were indexed and qualitatively analysed. In the past decade, BAT has actively developed a wide-ranging CSR programme. Company documents indicate that one of the key aims of this programme was to help the company secure access to policymakers and, thereby, increase the company's chances of influencing policy decisions. Taking the UK as a case study, this paper demonstrates the way in which CSR can be used to renew and maintain dialogue with policymakers, even in ostensibly unreceptive political contexts. In practice, the impact of this political use of CSR is likely to be context specific, depending on factors such as policy élites' assessment of companies as a credible source of information.
Conclusions
The findings suggest that tobacco company CSR strategies can enable access to and dialogue with policymakers and provide opportunities for issue definition. CSR should therefore be seen as a form of corporate political activity. This underlines the need for broad implementation of Article 5.3 of the Framework Convention on Tobacco Control. Measures are needed to ensure transparency of interactions between all parts of government and the tobacco industry and for policy makers to be made more aware of what companies hope to achieve through CSR.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
In the past, companies and multinational corporations were judged on the profits they made. Nowadays, though, much is made of corporate social responsibility (CSR). CSR is the commitment by business to behave ethically and to contribute to economic development while improving the quality of life of the workforce, their families, the local community, and society at large. Put simply, companies and corporations now endeavor to show that they have a positive impact on the environment, consumers, employees, and society in addition to making money for their shareholders. Large tobacco companies are no exception. British American Tobacco (BAT, the world's second largest publicly traded tobacco company), for example, began working on a wide-ranging CSR program more than a decade ago. Given that tobacco is responsible for an estimated 5.4 million deaths worldwide annually, this program was initially met with hostility and dismissed as an image management exercise. However, large parts of the investment and CSR communities now approve of BAT's CSR program, which has won numerous awards.
Why Was This Study Done?
But what do BAT and other tobacco companies actually hope to achieve through their CSR initiatives and how successful have they been in achieving these aims? Few studies have addressed these important questions. In particular, there has been little research into the extent to which tobacco companies use CSR initiatives as a form of corporate political activity that can help them gain “access” to policymakers and define the legitimate concerns and optimal alternatives of public policy (“issue definition”). Access is defined as taking place when policymakers consider the views of policy advocates such as tobacco company employees and is a crucial component of issue definition, which refers to the strategies adopted by bodies such as multinational corporations to influence the policy agenda by defining what issues public policy should concern itself with and how it should approach them. In this case study, the researchers explore whether BAT's CSR program works as a form of corporate political activity by systematically examining internal BAT documents made publicly available as a result of US litigation. Specifically, the researchers examine BAT's efforts through its CSR program to reestablish access with the UK Department of Health following the department's decision in the late 1990s to restrict contact with major tobacco companies.
What Did the Researchers Do and Find?
Using iterative searching, the researchers identified 764 documents in the Legacy Tobacco Documents Library (a large collection of internal tobacco company documents released as a result of US litigation cases) that contain information relevant to BAT's CSR strategies. Their analysis of these documents indicates that one of the key aims of the CSR program actively developed over the past decade by BAT was to help secure access to policymakers and shows how BAT used CSR to renew and maintain dialogue with policymakers at a time when contact between government and tobacco companies was extremely restricted. The documents also show that BAT employees used CSR initiatives as a means of issue definition to both optimize the probability of subsequent discussions taking place and to frame their content. Finally, the documents illustrate how BAT used its CSR program to expand the number of access points across government, thereby providing BAT with more opportunities to meet and talk to officials.
What Do These Findings Mean?
These findings suggest that CSR is a form of corporate political activity that potentially has important implications for public health, given the documented impact of the political activity of tobacco companies in delaying and blocking health-related tobacco control policies. In practice, the impact of the political use of CSR is likely to be context specific and will depend on factors such as whether senior policymakers regard companies as reliable sources of information. Importantly, these findings underline the need for broad implementation of Article 5.3 of the World Health Organization's Framework Convention on Tobacco Control (FCTC), an international treaty that calls for the introduction of multiple measures to reduce tobacco consumption, including tobacco advertising bans and relevant taxation policies. Article 5.3 aims to protect public-health policies on tobacco control from tobacco industry influence. The findings of this study indicate that implementation of Article 5.3 will require measures that ensure transparency in interactions between all parts of government and the tobacco industry and will need an increased awareness across government of what tobacco companies hope to achieve through CSR.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001076.
The Corporate Responsibility (CORE) coalition, an alliance of voluntary organizations, trade unions, and companies, maintains a Web site that contains useful material on corporate social responsibility
The European Coalition for Corporate Justice (ECCJ) promotes corporate accountability by bringing together national platforms of civil society organizations (including NGOs, trade unions, consumer advocacy groups, and academic institutions) from all over Europe
The Legacy Tobacco Documents Library is a public, searchable database of tobacco company internal documents detailing their advertising, manufacturing, marketing, sales, and scientific activities
The World Health Organization provides information about the dangers of tobacco (in several languages), details of the Framework Convention on Tobacco Control (in several languages), and guidelines for the implementation of Article 5.3 of the FCTC
The Framework Convention Alliance provides more information about the FCTC
For information about tobacco industry influence on policy, see the 2009 World Health Organization report Tobacco industry interference with tobacco control
doi:10.1371/journal.pmed.1001076
PMCID: PMC3160341  PMID: 21886485
8.  Estimating the Global Clinical Burden of Plasmodium falciparum Malaria in 2007 
PLoS Medicine  2010;7(6):e1000290.
Simon Hay and colleagues derive contemporary estimates of the global clinical burden of Plasmodium falciparum malaria (the deadliest form of malaria) using cartography-based techniques.
Background
The epidemiology of malaria makes surveillance-based methods of estimating its disease burden problematic. Cartographic approaches have provided alternative malaria burden estimates, but there remains widespread misunderstanding about their derivation and fidelity. The aims of this study are to present a new cartographic technique and its application for deriving global clinical burden estimates of Plasmodium falciparum malaria for 2007, and to compare these estimates and their likely precision with those derived under existing surveillance-based approaches.
Methods and Findings
In seven of the 87 countries endemic for P. falciparum malaria, the health reporting infrastructure was deemed sufficiently rigorous for case reports to be used verbatim. In the remaining countries, the mapped extent of unstable and stable P. falciparum malaria transmission was first determined. Estimates of the plausible incidence range of clinical cases were then calculated within the spatial limits of unstable transmission. A modelled relationship between clinical incidence and prevalence was used, together with new maps of P. falciparum malaria endemicity, to estimate incidence in areas of stable transmission, and geostatistical joint simulation was used to quantify uncertainty in these estimates at national, regional, and global scales.
Combining these estimates for all areas of transmission risk resulted in 451 million (95% credible interval 349–552 million) clinical cases of P. falciparum malaria in 2007. Almost all of this burden of morbidity occurred in areas of stable transmission. More than half of all estimated P. falciparum clinical cases and associated uncertainty occurred in India, Nigeria, the Democratic Republic of the Congo (DRC), and Myanmar (Burma), where 1.405 billion people are at risk.
Recent surveillance-based methods of burden estimation were then reviewed and discrepancies in national estimates explored. When these cartographically derived national estimates were ranked according to their relative uncertainty and replaced by surveillance-based estimates in the least certain half, 98% of the global clinical burden continued to be estimated by cartographic techniques.
Conclusions and Significance
Cartographic approaches to burden estimation provide a globally consistent measure of malaria morbidity of known fidelity, and they represent the only plausible method in those malaria-endemic countries with nonfunctional national surveillance. Unacceptable uncertainty in the clinical burden of malaria in only four countries confounds our ability to evaluate needs and monitor progress toward international targets for malaria control at the global scale. National prevalence surveys in each nation would reduce this uncertainty profoundly. Opportunities for further reducing uncertainty in clinical burden estimates by hybridizing alternative burden estimation procedures are also evaluated.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Malaria is a major global public-health problem. Nearly half the world's population is at risk of malaria, and Plasmodium falciparum malaria—the deadliest form of the disease—causes about one million deaths each year. Malaria is a parasitic disease that is transmitted to people through the bite of an infected mosquito. These insects inject a parasitic form known as sporozoites into people, where they replicate briefly inside liver cells. The liver cells then release merozoites (another parasitic form), which invade red blood cells. Here, the merozoites replicate rapidly before bursting out and infecting more red blood cells. This increase in the parasitic burden causes malaria's characteristic symptoms—debilitating and recurring fevers and chills. Infected red blood cells also release gametocytes, which infect mosquitoes when they take a blood meal. In the mosquito, the gametocytes multiply and develop into sporozoites, thus completing the parasite's life cycle. Malaria can be prevented by controlling the mosquitoes that spread the parasite and by avoiding mosquito bites. Effective treatment with antimalarial drugs also helps to reduce malaria transmission.
Why Was This Study Done?
In 1998, the World Health Organization (WHO) and several other international agencies launched Roll Back Malaria, a global partnership that aims to provide a coordinated, global approach to fighting malaria. For this or any other malaria control initiative to be effective, however, an accurate picture of the global clinical burden of malaria (how many people become ill because of malaria and where they live) is needed so that resources can be concentrated where they will have the most impact. Estimates of the global burden of many infectious diseases are obtained using data collected by national surveillance systems. Unfortunately, this approach does not work very well for malaria because in places where malaria is endemic (always present), diagnosis is often inaccurate and national reporting is incomplete. In this study, therefore, the researchers use an alternative, “cartographic” method for estimating the global clinical burden of P. falciparum malaria.
What Did the Researchers Do and Find?
The researchers identified seven P. falciparum malaria-endemic countries that had sufficiently reliable health information systems to determine the national clinical malaria burden in 2007 directly. They divided the other 80 malaria endemic countries into countries with a low risk of transmission (unstable transmission) and countries with a moderate or high risk of transmission (stable transmission). In countries with unstable transmission, the researchers assumed a uniform annual clinical incidence rate of 0.1 cases per 1,000 people and multiplied this by population sizes to get disease burden estimates. In countries with stable transmission, they used a modeled relationship between clinical incidence (number of new cases in a population per year) and prevalence (the proportion of a population infected with malaria parasites) and a global malaria endemicity map (a map that indicates the risk of malaria infection in different countries) to estimate malaria incidences. Finally, they used a technique called “joint simulation” to quantify the uncertainty in these estimates. Together, these disease burden estimates gave an estimated global burden of 451 million clinical cases of P. falciparum in 2007. Most of these cases occurred in areas of stable transmission and more than half occurred in India, Nigeria, the Democratic Republic of the Congo, and Myanmar. Importantly, these four nations alone contributed nearly half of the uncertainty in the global incidence estimates.
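As a toy illustration of just the unstable-transmission arithmetic described above (an assumed incidence of 0.1 cases per 1,000 people per year multiplied by the population at risk), here is a short Python sketch; the country names and population figures are invented, and the paper's stable-transmission modelling and geostatistical joint simulation are not reproduced:

    # Unstable-transmission burden: assumed uniform incidence x population at risk.
    # Populations below are hypothetical, not the paper's inputs.
    UNSTABLE_RATE = 0.1 / 1000  # clinical cases per person per year

    populations_at_risk = {
        "country_a": 12_000_000,
        "country_b": 3_500_000,
    }

    cases = {name: pop * UNSTABLE_RATE for name, pop in populations_at_risk.items()}
    print(cases)                 # {'country_a': 1200.0, 'country_b': 350.0}
    print(sum(cases.values()))   # 1550.0 estimated clinical cases per year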
What Do These Findings Mean?
These findings are extremely valuable because they provide a global map of malaria cases that should facilitate the implementation and evaluation of malaria control programs. However, the estimate of the global clinical burden of P. falciparum malaria reported here is higher than the WHO estimate of 247 million cases each year that was obtained using surveillance-based methods. The discrepancy between the estimates obtained using the cartographic and the surveillance-based approach is particularly marked for India. The researchers discuss possible reasons for these discrepancies and suggest improvements that could be made to both methods to increase the validity and precision of estimates. Finally, they note that improvements in the national prevalence surveys in India, Nigeria, the Democratic Republic of the Congo, and Myanmar would greatly reduce the uncertainty associated with their estimate of the global clinical burden of malaria, an observation that should encourage efforts to improve malaria surveillance in these countries.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000290.
A PLoS Medicine Health in Action article by Hay and colleagues, a Research Article by Guerra and colleagues, and a Research Article by Hay and colleagues provide further details about the global mapping of malaria risk
Additional national and regional level maps and more information on the global mapping of malaria are available at the Malaria Atlas Project
Information is available from the World Health Organization on malaria (in several languages)
The US Centers for Disease Control and Prevention provide information on malaria (in English and Spanish)
Information is available from the Roll Back Malaria Partnership on its approach to the global control of malaria (in English and French)
MedlinePlus provides links to additional information on malaria (in English and Spanish)
doi:10.1371/journal.pmed.1000290
PMCID: PMC2885984  PMID: 20563310
9.  Evidence for the Selective Reporting of Analyses and Discrepancies in Clinical Trials: A Systematic Review of Cohort Studies of Clinical Trials 
PLoS Medicine  2014;11(6):e1001666.
In a systematic review of cohort studies, Kerry Dwan and colleagues examine the evidence for selective reporting and discrepancies in analyses between journal publications and other documents for clinical trials.
Background
Most publications about selective reporting in clinical trials have focussed on outcomes. However, selective reporting of analyses for a given outcome may also affect the validity of findings. If analyses are selected on the basis of the results, reporting bias may occur. The aims of this study were to review and summarise the evidence from empirical cohort studies that assessed discrepant or selective reporting of analyses in randomised controlled trials (RCTs).
Methods and Findings
A systematic review was conducted that included cohort studies assessing any aspect of the reporting of analyses of RCTs by comparing different trial documents (e.g., protocol compared to trial report) or different sections within a trial publication. The Cochrane Methodology Register, Medline (Ovid), PsycInfo (Ovid), and PubMed were searched on 5 February 2014. Two authors independently selected studies, performed data extraction, and assessed the methodological quality of the eligible studies. Twenty-two studies (containing 3,140 RCTs) published between 2000 and 2013 were included, all of which reported on discrepancies between information given in different sources. Discrepancies were found in statistical analyses (eight studies), composite outcomes (one study), the handling of missing data (three studies), unadjusted versus adjusted analyses (three studies), handling of continuous data (three studies), and subgroup analyses (12 studies). Discrepancy rates varied, ranging from 7% (3/42) to 88% (7/8) for statistical analyses, 46% (36/79) to 82% (23/28) for adjusted versus unadjusted analyses, and 61% (11/18) to 100% (25/25) for subgroup analyses. This review is limited in that none of the included studies investigated the evidence for bias resulting from selective reporting of analyses. It was not possible to combine studies to provide overall summary estimates, so the results are discussed narratively.
Conclusions
Discrepancies in analyses between publications and other study documentation were common, but reasons for these discrepancies were not discussed in the trial reports. To ensure transparency, protocols and statistical analysis plans need to be published, and investigators should adhere to these or explain discrepancies.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
In the past, clinicians relied on their own experience when choosing the best treatment for their patients. Nowadays, they turn to evidence-based medicine—the systematic review and appraisal of trials, studies that investigate the benefits and harms of medical treatments in patients. However, evidence-based medicine can guide clinicians only if all the results from clinical trials are published in an unbiased and timely manner. Unfortunately, the results of trials in which a new drug performs better than existing drugs are more likely to be published than those in which the new drug performs badly or has unwanted side effects (publication bias). Moreover, trial outcomes that support the use of a new treatment are more likely to be published than those that do not support its use (outcome reporting bias). Recent initiatives—such as making registration of clinical trials in a trial registry (for example, ClinicalTrials.gov) a prerequisite for publication in medical journals—aim to prevent these biases, which pose a threat to informed medical decision-making.
Why Was This Study Done?
Selective reporting of analyses of outcomes may also affect the validity of clinical trial findings. Sometimes, for example, a trial publication will include a per protocol analysis (which considers only the outcomes of patients who received their assigned treatment) rather than a pre-planned intention-to-treat analysis (which considers the outcomes of all the patients regardless of whether they received their assigned treatment). If the decision to publish the per protocol analysis is based on the results of this analysis being more favorable than those of the intention-to-treat analysis (which more closely resembles “real” life), then “analysis reporting bias” has occurred. In this systematic review, the researchers investigate the selective reporting of analyses and discrepancies in randomized controlled trials (RCTs) by reviewing published studies that assessed selective reporting of analyses in groups (cohorts) of RCTs and discrepancies in analyses of RCTs between different sources (for example, between the protocol in a trial registry and the journal publication) or different sections of a source. A systematic review uses predefined criteria to identify all the research on a given topic.
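To make the distinction concrete, the sketch below computes both an intention-to-treat and a per-protocol treatment effect from the same toy dataset (all numbers invented, not drawn from any trial); choosing whichever estimate looks better after seeing the results is exactly the analysis reporting bias described above.

```python
# Toy illustration (invented data) of how intention-to-treat (ITT) and
# per-protocol estimates can differ for the same trial.
# Each record: (assigned arm, received assigned treatment?, improved?)
patients = [
    ("new", True, True), ("new", True, True), ("new", False, False),
    ("new", True, False), ("new", False, False),
    ("old", True, True), ("old", True, False), ("old", True, False),
    ("old", True, True), ("old", False, False),
]

def success_rate(arm, per_protocol=False):
    # ITT keeps everyone randomized to the arm; per protocol keeps only
    # those who actually received their assigned treatment.
    group = [p for p in patients if p[0] == arm and (p[1] or not per_protocol)]
    return sum(p[2] for p in group) / len(group)

for label, pp in (("ITT", False), ("per protocol", True)):
    diff = success_rate("new", pp) - success_rate("old", pp)
    print(f"{label:12s} difference in success rate: {diff:+.2f}")
```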
What Did the Researchers Do and Find?
The researchers identified 22 cohort studies (containing 3,140 RCTs) that were eligible for inclusion in their systematic review. All of these studies reported on discrepancies between the information provided by the RCTs in different places, but none investigated the evidence for analysis reporting bias. Several of the cohort studies reported, for example, that there were discrepancies in the statistical analyses included in the different documents associated with the RCTs included in their analysis. Other types of discrepancies reported by the cohort studies included discrepancies in the reporting of composite outcomes (an outcome in which multiple end points are combined) and in the reporting of subgroup analyses (investigations of outcomes in subgroups of patients that should be predefined in the trial protocol to avoid bias). Discrepancy rates varied among the RCTs according to the types of analyses and cohort studies considered. Thus, whereas in one cohort study discrepancies were present in the statistical test used for the analysis of the primary outcome in only 7% of the included trials, in another they were present in the subgroup analyses of all the included trials.
What Do These Findings Mean?
These findings indicate that discrepancies in analyses between publications and other study documents such as protocols in trial registries are common. The reasons for these discrepancies in analyses were not discussed in trial reports but may be the result of reporting bias, errors, or legitimate departures from a pre-specified protocol. For example, a statistical analysis that is not specified in the trial protocol may sometimes appear in a publication because the journal requested its inclusion as a condition of publication. The researchers suggest that it may be impossible for systematic reviewers to distinguish between these possibilities simply by looking at the source documentation. Instead, they suggest, it may be necessary for reviewers to contact the trial authors. However, to make selective reporting of analyses more easily detectable, they suggest that protocols and analysis plans should be published and that investigators should be required to stick to these plans or explain any discrepancies when they publish their trial results. Together with other initiatives, this approach should help improve the quality of evidence-based medicine and, as a result, the treatment of patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001666.
Wikipedia has pages on evidence-based medicine, on systematic reviews, and on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials
The Cochrane Collaboration is a global independent network of health practitioners, researchers, patient advocates, and others that aims to promote evidence-informed health decision-making by producing high-quality, relevant, accessible systematic reviews and other synthesized research evidence; the Cochrane Handbook for Systematic Reviews of Interventions describes the preparation of systematic reviews in detail
PLOS Medicine recently launched a Reporting Guidelines Collection, an open-access collection of reporting guidelines, commentary, and related research on guidelines from across PLOS journals that aims to help advance the efficiency, effectiveness, and equitability of the dissemination of biomedical information
doi:10.1371/journal.pmed.1001666
PMCID: PMC4068996  PMID: 24959719
10.  CONSORT for Reporting Randomized Controlled Trials in Journal and Conference Abstracts: Explanation and Elaboration 
PLoS Medicine  2008;5(1):e20.
Background
Clear, transparent, and sufficiently detailed abstracts of conferences and journal articles related to randomized controlled trials (RCTs) are important, because readers often base their assessment of a trial solely on information in the abstract. Here, we extend the CONSORT (Consolidated Standards of Reporting Trials) Statement to develop a minimum list of essential items, which authors should consider when reporting the results of an RCT in any journal or conference abstract.
Methods and Findings
We generated a list of items from existing quality assessment tools and empirical evidence. A three-round, modified-Delphi process was used to select items. In all, 109 participants were invited to participate in an electronic survey; the response rate was 61%. Survey results were presented at a meeting of the CONSORT Group in Montebello, Canada, in January 2007, involving 26 participants, including clinical trialists, statisticians, epidemiologists, and biomedical editors. Checklist items were discussed for eligibility for inclusion in the final checklist. The checklist was then revised to ensure that it reflected discussions held during and subsequent to the meeting. CONSORT for Abstracts recommends that abstracts relating to RCTs have a structured format. Items should include details of trial objectives; trial design (e.g., method of allocation, blinding/masking); trial participants (i.e., description, numbers randomized, and number analyzed); interventions intended for each randomized group and their impact on primary efficacy outcomes and harms; trial conclusions; trial registration name and number; and source of funding. We recommend the checklist be used in conjunction with this explanatory document, which includes examples of good reporting, rationale, and evidence, when available, for the inclusion of each item.
Conclusions
CONSORT for Abstracts aims to improve reporting of abstracts of RCTs published in journal articles and conference proceedings. It will help authors of abstracts of these trials provide the detail and clarity needed by readers wishing to assess a trial's validity and the applicability of its results.
The authors extend the CONSORT Statement to develop a minimum list of essential items to consider when reporting the results of a randomized trial in any journal or conference abstract.
doi:10.1371/journal.pmed.0050020
PMCID: PMC2211558  PMID: 18215107
11.  Transparency in Evidence Evaluation And Formulary Decision-Making 
Pharmacy and Therapeutics  2013;38(8):465-483.
The authors sought to clarify the relationship between evidence-based medicine and access to drug coverage. They concluded that a structured approach to improving clarity, consistency, and transparency was lacking.
Objective:
Establishing a better understanding of the relationship between evidence evaluation and formulary decision-making has important implications for patients, payers, and providers. The goal of our study was to develop and test a structured approach to evidence evaluation to increase clarity, consistency, and transparency in formulary decision-making.
Study Design:
The study comprised three phases. First, an expert panel identified key constructs of formulary decision-making and created an evidence-assessment tool. Second, using a balanced incomplete block design, the tool was validated by a large group of decision-makers. Third, the tool was pilot-tested in a real-world P&T committee environment.
Methods:
An expert panel identified key factors associated with formulary access by rating the level of access that they would give a drug in various hypothetical scenarios. These findings were used to formulate an evidence-assessment tool that was externally validated by surveying a larger sample of decision-makers. Last, the tool was pilot-tested in a real-world environment where P&T committees used it to review new drugs.
Results:
Survey responses indicated that a structured approach in the formulary decision-making process could yield greater clarity, consistency, and transparency in decision-making; however, pilot-testing of the structured tool in a real-world P&T committee environment highlighted some of the limitations of our structured approach.
Conclusion:
Although a structured approach to formulary decision-making is beneficial for patients, health care providers, and other stakeholders, this benefit was not realized in a real-world environment. A method to improve clarity, consistency, and transparency is still needed.
PMCID: PMC3814436  PMID: 24222979
evidence; decision-making; formulary; access
12.  The Relationship of Previous Training and Experience of Journal Peer Reviewers to Subsequent Review Quality 
PLoS Medicine  2007;4(1):e40.
Background
Peer review is considered crucial to the selection and publication of quality science, but very little is known about the previous experiences and training that might identify high-quality peer reviewers. The reviewer selection processes of most journals, and thus the qualifications of their reviewers, are ill defined. More objective selection of peer reviewers might improve the journal peer review process and thus the quality of published science.
Methods and Findings
306 experienced reviewers (71% of all those associated with a specialty journal) completed a survey of past training and experiences postulated to improve peer review skills. Reviewers performed 2,856 reviews of 1,484 separate manuscripts during a four-year study period, all prospectively rated on a standardized quality scale by editors. Multivariable analysis revealed that most variables, including academic rank, formal training in critical appraisal or statistics, and status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital versus other teaching environment and relative youth (under ten years of experience after finishing training). Being on an editorial board and doing formal grant (study section) review were each predictors for only one of our two comparisons. However, the predictive power of all variables was weak.
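As an illustration of what such a multivariable analysis involves, the sketch below fits a linear model of editor-rated review quality on a few reviewer characteristics. It is a minimal, hypothetical example: the variable names, data, and model form are invented for illustration and are not the authors' actual analysis or dataset.

```python
# Hypothetical sketch of a multivariable analysis of review quality;
# all names and values below are invented.
import pandas as pd
import statsmodels.formula.api as smf

reviews = pd.DataFrame({
    # editor-assigned quality rating of each review (e.g., 1-5 scale)
    "quality":             [3.8, 2.9, 4.1, 3.2, 4.4, 3.0, 3.6, 2.7],
    "stats_training":      [1, 0, 1, 1, 0, 0, 1, 0],  # formal statistics training (0/1)
    "university_hospital": [1, 0, 1, 0, 1, 0, 1, 0],  # university-operated hospital (0/1)
    "years_post_training": [4, 15, 8, 22, 6, 18, 9, 25],
})

# Several predictors at once; weak predictors show small coefficients
# with wide confidence intervals.
model = smf.ols(
    "quality ~ stats_training + university_hospital + years_post_training",
    data=reviews,
).fit()
print(model.summary())
```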
Conclusions
Our study confirms that there are no easily identifiable types of formal training or experience that predict reviewer performance. Skill in scientific peer review may be as ill defined and hard to impart as is “common sense.” Without a better understanding of those skills, it seems unlikely that journals and editors will be successful in systematically improving their selection of reviewers. This inability to predict performance makes it imperative that all but the smallest journals implement review rating systems to routinely monitor the quality of their reviews (and thus the quality of the science they publish).
A survey of experienced reviewers, asked about training they had received in peer review, found there are no easily identifiable types of formal training and experience that predict reviewer performance.
Editors' Summary
Background.
When medical researchers have concluded their research and written it up, the next step is to get it published as an article in a journal, so that the findings can be circulated widely. These published findings help determine subsequent research and clinical use. The editors of reputable journals, including PLoS Medicine, have to decide whether the articles sent to them are of good quality and accurate and whether they will be of interest to the readers of their journal. To do this they need to obtain specialist advice, so they contact experts in the topic of the research article and ask them to write reports. This is the process of scientific peer review, and the experts who write such reports are known as “peer reviewers.” Although the editors make the final decision, the advice and criticism of these peer reviewers to the editors is essential in making decisions on publication, and usually in requiring authors to make changes to their manuscript. The contribution that peer reviewers have made to the article by the time it is finally published may, therefore, be quite considerable.
Although peer review is accepted as a key part of the process for the publishing of medical research, many people have argued that there are flaws in the system. For example, there may be an element of luck involved; one author might find their paper being reviewed by a reviewer who is biased against the approach they have adopted or who is a very critical person by nature, and another author may have the good fortune to have their work considered by someone who is much more favorably disposed toward their work. Some reviewers are more knowledgeable and thorough in their work than others. The editors of medical journals try to take into account such biases and quality factors in their choice of peer reviewers or when assessing the reviews. Some journals have run training courses for experts who review for them regularly to try to make the standard of peer review as high as possible.
Why Was This Study Done?
It is hard for journal editors to know who will make a good peer reviewer, and there is no proven system for choosing them. The authors of this study wanted to identify the previous experiences and training that make up the background of good peer reviewers and compare them with the quality of the reviews provided. This would help journal editors select good people for the task in future, and as a result will affect the quality of science they publish for readers, including other researchers.
What Did the Researchers Do and Find?
The authors contacted all the regular reviewers from one specialist journal (Annals of Emergency Medicine). A total of 306 of these experienced reviewers (71% of all those associated with the journal) completed a survey of past training and experiences that might be expected to improve peer review skills. These reviewers had done 2,856 reviews of 1,484 separate manuscripts during a four-year study period, and during this time the quality of the reviews had been rated by the journal's editors. Surprisingly, most variables, including academic rank, formal training in critical appraisal or statistics, and status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital versus other teaching environment and relative youth (under ten years of experience after finishing training), and even these were only weak predictors.
What Do These Findings Mean?
This study suggests that there are no easily identifiable types of formal training or experience that predict peer reviewer performance, although it is clear that some reviewers (and reviews) are better than others. The authors therefore suggest that it is essential for journals to routinely monitor the quality of reviews submitted to them to ensure they are getting good advice (a practice that is not universal).
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040040.
• WAME is an association of editors from many countries who seek to foster international cooperation among editors of peer-reviewed medical journals
• The Fifth International Congress on Peer Review and Biomedical Publication is one of a series of conferences on peer review
• The PLoS Medicine guidelines for reviewers outline what we look for in a review
• The Council of Science Editors promotes ethical scientific publishing practices
• An editorial also published in this issue of PLoS Medicine discusses the peer review process further
doi:10.1371/journal.pmed.0040040
PMCID: PMC1796627  PMID: 17411314
13.  Reporting Methods of Blinding in Randomized Trials Assessing Nonpharmacological Treatments  
PLoS Medicine  2007;4(2):e61.
Background
Blinding is a cornerstone of treatment evaluation. Blinding is more difficult to obtain in trials assessing nonpharmacological treatment and frequently relies on “creative” (nonstandard) methods. The purpose of this study was to systematically describe the strategies used to obtain blinding in a sample of randomized controlled trials of nonpharmacological treatment.
Methods and Findings
We systematically searched Medline and the Cochrane Methodology Register for randomized controlled trials (RCTs) assessing nonpharmacological treatment with blinding, published during 2004 in high-impact-factor journals. Data were extracted using a standardized extraction form. We identified 145 articles, with the method of blinding described in 123 of the reports. Methods of blinding of participants and/or health care providers and/or other caregivers mainly involved the use of sham procedures, such as simulation of surgical procedures, similar attention-control interventions, or a placebo with a different mode of administration for rehabilitation or psychotherapy. Trials assessing devices reported various placebo interventions, such as use of a sham prosthesis, identical apparatus (e.g., an identical but inactivated machine, or use of an activated machine with a barrier to block the treatment), or simulation of using a device. Blinding participants to the study hypothesis was also an important method of blinding. The methods reported for blinding outcome assessors relied mainly on centralized assessment of paraclinical examinations, clinical examinations (i.e., use of video, audiotape, photography), or adjudication of clinical events.
Conclusions
This study classifies blinding methods, provides a detailed description of methods that could overcome some barriers to blinding in clinical trials assessing nonpharmacological treatment, and provides information for readers assessing the quality of results of such trials.
An assessment of blinding methods used in nonpharmacological trials published in one year in high-impact-factor journals classifies the methods used and describes methods that could overcome some barriers to blinding.
Editors' Summary
Background.
Well-conducted “randomized controlled trials” are generally considered to be the most reliable source of information about the effects of medical treatments. In a randomized trial, the play of chance is used to decide whether each patient receives the treatment under investigation, or whether he/she is assigned to a “control” group receiving the standard treatment for their condition. This helps make sure that the two groups of patients receiving the different treatments are equivalent at the start of the trial. Proper randomization also prevents doctors from deciding which treatment individual patients are given, which could distort the results. An additional technique used is “blinding,” which involves taking steps to prevent patients, doctors, or other people involved in the trial (e.g., those recording measurements) from finding out which patients have received which treatment. Properly done, blinding should make sure the results of a trial are more accurate. This is because in an unblinded study, participants may respond better if they know they have received a promising new treatment (or worse if they only got a placebo or an old drug). In addition, doctors and others in the research team may “want” a particular treatment to perform better in the trial, and unthinking bias could creep into their measurements or actions. However, blinding is not a simple, single step; the people carrying out the trial often have to set up a variety of different procedures.
Why Was This Study Done?
The authors of this study had already conducted research into the way in which blinding is done in trials involving drug (“pharmacological”) treatment. Their work was published in October 2006 in PLoS Medicine. However, concealing from patients the type of pill that they are being given is much easier than, for example, concealing whether or not they are having surgery or whether or not they are having psychotherapy. The authors therefore set out to look at the methods that are in use for blinding in nonpharmacological trials. They hoped that a better understanding of the different blinding methods would help people doing trials to design better trials in the future, and also help readers to interpret the quality of completed trials.
What Did the Researchers Do and Find?
The authors systematically searched the published medical literature to find all randomized, blinded trials of nonpharmacological treatments published in just one year (2004) in a number of different “high-impact” journals (well-regarded journals whose articles are often mentioned in other articles). Then, they classified information from the published trial reports. They ended up with 145 trial reports, of which 123 described how blinding was done. The trials covered a wide range of medical conditions and types of treatment. The blinding methods used mainly involved the use of “sham” procedures. Thus, in 80% of the studies in which the treatment involved a medical device, a pretend device had been used to make patients in the control group think they were receiving treatment. In many of the treatments involving surgery, researchers had devised elaborate ways of making patients think they had had an operation. When the treatment involved manipulation (e.g., physiotherapy or chiropractic), fake “hands-on” techniques were given to the control patients. The authors of this systematic review classify all the other techniques that were used to blind both the patients and members of the research teams. They found that some highly innovative ideas have been successfully put into practice.
What Do These Findings Mean?
The authors have provided a detailed description of methods that could overcome some barriers to blinding in clinical trials assessing nonpharmacological treatment. The classification of the techniques used will be useful for other researchers considering what sort of blinding to use in their own research.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040061.
The James Lind Library has been created to help patients and researchers understand fair tests of treatments in health care by illustrating how fair tests have developed over the centuries
ClinicalTrials.gov, a trial registry created by the US National Institutes of Health, has an introduction to understanding clinical trials
The UK National Health Service National Electronic Library for Health has an introduction to controlled clinical trials
The CONSORT statement is intended to strengthen evidence-based reporting of clinical trials
doi:10.1371/journal.pmed.0040061
PMCID: PMC1800311  PMID: 17311468
14.  Methods of Blinding in Reports of Randomized Controlled Trials Assessing Pharmacologic Treatments: A Systematic Review 
PLoS Medicine  2006;3(10):e425.
Background
Blinding is a cornerstone of therapeutic evaluation because lack of blinding can bias treatment effect estimates. An inventory of the blinding methods would help trialists conduct high-quality clinical trials and readers appraise the quality of results of published trials. We aimed to systematically classify and describe methods to establish and maintain blinding of patients and health care providers and methods to obtain blinding of outcome assessors in randomized controlled trials of pharmacologic treatments.
Methods and Findings
We undertook a systematic review of all reports of randomized controlled trials assessing pharmacologic treatments with blinding published in 2004 in high impact-factor journals from Medline and the Cochrane Methodology Register. We used a standardized data collection form to extract data. The blinding methods were classified according to whether they primarily (1) established blinding of patients or health care providers, (2) maintained the blinding of patients or health care providers, and (3) obtained blinding of assessors of the main outcomes. We identified 819 articles, with 472 (58%) describing the method of blinding. Methods to establish blinding of patients and/or health care providers concerned mainly treatments provided in identical form, specific methods to mask some characteristics of the treatments (e.g., added flavor or opaque coverage), or use of double dummy procedures or simulation of an injection. Methods to avoid unblinding of patients and/or health care providers involved use of active placebo, centralized assessment of side effects, patients informed only in part about the potential side effects of each treatment, centralized adapted dosage, or provision of sham results of complementary investigations. The methods reported for blinding outcome assessors mainly relied on a centralized assessment of complementary investigations, clinical examination (i.e., use of video, audiotape, or photography), or adjudication of clinical events.
Conclusions
This review classifies blinding methods and provides a detailed description of methods that could help trialists overcome some barriers to blinding in clinical trials and readers interpret the quality of pharmacologic trials.
Following a systematic review of all reports of randomized controlled trials assessing pharmacologic treatments involving blinding, a classification of blinding methods is proposed.
Editors' Summary
Background.
In evidence-based medicine, good-quality randomized controlled trials are generally considered to be the most reliable source of information about the effects of different treatments, such as drugs. In a randomized trial, patients are assigned to receive one treatment or another by the play of chance. This technique helps make sure that the two groups of patients receiving the different treatments are equivalent at the start of the trial. Proper randomization also prevents doctors from controlling or affecting which treatment patients get, which could distort the results. An additional tool that is also used to make trials more precise is “blinding.” Blinding involves taking steps to prevent patients, doctors, or other people involved in the trial (e.g., those people recording measurements) from finding out which patients got what treatment. Properly done, blinding should make sure the results of a trial are more accurate. This is because in an unblinded study, participants may respond better if they know they have received a promising new treatment (or worse if they only got placebo or an old drug); doctors may “want” a particular treatment to do better in the trial, and unthinking bias could creep into their measurements or actions; the same applies to practitioners and researchers who record patients' outcomes in the trial. However, blinding is not a simple, single step; the people carrying out the trial often have to set up a variety of different procedures that depend on the type of trial that is being done.
Why Was This Study Done?
The researchers here wanted to thoroughly examine different methods that have been used to achieve blinding in randomized trials of drug treatments, and to describe and classify them. They hoped that a better understanding of the different blinding methods would help people doing trials to design better trials in the future, and also help readers to interpret the quality of trials that had been done.
What Did the Researchers Do and Find?
This group of researchers conducted what is called a “systematic review.” They systematically searched the published medical literature to find all randomized, blinded drug trials published in 2004 in a number of different “high-impact” journals (journals whose articles are often mentioned in other articles). Then, the researchers classified information from the published trial reports. The researchers ended up with 819 trial reports, and nearly 60% of them described how blinding was done. Their classification of blinding was divided into three main areas. First, they detailed methods used to hide which drugs are given to particular patients, such as preparing identically appearing treatments; using strong flavors to mask taste; matching the colors of pills; using saline injections; and so on. Second, they described a number of methods that could be used to reduce the risk of unblinding (of doctors or patients), such as using an “active placebo” (a sugar pill that mimics some of the expected side effects of the drug treatment). Finally, they defined methods for blinded measurement of outcomes (such as using a central committee to collect data).
What Do These Findings Mean?
The researchers' classification will help people to work out how different techniques can be used to achieve, and keep, blinding in a trial. This will assist others to understand whether any particular trial was likely to have been blinded properly, and therefore work out whether the results are reliable. The researchers also suggest that, generally, blinding methods are not described in enough detail in published scientific papers, and recommend that guidelines for describing results of randomized trials be improved.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030425.
James Lind Library has been created to help patients and researchers understand fair tests of treatments in health care by illustrating how fair tests have developed over the centuries
ClinicalTrials.gov, a trial registry created by the US National Institutes of Health, has an introduction to understanding clinical trials
National Electronic Library for Health introduction to controlled clinical trials
doi:10.1371/journal.pmed.0030425
PMCID: PMC1626553  PMID: 17076559
15.  The Open Science Peer Review Oath 
F1000Research  2014;3:271.
One of the foundations of the scientific method is to be able to reproduce experiments and corroborate the results of research that has been done before. However, with the increasing complexities of new technologies and techniques, coupled with the specialisation of experiments, reproducing research findings has become a growing challenge. Clearly, scientific methods must be conveyed succinctly, and with clarity and rigour, in order for research to be reproducible. Here, we propose steps to help increase the transparency of the scientific method and the reproducibility of research results: specifically, we introduce a peer-review oath and accompanying manifesto. These have been designed to offer guidelines to enable reviewers (with the minimum friction or bias) to follow and apply open science principles, and support the ideas of transparency, reproducibility and ultimately greater societal impact. Introducing the oath and manifesto at the stage of peer review will help to check that the research being published includes everything that other researchers would need to successfully repeat the work. Peer review is the lynchpin of the publishing system: encouraging the community to consciously (and conscientiously) uphold these principles should help to improve published papers, increase confidence in the reproducibility of the work and, ultimately, provide strategic benefits to authors and their institutions. Future incarnations of the various national Research Excellence Frameworks (REFs) will evolve away from simple citations towards measurable societal value and impact. The proposed manifesto aspires to facilitate this goal by making transparency, reproducibility and citizen-scientist engagement (with the knowledge-creation and dissemination processes) the default parameters for performing sound research.
doi:10.12688/f1000research.5686.1
PMCID: PMC4304228  PMID: 25653839
16.  Conflicts of Interest at Medical Journals: The Influence of Industry-Supported Randomised Trials on Journal Impact Factors and Revenue – Cohort Study 
PLoS Medicine  2010;7(10):e1000354.
Andreas Lundh and colleagues investigated the effect of publication of large industry-supported trials on citations and journal income, through reprint sales, in six general medical journals.
Background
Transparency in reporting of conflict of interest is an increasingly important aspect of publication in medical journals. Publication of large industry-supported trials may generate many citations and journal income through reprint sales and thereby be a source of conflicts of interest for journals. We investigated industry-supported trials' influence on journal impact factors and revenue.
Methods and Findings
We sampled six major medical journals (Annals of Internal Medicine, Archives of Internal Medicine, BMJ, JAMA, The Lancet, and New England Journal of Medicine [NEJM]). For each journal, we identified randomised trials published in 1996–1997 and 2005–2006 using PubMed, and categorized the type of financial support. Using Web of Science, we investigated citations of industry-supported trials and the influence on journal impact factors over a ten-year period. We contacted journal editors and retrieved tax information on income from industry sources. The proportion of trials with sole industry support varied between journals, from 7% in BMJ to 32% in NEJM in 2005–2006. Industry-supported trials were more frequently cited than trials with other types of support, and omitting them from the impact factor calculation decreased journal impact factors. The decrease varied considerably between journals, ranging from 1% for BMJ to 15% for NEJM in 2007. For the two journals disclosing data, income from the sales of reprints contributed 3% and 41% of the total income for BMJ and The Lancet, respectively, in 2005–2006.
Conclusions
Publication of industry-supported trials was associated with an increase in journal impact factors. Sales of reprints may provide a substantial income. We suggest that journals disclose financial information in the same way that they require it from their authors, so that readers can assess the potential effect of different types of papers on journals' revenue and impact.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Medical journals publish many different types of papers that inform doctors about the latest research advances and the latest treatments for their patients. They publish articles that describe laboratory-based research into the causes of diseases and the identification of potential new drugs. They publish the results of early clinical trials in which a few patients are given a potential new drug to check its safety. Finally and most importantly, they publish the results of randomized controlled trials (RCTs). RCTs are studies in which large numbers of patients are randomly allocated to different treatments, without the patient or the clinician knowing the allocation, and the efficacy of the various treatments is compared. RCTs are the best way of determining whether a new drug is effective and have to be completed before a drug can be marketed. Because RCTs are very expensive, they are often supported by drug companies. That is, drug companies provide grants or drugs for the trial or assist with data analysis and/or article preparation.
Why Was This Study Done?
Whenever a medical journal publishes an article, the article's authors have to declare any conflicts of interest such as financial gain from the paper's publication. Conflict of interest statements help readers assess papers—an author who owns the patent for a drug, for example, might put an unduly positive spin on his/her results. The experts who review papers for journals before publication provide similar conflict of interest statements. But what about the journal editors who ultimately decide which papers get published? The International Committee of Medical Journal Editors (ICMJE), which produces medical publishing guidelines, states that: “Editors who make final decisions about manuscripts must have no personal, professional, or financial involvement in any of the issues that they might judge.” However, the publication of industry-supported RCTs might create “indirect” conflicts of interest for journals by boosting the journal's impact factor (a measure of a journal's importance based on how often its articles are cited) and its income through the sale of reprints to drug companies. In this study, the researchers investigate whether the publication of industry-supported RCTs influences the impact factors and finances of six major medical journals.
What Did the Researchers Do and Find?
The researchers determined which RCTs published in the New England Journal of Medicine (NEJM), the British Medical Journal (BMJ), The Lancet, and three other major medical journals in 1996–1997 and 2005–2006 were supported wholly, partly, or not at all by industry. They then used the online academic citation index Web of Science to calculate an approximate impact factor for each journal for 1998 and 2007 and calculated the effect of the published RCTs on the impact factor. The proportion of RCTs with sole industry support varied between journals. Thus, 32% of the RCTs published in the NEJM during both two-year periods had industry support, whereas only 7% of the RCTs published in the BMJ in 2005–2006 had industry support. Industry-supported trials were more frequently cited than RCTs with other types of support, and omitting industry-supported RCTs from impact factor calculations decreased all the approximate journal impact factors. For example, omitting all RCTs with industry or mixed support decreased the 2007 BMJ and NEJM impact factors by 1% and 15%, respectively. Finally, the researchers asked each journal's editor about their journal's income from industry sources. For the BMJ and The Lancet, the only journals that provided this information, income from reprint sales was 3% and 41%, respectively, of total income in 2005–2006.
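To make the arithmetic of this comparison concrete, the sketch below recomputes an approximate impact factor with and without one group of articles. All counts are invented, and the subtraction is deliberately simplified relative to the study's actual method.

```python
# Back-of-the-envelope impact factor: citations in year Y to articles from
# years Y-1 and Y-2, divided by the number of citable articles from those
# years. All numbers are hypothetical.
citations = {"industry_rcts": 4200, "other_articles": 21000}
articles  = {"industry_rcts": 60,   "other_articles": 1400}

if_all = sum(citations.values()) / sum(articles.values())
if_without = citations["other_articles"] / articles["other_articles"]
decrease_pct = 100 * (if_all - if_without) / if_all

print(f"approximate IF with all articles:      {if_all:.1f}")
print(f"approximate IF omitting industry RCTs: {if_without:.1f}")
print(f"decrease:                              {decrease_pct:.0f}%")
```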
What Do These Findings Mean?
These findings show that the publication of industry-supported RCTs was associated with an increase in the approximate impact factors of these six major medical journals. Because these journals publish numerous RCTs, this result may not be generalizable to other journals. These findings also indicate that income from reprint sales can be a substantial proportion of a journal's total income. Importantly, these findings do not imply that the decisions of editors are affected by the possibility that the publication of an industry-supported trial might improve their journal's impact factor or income. Nevertheless, the researchers suggest, journals should live up to the same principles related to conflicts of interest as those that they require from their authors and should routinely disclose information on the source and amount of income that they receive.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000354.
This study is further discussed in a PLoS Medicine Perspective by Harvey Marcovitch
The International Committee of Medical Journal Editors provides information about the publication of medical research, including conflicts of interest
The World Association of Medical Editors also provides information on conflicts of interest in medical journals
Information about impact factors is provided by Thomson Reuters, a provider of intelligent information for businesses and professionals; Thomson Reuters also runs Web of Science
doi:10.1371/journal.pmed.1000354
PMCID: PMC2964336  PMID: 21048986
17.  Preparing raw clinical data for publication: guidance for journal editors, authors, and peer reviewers 
Trials  2010;11:9.
In recognition of the benefits of transparent reporting, many peer-reviewed journals require that their authors be prepared to share their raw, unprocessed data with other scientists and/or state the availability of raw data in published articles. But little information on how data should be prepared for publication - or sharing - has emerged. In clinical research, patient privacy and consent for use of personal health information are key considerations, but agreed-upon definitions of what constitutes anonymised patient information do not appear to have been established. We aim to address this issue by providing practical guidance for those involved in the publication process, by proposing a minimum standard for de-identifying datasets for the purposes of publication in a peer-reviewed biomedical journal, or sharing with other researchers. Basic advice on file preparation is provided along with procedural guidance on prospective and retrospective publication of raw data, with an emphasis on randomised controlled trials.
In order to encourage its wide dissemination this article is freely accessible on the BMJ and Trials journal web sites.
doi:10.1186/1745-6215-11-9
PMCID: PMC2825513  PMID: 20113465
18.  Misrepresentation of Randomized Controlled Trials in Press Releases and News Coverage: A Cohort Study 
PLoS Medicine  2012;9(9):e1001308.
A study conducted by Amélie Yavchitz and colleagues examines the factors associated with “spin” (specific reporting strategies, intentional or unintentional, that emphasize the beneficial effect of treatments) in press releases of clinical trials.
Background
Previous studies indicate that in published reports, trial results can be distorted by the use of “spin” (specific reporting strategies, intentional or unintentional, emphasizing the beneficial effect of the experimental treatment). We aimed to (1) evaluate the presence of “spin” in press releases and associated media coverage; and (2) evaluate whether findings of randomized controlled trials (RCTs) based on press releases and media coverage are misinterpreted.
Methods and Findings
We systematically searched for all press releases indexed in the EurekAlert! database between December 2009 and March 2010. Of the 498 press releases retrieved and screened, we included press releases for all two-arm, parallel-group RCTs (n = 70). We obtained a copy of the scientific article to which the press release related and we systematically searched for related news items using Lexis Nexis.
“Spin,” defined as specific reporting strategies (intentional or unintentional) emphasizing the beneficial effect of the experimental treatment, was identified in 28 (40%) scientific article abstract conclusions and in 33 (47%) press releases. From bivariate and multivariable analysis assessing the journal type, funding source, sample size, type of treatment (drug or other), results of the primary outcomes (all nonstatistically significant versus other), author of the press release, and the presence of “spin” in the abstract conclusion, the only factor associated with “spin” in the press release was “spin” in the article abstract conclusions (relative risk [RR] 5.6 [95% CI 2.8–11.1], p<0.001). Findings of RCTs based on press releases were overestimated for 19 (27%) reports. News items were identified for 41 RCTs; 21 (51%) were reported with “spin,” mainly the same type of “spin” as those identified in the press release and article abstract conclusion. Findings of RCTs based on the news items were overestimated for ten (24%) reports.
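For readers who want to see how a relative risk of this kind is computed, the sketch below applies the standard log-RR confidence interval formula to a hypothetical 2×2 table. The cell counts are invented (chosen only to be consistent with the totals reported above: 70 press releases, 28 abstract conclusions with “spin,” and 33 press releases with “spin”), so the result deliberately does not reproduce the published RR of 5.6.

```python
# Relative risk of "spin" in the press release given "spin" in the abstract
# conclusion, from a hypothetical 2x2 table (counts invented).
import math

#                                 press release: spin | no spin
a, b = 24, 4   # abstract conclusion with spin     (a + b = 28)
c, d = 9, 33   # abstract conclusion without spin  (c + d = 42)

rr = (a / (a + b)) / (c / (c + d))                 # relative risk
se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))  # SE of log(RR)
lo, hi = (math.exp(math.log(rr) + z * se) for z in (-1.96, 1.96))
print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```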
Conclusion
“Spin” was identified in about half of press releases and media coverage. In multivariable analysis, the main factor associated with “spin” in press releases was the presence of “spin” in the article abstract conclusion.
Editors' Summary
Background
The mass media play an important role in disseminating the results of medical research. Every day, news items in newspapers and magazines and on the television, radio, and internet provide the general public with information about the latest clinical studies. Such news items are written by journalists and are often based on information in “press releases.” These short communications, which are posted on online databases such as EurekAlert! and sent directly to journalists, are prepared by researchers or more often by the drug companies, funding bodies, or institutions supporting the clinical research and are designed to attract favorable media attention to newly published research results. Press releases provide journalists with the information they need to develop and publish a news story, including a link to the peer-reviewed journal (a scholarly periodical containing articles that have been judged by independent experts) in which the research results appear.
Why Was This Study Done?
In an ideal world, journal articles, press releases, and news stories would all accurately reflect the results of health research. Unfortunately, the findings of randomized controlled trials (RCTs—studies that compare the outcomes of patients randomly assigned to receive alternative interventions), which are the best way to evaluate new treatments, are sometimes distorted in peer-reviewed journals by the use of “spin”—reporting that emphasizes the beneficial effects of the experimental (new) treatment. For example, a journal article may interpret nonstatistically significant differences as showing the equivalence of two treatments although such results actually indicate a lack of evidence for the superiority of either treatment. “Spin” can distort the transposition of research into clinical practice and, when reproduced in the mass media, it can give patients unrealistic expectations about new treatments. It is important, therefore, to know where “spin” occurs and to understand the effects of that “spin”. In this study, the researchers evaluate the presence of “spin” in press releases and associated media coverage and analyze whether the interpretation of RCT results based on press releases and associated news items could lead to the misinterpretation of RCT results.
What Did the Researchers Do and Find?
The researchers identified 70 press releases indexed in EurekAlert! over a 4-month period that described two-arm, parallel-group RCTs. They used Lexis Nexis, a database of news reports from around the world, to identify associated news items for 41 of these press releases and then analyzed the press releases, news items, and abstracts of the scientific articles related to each press release for “spin”. Finally, they interpreted the results of the RCTs using each source of information independently. Nearly half the press releases and article abstract conclusions contained “spin” and, importantly, “spin” in the press releases was associated with “spin” in the article abstracts. The researchers overestimated the benefits of the experimental treatment from the press release as compared to the full-text peer-reviewed article for 27% of reports. Factors that were associated with this overestimation of treatment benefits included publication in a specialized journal and having “spin” in the press release. Of the news items related to press releases, half contained “spin”, usually of the same type as identified in the press release and article abstract. Finally, the researchers overestimated the benefit of the experimental treatment from the news item as compared to the full-text peer-reviewed article in 24% of cases.
What Do These Findings Mean?
These findings show that “spin” in press releases and news reports is related to the presence of “spin” in the abstract of peer-reviewed reports of RCTs and suggest that the interpretation of RCT results based solely on press releases or media coverage could distort the interpretation of research findings in a way that favors experimental treatments. This interpretation shift is probably related to the presence of “spin” in peer-reviewed article abstracts, press releases, and news items and may be partly responsible for a mismatch between the perceived and real beneficial effects of new treatments among the general public. Overall, these findings highlight the important role that journal reviewers and editors play in disseminating research findings. These individuals, the researchers conclude, have a responsibility to ensure that the conclusions reported in the abstracts of peer-reviewed articles are appropriate and do not over-interpret the results of clinical research.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001308.
The PLOS Hub for Clinical Trials, which collects PLOS journals relating to clinical trials, includes some other articles on “spin” in clinical trial reports
EurekAlert! is a free online database of science press releases
The UK National Health Service Choices website includes Beyond the Headlines, a resource that provides an unbiased and evidence-based analysis of health stories that make the news for both the public and health professionals
The US-based organization HealthNewsReview, a project supported by the Foundation for Informed Medical Decision Making, also provides expert reviews of news stories
doi:10.1371/journal.pmed.1001308
PMCID: PMC3439420  PMID: 22984354
19.  Use of Expert Panels to Define the Reference Standard in Diagnostic Research: A Systematic Review of Published Methods and Reporting 
PLoS Medicine  2013;10(10):e1001531.
Loes C. M. Bertens and colleagues survey the published diagnostic research literature for use of expert panels to define the reference standard, characterize components and missing information, and recommend elements that should be reported in diagnostic studies.
Please see later in the article for the Editors' Summary
Background
In diagnostic studies, a single and error-free test that can be used as the reference (gold) standard often does not exist. One solution is the use of panel diagnosis, i.e., a group of experts who assess the results from multiple tests to reach a final diagnosis in each patient. Although panel diagnosis, also known as consensus or expert diagnosis, is frequently used as the reference standard, guidance on preferred methodology is lacking. The aim of this study is to provide an overview of methods used in panel diagnoses and to provide initial guidance on the use and reporting of panel diagnosis as reference standard.
Methods and Findings
PubMed was systematically searched for diagnostic studies applying a panel diagnosis as reference standard published up to May 31, 2012. We included diagnostic studies in which the final diagnosis was made by two or more persons based on results from multiple tests. General study characteristics and details of panel methodology were extracted. Eighty-one studies were included, of which most reported on psychiatry (37%) and cardiovascular (21%) diseases. Data extraction was hampered by incomplete reporting; one or more pieces of critical information about panel reference standard methodology were missing in 83% of studies. In most studies (75%), the panel consisted of three or fewer members. Panel members were blinded to the results of the index test in 31% of studies. Reproducibility of the decision process was assessed in 17 (21%) studies. Reported details on panel constitution, information for diagnosis, and methods of decision making varied considerably between studies.
Conclusions
Methods of panel diagnosis varied substantially across studies and many aspects of the procedure were either unclear or not reported. On the basis of our review, we identified areas for improvement and developed a checklist and flow chart to provide initial guidance for researchers conducting and reporting studies involving panel diagnosis.
Editors' Summary
Background
Before any disease or condition can be treated, a correct diagnosis of the condition has to be made. Faced with a patient with medical problems and no diagnosis, a doctor will ask the patient about their symptoms and medical history and generally will examine the patient. On the basis of this questioning and examination, the clinician will form an initial impression of the possible conditions the patient may have, usually with a most likely diagnosis in mind. To support or reject the most likely diagnosis and to exclude the other possible diagnoses, the clinician will then order a series of tests and diagnostic procedures. These may include laboratory tests (such as the measurement of blood sugar levels), imaging procedures (such as an MRI scan), or functional tests (such as spirometry, which tests lung function). Finally, the clinician will use all the data s/he has collected to reach a firm diagnosis and will recommend a program of treatment or observation for the patient.
Why Was This Study Done?
Researchers are continually looking for new, improved diagnostic tests and multivariable diagnostic models—combinations of tests and characteristics that point to a diagnosis. Diagnostic research, which assesses the accuracy of new tests and models, requires that each patient involved in a diagnostic study has a final correct diagnosis. Unfortunately, for most conditions, there is no single, error-free test that can be used as the reference (gold) standard for diagnosis. If an imperfect reference standard is used, errors in the final disease classification may bias the results of the diagnostic study and may lead to a new test being adopted that is actually less accurate than existing tests. One widely used solution to the lack of a reference standard is “panel diagnosis” in which two or more experts assess the results from multiple tests to reach a final diagnosis for each patient in a diagnostic study. However, there is currently no formal guidance available on the conduct and reporting of panel diagnosis. Here, the researchers undertake a systematic review (a study that uses predefined criteria to identify research on a given topic) to provide an overview of the methodology and reporting of panel diagnosis.
What Did the Researchers Do and Find?
The researchers identified 81 published diagnostic studies that used panel diagnosis as a reference standard. 37% of these studies reported on psychiatric diseases, 21% reported on cardiovascular diseases, and 12% reported on respiratory diseases. Most of the studies (64%) were designed to assess the accuracy of one or more diagnostic tests. Notably, one or more critical pieces of information on methodology were missing in 83% of the studies. Specifically, information on the constitution of the panel was missing in a quarter of the studies, and information on the decision-making process (whether, for example, a diagnosis was reached by discussion among panel members or by combining individual panel members' assessments) was incomplete in more than two-thirds of the studies. In three-quarters of the studies for which information was available, the panel consisted of only two or three members; different fields of expertise were represented in the panels in nearly two-thirds of the studies. In a third of the studies for which information was available, panel members made their diagnoses without access to the results of the test being assessed. Finally, the reproducibility of the decision-making process was assessed in a fifth of the studies.
What Do These Findings Mean?
These findings indicate that the methodology of panel diagnosis varies substantially among diagnostic studies and that reporting of this methodology is often unclear or absent. Both the methodology and reporting of panel diagnosis could, therefore, be improved substantially. Based on their findings, the researchers provide a checklist and flow chart to help guide the conduct and reporting of studies involving panel diagnosis. For example, they suggest that, when designing a study that uses panel diagnosis as the reference standard, the number and background of panel members should be considered, and they provide a list of options that should be considered when planning the decision-making process. Although more research into each of the options identified by the researchers is needed, their recommendations provide a starting point for the development of formal guidelines on the methodology and reporting of panel diagnosis for use as a reference standard in diagnostic research.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001531.
Wikipedia has a page on medical diagnosis (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The Equator Network is an international initiative that seeks to improve the reliability and value of medical research literature by promoting transparent and accurate reporting of research studies; its website includes information on a wide range of reporting guidelines, including the STAndards for the Reporting of Diagnostic accuracy studies (STARD), an initiative that aims to improve the accuracy and completeness of reporting of studies of diagnostic accuracy
doi:10.1371/journal.pmed.1001531
PMCID: PMC3797139  PMID: 24143138
20.  Promotional Tone in Reviews of Menopausal Hormone Therapy After the Women's Health Initiative: An Analysis of Published Articles 
PLoS Medicine  2011;8(3):e1000425.
Adriane Fugh-Berman and colleagues analyzed a selection of published opinion pieces on hormone therapy and show that there may be a connection between receiving industry funding for speaking, consulting, or research and the tone of such opinion pieces.
Background
Even after the Women's Health Initiative (WHI) found that the risks of menopausal hormone therapy (hormone therapy) outweighed benefit for asymptomatic women, about half of gynecologists in the United States continued to believe that hormones benefited women's health. The pharmaceutical industry has supported publication of articles in medical journals for marketing purposes. It is unknown whether author relationships with industry affect promotional tone in articles on hormone therapy. The goal of this study was to determine whether promotional tone could be identified in narrative review articles regarding menopausal hormone therapy and whether articles identified as promotional were more likely to have been authored by those with conflicts of interest with manufacturers of menopausal hormone therapy.
Methods and Findings
We analyzed tone in opinion pieces on hormone therapy published in the four years after the estrogen-progestin arm of the WHI was stopped. First, we identified the ten authors with four or more MEDLINE-indexed reviews, editorials, comments, or letters on hormone replacement therapy or menopausal hormone therapy published between July 2002 and June 2006. Next, we conducted an additional search using the names of these authors to identify other relevant articles. Finally, after author names and affiliations were removed, 50 articles were evaluated by three readers for scientific accuracy and for tone. Scientific accuracy was assessed based on whether or not the findings of the WHI were accurately reported, using two criteria: (1) acknowledgment or lack of denial of the risk of breast cancer diagnosis associated with hormone therapy, and (2) acknowledgment that hormone therapy did not benefit cardiovascular disease endpoints. Determination of promotional tone was based on each reader's assessment of whether the article appeared to promote hormone therapy. Analysis of inter-rater consistency found moderate agreement for scientific accuracy (κ = 0.57) and substantial agreement for promotional tone (κ = 0.65). After discussion, readers found 86% of the articles to be scientifically accurate and 64% to be promotional in tone. Themes that were common in articles considered promotional included attacks on the methodology of the WHI, arguments that clinical trial results should not guide treatment for individuals, and arguments that observational studies are as good as or better than randomized clinical trials for guiding clinical decisions. The promotional articles we identified also implied that the risks associated with hormone therapy have been exaggerated and that the benefits of hormone therapy have been or will be proven. Of the ten authors studied, eight were found to have declared payment for speaking or consulting on behalf of menopausal hormone manufacturers or for research support (seven of these eight were speakers or consultants). Thirty of 32 articles (90%) evaluated as promoting hormone therapy were authored by those with potential financial conflicts of interest, compared to 11 of 18 articles (61%) not evaluated as promotional (p = 0.0025). Articles promoting the use of menopausal hormone therapy were 2.41 times (95% confidence interval 1.49–4.93) as likely to have been written by authors with conflicts of interest as by authors without conflicts of interest. In articles from three authors with conflicts of interest, some of the same text was repeated word-for-word in different articles.
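For readers less familiar with these statistics, the following Python sketch illustrates how inter-rater agreement (Cohen's kappa) and the conflict-of-interest comparison (a Fisher's exact test on a 2x2 table) can be computed. It is a minimal illustration only: the ratings are invented, the table is one plausible construction from the counts above, and nothing here is the authors' actual code, data, or analysis.

```python
# Hedged sketch: kappa for two raters and a 2x2 conflict-of-interest
# comparison. Hypothetical data; not the study's actual analysis.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import fisher_exact

# Invented ratings from two readers (1 = promotional, 0 = not promotional)
reader_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
reader_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(f"Cohen's kappa: {cohen_kappa_score(reader_a, reader_b):.2f}")

# One plausible 2x2 table from the counts above: 30 of 32 promotional
# articles had conflicted authors, versus 11 of 18 non-promotional articles.
#           conflicted  not conflicted
table = [[30, 2],    # promotional
         [11, 7]]    # not promotional
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"Odds ratio: {odds_ratio:.2f}, two-sided p = {p_value:.4f}")
```

Note that the odds ratio from such a table is not the same quantity as the 2.41 relative likelihood reported above, which depends on the authors' own model and definitions.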
Conclusion
There may be a connection between receiving industry funding for speaking, consulting, or research and the publication of promotional opinion pieces on menopausal hormone therapy.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Over the past three decades, menopausal hormones have been heavily promoted for preventing disease in women. However, the Women's Health Initiative (WHI) study, which enrolled more than 26,000 women in the US and was published in 2004, found that estrogen-progestin and estrogen-only formulations (often prescribed to women around the age of menopause) increased the risk of stroke, deep vein thrombosis, dementia, and incontinence. Furthermore, this study found that the estrogen-progestin therapy increased rates of breast cancer. In fact, the estrogen-progestin arm of the WHI study was stopped in 2002 because of harmful findings, and the estrogen-only arm was stopped in 2004, also because of harmful findings. In addition, the study found that neither therapy reduced cardiovascular risk or markedly benefited health-related quality of life measures.
Despite these results, two years after the results of the WHI study were published, a survey of over 700 practicing gynecologists in the US (the specialists who prescribe the majority of menopausal hormone therapies) found that almost half were not convinced by the WHI findings and that 48% disagreed with the decision to stop the trial early. Follow-up surveys found similar results.
Why Was This Study Done?
It is unclear why gynecologists and other physicians continue to prescribe menopausal hormone therapies despite the results of the WHI. Some academics argue that published industry-funded reviews and commentaries may be designed to convey specific, but subtle, marketing messages, and several academic analyses have used internal industry documents disclosed in litigation to support this view. This study was therefore conducted to investigate whether a tone promoting hormone therapy could be identified in narrative review articles and, if so, whether such articles were more likely to have been authored by people who had accepted funding from hormone manufacturers.
What Did the Researchers Do and Find?
The researchers conducted a comprehensive literature search that identified 340 relevant articles published between July 2002 and June 2006, the four years following the cessation of the estrogen-progestin arm of the WHI study. Ten authors had published four to six articles each, 47 had authored two or three articles, and 371 had authored one article each. The researchers focused on the authors who had published four or more articles in the four-year period under study and, after author names and affiliations were removed, 50 articles were evaluated by three readers for scientific accuracy and for tone. After individually analyzing each batch of articles, the readers met to present their initial assessments, discuss them, and reach consensus on tone and scientific accuracy. Once the papers had been evaluated, each author was identified and the researchers searched for the authors' potential financial conflicts of interest, defined as publicly disclosed information that the authors had received payment for research, speaking, or consulting on behalf of a manufacturer of menopausal hormone therapy.
Common themes in the 50 articles included arguments that clinical trial results should not guide treatment for individuals and suggestions that the risks associated with hormone therapy have been exaggerated and that the benefits of hormone therapy have been or will be proven. Furthermore, of the ten authors studied, eight were found to have received payment for research, speaking, or consulting on behalf of menopausal hormone manufacturers, and 30 of 32 articles evaluated as promoting hormone therapy were authored by those with potential financial conflicts of interest. Articles promoting the use of menopausal hormone therapy were more than twice as likely to have been written by authors with conflicts of interest as by authors without conflicts of interest. Finally, three authors identified as having financial conflicts of interest were authors on articles in which sections of their previously published articles were repeated word-for-word without citation.
What Do These Findings Mean?
The findings of this study suggest that there may be a link between receiving industry funding for speaking, consulting, or research and the publication of apparently promotional opinion pieces on menopausal hormone therapy. Furthermore, such publications may encourage physicians to continue prescribing these therapies to women of menopausal age. Therefore, physicians and other health care providers should interpret the content of review articles with caution. In addition, medical journals should follow the International Committee of Medical Journal Editors Uniform Requirements for Manuscripts, which require that all authors submit signed statements of their participation in authorship and full disclosure of any conflicts of interest.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000425.
The US National Heart, Lung, and Blood Institute has more information on the Women's Health Initiative
The US National Institutes of Health provides more information about the effects of menopausal hormone replacement therapy
The Office on Women's Health, U.S. Department of Health and Human Services, provides information on menopausal hormone therapy
The International Committee of Medical Journal Editors presents the Uniform Requirements for Manuscripts published in biomedical journals
The National Women's Health Network, a consumer advocacy group that takes no industry money, has factsheets and articles about menopausal hormone therapy
PharmedOut, a Georgetown University Medical Center project, has many resources on pharmaceutical marketing practices
doi:10.1371/journal.pmed.1000425
PMCID: PMC3058057  PMID: 21423581
21.  Epidemiology and Reporting Characteristics of Systematic Reviews 
PLoS Medicine  2007;4(3):e78.
Background
Systematic reviews (SRs) have become increasingly popular with a wide range of stakeholders. We set out to capture a representative cross-sectional sample of published SRs and to examine them in terms of a broad range of epidemiological, descriptive, and reporting characteristics, including emerging aspects not previously examined.
Methods and Findings
We searched Medline for SRs indexed during November 2004 and written in English. Citations were screened and those meeting our inclusion criteria were retained. Data were collected using a 51-item data collection form designed to assess the epidemiological and reporting details and the bias-related aspects of the reviews. The data were analyzed descriptively. In total 300 SRs were identified, suggesting a current annual publication rate of about 2,500, involving more than 33,700 separate studies including one-third of a million participants. The majority (272 [90.7%]) of SRs were reported in specialty journals. Most reviews (213 [71.0%]) were categorized as therapeutic, and included a median of 16 studies involving 1,112 participants. Funding sources were not reported in more than one-third (122 [40.7%]) of the reviews. Reviews typically searched a median of three electronic databases and two other sources, although only about two-thirds (208 [69.3%]) of them reported the years searched. Most (197/295 [66.8%]) reviews reported information about quality assessment, while few (68/294 [23.1%]) reported assessing for publication bias. A little over half (161/300 [53.7%]) of the SRs reported combining their results statistically, of which most (147/161 [91.3%]) assessed for consistency across studies. Few (53 [17.7%]) SRs reported being updates of previously completed reviews. No review had a registration number. Only half (150 [50.0%]) of the reviews used the term “systematic review” or “meta-analysis” in the title or abstract. There were large differences between Cochrane reviews and non-Cochrane reviews in the quality of reporting several characteristics.
Conclusions
SRs are now produced in large numbers, and our data suggest that the quality of their reporting is inconsistent. This situation might be improved if more widely agreed upon evidence-based reporting guidelines were endorsed and adhered to by authors and journals. These results substantiate the view that readers should not accept SRs uncritically.
Data were collected on the epidemiological, descriptive, and reporting characteristics of recent systematic reviews. A descriptive analysis found inconsistencies in the quality of reporting.
Editors' Summary
Background.
In health care it is important to assess all the evidence available about what causes a disease or the best way to prevent, diagnose, or treat it. Decisions should not be made simply on the basis of—for example—the latest or biggest research study, but after a full consideration of the findings from all the research of good quality that has so far been conducted on the issue in question. This approach is known as “evidence-based medicine” (EBM). A report that is based on a search for studies addressing a clearly defined question, a quality assessment of the studies found, and a synthesis of the research findings, is known as a systematic review (SR). Conducting an SR is itself regarded as a research project and the methods involved can be quite complex. In particular, as with other forms of research, it is important to do everything possible to reduce bias. The leading role in developing the SR concept and the methods that should be used has been played by an international network called the Cochrane Collaboration (see “Additional Information” below), which was launched in 1992. However, SRs are now becoming commonplace. Many articles published in journals and elsewhere are described as being systematic reviews.
Why Was This Study Done?
Since systematic reviews are claimed to be the best source of evidence, it is important that they are well conducted and that bias has not influenced their conclusions. The mere fact that the authors of a paper claim to have reviewed the evidence on a topic "systematically" does not guarantee that their methods were sound or that their report is of good quality. However, if the authors report the details of their methods, users of the review can better decide whether its conclusions can be relied on. The authors of this PLoS Medicine article wanted to find out how many SRs are now being published, where they are being published, and what questions they are addressing. They also wanted to see how well the methods of SRs are being reported.
What Did the Researchers Do and Find?
They picked one month and looked for all the SRs added to the main list of medical literature in that month. They found 300, on a range of topics and in a variety of medical journals. They estimate that about 20% of reviews appearing each year are published by the Cochrane Collaboration. They found many cases in which important aspects of the methods used were not reported. For example, about a third of the SRs did not report how (if at all) the quality of the studies found in the search had been assessed. An important assessment, which analyzes for “publication bias,” was reported as having been done in only about a quarter of the cases. Most of the reporting failures were in the “non-Cochrane” reviews.
What Do These Findings Mean?
The authors concluded that the standards of reporting of SRs vary widely and that readers should, therefore, not accept the conclusions of SRs uncritically. To improve this situation, they urge that guidelines be drawn up regarding how SRs are reported. The writers of SRs and also the journals that publish them should follow these guidelines.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040078.
An editorial discussing this research article and its relevance to medical publishing appears in the same issue of PLoS Medicine
A good source of information on the evidence-based approach to medicine is the James Lind Library
The Web site of the Cochrane Collaboration is a good source of information on systematic reviews. In particular there is a newcomers' guide and information for health care “consumers”. From this Web site, it is also possible to see summaries of the SRs published by the Cochrane Collaboration (readers in some countries can also view the complete SRs free of charge)
Information on the practice of evidence-based medicine is available from the US Agency for Healthcare Research and Quality and the Canadian Agency for Drugs and Technologies in Health
doi:10.1371/journal.pmed.0040078
PMCID: PMC1831728  PMID: 17388659
22.  Comparative Performance of Private and Public Healthcare Systems in Low- and Middle-Income Countries: A Systematic Review 
PLoS Medicine  2012;9(6):e1001244.
A systematic review conducted by Sanjay Basu and colleagues reevaluates the evidence relating to comparative performance of public versus private sector healthcare delivery in low- and middle-income countries.
Introduction
Private sector healthcare delivery in low- and middle-income countries is sometimes argued to be more efficient, accountable, and sustainable than public sector delivery. Conversely, the public sector is often regarded as providing more equitable and evidence-based care. We performed a systematic review of research studies investigating the performance of private and public sector delivery in low- and middle-income countries.
Methods and Findings
Peer-reviewed studies including case studies, meta-analyses, reviews, and case-control analyses, as well as reports published by non-governmental organizations and international agencies, were systematically collected through large database searches, filtered through methodological inclusion criteria, and organized into six World Health Organization health system themes: accessibility and responsiveness; quality; outcomes; accountability, transparency, and regulation; fairness and equity; and efficiency. Of 1,178 potentially relevant unique citations, data were obtained from 102 articles describing studies conducted in low- and middle-income countries. Comparative cohort and cross-sectional studies suggested that providers in the private sector more frequently violated medical standards of practice and had poorer patient outcomes, but had greater reported timeliness and hospitality to patients. Reported efficiency tended to be lower in the private than in the public sector, resulting in part from perverse incentives for unnecessary testing and treatment. Public sector services experienced more limited availability of equipment, medications, and trained healthcare workers. When the definition of “private sector” included unlicensed and uncertified providers such as drug shop owners, most patients appeared to access care in the private sector; however, when unlicensed healthcare providers were excluded from the analysis, the majority of people accessed public sector care. “Competitive dynamics” for funding appeared between the two sectors, such that public funds and personnel were redirected to private sector development, followed by reductions in public sector service budgets and staff.
Conclusions
Studies evaluated in this systematic review do not support the claim that the private sector is usually more efficient, accountable, or medically effective than the public sector; however, the public sector appears frequently to lack timeliness and hospitality towards patients.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Health care can be provided through public and private providers. Public health care is usually provided by the government through national healthcare systems. Private health care can be provided through “for profit” hospitals and self-employed practitioners, and “not for profit” non-government providers, including faith-based organizations.
There is considerable ideological debate around whether low- and middle-income countries should strengthen public versus private healthcare services, but in reality most low- and middle-income countries use both types of healthcare provision. Recently, as the global economic recession has put major constraints on government budgets (the major funding source for healthcare expenditures in most countries), disputes between the proponents of private and public systems have escalated, further fuelled by the recommendation of the International Monetary Fund (an international finance institution) that countries increase the scope of private sector provision in health care as part of loan conditions to reduce government debt. However, critics of the private health sector believe that public healthcare provision is of most benefit to poor people and is the only way to achieve universal and equitable access to health care.
Why Was This Study Done?
Both sides of the public versus private healthcare debate draw on selected case reports to defend their viewpoints, but there is a widely held view that the private health system is more efficient than the public health system. Therefore, in order to inform policy, there is an urgent need for robust evidence to evaluate the quality and effectiveness of the health care provided through both systems. In this study, the authors reviewed all of the evidence in a systematic way to evaluate available data on public and private sector performance.
What Did the Researchers Do and Find?
The researchers used eight databases and a comprehensive key word search to identify and review appropriate published data and studies of private and public sector performance in low- and middle-income countries. They assessed selected studies against the World Health Organization's six essential themes of health systems—accessibility and responsiveness; quality; outcomes; accountability, transparency, and regulation; fairness and equity; and efficiency—and conducted a narrative review of each theme.
Out of the 102 relevant studies included in their comparative analysis, 59 studies were research studies and 13 involved meta-analysis, with the rest involving case reports or reviews. The researchers found that study findings varied considerably across countries studied (one-third of studies were conducted in Africa and a third in Southeast Asia) and by the methods used.
Financial barriers to care (such as user fees) were reported for both public and private systems. Although studies report that patients in the private sector experience better timeliness and hospitality, studies suggest that providers in the private sector more frequently violate accepted medical standards and have lower reported efficiency.
What Do These Findings Mean?
This systematic review did not support previous views that private sector delivery of health care in low- and middle-income settings is more efficient, accountable, or effective than public sector delivery. Each system has its strengths and weaknesses, but importantly, in both sectors, there were financial barriers to care, and each had poor accountability and transparency. This systematic review highlights a limited and poor-quality evidence base regarding the comparative performance of the two systems.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001244.
A previous PLoS Medicine study examined the outpatient care provided by the public and private sector in low-income countries
The WHO website provides more information on healthcare systems
The World Bank website provides information on health system financing
Oxfam provides an argument against increased private health care in poor countries
doi:10.1371/journal.pmed.1001244
PMCID: PMC3378609  PMID: 22723748
23.  Conflicts of interest in biomedical publications: considerations for authors, peer reviewers, and editors 
Croatian Medical Journal  2013;54(6):600-608.
This article provides an overview of the evidence on common instances of conflict of interest (COI) in research publications from general and specialized fields of biomedicine. Financial COIs are viewed as the most powerful source of bias, which may even distort the citation outcomes of sponsored publications. The urge among stakeholders of science communication to boost journal citation indicators is viewed as a new secondary interest, which may compromise the interaction between authors, peer reviewers, and editors. Comprehensive policies on the disclosure of financial and non-financial COIs in scholarly journals are presented as proxies for their indexing in evidence-based databases, and examples of successful medical journals are discussed in detail. Reports on clinical trials, systematic reviews, meta-analyses, and clinical practice guidelines may be unduly influenced by author-pharmaceutical industry relations, but these publications do not always contain explicit disclosures that allow readers to judge the reliability of the published conclusions and practice-changing recommendations. The article emphasizes the importance of adhering to guidance on COI from learned associations such as the International Committee of Medical Journal Editors (ICMJE). It also considers joint efforts by authors, peer reviewers, and editors as a foundation for appropriately defining and disclosing potential COIs.
doi:10.3325/cmj.2013.54.600
PMCID: PMC3893982  PMID: 24382859
24.  Minimally invasive surgical procedures for the treatment of lumbar disc herniation 
Introduction
In up to 30% of patients undergoing lumbar disc surgery for herniated or protruded discs, outcomes are judged unfavourable. Over the last decades, this problem has stimulated the development of a number of minimally-invasive operative procedures. The aim is to relieve pressure on compromised nerve roots by mechanically removing, dissolving, or evaporating disc material while leaving bony structures and surrounding tissues as intact as possible. In Germany, there is hardly any utilisation data for these new procedures; data files from the statutory health insurers demonstrate that about 5% of all lumbar disc surgeries are performed using minimally-invasive techniques. Their real proportion is thought to be much higher because many procedures are offered by private hospitals and surgeries and are paid for by private health insurers or by patients themselves. So far, no comprehensive assessment comparing the efficacy, safety, effectiveness, and cost-effectiveness of minimally-invasive lumbar disc surgery to standard procedures (microdiscectomy, open discectomy), which could serve as a basis for coverage decisions, has been published in Germany.
Objective
Against this background, the aims of the following assessment are:
To assess, based on the published scientific literature, the safety, efficacy, and effectiveness of minimally-invasive lumbar disc surgery compared to standard procedures.
To identify and critically appraise studies comparing the costs and cost-effectiveness of minimally-invasive procedures to those of standard procedures.
If necessary, to identify research and evaluation needs and to point out regulatory needs within the German health care system.
The assessment focusses on procedures that are used in elective lumbar disc surgery as alternative treatment options to microdiscectomy or open discectomy. Chemonucleolysis, percutaneous manual discectomy, automated percutaneous lumbar discectomy, laserdiscectomy, and endoscopic procedures accessing the disc by a posterolateral or posterior approach are included.
Methods
In order to assess the safety, efficacy, and effectiveness of minimally-invasive procedures, as well as their economic implications, systematic reviews of the literature are performed. A comprehensive search strategy is composed to search 23 electronic databases, among them MEDLINE, EMBASE, and the Cochrane Library. The methodological quality of systematic reviews, HTA reports, and primary research is assessed using checklists of the German Scientific Working Group for Health Technology Assessment. The quality and transparency of cost analyses are documented using the quality and transparency catalogues of the working group. Study results are summarised in a qualitative manner. Due to the limited number and the low methodological quality of the studies, it is not possible to conduct meta-analyses. In addition to the results of controlled trials, results of recent case series are introduced and discussed.
Results
The evidence-base to assess safety, efficacy and effectiveness of minimally-invasive lumbar disc surgery procedures is rather limited:
Percutaneous manual discectomy: six case series (four after 1998)
Automated percutaneous lumbar discectomy: two RCT (one discontinued), twelve case series (one after 1998)
Chemonucleolysis: five RCT, five non-randomised controlled trials, eleven case series
Percutaneous laserdiscectomy: one non-randomised controlled trial, 13 case series (eight after 1998)
Endoscopic procedures: three RCT, 21 case series (17 after 1998)
There are two economic analyses each retrieved for chemonucleolysis and automated percutaneous discectomy as well as one cost-minimisation analysis comparing costs of an endoscopic procedure to costs for open discectomy.
Among all minimally-invasive procedures, chemonucleolysis is the only one whose efficacy may be judged on the basis of results from high quality randomised controlled trials (RCT). Study results suggest that the procedure may be (cost-)effectively used as an intermediate therapeutic option between the conservative and operative management of small lumbar disc herniations or protrusions causing sciatica. Two RCT comparing transforaminal endoscopic procedures with microdiscectomy in patients with sciatica and small non-sequestered disc herniations show comparable short- and medium-term overall success rates. Concerning speed of recovery and return to work, a trend towards more favourable results for the endoscopic procedures is noted. It is doubtful, though, whether these results from studies that are now eleven and five years old are still valid for the more advanced procedures used today. The only RCT comparing the results of automated percutaneous lumbar discectomy to those of microdiscectomy showed clearly superior results for microdiscectomy. Furthermore, the success rate of automated percutaneous lumbar discectomy reported in the RCT (29%) differs extremely from the success rates reported in case series (between 56% and 92%).
The literature search retrieves no controlled trials assessing the efficacy and/or effectiveness of laserdiscectomy, percutaneous manual discectomy, or endoscopic procedures using a posterior approach in comparison to the standard procedures. Results from recent case series permit no assessment of efficacy, especially not in comparison to standard procedures. Due to highly selected patients, modifications of operative procedures, highly specialised surgical units, and poorly standardised outcome assessment, the results of case series are highly variable and their generalisability is low.
The results of the five economic analyses are, due to conceptual and methodological problems, of no value for decision-making in the context of the German health care system.
Discussion
Aside from low methodological study quality, three conceptual problems complicate the interpretation of results.
First, the continuous further development of technologies leads to a diversity of procedures in use, which prohibits the generalisation of study results. However, diversity is noted not only for minimally-invasive procedures but also for the standard techniques against which the new developments are to be compared. The second problem is the heterogeneity of study populations. For most studies, one common inclusion criterion was "persisting sciatica after a course of conservative treatment of variable duration". Differences among study populations are noted concerning the results of imaging studies. Even within each group of minimally-invasive procedures, studies define their own inclusion and exclusion criteria, which differ concerning the degree of dislocation and sequestration of disc material. The third problem is the non-standardised assessment of outcomes, which is performed postoperatively after variable periods of time. Most studies report results in a dichotomous way, as success or failure, while the classification of a result is performed using a variety of different assessment instruments or procedures. Very often, the global subjective judgement of results by patients or surgeons is reported. There are no scientific discussions of whether these judgements are generalisable or comparable, especially among studies conducted under differing socio-cultural conditions. Taking into account the weak evidence-base for the efficacy and effectiveness of minimally-invasive procedures, it is not surprising that so far there are no dependable economic analyses.
Conclusions
The conclusions that can be drawn from the results of the present assessment refer in detail to the specified minimally-invasive procedures of lumbar disc surgery, but they may also be considered exemplary for other fields where optimisation of results is attempted through technological development and a widening of indications (e.g. total hip replacement).
Compared to the standard technologies (open discectomy, microdiscectomy), and with the exception of chemonucleolysis, the developmental status of all other minimally-invasive procedures assessed must be termed experimental. To date there is no dependable evidence-base to recommend their use in routine clinical practice. To create such an evidence-base, further research in two directions is needed: a) studies need to include adequate patient populations, use realistic controls (e.g. standard operative procedures or continued conservative care), and use standardised measurements of meaningful outcomes after adequate periods of time; b) studies are needed that can report the effectiveness of the procedures under everyday practice conditions and that have the potential to detect rare adverse effects. In Sweden, this type of data is yielded by national quality registries, whose data are used for quality improvement measures on the one hand and allow comprehensive scientific evaluations on the other. Since 2000, a continuous rise in the utilisation of minimally-invasive lumbar disc surgery has been observed among statutory health insurers. Examples from other areas of innovative surgical technology (e.g. robot-assisted total hip replacement) indicate that the rise will probably continue, especially because there are no legal barriers to hinder the introduction of innovative treatments into routine hospital care. Upon request by payers or providers, the "Gemeinsamer Bundesausschuss" may assess a treatment's benefit, its necessity, and its cost-effectiveness as a prerequisite for coverage by the statutory health insurance. In the case of minimally-invasive disc surgery, it would be advisable to examine the legal framework for covering procedures only if they are provided under evaluation conditions. While in Germany coverage under evaluation conditions is established practice in ambulatory health care only ("Modellvorhaben"), examples from other European countries (Great Britain, Switzerland) demonstrate that it is also feasible for hospital-based interventions. In order to assure patient protection and at the same time not hinder the further development of new and promising technologies, provision under evaluation conditions could also be realised in the private health care market, although in this sector coverage is not by law linked to the benefit, necessity, and cost-effectiveness of an intervention.
PMCID: PMC3011322  PMID: 21289928
25.  Anatomy of the Epidemiological Literature on the 2003 SARS Outbreaks in Hong Kong and Toronto: A Time-Stratified Review 
PLoS Medicine  2010;7(5):e1000272.
Weijia Xing and colleagues reviewed the published epidemiological literature on SARS and show that less than a quarter of papers were published during the epidemic itself, suggesting that the research published lagged substantially behind the need for it.
Background
Outbreaks of emerging infectious diseases, especially those of a global nature, require rapid epidemiological analysis and information dissemination. The final products of those activities usually comprise internal memoranda and briefs within public health authorities and original research published in peer-reviewed journals. Using the 2003 severe acute respiratory syndrome (SARS) epidemic as an example, we conducted a comprehensive time-stratified review of the published literature to describe the different types of epidemiological outputs.
Methods and Findings
We identified and analyzed all published articles on the epidemiology of the SARS outbreak in Hong Kong or Toronto. The analysis was stratified by study design, research domain, data collection, and analytical technique. We compared the SARS-case and matched-control non-SARS articles published according to the timeline of submission, acceptance, and publication. The impact factors of the publishing journals were examined according to the time of publication of SARS articles, and the numbers of citations received by SARS-case and matched-control articles submitted during and after the epidemic were compared. Descriptive, analytical, theoretical, and experimental epidemiology concerned, respectively, 54%, 30%, 11%, and 6% of the studies. Only 22% of the studies were submitted, 8% accepted, and 7% published during the epidemic. The submission-to-acceptance and acceptance-to-publication intervals of the SARS articles submitted during the epidemic period were significantly shorter than the corresponding intervals of matched-control non-SARS articles published in the same journal issues (p<0.001 and p<0.01, respectively). The differences of median submission-to-acceptance intervals and median acceptance-to-publication intervals between SARS articles and their corresponding control articles were 106.5 d (95% confidence interval [CI] 55.0–140.1) and 63.5 d (95% CI 18.0–94.1), respectively. The median numbers of citations of the SARS articles submitted during the epidemic and over the 2 y thereafter were 17 (interquartile range [IQR] 8.0–52.0) and 8 (IQR 3.2–21.8), respectively, significantly higher than the median numbers of control article citations (15, IQR 8.5–16.5, p<0.05, and 7, IQR 3.0–12.0, p<0.01, respectively).
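To make the interval comparisons concrete, the short Python sketch below shows how median lags and interquartile ranges might be summarised and compared for two groups of articles. The numbers are invented, and the Mann-Whitney U test is an assumed choice of nonparametric comparison, not necessarily the test used in the study.

```python
# Hedged sketch: summarising and comparing submission-to-acceptance
# intervals (in days). Invented data; not the study's dataset or method.
import numpy as np
from scipy.stats import mannwhitneyu

sars_intervals = np.array([30, 42, 55, 61, 70, 88, 95])        # hypothetical
control_intervals = np.array([120, 150, 161, 175, 190, 240])   # hypothetical

for name, x in [("SARS", sars_intervals), ("control", control_intervals)]:
    q1, median, q3 = np.percentile(x, [25, 50, 75])
    print(f"{name}: median = {median:.1f} d (IQR {q1:.1f}-{q3:.1f})")

# Nonparametric comparison of the two interval distributions
stat, p = mannwhitneyu(sars_intervals, control_intervals, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")
```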
Conclusions
A majority of the epidemiological articles on SARS were submitted after the epidemic had ended, although the corresponding studies had relevance to public health authorities during the epidemic. To minimize the lag between research and the exigency of public health practice in the future, researchers should consider adopting common, predefined protocols and ready-to-use instruments to improve timeliness, and thus, relevance, in addition to standardizing comparability across studies. To facilitate information dissemination, journal managers should reengineer their fast-track channels, which should be adapted to the purpose of an emerging outbreak, taking into account the requirement of high standards of quality for scientific journals and competition with other online resources.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Every now and then, a new infectious disease appears in a human population or an old disease becomes much more common or more geographically widespread. Recently, several such “emerging infectious diseases” have become major public health problems. For example, HIV/AIDS, hepatitis C, and severe acute respiratory syndrome (SARS) have all emerged in the past three decades and spread rapidly round the world. When an outbreak (epidemic) of an emerging infectious disease occurs, epidemiologists (scientists who study the causes, distribution, and control of diseases in populations) swing into action, collecting and analyzing data on the new threat to human health. Epidemiological studies are rapidly launched to identify the causative agent of the new disease, to investigate how the disease spreads, to define diagnostic criteria for the disease, to evaluate potential treatments, and to devise ways to control the disease's spread. Public health officials then use the results of these studies to bring the epidemic under control.
Why Was This Study Done?
Clearly, epidemics of emerging infectious diseases can only be controlled rapidly and effectively if the results of epidemiological studies are made widely available in a timely manner. Public health bulletins (for example, the Morbidity and Mortality Weekly Report from the US Centers for Disease Control and Prevention) are an important way of disseminating information, as is the publication of original research in peer-reviewed academic journals. But how timely is this second dissemination route? Submission, peer review, revision, re-review, acceptance, and publication of a piece of academic research can be a long process, the speed of which is affected by the responses of both authors and journals. In this study, the researchers analyze how the results of academic epidemiological research were submitted and published in journals during and after an emerging infectious disease epidemic, using the 2003 SARS epidemic as an example. The first case of SARS was identified in Asia in February 2003, and the disease rapidly spread around the world. In total, 8,098 people became ill with SARS and 774 died before the epidemic was halted in July 2003.
What Did the Researchers Do and Find?
The researchers identified more than 300 journal articles covering epidemiological research into the SARS outbreak in Hong Kong, China, and Toronto, Canada (two cities strongly affected by the epidemic) that were published online or in print between January 1, 2003 and July 31, 2007. The researchers' analysis of these articles shows that more than half of them were descriptive epidemiological studies, investigations that focused on describing the distribution of SARS; a third were analytical epidemiological studies that tried to discover the cause of SARS. Overall, 22% of the journal articles were submitted for publication during the epidemic. Only 8% of the articles were accepted for publication and only 7% were actually published during the epidemic. The median submission-to-acceptance and acceptance-to-publication intervals for SARS articles submitted during the epidemic were 55 and 77.5 days, respectively, much shorter intervals than those for non-SARS articles published in the same journal issues. After the epidemic was over, the submission-to-acceptance and acceptance-to-publication intervals for SARS articles were similar to those for non-SARS articles.
What Do These Findings Mean?
These findings show that, although the academic response to the SARS epidemic was rapid, most articles on the epidemiology of SARS were published after the epidemic was over, even though SARS was a major threat to public health. Possible reasons for this publication delay include the time taken by authors to prepare and undertake their studies and to write and submit their papers, and, possibly, their tendency to submit their results first to high-profile journals. The time then taken by journals to review the studies, make decisions about publication, and complete the publication process might also have delayed matters. To minimize future delays in the publication of epidemiological research on emerging infectious diseases, epidemiologists could adopt common, predefined protocols and ready-to-use instruments, which would improve timeliness and ensure comparability across studies, suggest the researchers. Journals, in turn, could improve their fast-track procedures and could consider setting up online sections that could be activated when an emerging infectious disease outbreak occurred. Finally, journals could consider altering their review system to speed up the publication process, provided the quality of the final published articles was not compromised.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000272.
The US National Institute of Allergy and Infectious Diseases provides information on emerging infectious diseases
The US Centers for Disease Control and Prevention also provides information about emerging infectious diseases, including links to other resources, and information on SARS
Wikipedia has a page on epidemiology (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The World Health Organization has information on SARS (in several languages)
doi:10.1371/journal.pmed.1000272
PMCID: PMC2864302  PMID: 20454570
