1.  Ghost Authorship in Industry-Initiated Randomised Trials 
PLoS Medicine  2007;4(1):e19.
Background
Ghost authorship, the failure to name, as an author, an individual who has made substantial contributions to an article, may result in lack of accountability. The prevalence and nature of ghost authorship in industry-initiated randomised trials is not known.
Methods and Findings
We conducted a cohort study comparing protocols and corresponding publications for industry-initiated trials approved by the Scientific-Ethical Committees for Copenhagen and Frederiksberg in 1994–1995. We defined ghost authorship as present if individuals who wrote the trial protocol, performed the statistical analyses, or wrote the manuscript, were not listed as authors of the publication, or as members of a study group or writing committee, or in an acknowledgment. We identified 44 industry-initiated trials. We did not find any trial protocol or publication that stated explicitly that the clinical study report or the manuscript was to be written or was written by the clinical investigators, and none of the protocols stated that clinical investigators were to be involved with data analysis. We found evidence of ghost authorship for 33 trials (75%; 95% confidence interval 60%–87%). The prevalence of ghost authorship was increased to 91% (40 of 44 articles; 95% confidence interval 78%–98%) when we included cases where a person qualifying for authorship was acknowledged rather than appearing as an author. In 31 trials, the ghost authors we identified were statisticians. It is likely that we have overlooked some ghost authors, as we had very limited information to identify the possible omission of other individuals who would have qualified as authors.
Conclusions
Ghost authorship in industry-initiated trials is very common. Its prevalence could be considerably reduced, and transparency improved, if existing guidelines were followed, and if protocols were publicly available.
Of 44 industry-initiated trials, there was evidence of ghost authorship in 33, increasing to 40 when a person qualifying for authorship was acknowledged rather than appearing as an author.
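The proportions and confidence intervals quoted above are straightforward to verify with an exact binomial interval. A minimal sketch in Python, assuming the authors used the exact (Clopper-Pearson) method, which the abstract does not state:

```python
# Reproduce the abstract's interval estimates for ghost-authorship prevalence.
# Assumption: the paper used an exact (Clopper-Pearson) binomial interval.
from scipy.stats import binomtest

narrow = binomtest(k=33, n=44).proportion_ci(confidence_level=0.95, method="exact")
print(f"33/44 = {33/44:.0%}, 95% CI {narrow.low:.0%}-{narrow.high:.0%}")  # 75%, ~60%-87%

broad = binomtest(k=40, n=44).proportion_ci(confidence_level=0.95, method="exact")
print(f"40/44 = {40/44:.0%}, 95% CI {broad.low:.0%}-{broad.high:.0%}")   # 91%, ~78%-98%
```

Both intervals match the figures reported in the abstract.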
Editors' Summary
Background.
Original scientific findings are usually published in the form of a “paper”, whether it is actually distributed on paper, or circulated via the internet, as this one is. Papers are normally prepared by the group of researchers who did the research, and these researchers are listed as authors at the top of the article. The authors therefore take responsibility for the integrity of the results and their interpretation. However, many people are worried that sometimes the author list on a paper does not tell the true story of who was involved. In particular, for clinical research, case histories and previous research have suggested that “ghost authorship” is commonplace. Ghost authors are people who were involved in some way in the research study, or in writing the paper, but who have been left off the final author list. This might happen because the study “looks” more credible if the true authors (for example, company employees or freelance medical writers) are not revealed. This practice might hide competing interests that readers should be aware of, and has therefore been condemned by academics, groups of editors, and some pharmaceutical companies.
Why Was This Study Done?
This group of researchers wanted to get an idea of how often ghost authorship happened in medical research done by companies. Previous studies looking into this used surveys, whereby the researchers would write to one author on each of a group of papers to ask whether anyone who was not listed on the paper had been involved in the work. These sorts of studies typically underestimate the rate of ghost authorship, because the main author might not want to admit what had been going on. However, the researchers here managed to get access to trial protocols (documents setting out the plans for future research studies), which gave them a more direct way to investigate ghost authorship.
What Did the Researchers Do and Find?
In order to investigate the frequency and type of ghost authorship, these researchers identified every trial approved between 1994 and 1995 by the ethics committees of Copenhagen and Frederiksberg in Denmark. Then they winnowed this group down to include only the trials that were sponsored by industry (pharmaceutical companies and others), and only those trials that were finished and published. The protocols for each trial were obtained from the ethics committees, and the researchers then matched up each protocol with its corresponding paper. Then, they compared names that appeared in the protocol against names appearing on the eventual paper, either on the author list or acknowledged elsewhere in the paper as being involved. The researchers ended up studying 44 trials. For 33 of these (75% of them) they found some evidence of ghost authorship, in that people who had written the protocol, performed the statistical analyses, or written the manuscript were not listed in the publication. When the definition of ghost authorship was broadened to also count people qualifying for authorship who were mentioned in the acknowledgements but not on the author list, the researchers' estimate went up to 91%, that is, 40 of the 44 trials. For most of the trials with missing authors, the ghost was a statistician (the person who analyzes the trial data).
What Do These Findings Mean?
In this study, the researchers found that ghost authorship was very common in papers published in medical journals (this study covered a broad range of peer-reviewed journals in many medical disciplines). The method used in this paper seems more reliable than using surveys to work out how often ghost authorship happens. The researchers aimed to define authorship using the policies set out by a group called the International Committee of Medical Journal Editors (ICMJE), and the findings here suggest that the ICMJE's standards for authorship are very often ignored. This means that people who read the published paper cannot always accurately judge or trust the information presented within it, and competing interests may be hidden. The researchers here suggest that protocols should be made publicly available so that everyone can see what trials are planned and who is involved in conducting them. The findings also suggest that journals should not only list the authors of each paper but describe what each author has done, so that the published information accurately reflects what has been carried out.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040019.
Read the Perspective by Liz Wager, which discusses these findings in more depth
The International Committee of Medical Journal Editors (ICMJE) is a group of general medical journal editors who have produced general guidelines for biomedical manuscripts; their definition of authorship is also described
The Committee on Publication Ethics is a forum for editors of peer-reviewed journals to discuss issues related to the integrity of the scientific record; the Web site lists anonymized problems and the committee's advice, not just regarding authorship, but other types of problems as well
Good Publication Practice for Pharmaceutical Companies outlines common standards for publication of industry-sponsored medical research, and some pharmaceutical companies have agreed to these
doi:10.1371/journal.pmed.0040019
PMCID: PMC1769411  PMID: 17227134
2.  Selection in Reported Epidemiological Risks: An Empirical Assessment 
PLoS Medicine  2007;4(3):e79.
Background
Epidemiological studies may be subject to selective reporting, but empirical evidence thereof is limited. We empirically evaluated the extent of selection of significant results and large effect sizes in a large sample of recent articles.
Methods and Findings
We evaluated 389 articles of epidemiological studies that reported, in their respective abstracts, at least one relative risk for a continuous risk factor in contrasts based on median, tertile, quartile, or quintile categorizations. We examined the proportion and correlates of reporting statistically significant and nonsignificant results in the abstract and whether the magnitude of the relative risks presented (coined to be consistently ≥1.00) differs depending on the type of contrast used for the risk factor. In 342 articles (87.9%), ≥1 statistically significant relative risk was reported in the abstract, while only 169 articles (43.4%) reported ≥1 statistically nonsignificant relative risk in the abstract. Reporting of statistically significant results was more common with structured abstracts, and was less common in US-based studies and in cancer outcomes. Among 50 randomly selected articles in which the full text was examined, a median of nine (interquartile range 5–16) statistically significant and six (interquartile range 3–16) statistically nonsignificant relative risks were presented (p = 0.25). Paradoxically, the smallest presented relative risks were based on the contrasts of extreme quintiles; on average, the relative risk magnitude was 1.41-, 1.42-, and 1.36-fold larger in contrasts of extreme quartiles, extreme tertiles, and above-versus-below median values, respectively (p < 0.001).
Conclusions
Published epidemiological investigations almost universally highlight significant associations between risk factors and outcomes. For continuous risk factors, investigators selectively present contrasts between more extreme groups, when relative risks are inherently lower.
An evaluation of published articles reporting epidemiological studies found that they almost universally highlight significant associations between risk factors and outcomes.
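The conclusion that contrasts of more extreme groups inflate the presented relative risk can be illustrated with a short simulation. This is a sketch under assumed conditions, not the paper's analysis: it posits a standard-normal risk factor and a hypothetical log-linear association of RR = 1.20 per standard deviation.

```python
# Illustrate why, for the same underlying association, contrasts of extreme
# quantiles yield larger relative risks than an above-vs-below-median split.
# Assumptions (hypothetical): standard-normal risk factor, log-linear model
# with RR = 1.20 per standard deviation of exposure.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)  # risk factor in SD units
rr_per_sd = 1.20                    # hypothetical dose-response

def exposure_gap(lo, hi):
    """Mean exposure difference between the top and bottom contrast groups."""
    return x[x >= np.quantile(x, hi)].mean() - x[x <= np.quantile(x, lo)].mean()

for name, lo, hi in [("extreme quintiles", 0.20, 0.80),
                     ("extreme quartiles", 0.25, 0.75),
                     ("extreme tertiles", 1 / 3, 2 / 3),
                     ("above vs below median", 0.50, 0.50)]:
    gap = exposure_gap(lo, hi)
    print(f"{name:>22}: gap = {gap:.2f} SD, implied RR = {rr_per_sd ** gap:.2f}")
```

Under these assumptions the exposure gap between extreme quintiles (about 2.8 SD) is nearly twice that of a median split (about 1.6 SD), so the same association is displayed as a visibly larger relative risk, which is why selectively choosing the contrast can flatter weak findings.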
Editors' Summary
Background.
Medical and scientific researchers use statistical tests to try to work out whether their observations—for example, seeing a difference in some characteristic between two groups of people—might have occurred as a result of chance alone. Statistical tests cannot determine this for sure; rather, they can only give a probability that the observations would have arisen by chance. When researchers have many different hypotheses, and carry out many statistical tests on the same set of data, they run the risk of concluding that there are real differences where in fact there are none. At the same time, it has long been known that scientific and medical researchers tend to pick out the findings on which to report in their papers. Findings that are more interesting, impressive, or statistically significant are more likely to be published. This is termed “publication bias” or “selective reporting bias.” Therefore, some people are concerned that the published scientific literature might contain many false-positive findings, i.e., findings that are not true but are simply the result of chance variation in the data. This would have a serious impact on the accuracy of the published scientific literature and would tend to overestimate the strength and direction of the relationships being studied.
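The multiple-testing hazard described here is easy to demonstrate numerically. A minimal simulation with hypothetical group sizes, where no real difference exists anywhere, showing that a p < 0.05 threshold alone yields a steady stream of chance findings:

```python
# Simulate many two-group comparisons on pure noise and count how often a
# t-test is "significant" at p < 0.05 even though no real difference exists.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_tests = 20_000                        # e.g., 1,000 studies doing 20 tests each
a = rng.standard_normal((n_tests, 50))  # group 1: noise only
b = rng.standard_normal((n_tests, 50))  # group 2: noise only

pvalues = ttest_ind(a, b, axis=1).pvalue
print(f"fraction significant at p < 0.05: {(pvalues < 0.05).mean():.3f}")  # ~0.050
# A study running 20 such tests should expect about one spurious "finding".
```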
Why Was This Study Done?
Selective reporting bias has already been studied in detail in the area of randomized trials (studies where participants are randomly allocated to receive an intervention, e.g., a new drug, versus an alternative intervention or “comparator,” in order to understand the benefits or safety of the new intervention). These studies have shown that many trial findings are never published, and that statistically significant findings are more likely to be included in published papers than nonsignificant findings. However, much medical research is carried out that does not use randomized trial methods, either because that method is not useful for answering the question at hand or because it would be unethical. Epidemiological research is often concerned with looking at links between risk factors and the development of disease, and this type of research generally uses observation rather than experiment to uncover connections. The researchers here were concerned that selective reporting bias might be just as much of a problem in epidemiological research as in randomized trials research, and wanted to study this specifically.
What Did the Researchers Do and Find?
In this investigation, searches were carried out of PubMed, a database of biomedical research studies, to extract epidemiological studies that were published between January 2004 and October 2005. The researchers wanted to specifically look at studies reporting the effect of continuous risk factors and their effect on health or disease outcomes (a continuous risk factor is something like age or glucose concentration in the blood, is a number, and can have any value on a sliding scale). Three hundred and eighty-nine original research studies were found, and the researchers pulled out from the abstracts and full text of these papers the relative risks that were reported along with the results of statistical tests for them. (Relative risk is the chance of getting an outcome, say disease, in one group as compared to another group.) The researchers found that nearly 90% of these studies had one or more statistically significant risks reported in the abstract, but only 43% reported one or more risks that were not statistically significant. When looking at all of the findings reported anywhere in the full text for 50 of these studies, the researchers saw that papers overall reported more statistically significant risks than nonsignificant risks. Finally, it seemed that in the set of papers studied here, the way in which statistical analyses were done produced a bias towards more extreme findings: for datasets showing small relative risks, papers were more likely to report a comparison between extreme subsets of the data so as to report larger relative risks.
What Do These Findings Mean?
These findings suggest that there is a tendency among epidemiology researchers to highlight statistically significant findings and to avoid highlighting nonsignificant findings in their research papers. This behavior may be a problem, because many of these significant findings could in future turn out to be “false positives.” At present, registers exist for researchers to describe ongoing clinical trials, and to set out the outcomes that they plan to analyze for those trials. These registers will go some way towards addressing some of the problems described here, but only for clinical trials research. Registers do not yet exist for epidemiological studies, and therefore it is important that researchers and readers are aware of and cautious about the problem of selective reporting in epidemiological research.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040079.
Wikipedia entry on publication bias (note: Wikipedia is an internet encyclopedia that anyone can edit)
The International Committee of Medical Journal Editors gives guidelines for submitting manuscripts to its member journals, and includes comments about registration of ongoing studies and the obligation to publish negative studies
ClinicalTrials.gov and the ISRCTN register are two registries of ongoing clinical trials
doi:10.1371/journal.pmed.0040079
PMCID: PMC1808481  PMID: 17341129
3.  Evidence for the Selective Reporting of Analyses and Discrepancies in Clinical Trials: A Systematic Review of Cohort Studies of Clinical Trials 
PLoS Medicine  2014;11(6):e1001666.
In a systematic review of cohort studies, Kerry Dwan and colleagues examine the evidence for selective reporting and discrepancies in analyses between journal publications and other documents for clinical trials.
Background
Most publications about selective reporting in clinical trials have focussed on outcomes. However, selective reporting of analyses for a given outcome may also affect the validity of findings. If analyses are selected on the basis of the results, reporting bias may occur. The aims of this study were to review and summarise the evidence from empirical cohort studies that assessed discrepant or selective reporting of analyses in randomised controlled trials (RCTs).
Methods and Findings
A systematic review was conducted and included cohort studies that assessed any aspect of the reporting of analyses of RCTs by comparing different trial documents, e.g., protocol compared to trial report, or different sections within a trial publication. The Cochrane Methodology Register, Medline (Ovid), PsycInfo (Ovid), and PubMed were searched on 5 February 2014. Two authors independently selected studies, performed data extraction, and assessed the methodological quality of the eligible studies. Twenty-two studies (containing 3,140 RCTs) published between 2000 and 2013 were included. Twenty-two studies reported on discrepancies between information given in different sources. Discrepancies were found in statistical analyses (eight studies), composite outcomes (one study), the handling of missing data (three studies), unadjusted versus adjusted analyses (three studies), handling of continuous data (three studies), and subgroup analyses (12 studies). Discrepancy rates varied, ranging from 7% (3/42) to 88% (7/8) in statistical analyses, 46% (36/79) to 82% (23/28) in adjusted versus unadjusted analyses, and 61% (11/18) to 100% (25/25) in subgroup analyses. This review is limited in that none of the included studies investigated the evidence for bias resulting from selective reporting of analyses. It was not possible to combine studies to provide overall summary estimates, and so the results of studies are discussed narratively.
Conclusions
Discrepancies in analyses between publications and other study documentation were common, but reasons for these discrepancies were not discussed in the trial reports. To ensure transparency, protocols and statistical analysis plans need to be published, and investigators should adhere to these or explain discrepancies.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
In the past, clinicians relied on their own experience when choosing the best treatment for their patients. Nowadays, they turn to evidence-based medicine—the systematic review and appraisal of trials, studies that investigate the benefits and harms of medical treatments in patients. However, evidence-based medicine can guide clinicians only if all the results from clinical trials are published in an unbiased and timely manner. Unfortunately, the results of trials in which a new drug performs better than existing drugs are more likely to be published than those in which the new drug performs badly or has unwanted side effects (publication bias). Moreover, trial outcomes that support the use of a new treatment are more likely to be published than those that do not support its use (outcome reporting bias). Recent initiatives—such as making registration of clinical trials in a trial registry (for example, ClinicalTrials.gov) a prerequisite for publication in medical journals—aim to prevent these biases, which pose a threat to informed medical decision-making.
Why Was This Study Done?
Selective reporting of analyses of outcomes may also affect the validity of clinical trial findings. Sometimes, for example, a trial publication will include a per protocol analysis (which considers only the outcomes of patients who received their assigned treatment) rather than a pre-planned intention-to-treat analysis (which considers the outcomes of all the patients regardless of whether they received their assigned treatment). If the decision to publish the per protocol analysis is based on the results of this analysis being more favorable than those of the intention-to-treat analysis (which more closely resembles “real” life), then “analysis reporting bias” has occurred. In this systematic review, the researchers investigate the selective reporting of analyses and discrepancies in randomized controlled trials (RCTs) by reviewing published studies that assessed selective reporting of analyses in groups (cohorts) of RCTs and discrepancies in analyses of RCTs between different sources (for example, between the protocol in a trial registry and the journal publication) or different sections of a source. A systematic review uses predefined criteria to identify all the research on a given topic.
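The gap between the two analyses can be demonstrated with simulated data. The sketch below uses entirely hypothetical numbers and a drug with no true effect, and shows how a per protocol analysis can still appear to favor the drug when sicker patients stop taking their assigned treatment:

```python
# Toy illustration: intention-to-treat (ITT) vs. per protocol (PP) analysis.
# Hypothetical setup: the drug has NO true effect, but sicker patients in the
# treatment arm are more likely to stop taking it (non-adherence).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
assigned = rng.integers(0, 2, n)      # 0 = control, 1 = new drug
severity = rng.standard_normal(n)     # prognostic factor, independent of arm
adhered = np.where(assigned == 1, severity < 0.5, True)  # sick patients drop out
recovered = rng.standard_normal(n) > severity            # outcome driven by severity only

itt = recovered[assigned == 1].mean() - recovered[assigned == 0].mean()
pp = (recovered[(assigned == 1) & adhered].mean()
      - recovered[(assigned == 0) & adhered].mean())

print(f"intention-to-treat effect: {itt:+.3f}")  # ~0.000: correctly finds nothing
print(f"per protocol effect:       {pp:+.3f}")   # spuriously favors the drug
```

Because the per protocol analysis silently drops the sickest treated patients, it manufactures an apparent benefit; publishing it in place of the pre-planned intention-to-treat analysis would be exactly the kind of analysis reporting bias this review describes.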
What Did the Researchers Do and Find?
The researchers identified 22 cohort studies (containing 3,140 RCTs) that were eligible for inclusion in their systematic review. All of these studies reported on discrepancies between the information provided by the RCTs in different places, but none investigated the evidence for analysis reporting bias. Several of the cohort studies reported, for example, that there were discrepancies in the statistical analyses included in the different documents associated with the RCTs included in their analysis. Other types of discrepancies reported by the cohort studies included discrepancies in the reporting of composite outcomes (an outcome in which multiple end points are combined) and in the reporting of subgroup analyses (investigations of outcomes in subgroups of patients that should be predefined in the trial protocol to avoid bias). Discrepancy rates varied among the RCTs according to the types of analyses and cohort studies considered. Thus, whereas in one cohort study discrepancies were present in the statistical test used for the analysis of the primary outcome in only 7% of the included studies, they were present in the subgroup analyses of all the included studies.
What Do These Findings Mean?
These findings indicate that discrepancies in analyses between publications and other study documents such as protocols in trial registries are common. The reasons for these discrepancies in analyses were not discussed in trial reports but may be the result of reporting bias, errors, or legitimate departures from a pre-specified protocol. For example, a statistical analysis that is not specified in the trial protocol may sometimes appear in a publication because the journal requested its inclusion as a condition of publication. The researchers suggest that it may be impossible for systematic reviewers to distinguish between these possibilities simply by looking at the source documentation. Instead, they suggest, it may be necessary for reviewers to contact the trial authors. However, to make selective reporting of analyses more easily detectable, they suggest that protocols and analysis plans should be published and that investigators should be required to stick to these plans or explain any discrepancies when they publish their trial results. Together with other initiatives, this approach should help improve the quality of evidence-based medicine and, as a result, the treatment of patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001666.
Wikipedia has pages on evidence-based medicine, on systematic reviews, and on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials
The Cochrane Collaboration is a global independent network of health practitioners, researchers, patient advocates, and others that aims to promote evidence-informed health decision-making by producing high-quality, relevant, accessible systematic reviews and other synthesized research evidence; the Cochrane Handbook for Systematic Reviews of Interventions describes the preparation of systematic reviews in detail
PLOS Medicine recently launched a Reporting Guidelines Collection, an open-access collection of reporting guidelines, commentary, and related research on guidelines from across PLOS journals that aims to help advance the efficiency, effectiveness, and equitability of the dissemination of biomedical information
doi:10.1371/journal.pmed.1001666
PMCID: PMC4068996  PMID: 24959719
4.  The Relationship of Previous Training and Experience of Journal Peer Reviewers to Subsequent Review Quality 
PLoS Medicine  2007;4(1):e40.
Background
Peer review is considered crucial to the selection and publication of quality science, but very little is known about the previous experiences and training that might identify high-quality peer reviewers. The reviewer selection processes of most journals, and thus the qualifications of their reviewers, are ill defined. More objective selection of peer reviewers might improve the journal peer review process and thus the quality of published science.
Methods and Findings
306 experienced reviewers (71% of all those associated with a specialty journal) completed a survey of past training and experiences postulated to improve peer review skills. Reviewers performed 2,856 reviews of 1,484 separate manuscripts during a four-year study period, all prospectively rated on a standardized quality scale by editors. Multivariable analysis revealed that most variables, including academic rank, formal training in critical appraisal or statistics, or status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital versus other teaching environment and relative youth (under ten years of experience after finishing training). Being on an editorial board and doing formal grant (study section) review were each predictors for only one of our two comparisons. However, the predictive power of all variables was weak.
Conclusions
Our study confirms that there are no easily identifiable types of formal training or experience that predict reviewer performance. Skill in scientific peer review may be as ill defined and hard to impart as is “common sense.” Without a better understanding of those skills, it seems unlikely that journals and editors will be successful in systematically improving their selection of reviewers. This inability to predict performance makes it imperative that all but the smallest journals implement review rating systems to routinely monitor the quality of their reviews (and thus the quality of the science they publish).
A survey of experienced reviewers, asked about training they had received in peer review, found there are no easily identifiable types of formal training and experience that predict reviewer performance.
Editors' Summary
Background.
When medical researchers have concluded their research and written it up, the next step is to get it published as an article in a journal, so that the findings can be circulated widely. These published findings help determine subsequent research and clinical use. The editors of reputable journals, including PLoS Medicine, have to decide whether the articles sent to them are of good quality and accurate and whether they will be of interest to the readers of their journal. To do this they need to obtain specialist advice, so they contact experts in the topic of the research article and ask them to write reports. This is the process of scientific peer review, and the experts who write such reports are known as “peer reviewers.” Although the editors make the final decision, the advice and criticism that these peer reviewers give the editors are essential in making decisions on publication, and usually in requiring authors to make changes to their manuscript. The contribution that peer reviewers have made to the article by the time it is finally published may, therefore, be quite considerable.
Although peer review is accepted as a key part of the process for the publishing of medical research, many people have argued that there are flaws in the system. For example, there may be an element of luck involved; one author might find their paper being reviewed by a reviewer who is biased against the approach they have adopted or who is a very critical person by nature, and another author may have the good fortune to have their work considered by someone who is much more favorably disposed toward their work. Some reviewers are more knowledgeable and thorough in their work than others. The editors of medical journals try to take such biases and quality factors into account in their choice of peer reviewers or when assessing the reviews. Some journals have run training courses for experts who review for them regularly to try to make the standard of peer review as high as possible.
Why Was This Study Done?
It is hard for journal editors to know who will make a good peer reviewer, and there is no proven system for choosing them. The authors of this study wanted to identify the previous experiences and training that make up the background of good peer reviewers and compare them with the quality of the reviews provided. This would help journal editors select good people for the task in future, and as a result will affect the quality of science they publish for readers, including other researchers.
What Did the Researchers Do and Find?
The authors contacted all the regular reviewers from one specialist journal (Annals of Emergency Medicine). A total of 306 of these experienced reviewers (71% of all those associated with the journal) completed a survey of past training and experiences that might be expected to improve peer review skills. These reviewers had done 2,856 reviews of 1,484 separate manuscripts during a four-year study period, and during this time the quality of the reviews had been rated by the journal's editors. Surprisingly, most variables, including academic rank, formal training in critical appraisal or statistics, or status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital versus other teaching environment and relative youth (under ten years of experience after finishing training), and even these were only weak predictors.
What Do These Findings Mean?
This study suggests that there are no easily identifiable types of formal training or experience that predict peer reviewer performance, although it is clear that some reviewers (and reviews) are better than others. The authors suggest that it is therefore essential that journals routinely monitor the quality of reviews submitted to them to ensure they are getting good advice (a practice that is not universal).
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040040.
WAME is an association of editors from many countries who seek to foster international cooperation among editors of peer-reviewed medical journals
The Fifth International Congress on Peer Review and Biomedical Publication is one of a series of conferences on peer review
The PLoS Medicine guidelines for reviewers outline what we look for in a review
The Council of Science Editors promotes ethical scientific publishing practices
An editorial also published in this issue of PLoS Medicine discusses the peer review process further
doi:10.1371/journal.pmed.0040040
PMCID: PMC1796627  PMID: 17411314
5.  Conflict of Interest Reporting by Authors Involved in Promotion of Off-Label Drug Use: An Analysis of Journal Disclosures 
PLoS Medicine  2012;9(8):e1001280.
Aaron Kesselheim and colleagues investigate conflict of interest disclosures in articles authored by physicians and scientists identified in whistleblower complaints alleging illegal off-label marketing by pharmaceutical companies.
Background
Litigation documents reveal that pharmaceutical companies have paid physicians to promote off-label uses of their products through a number of different avenues. It is unknown whether physicians and scientists who have such conflicts of interest adequately disclose such relationships in the scientific publications they author.
Methods and Findings
We collected whistleblower complaints alleging illegal off-label marketing from the US Department of Justice and other publicly available sources (date range: 1996–2010). We identified physicians and scientists described in the complaints as having financial relationships with defendant manufacturers, then searched Medline for articles they authored in the subsequent three years. We assessed disclosures made in articles related to the off-label use in question, determined the frequency of adequate disclosure statements, and analyzed characteristics of the authors (specialty, author position) and articles (type, connection to off-label use, journal impact factor, citation count/year). We identified 39 conflicted individuals in whistleblower complaints. They published 404 articles related to the drugs at issue in the whistleblower complaints, only 62 (15%) of which contained an adequate disclosure statement. Most articles had no disclosure (43%) or did not mention the pharmaceutical company (40%). Adequate disclosure rates varied significantly by article type, with commentaries less likely to have adequate disclosure compared to articles reporting original studies or trials (adjusted odds ratio [OR] = 0.10, 95%CI = 0.02–0.67, p = 0.02). Over half of the authors (22/39, 56%) made no adequate disclosures in their articles. However, four of six authors with ≥25 articles disclosed in about one-third of articles (range: 10/36–8/25 [28%–32%]).
Conclusions
One in seven articles authored by individuals identified in whistleblower complaints as involved in off-label marketing activities adequately disclosed the authors' conflict of interest. This is a much lower rate of adequate disclosure than has been identified in previous studies. The non-disclosure patterns suggest shortcomings with authors and with the rigor of journal practices.
Please see later in the article for the Editors' Summary
Editor's Summary
Background
Off-label use of pharmaceuticals is the practice of prescribing a drug for a condition or age group, or in a dose or form of administration, that has not been specifically approved by a formal regulatory body, such as the US Food and Drug Administration (FDA). Off-label prescribing is common all over the world. In the US, although it is legal for doctors to prescribe drugs off-label and discuss such clinical uses with colleagues, it is illegal for pharmaceutical companies to directly promote off-label uses of any of their products. Revenue from off-label uses can be lucrative for drug companies and even surpass the income from approved uses. Therefore, many pharmaceutical companies have paid physicians and scientists to promote off-label use of their products as part of their marketing programs.
Why Was This Study Done?
Recently, a number of pharmaceutical companies have been investigated in the US for illegal marketing programs that promote off-label uses of their products and have had to pay billions of dollars in court settlements. As part of these investigations, doctors and scientists were identified who were paid by the companies to deliver lectures and conduct other activities to support off-label uses. When the same physicians and scientists also wrote articles about these drugs for medical journals, their financial relationships would have constituted clear conflicts of interest that should have been declared alongside the journal articles. So, in this study, the researchers identified such authors, examined their publications, and assessed the adequacy of conflict of interest disclosures made in these publications.
What Did the Researchers Do and Find?
The researchers used disclosed information from the US Department of Justice, media reports, and data from a non-governmental organization that tracks federal fraud actions to find whistleblower complaints alleging illegal off-label promotion. Then they identified the doctors and scientists described in the complaints as having financial relationships with the defendant drug companies and searched Medline for articles authored by these experts in the subsequent three years. Using a four-step approach, the researchers assessed the adequacy of conflict of interest disclosures made in articles relating to the off-label uses in question.
Using these methods, the researchers examined 26 complaints alleging illegal off-label promotion and identified the 91 doctors and scientists recorded as being involved in this practice. The researchers found 39 (43%) of these 91 experts had authored 404 related publications. In the complaints, these 39 experts were alleged to have engaged in 42 relationships with the relevant drug company: the most common activity was acting as a paid speaker (n = 26, 62%) but also writing reviews or articles on behalf of the company (n = 7), acting as consultants or advisory board members (n = 3), and receiving gifts/honoraria (n = 3), research support funds (n = 2), and educational support funds (n = 1). However, the researchers found that only 62 (15%) of the 404 related articles had adequate disclosures—43% (148) had no disclosure at all, 4% had statements denying any conflicts of interest, 40% had disclosures that did not mention the drug manufacturer, and 13% had disclosures that mentioned the manufacturer but inadequately conveyed the nature of the relationship between author and drug manufacturer reported in the complaint. The researchers also found that adequate disclosure rates varied significantly by article type, with commentaries significantly less likely to have adequate disclosure compared to articles reporting studies or trials.
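For readers unfamiliar with the statistic, an odds ratio of this kind compares the odds of adequate disclosure between two article types. The sketch below uses invented counts purely for illustration; the paper's OR of 0.10 is an adjusted estimate from a regression model, which a simple 2×2 table cannot reproduce.

```python
# Crude (unadjusted) odds ratio for adequate disclosure, commentaries vs.
# original research articles. The counts below are hypothetical illustrations.
from scipy.stats.contingency import odds_ratio

#                adequate  inadequate
table = [[ 3,  97],   # commentaries (hypothetical)
         [30,  70]]   # original studies/trials (hypothetical)

res = odds_ratio(table)
ci = res.confidence_interval(confidence_level=0.95)
print(f"crude OR = {res.statistic:.2f} (95% CI {ci.low:.2f}-{ci.high:.2f})")
# An OR well below 1 means commentaries disclose adequately far less often.
```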
What Do These Findings Mean?
These findings show substantial deficiencies in the adequacy of conflict-of-interest disclosures made by authors who had been paid by pharmaceutical manufacturers as part of off-label marketing activities: only about one in seven of their published articles fully disclosed the conflict of interest. This low figure is troubling and suggests that approaches to controlling the effects of conflicts of interest that rely on author candidness are inadequate and, furthermore, that journal practices are not robust enough and need to be improved. In the meantime, readers have no option but to interpret conflict of interest disclosures, particularly in relation to off-label uses, with caution.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001280.
The US FDA provides a guide on the use of off-label drugs
The US Agency for Healthcare Research and Quality offers a patient guide to off-label drugs
ProPublica offers a web-based tool to identify physicians who have financial relationships with certain pharmaceutical companies
Wikipedia has a good description of off-label drug use (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The Institute for Medicine as a Profession maintains a list of policies regulating physicians' financial relationships that are in place at US-based academic medical centers
doi:10.1371/journal.pmed.1001280
PMCID: PMC3413710  PMID: 22899894
6.  Promotional Tone in Reviews of Menopausal Hormone Therapy After the Women's Health Initiative: An Analysis of Published Articles 
PLoS Medicine  2011;8(3):e1000425.
Adriane Fugh-Berman and colleagues analyzed a selection of published opinion pieces on hormone therapy and show that there may be a connection between receiving industry funding for speaking, consulting, or research and the tone of such opinion pieces.
Background
Even after the Women's Health Initiative (WHI) found that the risks of menopausal hormone therapy (hormone therapy) outweighed benefit for asymptomatic women, about half of gynecologists in the United States continued to believe that hormones benefited women's health. The pharmaceutical industry has supported publication of articles in medical journals for marketing purposes. It is unknown whether author relationships with industry affect promotional tone in articles on hormone therapy. The goal of this study was to determine whether promotional tone could be identified in narrative review articles regarding menopausal hormone therapy and whether articles identified as promotional were more likely to have been authored by those with conflicts of interest with manufacturers of menopausal hormone therapy.
Methods and Findings
We analyzed tone in opinion pieces on hormone therapy published in the four years after the estrogen-progestin arm of the WHI was stopped. First, we identified the ten authors with four or more MEDLINE-indexed reviews, editorials, comments, or letters on hormone replacement therapy or menopausal hormone therapy published between July 2002 and June 2006. Next, we conducted an additional search using the names of these authors to identify other relevant articles. Finally, after author names and affiliations were removed, 50 articles were evaluated by three readers for scientific accuracy and for tone. Scientific accuracy was assessed based on whether or not the findings of the WHI were accurately reported using two criteria: (1) Acknowledgment or lack of denial of the risk of breast cancer diagnosis associated with hormone therapy, and (2) acknowledgment that hormone therapy did not benefit cardiovascular disease endpoints. Determination of promotional tone was based on the assessment by each reader of whether the article appeared to promote hormone therapy. Analysis of inter-rater consistency found moderate agreement for scientific accuracy (κ = 0.57) and substantial agreement for promotional tone (κ = 0.65). After discussion, readers found 86% of the articles to be scientifically accurate and 64% to be promotional in tone. Themes that were common in articles considered promotional included attacks on the methodology of the WHI, arguments that clinical trial results should not guide treatment for individuals, and arguments that observational studies are as good as or better than randomized clinical trials for guiding clinical decisions. The promotional articles we identified also implied that the risks associated with hormone therapy have been exaggerated and that the benefits of hormone therapy have been or will be proven. Of the ten authors studied, eight were found to have declared payment for speaking or consulting on behalf of menopausal hormone manufacturers or for research support (seven of these eight were speakers or consultants). Thirty of 32 articles (94%) evaluated as promoting hormone therapy were authored by those with potential financial conflicts of interest, compared to 11 of 18 articles (61%) by those without such conflicts (p = 0.0025). Articles promoting the use of menopausal hormone therapy were 2.41 times (95% confidence interval 1.49–4.93) as likely to have been authored by authors with conflicts of interest as by authors without conflicts of interest. In articles from three authors with conflicts of interest some of the same text was repeated word-for-word in different articles.
Conclusion
There may be a connection between receiving industry funding for speaking, consulting, or research and the publication of promotional opinion pieces on menopausal hormone therapy.
Please see later in the article for the Editors' Summary
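The inter-rater agreement figures in the Methods (κ = 0.57 and κ = 0.65) are kappa statistics, which discount the agreement expected by chance. A minimal sketch with invented ratings, using scikit-learn's Cohen's kappa; note that the paper had three readers, whereas kappa as computed here compares a single pair:

```python
# Cohen's kappa for two readers rating articles as promotional (1) or not (0).
# The ratings below are hypothetical, for illustration only.
from sklearn.metrics import cohen_kappa_score

reader_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
reader_b = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]

kappa = cohen_kappa_score(reader_a, reader_b)
print(f"kappa = {kappa:.2f}")  # ~0.52 here; 0 = chance-level, 1 = perfect
```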
Editors' Summary
Background
Over the past three decades, menopausal hormones have been heavily promoted for preventing disease in women. However, the Women's Health Initiative (WHI) study—which enrolled more than 26,000 women in the US and which was published in 2004—found that estrogen-progestin and estrogen-only formulations (often prescribed to women around the age of menopause) increased the risk of stroke, deep vein thrombosis, dementia, and incontinence. Furthermore, this study found that the estrogen-progestin therapy increased rates of breast cancer. In fact, the estrogen-progestin arm of the WHI study was stopped in 2002 and the estrogen-only arm in 2004, both because of harmful findings. The study also found that neither therapy reduced cardiovascular risk or markedly benefited health-related quality of life measures.
Despite these results, two years after the results of the WHI study were published, a survey of over 700 practicing gynecologists—the specialists who prescribe the majority of menopausal hormone therapies—in the US found that almost half did not find the findings of the WHI study convincing and that 48% disagreed with the decision to stop the trial early. Follow-up surveys found similar results.
Why Was This Study Done?
It is unclear why gynecologists and other physicians continue to prescribe menopausal hormone therapies despite the results of the WHI. Some academics argue that published industry-funded reviews and commentaries may be designed to convey specific, but subtle, marketing messages and several academic analyses have used internal industry documents disclosed in litigation cases. So this study was conducted to investigate whether hormone therapy–promoting tone could be identified in narrative review articles and if so, whether these articles were more likely to have been authored by people who had accepted funding from hormone manufacturers.
What Did the Researchers Do and Find?
The researchers conducted a comprehensive literature search that identified 340 relevant articles published between July 2002 and June 2006—the four years following the cessation of the estrogen-progestin arm of the women's health initiative study. Ten authors had published four to six articles, 47 authored two or three articles, and 371 authored one article each. The researchers focused on authors who had published four or more articles in the four-year period under study and, after author names and affiliations were removed, 50 articles were evaluated by three readers for scientific accuracy and for tone. After individually analyzing a batch of articles, the readers met to provide their initial assessments, to discuss them, and to reach consensus on tone and scientific accuracy. Then after the papers were evaluated, each author was identified and the researchers searched for authors' potential financial conflicts of interest, defined as publicly disclosed information that the authors had received payment for research, speaking, or consulting on behalf of a manufacturer of menopausal hormone therapy.
Common themes in the 50 articles included arguments that clinical trial results should not guide treatment for individuals and suggestions that the risks associated with hormone therapy have been exaggerated and that the benefits of hormone therapy have been or will be proven. Furthermore, of the ten authors studied, eight were found to have received payment for research, speaking, or consulting on behalf of menopause hormone manufacturers, and 30 of 32 articles evaluated as promoting hormone therapy were authored by those with potential financial conflicts of interest. Articles promoting the use of menopausal hormone therapy were more than twice as likely to have been written by authors with conflicts of interest as by authors without conflicts of interest. Furthermore, three authors who were identified as having financial conflicts of interest were authors on articles where sections of their previously published articles were repeated word-for-word without citation.
What Do These Findings Mean?
The findings of this study suggest that there may be a link between receiving industry funding for speaking, consulting, or research and the publication of apparently promotional opinion pieces on menopausal hormone therapy. Furthermore, such publications may encourage physicians to continue prescribing these therapies to women of menopausal age. Therefore, physicians and other health care providers should interpret the content of review articles with caution. In addition, medical journals should follow the International Committee of Medical Journal Editors Uniform Requirements for Manuscripts, which require that all authors submit signed statements of their participation in authorship and full disclosure of any conflicts of interest.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000425.
The US National Heart, Lung, and Blood Institute has more information on the Women's Health Initiative
The US National Institutes of Health provide more information about the effects of menopausal hormone replacement therapy
The Office of Women's Health, U.S. Department of Health and Human Services provides information on menopausal hormone therapy
The International Committee of Medical Journal Editors Uniform Requirements for Manuscripts presents Uniform Requirements for Manuscripts published in biomedical journals
The National Women's Health Network, a consumer advocacy group that takes no industry money, has factsheets and articles about menopausal hormone therapy
PharmedOut, a Georgetown University Medical Center project, has many resources on pharmaceutical marketing practices
doi:10.1371/journal.pmed.1000425
PMCID: PMC3058057  PMID: 21423581
7.  Reporting Bias in Drug Trials Submitted to the Food and Drug Administration: Review of Publication and Presentation 
PLoS Medicine  2008;5(11):e217.
Background
Previous studies of drug trials submitted to regulatory authorities have documented selective reporting of both entire trials and favorable results. The objective of this study is to determine the publication rate of efficacy trials submitted to the Food and Drug Administration (FDA) in approved New Drug Applications (NDAs) and to compare the trial characteristics as reported by the FDA with those reported in publications.
Methods and Findings
This is an observational study of all efficacy trials found in approved NDAs for New Molecular Entities (NMEs) from 2001 to 2002 inclusive and all published clinical trials corresponding to the trials within the NDAs. For each trial included in the NDA, we assessed its publication status, primary outcome(s) reported and their statistical significance, and conclusions. Seventy-eight percent (128/164) of efficacy trials contained in FDA reviews of NDAs were published. In a multivariate model, trials with favorable primary outcomes (OR = 4.7, 95% confidence interval [CI] 1.33–17.1, p = 0.018) and active controls (OR = 3.4, 95% CI 1.02–11.2, p = 0.047) were more likely to be published. Forty-one primary outcomes from the NDAs were omitted from the papers. Papers included 155 outcomes that were in the NDAs, 15 additional outcomes that favored the test drug, and two other neutral or unknown additional outcomes. Excluding outcomes with unknown significance, there were 43 outcomes in the NDAs that did not favor the NDA drug. Of these, 20 (47%) were not included in the papers. The statistical significance of five of the remaining 23 outcomes (22%) changed between the NDA and the paper, with four changing to favor the test drug in the paper (p = 0.38). Excluding unknowns, 99 conclusions were provided in both NDAs and papers, nine conclusions (9%) changed from the FDA review of the NDA to the paper, and all nine did so to favor the test drug (100%, 95% CI 72%–100%, p = 0.0039).
Conclusions
Many trials were still not published 5 y after FDA approval. Discrepancies between the trial information reviewed by the FDA and information found in published trials tended to lead to more favorable presentations of the NDA drugs in the publications. Thus, the information that is readily available in the scientific literature to health care professionals is incomplete and potentially biased.
Lisa Bero and colleagues review the publication status of all efficacy trials carried out in support of new drug approvals from 2001 and 2002, and find that a quarter of trials remain unpublished.
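The most striking number in the abstract, p = 0.0039 for nine out of nine conclusion changes favoring the test drug, is a simple binomial (sign-test) calculation: if each change were equally likely to go either way, all nine landing on the same side would be very unlikely. A sketch reproducing it, with the reported 72%-100% interval reconstructed under the assumption of a one-sided exact bound:

```python
# Reproduce the abstract's binomial p-value for 9/9 conclusion changes
# favoring the test drug under a 50:50 null hypothesis.
from scipy.stats import binomtest

result = binomtest(k=9, n=9, p=0.5, alternative="two-sided")
print(f"p = {result.pvalue:.4f}")  # 0.0039 = 2 * (1/2)**9, as reported

# Assumption: the 72%-100% interval is a one-sided exact (Clopper-Pearson)
# 95% lower bound for a proportion of 9/9, i.e. the p solving p**9 = 0.05.
print(f"lower bound = {0.05 ** (1 / 9):.3f}")  # ~0.717, matching 72%
```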
Editors' Summary
Background.
All health-care professionals want their patients to have the best available clinical care—but how can they identify the optimum drug or intervention? In the past, clinicians used their own experience or advice from colleagues to make treatment decisions. Nowadays, they rely on evidence-based medicine—the systematic review and appraisal of clinical research findings. So, for example, before a new drug is approved for the treatment of a specific disease in the United States and becomes available for doctors to prescribe, the drug's sponsors (usually a pharmaceutical company) must submit a “New Drug Application” (NDA) to the US Food and Drug Administration (FDA). The NDA tells the story of the drug's development from laboratory and animal studies through to clinical trials, including “efficacy” trials in which the efficacy and safety of the new drug and of a standard drug for the disease are compared by giving groups of patients the different drugs and measuring several key (primary) “outcomes.” FDA reviewers use this evidence to decide whether to approve a drug.
Why Was This Study Done?
Although the information in NDAs is publicly available, clinicians and patients usually learn about new drugs from articles published in medical journals after drug approval. Unfortunately, drug sponsors sometimes publish the results only of the trials in which their drug performed well and in which statistical analyses indicate that the drug's improved performance was a real effect rather than a lucky coincidence. Trials in which a drug did not show a “statistically significant benefit” or where the drug was found to have unwanted side effects often remain unpublished. This “publication bias” means that the scientific literature can contain an inaccurate picture of a drug's efficacy and safety relative to other therapies. This may lead to clinicians preferentially prescribing newer, more expensive drugs that are not necessarily better than older drugs. In this study, the researchers test the hypothesis that not all the trial results in NDAs are published in medical journals. They also investigate whether there are any discrepancies between the trial data included in NDAs and in published articles.
What Did the Researchers Do and Find?
The researchers identified all the efficacy trials included in NDAs for totally new drugs that were approved by the FDA in 2001 and 2002 and searched the scientific literature for publications between July 2006 and June 2007 relating to these trials. Only three-quarters of the efficacy trials in the NDAs were published; trials with favorable outcomes were nearly five times as likely to be published as those without favorable outcomes. Although 155 primary outcomes were in both the papers and the NDAs, 41 outcomes were only in the NDAs. Conversely, 17 outcomes were only in the papers; 15 of these favored the test drug. Of the 43 primary outcomes reported in the NDAs that showed no statistically significant benefit for the test drug, only half were included in the papers; for five of the reported primary outcomes, the statistical significance differed between the NDA and the paper and generally favored the test drug in the papers. Finally, nine out of 99 conclusions differed between the NDAs and the papers; each time, the published conclusion favored the test drug.
What Do These Findings Mean?
These findings indicate that the results of many trials of new drugs are not published 5 years after FDA approval of the drug. Furthermore, unexplained discrepancies between the data and conclusions in NDAs and in medical journals are common and tend to paint a more favorable picture of the new drug in the scientific literature than in the NDAs. Overall, these findings suggest that the information on the efficacy of new drugs that is readily available to clinicians and patients through the published scientific literature is incomplete and potentially biased. The recent introduction in the US and elsewhere of mandatory registration of all clinical trials before they start and of mandatory publication in trial registers of the full results of all the predefined primary outcomes should reduce publication bias over the next few years and should allow clinicians and patients to make fully informed treatment decisions.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050217.
This study is further discussed in a PLoS Medicine Perspective by An-Wen Chan
PLoS Medicine recently published a related article by Ida Sim and colleagues: Lee K, Bacchetti P, Sim I (2008) Publication of clinical trials supporting successful new drug applications: A literature analysis. PLoS Med 5: e191. doi:10.1371/journal.pmed.0050191
The Food and Drug Administration provides information about drug approval in the US for consumers and for health-care professionals; detailed information about the process by which drugs are approved is on the Web site of the FDA Center for Drug Evaluation and Research (in English and Spanish)
NDAs for approved drugs can also be found on this Web site
The ClinicalTrials.gov Web site provides information about the US National Institutes of Health clinical trial registry, background information about clinical trials, and a fact sheet detailing the requirements of the FDA Amendments Act 2007 for trial registration
The World Health Organization's International Clinical Trials Registry Platform is working toward setting international norms and standards for the reporting of clinical trials (in several languages)
doi:10.1371/journal.pmed.0050217
PMCID: PMC2586350  PMID: 19067477
8.  Plagiarism in Scientific Research and Publications and How to Prevent It 
Materia Socio-Medica  2014;26(2):141-146.
Quality is assessed on the basis of adequate evidence, and the best research results are achieved through scientific knowledge. Information contained in a scientific work must always be based on scientific evidence. Guidelines for genuine scientific research should be designed on the basis of real results. Dynamic research and the correct methods of scientific work must originate from everyday practice and from the fundamentals of the research. Original work should draw on proper data sources, with clearly defined research goals and methods of operation acceptable for the questions included in the study. When selecting the methods, it is necessary to obtain the consent of the patients/respondents who provide data for the project, the so-called informed consent. Only through one's own efforts can true results be reached, from which conclusions can be drawn and a valid scholarly commentary given. Text may be copied from other sources, in whole or in part, only if it is clearly marked as the result of other studies. High-quality scientific work requires expertise and relevant scientific literature, mostly taken from publications stored in biomedical databases: scientific, professional, and review articles and case reports from physicians' practices; knowledge can also be acquired from scientific and expert lectures by renowned scientists. The form of the published text must meet the standards for writing a paper. If an article has already been published in a scientific journal, the same article cannot be published in another journal with a few minor adjustments, or without specifying which parts of the first article are used in the second. Copyright infringement occurs when the author of a new article, with or without mentioning the author, uses a substantial portion of a previously published article, including past contributions to the first article. With the permission of the publisher and the author, another journal can re-publish an article that has already been published; this is not plagiarism, because the journal states that the article was re-published with the permission of the journal in which it was first released. There can be only one original; a copy is a copy, and plagiarism is a stolen copy. The aim of combating plagiarism is to improve quality, to achieve satisfactory results, and to compare the results of one's own research with those of others rather than copying data from the results of other people's research. Copying leads to incorrect results. Nowadays the problem of plagiarism has become widespread, present in almost all spheres of human activity, particularly in science.
Scientific institutions and universities should have a center for the surveillance, security, promotion, and development of quality research. Establishing rules of good practice and respecting them are obligations of every research institution, university, and individual researcher, regardless of the area of science being investigated. There are misunderstandings and doubts about the criteria and standards for when and how to declare someone a plagiarist. The European Association of Science Editors (EASE), the World Association of Medical Editors (WAME), and COPE (the Committee on Publication Ethics) are working on precise definitions of which institution or scientific committee may impose sanctions when plagiarism is proven, and on familiarizing authors with the types of sanctions. The current practice is to inform the editors about discovered plagiarism: the articles are withdrawn from the database, and the authors are put on a so-called black list. So far this is the only way of preventing plagiarism, because there are no other sanctions.
doi:10.5455/msm.2014.26.141-146
PMCID: PMC4035147  PMID: 24944543
scientific research; ethics; citing; plagiarism
9.  The Toxic Effects of Cigarette Additives. Philip Morris' Project Mix Reconsidered: An Analysis of Documents Released through Litigation 
PLoS Medicine  2011;8(12):e1001145.
Stanton Glantz and colleagues analyzed previously secret tobacco industry documents and peer-reviewed published results of Philip Morris' Project MIX about research on cigarette additives, and show that this research on the use of cigarette additives cannot be taken at face value.
Background
In 2009, the promulgation of US Food and Drug Administration (FDA) tobacco regulation focused attention on cigarette flavor additives. The tobacco industry had prepared for this eventuality by initiating a research program focusing on additive toxicity. The objective of this study was to analyze Philip Morris' Project MIX as a case study of tobacco industry scientific research being positioned strategically to prevent anticipated tobacco control regulations.
Methods and Findings
We analyzed previously secret tobacco industry documents to identify internal strategies for research on cigarette additives and reanalyzed tobacco industry peer-reviewed published results of this research. We focused on the key group of studies conducted by Philip Morris in a coordinated effort known as “Project MIX.” Documents showed that Project MIX subsumed the study of various combinations of 333 cigarette additives. In addition to multiple internal reports, this work also led to four peer-reviewed publications (published in 2001). These papers concluded that there was no evidence of substantial toxicity attributable to the cigarette additives studied. Internal documents revealed post hoc changes in analytical protocols after initial statistical findings indicated an additive-associated increase in cigarette toxicity as well as increased total particulate matter (TPM) concentrations in additive-modified cigarette smoke. By expressing the data adjusted by TPM concentration, the published papers obscured this underlying toxicity and particulate increase. The animal toxicology results were based on a small number of rats in each experiment, raising the possibility that the failure to detect statistically significant changes in the end points was due to underpowering the experiments rather than lack of a real effect.
Conclusion
The case study of Project MIX shows tobacco industry scientific research on the use of cigarette additives cannot be taken at face value. The results demonstrate that toxins in cigarette smoke increase substantially when additives are put in cigarettes, including the level of TPM. In particular, regulatory authorities, including the FDA and similar agencies elsewhere, could use the Project MIX data to eliminate the use of these 333 additives (including menthol) from cigarettes.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
The tobacco industry in the United States recognized as early as 1963 that regulation of its products was inevitable, devoted increasing attention in the mid-1990s to the likelihood of regulation by the US Food and Drug Administration, and saw that regulation finally become law in 2009. In addition, the World Health Organization (WHO) Framework Convention on Tobacco Control (WHO FCTC), which came into force in June 2003, includes provisions addressing the regulation of the contents of tobacco products and the regulation of tobacco product disclosures. Although these steps represent progress in tobacco control, the events of the past few decades show the determination of the tobacco industry to avoid regulation, including the regulation of additives. In the United States, executives of the tobacco company Philip Morris (PM) recognized the inevitability of regulation and responded by initiating efforts to shape legislation and regulation by reorganizing the company's internal scientific activities and conducting scientific research that could be used to shape any proposed regulations. For example, the company conducted “Project MIX,” a study of chemical constituents in and toxicity of smoke produced by burning cigarettes containing three different combinations of 333 cigarette additives that “were constructed to resemble typical commercial blended cigarettes.” The resulting four papers, published in Food and Chemical Toxicology in January 2002, concluded that there was no evidence of substantial toxicity attributable to the cigarette additives studied.
Why Was This Study Done?
The use of cigarette additives is an important concern of the WHO, FDA, and similar national regulatory bodies around the world. Philip Morris has used the published Project MIX papers to assert the safety of individual additives and other cigarette companies have done similar studies that reached similar conclusions. In this study, the researchers used documents made public as a result of litigation against the tobacco industry to investigate the origins and design of Project MIX and to conduct their own analyses of the results to assess the reliability of the conclusions in the papers published in Food and Chemical Toxicology.
What Did the Researchers Do and Find?
The researchers systematically examined tobacco industry documents in the University of California San Francisco Legacy Tobacco Documents Library (then about 60 million pages made publicly available as a result of litigation) and used an iterative process of searching, analyzing, and refining to identify and review in detail 500 relevant documents.
The researchers found that in the original Project MIX analysis, the published papers obscured findings of toxicity by adjusting the data by total particulate matter (TPM) concentration. When the researchers conducted their own analysis by studying additives per cigarette (as was specified in the original Project MIX protocol), they found that 15 carcinogenic chemicals increased by 20%. The researchers also reported that, for unexplained reasons, Philip Morris deemphasized 19 of the 51 chemicals tested in the presentation of results, including nine that were substantially increased in smoke on a per cigarette basis of additive-added cigarettes, compared to smoke of control cigarettes.
The researchers explored the possibility that the failure of Project MIX to detect statistically significant changes in the toxicity of the smoke from cigarettes containing the additives was due to underpowered experiments rather than lack of a real effect by conducting their own statistical analysis. This analysis suggests that a better powered study would have detected a much broader range of biological effects associated with the additives than was identified in Philip Morris' published paper, suggesting that it substantially underestimated the toxic potential of cigarette smoke and additives.
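The reasoning behind such a power check can be illustrated with a short calculation. The sketch below is not the authors' analysis; it assumes a hypothetical standardized effect size and uses statsmodels to show how the probability of detecting a real effect collapses when only a handful of animals are used per group.

    # A minimal sketch (hypothetical effect size, not the Project MIX data)
    # showing how small group sizes leave an experiment underpowered.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    effect_size = 0.8  # assumed standardized difference between exposure groups

    for n_per_group in (5, 10, 40):
        power = analysis.solve_power(effect_size=effect_size, nobs1=n_per_group,
                                     alpha=0.05, ratio=1.0)
        print(f"n = {n_per_group:2d} animals/group -> power = {power:.2f}")

Under these assumptions, five animals per group would detect a true effect only about one time in five, which is the sense in which a negative result from a small experiment says little.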
The researchers also found that Food and Chemical Toxicology, the journal in which the four Project MIX papers were published, had an editor and 11 members of its International Editorial Board with documented links to the tobacco industry. Edward Carmines, the scientist who led Project MIX, described the process of publication as “an inside job.”
What Do These Findings Mean?
These findings show that tobacco industry scientific research on the use of cigarette additives cannot be taken at face value: the results demonstrate that toxins in cigarette smoke increase substantially when additives are put in cigarettes. In addition, better powered studies would probably have detected a much broader range of adverse biological effects associated with the additives than those identified in PM's published papers, suggesting that the published papers substantially underestimate the toxic potential of the combination of cigarette smoke and additives.
Regulatory authorities, including the FDA and similar agencies elsewhere that are implementing the WHO FCTC, should conduct their own independent analyses of the Project MIX data, which, analyzed correctly, could provide a strong evidence base for the elimination of the use of the studied additives (including menthol) in cigarettes on public health grounds.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001145.
For PLoS Medicine's own policy on publishing papers sponsored by the tobacco industry see http://www.plosmedicine.org/static/policies.action#funders
The World Health Organization (WHO) provides information on the Framework Convention on Tobacco Control (FCTC)
The documents that the researchers reviewed in this paper can be found at the Legacy Tobacco Documents Library
doi:10.1371/journal.pmed.1001145
PMCID: PMC3243707  PMID: 22205885
10.  Differences in Reporting of Analyses in Internal Company Documents Versus Published Trial Reports: Comparisons in Industry-Sponsored Trials in Off-Label Uses of Gabapentin 
PLoS Medicine  2013;10(1):e1001378.
Using documents obtained through litigation, S. Swaroop Vedula and colleagues compared internal company documents regarding industry-sponsored trials of off-label uses of gabapentin with the published trial reports and find discrepancies in reporting of analyses.
Background
Details about the type of analysis (e.g., intent to treat [ITT]) and definitions (i.e., criteria for including participants in the analysis) are necessary for interpreting a clinical trial's findings. Our objective was to compare the description of types of analyses and criteria for including participants in the publication (i.e., what was reported) with descriptions in the corresponding internal company documents (i.e., what was planned and what was done). Trials were for off-label uses of gabapentin sponsored by Pfizer and Parke-Davis, and documents were obtained through litigation.
Methods and Findings
For each trial, we compared internal company documents (protocols, statistical analysis plans, and research reports, all unpublished) with the publications. One author extracted data and another verified them, with a third person verifying discordant items and a sample of the rest. Extracted data included the number of participants randomized and analyzed for efficacy, and the types of analyses for efficacy and safety and their definitions (i.e., criteria for including participants in each type of analysis). We identified 21 trials; 11 of these were published randomized controlled trials that provided the documents needed for the planned comparisons. For three trials, the research report and the publication disagreed on the number of randomized participants. Seven types of efficacy analyses were described in the protocols, statistical analysis plans, and publications, including ITT and six others. The protocols or publications described ITT using six different definitions, resulting in frequent disagreements between the two documents (i.e., different numbers of participants were included in the analyses).
Conclusions
Descriptions of analyses conducted did not agree between internal company documents and what was publicly reported. Internal company documents provide extensive documentation of methods planned and used, and trial findings, and should be publicly accessible. Reporting standards for randomized controlled trials should recommend transparent descriptions and definitions of analyses performed and which study participants are excluded.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
To be credible, published research must present an unbiased, transparent, and accurate description of the study methods and findings so that readers can assess all relevant information to make informed decisions about the impact of any conclusions. Therefore, research publications should conform to universally adopted guidelines and checklists. Studies to establish whether a treatment is effective, termed randomized controlled trials (RCTs), are checked against a comprehensive set of guidelines: the robustness of trial protocols is assessed through the Standard Protocol Items for Randomized Trials (SPIRIT), and the Consolidated Standards of Reporting Trials (CONSORT) statement (constructed and agreed by a meeting of journal editors in 1996, and updated over the years) includes a 25-point checklist that covers all of the key points in reporting RCTs.
Why Was This Study Done?
Although the CONSORT statement has helped improve transparency in the reporting of the methods and findings from RCTs, the statement does not define how certain types of analyses should be conducted or which patients should be included in the analyses, for example, in an intention-to-treat analysis (in which all participants are included in the data analysis of the group to which they were assigned, whether or not they completed the intervention given to that group). So in this study, the researchers used internal company documents, released in the course of litigation against the pharmaceutical company Pfizer regarding the drug gabapentin, to compare the internal and published reports with respect to the reported numbers of participants, the description of the types of analyses, and the definitions of each type of analysis. The reports involved studies of gabapentin used for medical reasons not approved for marketing by the US Food and Drug Administration, known as “off-label” uses.
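To make the distinction concrete, here is a toy illustration (invented data, not from the gabapentin trials) of how an intention-to-treat summary can differ from a completers-only (per-protocol) summary; how "included in the analysis" is defined changes who is counted.

    # A minimal sketch with invented data: ITT keeps everyone as randomized,
    # while a per-protocol analysis keeps only those who completed treatment.
    import pandas as pd

    df = pd.DataFrame({
        "assigned":  ["drug", "drug", "drug", "placebo", "placebo", "placebo"],
        "completed": [True,   False,  True,   True,      True,      False],
        "improved":  [1,      0,      1,      0,         1,         0],
    })

    itt = df.groupby("assigned")["improved"].mean()            # all randomized
    per_protocol = (df[df["completed"]]
                    .groupby("assigned")["improved"].mean())   # completers only
    print("ITT:\n", itt)
    print("Per-protocol:\n", per_protocol)

In this toy example the drug's apparent response rate rises from 67% under ITT to 100% under the per-protocol count, which is why an undisclosed change of definition can matter.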
What Did the Researchers Do and Find?
The researchers identified trials sponsored by Pfizer relating to four off-label uses of gabapentin and examined the internal company protocols, statistical analysis plans, research reports, and the main publications related to each trial. The researchers then compared the numbers of participants randomized and analyzed for the main (primary) outcome and the type of analysis for efficacy and safety in both the internal research report and the trial publication. The researchers identified 21 trials, 11 of which were published RCTs that had the associated documents necessary for comparison.
The researchers found that in three out of ten trials there were differences between the internal research report and the main publication in the number of randomized participants. Furthermore, in six out of ten trials, the researchers were unable to compare the internal research report with the main publication for the number of participants analyzed for efficacy, because the research report either did not describe the primary outcome or did not describe the type of analysis. Overall, the researchers found that seven different types of efficacy analyses were described in the protocols, statistical analysis plans, and publications, including intention-to-treat analysis. However, the protocols and publications used six different descriptions for the intention-to-treat analysis, resulting in several important differences between the internal and published documents in the number of patients included in the analysis.
What Do These Findings Mean?
These findings from a sample of industry-sponsored trials on the off-label use of gabapentin suggest that when compared to the internal research reports, the trial publications did not always accurately reflect what was actually done in the trial. Therefore, the trial publication could not be considered to be an accurate and transparent record of the numbers of participants randomized and analyzed for efficacy. These findings support the need for further revisions of the CONSORT statement, such as including explicit statements about the criteria used to define each type of analysis and the numbers of participants excluded from each type of analysis. Further guidance is also needed to ensure consistent terminology for types of analysis. Of course, these revisions will improve reporting only if authors and journals adhere to them. These findings also highlight the need for all individual patient data to be made accessible to readers of the published article.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001378.
For more information, see the CONSORT statement website
The EQUATOR Network website is a resource center for the good reporting of health research studies and has more information about the SPIRIT initiative and the CONSORT statement
doi:10.1371/journal.pmed.1001378
PMCID: PMC3558476  PMID: 23382656
11.  The Importance of Proper Citation of References in Biomedical Articles 
Acta Informatica Medica  2013;21(3):148-155.
In scientific writing, a reference is the information necessary for the reader to identify and find the sources used. The basic rule when listing sources is that references must be accurate and complete and should be consistently applied. Quoting, on the other hand, is the verbatim written or verbal repetition of parts of a text or words written by others, which can be checked against the original. The authors of every new scientific article need to explain how their study or research fits with previous work in the same or similar fields. A typical article in the health sciences refers to approximately 20-30 other articles published in peer-reviewed journals, each cited once or hundreds of times. Citations typically appear in two formats: a) as in-text citations, where the sources of information are briefly identified in the text; or b) in the reference list at the end of the publication (book chapter, manuscript, article, etc.), which provides full bibliographic information for each source.
A group of publishers met in Vancouver in 1978 and decided to prescribe uniform technical requirements for publication. These were adopted in 1979 by the National Library of Medicine in Bethesda and then by the International Committee of Medical Journal Editors (ICMJE), whose 1982 revision entered official use in 300 international biomedical journals. Authors writing articles for biomedical publications predominantly use the following citation styles: Vancouver style, Harvard style, PubMed style, ICMJE, APA, etc. The paper gives examples of all of these citation styles in order to facilitate their application by authors. The paper also reviews the problem of plagiarism, which is becoming more common in the writing of scientific and technical articles in biomedicine.
doi:10.5455/aim.2013.21.148-155
PMCID: PMC3804522  PMID: 24167381
citing and quoting references; scientometrics; plagiarism.
12.  The Effect of Alternative Graphical Displays Used to Present the Benefits of Antibiotics for Sore Throat on Decisions about Whether to Seek Treatment: A Randomized Trial 
PLoS Medicine  2009;6(8):e1000140.
In a randomized trial, Cheryl Carling and colleagues evaluate how people respond to different statistical presentations regarding the consequences of taking antibiotic treatment for sore throat.
Background
We conducted an Internet-based randomized trial comparing four graphical displays of the benefits of antibiotics for people with sore throat who must decide whether to go to the doctor to seek treatment. Our objective was to determine which display resulted in choices most consistent with participants' values.
Methods and Findings
This was the first of a series of televised trials undertaken in cooperation with the Norwegian Broadcasting Company. We recruited adult volunteers in Norway through a nationally televised weekly health program. Participants went to our Web site and rated the relative importance of the consequences of treatment using visual analogue scales (VAS). They viewed the graphical display (or no information) to which they were randomized and were asked to decide whether to go to the doctor for an antibiotic prescription. We compared four presentations: face icons (happy/sad) or a bar graph showing the proportion of people with symptoms on day three with and without treatment, a bar graph of the average duration of symptoms, and a bar graph of the proportion with symptoms on both days three and seven. Before completing the study, all participants were shown all the displays and detailed patient information about the treatment of sore throat and were asked to decide again. We calculated a relative importance score (RIS) by subtracting the VAS scores for the undesirable consequences of antibiotics from the VAS score for the benefit of symptom relief. We used logistic regression to determine the association between participants' RIS and their choice. 1,760 participants completed the study. There were statistically significant differences in the likelihood of choosing to go to the doctor in relation to different values (RIS). Of the four presentations, the bar graph of duration of symptoms resulted in decisions that were most consistent with the more fully informed second decision. Most participants also preferred this presentation (38%) and found it easiest to understand (37%). Participants shown the other three presentations were more likely to decide to go to the doctor on their first decision than all participants were on the second, fully informed decision. Participants preferred the graph using faces the least (14.4%).
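As a rough sketch of the analysis described above (simulated ratings, not the trial data), the RIS can be computed as the benefit rating minus the undesirable-consequence ratings, and a logistic regression then relates RIS to the treatment choice:

    # A minimal sketch with simulated VAS ratings, not the HIPPO 3 data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    vas_benefit = rng.uniform(0, 100, n)       # importance of symptom relief
    vas_harms = rng.uniform(0, 100, (n, 3))    # three undesirable consequences

    # Relative importance score: benefit minus the undesirable consequences
    ris = vas_benefit - vas_harms.sum(axis=1)

    # Simulated choices in which a higher RIS favours seeking antibiotics
    choice = (rng.uniform(size=n) < 1 / (1 + np.exp(-ris / 50))).astype(int)

    # Logistic regression of choice on RIS, as in the analysis described above
    result = sm.Logit(choice, sm.add_constant(ris)).fit(disp=0)
    print(result.params)  # a positive RIS coefficient: values predict choice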
Conclusions
For decisions about going to the doctor to get antibiotics for sore throat, treatment effects presented by a bar graph showing the duration of symptoms helped people make decisions more consistent with their values than treatment effects presented as graphical displays of proportions of people with sore throat following treatment.
Clinical Trials Registration
ISRCTN58507086
Please see later in the article for the Editors' Summary
Editors' Summary
Background
In the past, patients usually believed that their doctor knew what was best for them and that they had little say in deciding what treatment they would receive. But many modern interventions have complex trade-offs. Patients' opinions about the relative desirability of the possible outcomes of health care interventions depend on their lifestyle and expectations, and these “values” need to be considered when making decisions about medical treatments. Consequently, shared decision-making is increasingly superseding the traditional, paternalistic approach to medical decision-making. In shared decision-making, health care professionals talk to their patients about the risks and benefits of the various treatment options, and patients tell the health care professionals what they expect and/or require from their treatment.
Why Was This Study Done?
Shared decision-making can only succeed if patients know about the treatment options that are available for their medical condition and understand the consequences of each option. But how does the presentation of information about treatment options to patients affect their decisions? In 2002, a series of internet-based randomized trials (studies in which participants are randomly allocated to different “treatment” groups) called the Health Information Project: Presentation Online (HIPPO) was initiated to answer this question. Here, the researchers describe HIPPO 3, a trial that investigates how alternative graphical displays of the benefits of antibiotics for the treatment of sore throat affect whether people decide to seek treatment. In particular, the researchers ask which display results in people making a treatment decision most consistent with their values, i.e., in terms of the relative importance to them of the treatment's desirable and undesirable outcomes.
What Did the Researchers Do and Find?
Adult Norwegians recruited through a television health program numerically rated the importance of symptom relief and of several negative consequences (for example, side effects) of antibiotic treatment for sore throat on the trial's Web site. Relative importance scores (which indicate the participants' values) were calculated for each participant by subtracting their ratings for the importance of the negative consequences of seeking antibiotic treatment from his or her rating for the importance of symptom relief. The participants were then asked to decide whether to visit a doctor for antibiotics without receiving any further information or after being shown one of four graphical displays illustrating the benefits of antibiotic treatment. Two bar charts and one display of happy- and sad-face icons showed the proportion of people with symptoms at specific times after sore throat onset with and without treatment. A third bar chart indicated symptom duration with and without antibiotics. Finally, all the participants were shown all the displays and other information about sore throat and were asked to decide again about seeking treatment. The researchers found a clear association between the participants' values and the likelihood of their deciding to go to the doctor, and this likelihood depended on which graphical display the participants saw. People shown information on the proportion of patients with symptoms were more likely to decide to visit a doctor than those shown information on symptom duration. Furthermore, first decisions reached after being given information on symptom duration or no information were more consistent with the fully informed second decision than first decisions reached after seeing the other displays.
What Do These Findings Mean?
These findings suggest that, for people considering whether to seek antibiotic treatment for sore throat, a bar graph showing the duration of symptoms is more likely to help them make a decision that is consistent with their own values than a bar chart showing the proportions of people with sore throat following treatment. The researchers also found that the bar chart showing symptom duration was preferred by more of the participants than any of the other representations. Whether these results can be applied to other health care decisions or in other settings is not known. However, the researchers suggest that these findings may be most relevant to treatments that, like antibiotic treatment of sore throat, have a short-lived benefit and relatively important downsides.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000140.
A PLoS Medicine Editorial discusses this trial and the results of another HIPPO trial that are presented in a separate PLoS Medicine Research Article by Carling et al.; details of a pilot HIPPO trial are also available
The Foundation for Informed Medical Decision Making (a US-based nonprofit organization) provides information on many aspects of medical decision making
The Dartmouth-Hitchcock Medical Center provides information to help people make health care decisions through its Center for Shared Decision Making
The Ottawa Hospital Research Institute provides information on patient decision aids, including an inventory of decision aids available on the Web (in English and French)
MedlinePlus provides links to information and advice about sore throat (in English and Spanish)
doi:10.1371/journal.pmed.1000140
PMCID: PMC2726763  PMID: 19707579
13.  Towards evidence‐based medicine for paediatricians 
To give the best care to patients and families, paediatricians need to integrate the highest‐quality scientific evidence with clinical expertise and the opinions of the family.1 Archimedes seeks to assist practising clinicians by providing “evidence‐based” answers to common questions which are not at the forefront of research but are at the core of practice. In doing this, we are adapting a format that has been successfully developed by Kevin Mackway‐Jones and the group at the Emergency Medicine Journal—“BestBets”.
A word of warning. The topic summaries are not systematic reviews, although they are as exhaustive as a practising clinician can produce. They make no attempt to statistically aggregate the data, nor to search the grey, unpublished literature. What Archimedes offers are practical, best evidence‐based answers to practical, clinical questions.
The format of Archimedes may be familiar. A description of the clinical setting is followed by a structured clinical question. (These aid in focusing the mind, assisting searching2 and gaining answers.3) A brief report of the search used follows—this has been carried out in a hierarchical way, to search for the best‐quality evidence to answer the question (http://www.cebm.net/levels_of_evidence.asp). A table provides a summary of the evidence and key points of the critical appraisal. For further information on critical appraisal and the measures of effect (such as number needed to treat), books by Sackett et al4 and Moyer et al5 may help. To pull the information together, a commentary is provided. But to make it all much more accessible, a box provides the clinical bottom lines.
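For readers unfamiliar with number needed to treat, a one-line worked example (with assumed event rates) shows the arithmetic:

    # A worked example with assumed event rates: NNT = 1 / absolute risk reduction.
    control_event_rate = 0.20   # assumed risk of the outcome without treatment
    treated_event_rate = 0.12   # assumed risk with treatment
    arr = control_event_rate - treated_event_rate
    nnt = 1 / arr
    print(f"ARR = {arr:.2f}, NNT = {nnt:.1f}")  # ~12.5, i.e., treat 13 patients

That is, under these assumed rates, roughly 13 patients must be treated to prevent one additional adverse outcome.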
Electronic‐only topics that have been published on the BestBets site (www.bestbets.org) and may be of interest to paediatricians include:
Are meningeal irritation signs reliable in diagnosing meningitis in children?
Is immobilisation effective in Osgood‐Schlatter's disease?
Do all children presenting to the emergency department with a needlestick injury require PEP for HIV to reduce HIV transmission?
Readers wishing to submit their own questions—with best evidence answers—are encouraged to review those already proposed at www.bestbets.org. If your question still has not been answered, feel free to submit your summary according to the Instructions for Authors at www.archdischild.com. Three topics are covered in this issue of the journal.
Is lumbar puncture necessary for evaluation of early neonatal sepsis?
Does the use of calamine or antihistamine provide symptomatic relief from pruritus in children with varicella zoster infection?
Is supplementary iron useful when preterm infants are treated with erythropoietin?
Is more research needed?
“More research is needed” is a phrase you might have read before. But is more research really needed? Two situations are offered to us in Archimedes this month where clinical questions are, as yet, unanswered. Is iron supplementation really necessary for premature infants treated with erythropoietin, and do antihistamines and calamine lotion help in children with chickenpox? How can we decide if these questions really do “need” research? It may be worth thinking about how likely benefits and harms may be, how important these outcomes are and, finally, how much you would consider reasonable to pay for the answer. For example, what chance is there that antihistamines work in chickenpox? What is the chance that side effects will occur? What is the relative severity of side effects versus the delight of being itch free? If we pay for research and spend hours and hours of time pressing through the increasing regulatory frameworks for clinical trials to define the answer to this question, what will be the opportunity cost? What would we fail to do by looking at this? The same questions can be asked of iron supplementation in premature infants, the salvage treatment of relapsing systemic histiocytosis or the promotion of car‐seat use in low‐income families. Such value judgements are important; they will have different answers from different perspectives; they will be subject to political influences from pressure groups; being aware of them might stop us from frequently expounding “more research is needed”.
References
1Moyer VA, Elliott EJ. Preface. In: Moyer VA, Elliott EJ, Davis RL, et al, eds. Evidence based pediatrics and child health, Issue 1. London: BMJ Books, 2000.
2Richardson WS, Wilson MC, Nishikawa J, et al. The well‐built clinical question: a key to evidence‐based decisions. ACP J Club 1995;123:A12–13.
3Bergus GR, Randall CS, Sinift SD, et al. Does the structure of clinical questions affect the outcome of curbside consultations with specialty colleagues? Arch Fam Med 2000;9:541–7.
4Sackett DL, Straus S, Richardson WS, et al. Evidence‐based medicine. How to practice and teach EBM. San Diego: Harcourt‐Brace, 2000.
5Moyer VA, Elliott EJ, Davis RL, et al, eds. Evidence based pediatrics and child health, Issue 1. London: BMJ Books, 2000.
doi:10.1136/adc.2006.105379
PMCID: PMC2083019
14.  Insights into the Management of Emerging Infections: Regulating Variant Creutzfeldt-Jakob Disease Transfusion Risk in the UK and the US 
PLoS Medicine  2006;3(10):e342.
Background
Variant Creutzfeldt-Jakob disease (vCJD) is a human prion disease caused by infection with the agent of bovine spongiform encephalopathy. After the recognition of vCJD in the UK in 1996, many nations implemented policies intended to reduce the hypothetical risk of transfusion transmission of vCJD. This was despite the fact that no cases of transfusion transmission had yet been identified. In December 2003, however, the first case of vCJD in a recipient of blood from a vCJD-infected donor was announced. The aim of this study is to ascertain and compare the factors that influenced the motivation for and the design of regulations to prevent transfusion transmission of vCJD in the UK and US prior to the recognition of this case.
Methods and Findings
A document search was conducted to identify US and UK governmental policy statements and guidance, transcripts (or minutes when transcripts were not available) of scientific advisory committee meetings, research articles, and editorials published in medical and scientific journals on the topic of vCJD and blood transfusion transmission between March 1996 and December 2003. In addition, 40 interviews were conducted with individuals familiar with the decision-making process and/or the science involved. All documents and transcripts were coded and analyzed according to the methods and principles of grounded theory. Data showed that while resulting policies were based on the available science, social and historical factors played a major role in the motivation for and the design of regulations to protect against transfusion transmission of vCJD. First, recent experience with and collective guilt resulting from the transfusion-transmitted epidemics of HIV/AIDS in both countries served as a major, historically specific impetus for such policies. This history was brought to bear both by hemophilia activists and those charged with regulating blood products in the US and UK. Second, local specificities, such as the recall of blood products for possible vCJD contamination in the UK, contributed to a greater sense of urgency and a speedier implementation of regulations in that country. Third, while the results of scientific studies played a prominent role in the construction of regulations in both nations, this role was shaped by existing social and professional networks. In the UK, early focus on a European study implicating B-lymphocytes as the carrier of prion infectivity in blood led to the introduction of a policy that requires universal leukoreduction of blood components. In the US, early focus on an American study highlighting the ability of plasma to serve as a reservoir of prion infectivity led the FDA and its advisory panel to eschew similar measures.
Conclusions
The results of this study yield three important theoretical insights that pertain to the global management of emerging infectious diseases. First, because the perception and management of disease may be shaped by previous experience with disease, especially catastrophic experience, there is always the possibility for over-management of some possible routes of transmission and relative neglect of others. Second, local specificities within a given nation may influence the temporality of decision making, which in turn may influence the choice of disease management policies. Third, a preference for science-based risk management among nations will not necessarily lead to homogeneous policies. This is because the exposure to and interpretation of scientific results depends on the existing social and professional networks within a given nation. Together, these theoretical insights provide a framework for analyzing and anticipating potential conflicts in the international management of emerging infectious diseases. In addition, this study illustrates the utility of qualitative methods in investigating research questions that are difficult to assess through quantitative means.
A qualitative study of US and UK governmental policy statements on the topic of vCJD and blood transfusion transmission identified factors responsible for differences in the policies adopted.
Editors' Summary
Background.
In 1996 in the UK, a new type of human prion disease was seen for the first time. This is now known as variant Creutzfeldt-Jakob disease (vCJD). Prion diseases are rare brain diseases passed from individual to individual (or between animals) by a particular type of wrongly folded protein, and they are fatal. It was suspected that vCJD had passed to humans from cattle, and that the agent causing vCJD was the same as that causing bovine spongiform encephalopathy (or “mad cow disease”). Shortly after vCJD was recognized, authorities in many countries became concerned about the possibility that it could be transmitted from one person to another through contaminated blood supplies used for transfusion in hospitals. Even though there wasn't any evidence of actual transmission of the disease through blood before December 2003, authorities in the UK, US, and elsewhere set up regulations designed to reduce the chance of that happening. At this early stage in the epidemic, there was little in the way of scientific information about the transmission properties of the disease. Both the UK and US, however, sought to make decisions in a scientific manner. They made use of evidence as it was being produced, often before it had been published. Despite this, the UK and US decided on very different changes to their respective regulations on blood donation. Both countries chose to prevent certain people (who they thought would be at greater risk of having vCJD) from donating blood. In the UK, however, the decision was made to remove white blood cells from donated blood to reduce the risk of transmitting vCJD, while the US decided that such a step was not merited by the evidence.
Why Was This Study Done?
The researcher who conducted this study wanted to understand more clearly why the UK and US ended up with different policies: what role was played by science, and what role was played by non-scientific factors? She hoped that insights from this investigation would also be relevant to similar challenges in the future—for example, as many countries try to work out how to control the threat of avian flu.
What Did the Researcher Do and Find?
The researcher searched for all relevant official government documents from the US and UK, as well as scientific papers, published between the time vCJD was first identified (March 1996) and the first instance of vCJD carried through blood (December 2003). She also interviewed people who knew about vCJD management in the US and UK—for example, members of government agencies and the relevant advisory committees. From the documents and interviews, the researcher picked out and grouped shared ideas. Although these documents and interviews suggested that policy making was rooted in scientific evidence, many non-scientific factors were also important. The researcher found substantial uncertainty in the scientific evidence available at the time. The document search and interviews showed that policy makers felt guilty about a previous experience in which people had become infected with HIV/AIDS through contaminated blood and were concerned about repeating this experience. Finally, in the UK, the possibility of blood contamination was seen as a much more urgent problem than in the US, because BSE and vCJD were found there first and there were far more cases. This meant that when the UK made its decision about whether to remove white blood cells from donated blood, there was less scientific evidence available. In fact, the main study that was relied on at the time would later be questioned.
What Do These Findings Mean?
These findings show that for this particular case, science was not the only factor affecting government policies. Historical and social factors such as previous experience, sense of urgency, public pressure, and the relative importance of different scientific networks were also very important. The study predicts that in the future, infectious disease–related policy decisions are unlikely to be the same across different countries because the interpretation of scientific evidence depends, to a large extent, on social factors.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030342.
National Creutzfeldt-Jakob Disease Surveillance Unit, Edinburgh, UK
US Centers for Disease Control and Prevention pages about prion diseases
World Health Organization variant Creutzfeldt-Jakob disease fact sheet
US National Institute of Neurological Disorders and Stroke information about prion diseases
doi:10.1371/journal.pmed.0030342
PMCID: PMC1621089  PMID: 17076547
15.  Methods for Specifying the Target Difference in a Randomised Controlled Trial: The Difference ELicitation in TriAls (DELTA) Systematic Review 
PLoS Medicine  2014;11(5):e1001645.
Jonathan Cook and colleagues systematically reviewed the literature for methods of determining the target difference for use in calculating the necessary sample size for clinical trials, and discuss which methods are best for various types of trials.
Please see later in the article for the Editors' Summary
Background
Randomised controlled trials (RCTs) are widely accepted as the preferred study design for evaluating healthcare interventions. When the sample size is determined, a (target) difference is typically specified that the RCT is designed to detect. This provides reassurance that the study will be informative, i.e., should such a difference exist, it is likely to be detected with the required statistical precision. The aim of this review was to identify potential methods for specifying the target difference in an RCT sample size calculation.
Methods and Findings
A comprehensive systematic review of medical and non-medical literature was carried out for methods that could be used to specify the target difference for an RCT sample size calculation. The databases searched were MEDLINE, MEDLINE In-Process, EMBASE, the Cochrane Central Register of Controlled Trials, the Cochrane Methodology Register, PsycINFO, Science Citation Index, EconLit, the Education Resources Information Center (ERIC), and Scopus (for in-press publications); the search period was from 1966 or the earliest date covered, to between November 2010 and January 2011. Additionally, textbooks addressing the methodology of clinical trials and International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) tripartite guidelines for clinical trials were also consulted. A narrative synthesis of methods was produced. Studies that described a method that could be used for specifying an important and/or realistic difference were included. The search identified 11,485 potentially relevant articles from the databases searched. Of these, 1,434 were selected for full-text assessment, and a further nine were identified from other sources. Fifteen clinical trial textbooks and the ICH tripartite guidelines were also reviewed. In total, 777 studies were included, and within them, seven methods were identified—anchor, distribution, health economic, opinion-seeking, pilot study, review of the evidence base, and standardised effect size.
Conclusions
A variety of methods are available that researchers can use for specifying the target difference in an RCT sample size calculation. Appropriate methods may vary depending on the aim (e.g., specifying an important difference versus a realistic difference), context (e.g., research question and availability of data), and underlying framework adopted (e.g., Bayesian versus conventional statistical approach). Guidance on the use of each method is given. No single method provides a perfect solution for all contexts.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
A clinical trial is a research study in which human volunteers are randomized to receive a given intervention or not, and outcomes are measured in both groups to determine the effect of the intervention. Randomized controlled trials (RCTs) are widely accepted as the preferred study design because, by randomly assigning participants to groups, any differences between the two groups other than the intervention under study are due to chance. To conduct an RCT, investigators calculate how many patients they need to enroll to determine whether the intervention is effective. The number of patients they need to enroll depends on how effective the intervention is expected to be, or would need to be in order to be clinically important. The assumed difference between the two groups is the target difference. A larger target difference generally means that fewer patients need to be enrolled, relative to a smaller target difference. The target difference and the number of patients enrolled together determine the study's statistical precision, and hence the ability of the study to determine whether the intervention is effective. Selecting an appropriate target difference is important from both a scientific and an ethical standpoint.
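The inverse relationship between target difference and sample size is easy to see numerically. The sketch below uses assumed event rates and statsmodels' power routines; the numbers are illustrative, not drawn from the review.

    # A minimal sketch (assumed rates): larger target differences need fewer patients.
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    control_rate = 0.30  # assumed event rate in the control arm
    for treatment_rate in (0.40, 0.45, 0.50):  # hypothetical target rates
        es = proportion_effectsize(treatment_rate, control_rate)
        n = NormalIndPower().solve_power(effect_size=es, alpha=0.05, power=0.80)
        print(f"target difference {treatment_rate - control_rate:.2f} "
              f"-> about {n:.0f} patients per arm")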
Why Was This Study Done?
There are several ways to determine an appropriate target difference. The authors wanted to determine what methods for specifying the target difference are available and when they can be used.
What Did the Researchers Do and Find?
To identify studies that used a method for determining an important and/or realistic difference, the investigators systematically surveyed the research literature. Two reviewers screened each of the abstracts chosen, and a third reviewer was consulted if necessary. The authors identified seven methods to determine target differences. They evaluated the studies to establish similarities and differences of each application. Points about the strengths and limitations of the method and how frequently the method was chosen were also noted.
What Do these Findings Mean?
The study draws attention to an understudied but important part of designing a clinical trial. Enrolling the right number of patients is very important—too few patients and the study may not be able to answer the study question; too many and the study will be more expensive and more difficult to conduct, and will unnecessarily expose more patients to any study risks. The target difference may also be helpful in interpreting the results of the trial. The authors discuss the pros and cons of different ways to calculate target differences and which methods are best for which types of studies, to help inform researchers designing such studies.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001645.
Wikipedia has an entry on sample size determination that discusses the factors that influence sample size calculation, including the target difference and the statistical power of a study (statistical power is the ability of a study to find a difference between treatments when a true difference exists). (Note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages.)
The University of Ottawa has an article that explains how different factors influence the power of a study
doi:10.1371/journal.pmed.1001645
PMCID: PMC4019477  PMID: 24824338
16.  Understanding of Statistical Terms Routinely Used in Meta-Analyses: An International Survey among Researchers 
PLoS ONE  2013;8(1):e47229.
Objective
Biomedical literature is increasingly enriched with literature reviews and meta-analyses. We sought to assess researchers' understanding of statistical terms routinely used in such studies.
Methods
An online survey posing 4 clinically-oriented multiple-choice questions was conducted in an international sample of randomly selected corresponding authors of articles indexed by PubMed.
Results
A total of 315 unique complete forms were analyzed (participation rate 39.4%), mostly from Europe (48%), North America (31%), and Asia/Pacific (17%). Only 10.5% of the participants answered all 4 “interpretation” questions correctly, while 9.2% answered all questions incorrectly. For the individual questions, 51.1%, 71.4%, and 40.6% of the participants correctly interpreted the statistical significance of a given odds ratio, risk ratio, and weighted mean difference with 95% confidence intervals, respectively, while 43.5% correctly replied that no statistical model can adjust for clinical heterogeneity. Clinicians had more correct answers than non-clinicians (mean score ± standard deviation: 2.27±1.06 versus 1.83±1.14, p<0.001); among clinicians, there was a trend towards a higher score in medical specialists (2.37±1.07 versus 2.04±1.04, p = 0.06) and a lower score in clinical laboratory specialists (1.7±0.95 versus 2.3±1.06, p = 0.08). No association was observed between the respondents' region or questionnaire completion time and participants' score.
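The interpretation rule that the survey questions probed can be stated in a few lines (illustrative numbers only): a ratio measure such as an odds ratio or risk ratio is statistically significant at the 5% level when its 95% confidence interval excludes 1, and a difference measure such as a weighted mean difference when its interval excludes 0.

    # Illustrative values only; this encodes the rule, not data from the survey.
    def is_significant(ci_low, ci_high, null_value):
        """True if the 95% CI excludes the null value."""
        return not (ci_low <= null_value <= ci_high)

    print(is_significant(0.60, 0.94, null_value=1.0))   # odds ratio -> True
    print(is_significant(0.90, 1.35, null_value=1.0))   # risk ratio -> False
    print(is_significant(-4.1, -0.5, null_value=0.0))   # mean difference -> True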
Conclusion
A considerable proportion of researchers, randomly selected from a diverse international sample of biomedical scientists, misinterpreted statistical terms commonly reported in meta-analyses. Authors could be prompted to interpret their findings explicitly to prevent misunderstandings, and readers are encouraged to keep up with basic biostatistics.
doi:10.1371/journal.pone.0047229
PMCID: PMC3543405  PMID: 23326299
17.  Anatomy of the Epidemiological Literature on the 2003 SARS Outbreaks in Hong Kong and Toronto: A Time-Stratified Review 
PLoS Medicine  2010;7(5):e1000272.
Weijia Xing and colleagues reviewed the published epidemiological literature on SARS and show that less than a quarter of papers were published during the epidemic itself, suggesting that the research published lagged substantially behind the need for it.
Background
Outbreaks of emerging infectious diseases, especially those of a global nature, require rapid epidemiological analysis and information dissemination. The final products of those activities usually comprise internal memoranda and briefs within public health authorities and original research published in peer-reviewed journals. Using the 2003 severe acute respiratory syndrome (SARS) epidemic as an example, we conducted a comprehensive time-stratified review of the published literature to describe the different types of epidemiological outputs.
Methods and Findings
We identified and analyzed all published articles on the epidemiology of the SARS outbreak in Hong Kong or Toronto. The analysis was stratified by study design, research domain, data collection, and analytical technique. We compared the SARS-case and matched-control non-SARS articles published according to the timeline of submission, acceptance, and publication. The impact factors of the publishing journals were examined according to the time of publication of SARS articles, and the numbers of citations received by SARS-case and matched-control articles submitted during and after the epidemic were compared. Descriptive, analytical, theoretical, and experimental epidemiology concerned, respectively, 54%, 30%, 11%, and 6% of the studies. Only 22% of the studies were submitted, 8% accepted, and 7% published during the epidemic. The submission-to-acceptance and acceptance-to-publication intervals of the SARS articles submitted during the epidemic period were significantly shorter than the corresponding intervals of matched-control non-SARS articles published in the same journal issues (p<0.001 and p<0.01, respectively). The differences of median submission-to-acceptance intervals and median acceptance-to-publication intervals between SARS articles and their corresponding control articles were 106.5 d (95% confidence interval [CI] 55.0–140.1) and 63.5 d (95% CI 18.0–94.1), respectively. The median numbers of citations of the SARS articles submitted during the epidemic and over the 2 y thereafter were 17 (interquartile range [IQR] 8.0–52.0) and 8 (IQR 3.2–21.8), respectively, significantly higher than the median numbers of control article citations (15, IQR 8.5–16.5, p<0.05, and 7, IQR 3.0–12.0, p<0.01, respectively).
Conclusions
A majority of the epidemiological articles on SARS were submitted after the epidemic had ended, although the corresponding studies had relevance to public health authorities during the epidemic. To minimize the lag between research and the exigencies of public health practice in the future, researchers should consider adopting common, predefined protocols and ready-to-use instruments, which would improve timeliness and thus relevance, as well as comparability across studies. To facilitate information dissemination, journal managers should reengineer their fast-track channels, adapting them to the needs of an emerging outbreak while maintaining the high standards of quality expected of scientific journals and taking account of competition from other online resources.
Editors' Summary
Background
Every now and then, a new infectious disease appears in a human population or an old disease becomes much more common or more geographically widespread. Recently, several such “emerging infectious diseases” have become major public health problems. For example, HIV/AIDS, hepatitis C, and severe acute respiratory syndrome (SARS) have all emerged in the past three decades and spread rapidly round the world. When an outbreak (epidemic) of an emerging infectious disease occurs, epidemiologists (scientists who study the causes, distribution, and control of diseases in populations) swing into action, collecting and analyzing data on the new threat to human health. Epidemiological studies are rapidly launched to identify the causative agent of the new disease, to investigate how the disease spreads, to define diagnostic criteria for the disease, to evaluate potential treatments, and to devise ways to control the disease's spread. Public health officials then use the results of these studies to bring the epidemic under control.
Why Was This Study Done?
Clearly, epidemics of emerging infectious diseases can only be controlled rapidly and effectively if the results of epidemiological studies are made widely available in a timely manner. Public health bulletins (for example, the Morbidity and Mortality Weekly Report from the US Centers for Disease Control and Prevention) are an important way of disseminating information, as is the publication of original research in peer-reviewed academic journals. But how timely is this second dissemination route? Submission, peer review, revision, re-review, acceptance, and publication of a piece of academic research can be a long process, the speed of which is affected by the responses of both authors and journals. In this study, the researchers analyze how the results of academic epidemiological research were submitted and published in journals during and after an emerging infectious disease epidemic, using the 2003 SARS epidemic as an example. The first case of SARS was identified in Asia in February 2003, and the disease rapidly spread around the world; 8,098 people became ill with SARS and 774 died before the epidemic was halted in July 2003.
What Did the Researchers Do and Find?
The researchers identified more than 300 journal articles covering epidemiological research into the SARS outbreak in Hong Kong, China, and Toronto, Canada (two cities strongly affected by the epidemic) that were published online or in print between January 1, 2003 and July 31, 2007. The researchers' analysis of these articles shows that more than half of them were descriptive epidemiological studies, investigations that focused on describing the distribution of SARS; a third were analytical epidemiological studies that tried to discover the cause of SARS. Overall, 22% of the journal articles were submitted for publication during the epidemic. Only 8% of the articles were accepted for publication and only 7% were actually published during the epidemic. The median (middle value) submission-to-acceptance and acceptance-to-publication intervals for SARS articles submitted during the epidemic were 55 and 77.5 days, respectively, much shorter intervals than those for non-SARS articles published in the same journal issues. After the epidemic was over, the submission-to-acceptance and acceptance-to-publication intervals for SARS articles were similar to those of non-SARS articles.
What Do These Findings Mean?
These findings show that, although the academic response to the SARS epidemic was rapid, most articles on the epidemiology of SARS were published after the epidemic was over even though SARS was a major threat to public health. Possible reasons for this publication delay include the time taken by authors to prepare and undertake their studies, to write and submit their papers, and, possibly, their tendency to first submit their results to high profile journals. The time then taken by journals to review the studies, make decisions about publication, and complete the publication process might also have delayed matters. To minimize future delays in the publication of epidemiological research on emerging infectious diseases, epidemiologists could adopt common, predefined protocols and ready-to-use instruments, which would improve timeliness and ensure comparability across studies, suggest the researchers. Journals, in turn, could improve their fast-track procedures and could consider setting up online sections that could be activated when an emerging infectious disease outbreak occurred. Finally, journals could consider altering their review system to speed up the publication process provided the quality of the final published articles was not compromised.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000272.
The US National Institute of Allergy and Infectious Diseases provides information on emerging infectious diseases
The US Centers for Disease Control and Prevention also provides information about emerging infectious diseases, including links to other resources, and information on SARS
Wikipedia has a page on epidemiology (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The World Health Organization has information on SARS (in several languages)
doi:10.1371/journal.pmed.1000272
PMCID: PMC2864302  PMID: 20454570
18.  Development and Evaluation of a Pedagogical Tool to Improve Understanding of a Quality Checklist: A Randomised Controlled Trial 
PLoS Clinical Trials  2007;2(5):e22.
Objective:
The aim of this study was to develop and evaluate a pedagogical tool to enhance the understanding of a checklist that evaluates reports of nonpharmacological trials (CLEAR NPT).
Design:
Paired randomised controlled trial.
Participants:
Clinicians and systematic reviewers.
Interventions:
We developed an Internet-based computer learning system (ICLS). This pedagogical tool used many examples from published randomised controlled trials to demonstrate the main coding difficulties encountered when using this checklist.
Randomised participants received either specific Web-based training with the ICLS (intervention group) or no specific training.
Outcome measures:
The primary outcome was the rate of correct answers compared to a criterion standard for coding a report of randomised controlled trials with the CLEAR NPT.
Results:
Between April and June 2006, 78 participants were randomly assigned to receive training with the ICLS (39) or no training (39). Participants trained with the ICLS did not differ from the control group in performance on the CLEAR NPT; the mean paired difference was 0.5 (95% confidence interval −5.1 to 6.1). The rate of correct answers did not differ between the two groups for any CLEAR NPT item. Combining both groups, the rate of correct answers was high for items related to allocation sequence (79.5%), description of the intervention (82.0%), blinding of patients (79.5%), and follow-up schedule (83.3%). The rate of correct answers was low for items related to allocation concealment (46.1%), co-interventions (30.3%), blinding of outcome assessors (53.8%), specific measures to avoid ascertainment bias (28.6%), and intention-to-treat analysis (60.2%).
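The headline result is a mean paired difference with a 95% confidence interval: each intervention participant's score is compared with that of a matched control, and the interval is placed on the mean of those within-pair differences. A minimal sketch with invented scores (not the trial's data), using a normal approximation:

    import math

    # Invented percentage-correct scores for five intervention/control pairs.
    intervention = [62.0, 70.0, 55.0, 68.0, 73.0]
    control      = [60.0, 66.0, 58.0, 64.0, 75.0]

    diffs = [i - c for i, c in zip(intervention, control)]
    n = len(diffs)
    mean_diff = sum(diffs) / n
    sd = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))
    se = sd / math.sqrt(n)

    # A 95% CI spanning 0 is consistent with no training effect.
    print(mean_diff, (mean_diff - 1.96 * se, mean_diff + 1.96 * se))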
Conclusions:
Although we showed no difference in effect between the intervention and control groups, our results highlight the gap in knowledge and the urgent need for education on important aspects of trial conduct.
Editorial Commentary
Background: A key part of the practice of evidence-based medicine (essentially, the appropriate use of current best evidence in determining care of individual patients) involves appraising the quality of individual research papers. This process helps an individual to understand what has been done in a clinical research study, and to decipher the strengths, limitations, and importance of the work. Several tools already exist to help clinicians and researchers to assess the quality of particular types of study, including randomised controlled trials. One of these tools is called CLEAR NPT, which consists of a checklist that helps individuals to evaluate reports of nonpharmacological trials (i.e., trials not evaluating drugs but other types of intervention, such as surgery). The researchers who developed CLEAR NPT also produced an Internet-based computer learning system to help researchers use CLEAR NPT correctly. They wanted to evaluate to what extent this learning system helped people use CLEAR NPT and, therefore, carried out a randomised trial comparing the learning system to no specific training. A total of 78 health researchers were recruited as the “participants” in the trial, and 39 were randomised to each trial arm. Once the participants had received either the Internet training or no specific training, they used CLEAR NPT to evaluate reports of nonpharmacological trials. The primary outcome was the rate of “correct” answers that study participants gave using CLEAR NPT.
What the trial shows: The researchers found that the results on the primary outcome (rate of correct answers given by study participants) did not differ between the study arms. The rate of correct answers for individual items on the checklist also did not seem to differ between individuals receiving Internet training and those receiving no specific training. When looking at the scores for individual items, combined between the two study arms, participants scored highly on their appraisal of some aspects of trial design (such as generation of randomisation sequences and descriptions of blinding and the intervention) but poorly on other items (such as concealment of the randomisation sequence).
Strengths and limitations: Key strengths of this study include the randomised design and the fact that the trial recruited enough participants to test the primary hypothesis; the failure to find a significant difference between study arms was therefore unlikely to be due to a lack of statistical power. One limitation of the study is that the researchers who participated were already fairly experienced in assessing trial quality at the start, which may explain why no additional effect of the computer-based learning system was seen. It is possible that the training system may have some benefit for individuals who are less experienced in evaluating trials. A further possible limitation is that there was a small imbalance at randomisation, with slightly more experienced researchers being recruited into the arm receiving no specific training. This imbalance might have led to the effect of the training system being underestimated.
Contribution to the evidence: The researchers here report that this study is the first they are aware of that evaluates a computer-based learning system for improving assessment of the quality of reporting of randomised trials. The results here find that this particular tool did not improve assessment. However, the results emphasise that training should be considered an important part of the development of any critical appraisal tools.
doi:10.1371/journal.pctr.0020022
PMCID: PMC1865084  PMID: 17479163
19.  Our silent enemy: ashes in our libraries 
Scholars, scientists, physicians, other health professionals, and librarians face a crucial decision today: shall we nourish the biomedical archives as a viable and indispensable source of information, or shall we bury their ashes and lose a century or more of consequential scientific history? Biomedical books and journals published since the 1850s on self-destructing acidic paper are silently and insidiously scorching on our shelves. The associated risks for scientists and physicians are serious—incomplete assessment of past knowledge; unnecessary repetition of studies that have already led to conclusive results; delay in scientific advances when important concepts, techniques, instruments, and procedures are overlooked; faulty comparative analyses; or improper assignment of priority.
The archives also disclose the nature of biomedical research, which builds on past knowledge, advances incrementally, and is strewn with missteps, frustrations, detours, inconsistencies, enigmas, and contradictions. Familiarity with the scientific process will help the public avoid unrealistic expectations and will encourage support for health research. But a proper historical perspective requires access to the biomedical archives. Since journals will apparently continue to be published on paper, it is folly to persist in the use of acidic paper and thus magnify for future librarians and preservationists the already Sisyphean and costly task of deacidifying their collections.
Our plea for conversion to acid-free paper is accompanied by an equally strong appeal for more rigorous criteria for journal publication. The glut of journal articles—many superficial, redundant, mediocre, or otherwise flawed and some even fraudulent—has overloaded our databases, complicated bibliographic research, and exacerbated the preservation problem. Before accepting articles, journal editors should ask: If it is not worth preserving, is it worth publishing?
It is our responsibility to protect the integrity of our biomedical records against all threats. Authors should consider submitting manuscripts to journals that use acid-free paper, especially if they think, as most authors do, that they are writing for posterity. Librarians can refuse to purchase journals published on acidic paper, knowing that such journals will need restoration within a few decades and would thus deplete their budgets. All of us can urge our government to devise a coordinated national conservation policy that will halt the destruction of a century of our historical record. The battle will not be easy, but the challenge beckons urgently. The choice is ours: we can answer the call, or we can deny scientists, physicians, and historians the records they need to expand human knowledge and improve health care.
PMCID: PMC227429  PMID: 2758179
20.  Commercial Serological Antibody Detection Tests for the Diagnosis of Pulmonary Tuberculosis: A Systematic Review 
PLoS Medicine  2007;4(6):e202.
Background
The global tuberculosis epidemic results in nearly 2 million deaths and 9 million new cases of the disease a year. The vast majority of tuberculosis patients live in developing countries, where the diagnosis of tuberculosis relies on the identification of acid-fast bacilli on unprocessed sputum smears using conventional light microscopy. Microscopy has high specificity in tuberculosis-endemic countries, but modest sensitivity which varies among laboratories (range 20% to 80%). Moreover, the sensitivity is poor for paucibacillary disease (e.g., pediatric and HIV-associated tuberculosis). Thus, the development of rapid and accurate new diagnostic tools is imperative. Immune-based tests are potentially suitable for use in low-income countries as some test formats can be performed at the point of care without laboratory equipment. Currently, dozens of distinct commercial antibody detection tests are sold in developing countries. The question is “do they work?”
Methods and Findings
We conducted a systematic review to assess the accuracy of commercial antibody detection tests for the diagnosis of pulmonary tuberculosis. Studies from all countries using culture and/or microscopy smear for confirmation of pulmonary tuberculosis were eligible. Studies with fewer than 50 participants (25 patients and 25 control participants) were excluded. In a comprehensive search, we identified 68 studies. The results demonstrate that (1) overall, commercial tests vary widely in performance; (2) sensitivity is higher in smear-positive than smear-negative samples; (3) in studies of smear-positive patients, Anda-TB IgG by enzyme-linked immunosorbent assay shows limited sensitivity (range 63% to 85%) and inconsistent specificity (range 73% to 100%); (4) specificity is higher in healthy volunteers than in patients in whom tuberculosis disease is initially suspected and subsequently ruled out; and (5) there are insufficient data to determine the accuracy of most commercial tests in smear microscopy–negative patients, as well as their performance in children or persons with HIV infection.
Conclusions
None of the commercial tests evaluated perform well enough to replace sputum smear microscopy. Thus, these tests have little or no role in the diagnosis of pulmonary tuberculosis. Lack of methodological rigor in these studies was identified as a concern. It will be important to review the basic science literature evaluating serological tests for the diagnosis of pulmonary tuberculosis to determine whether useful antigens have been described but their potential has not been fully exploited. Activities leading to the discovery of new antigens with immunodiagnostic potential need to be intensified.
Based on a systematic review, Madhukar Pai and colleagues conclude that none of the commercial immune-based tests for pulmonary tuberculosis so far evaluated perform well enough to replace sputum smear microscopy.
Editors' Summary
Background.
Tuberculosis (TB) is, globally, one of the most important infectious diseases. It is thought that in 2005 around 1.6 million people died as a result of TB. Controlling TB requires that the disease is correctly diagnosed so that it can then be promptly treated, which will reduce the risk of infection being passed on to other individuals. The method normally used for diagnosing TB disease in poor countries (where most people with TB disease live) involves taking a sample of mucus coughed up from the lungs; this mucus is then spread thinly onto a glass slide, dyed, and viewed under the microscope. The bacteria responsible for TB take up the dye in a particular pattern and can be clearly seen under the microscope. Although this test (sputum smear) is relatively straightforward to carry out even where facilities are basic, it is not particularly good at identifying TB disease in children or amongst individuals who are HIV-positive. Finally, the sputum smear test is also not very sensitive; that is, many people who have TB disease may not give a positive reading. Therefore, there is an urgent need to develop and evaluate new tests that are suitable for use in poor countries, which will accurately diagnose TB disease, especially amongst children and people who are HIV-positive.
Why Was This Study Done?
New tests for TB have become available which detect whether an individual has raised antibodies against particular proteins and other substances present on the surface of the TB bacterium. These tests are carried out on blood samples, once blood cells and other factors have been taken out. These antibody tests are often quite simple to carry out, so in principle they could be suitable for use in developing countries. Since the tests are available on the market and can be freely used in some developing countries without any need for government regulatory bodies to approve them, it is important to know how good these tests are at diagnosing TB disease. The researchers here wanted, therefore, to evaluate all of the available data relating to the accuracy of antibody detection tests for diagnosis of TB disease.
What Did the Researchers Do and Find?
In order to evaluate all of the information available on commercial antibody detection tests for diagnosis of TB disease of the lungs, the researchers carried out a systematic review. First, they searched biomedical literature databases using specific terms to identify studies for inclusion. A study was included in their analysis if the commercial test was compared against one of two other standard tests (sputum smear microscopy, or growth of TB bacteria in culture). One researcher from the team then pulled out specific pieces of information from each published study: these included the type of study design; information on study participants; the type of test; what the test was compared against; and finally the results of evaluation of the test. A second researcher pulled out pieces of information from several of the same studies. The researchers then compared the information to ensure that it was recorded correctly. Each study was also assigned a quality rating, based on four distinct criteria. For each type of test, the researchers used the data in the published studies to work out the test's accuracy, both in terms of its ability to give a positive reading for people who have TB disease as well as its ability to give a negative reading for people who do not have TB disease.
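The two accuracy measures described here are conventionally called sensitivity (the proportion of people with TB disease whom the test flags as positive) and specificity (the proportion of people without TB disease whom the test reads as negative), both computed from a 2x2 table of test result against true disease status. A minimal sketch with invented counts:

    def sensitivity(true_pos: int, false_neg: int) -> float:
        """Proportion of diseased people the test correctly flags positive."""
        return true_pos / (true_pos + false_neg)

    def specificity(true_neg: int, false_pos: int) -> float:
        """Proportion of disease-free people the test correctly reads negative."""
        return true_neg / (true_neg + false_pos)

    # Invented counts: 70/100 patients test positive, 90/100 controls test negative.
    print(sensitivity(true_pos=70, false_neg=30))  # 0.7
    print(specificity(true_neg=90, false_pos=10))  # 0.9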
The researchers found 27 papers meeting their criteria. These papers reported the results of 68 original studies. Nine different commercial tests were examined in the studies. Overall, the studies seemed to be of relatively poor quality, with only 25% of them meeting all four of the researchers' criteria for a good-quality study. The different studies appeared to produce varying results for the accuracy of these commercial tests. In particular, the tests seemed to be less accurate at detecting TB disease amongst people who had a negative sputum smear than amongst people with a positive sputum smear. When all the data for these different studies were combined, the statistics indicated that the commercial tests, overall, were only modestly accurate for diagnosis of TB disease. None of the studies had been carried out in children or in HIV-positive people.
What Do These Findings Mean?
The results of this systematic review suggest that the commercial antibody detection tests considered here are not particularly useful in diagnosis of TB disease as compared to other tests, such as sputum smear and bacterial culture. Some people are concerned that there is pressure in certain developing countries to start using these tests, but the current data do not support greater use. This systematic review also highlights the fact that many studies evaluating commercial TB tests are of poor quality, and that further research needs to be done to evaluate the accuracy of different TB tests amongst children and HIV-positive patients.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040202.
World Health Organization Stop TB Department website. Information about the current Stop TB strategy, data and factsheets about TB, and other resources are available
Questions and Answers about Tuberculosis provided by the US Centers for Disease Control and Prevention
Information about TB tests from Médecins Sans Frontières (MSF). Links to MSF reports on new diagnostic tests are also available
Wikipedia entry on Systematic Reviews (Note: Wikipedia is an internet encyclopedia anyone can edit)
doi:10.1371/journal.pmed.0040202
PMCID: PMC1891320  PMID: 17564490
21.  Publication of Clinical Trials Supporting Successful New Drug Applications: A Literature Analysis 
PLoS Medicine  2008;5(9):e191.
Background
The United States (US) Food and Drug Administration (FDA) approves new drugs based on sponsor-submitted clinical trials. The publication status of these trials in the medical literature and factors associated with publication have not been evaluated. We sought to determine the proportion of trials submitted to the FDA in support of newly approved drugs that are published in biomedical journals that a typical clinician, consumer, or policy maker living in the US would reasonably search.
Methods and Findings
We conducted a cohort study of trials supporting new drugs approved between 1998 and 2000, as described in FDA medical and statistical review documents and the FDA approved drug label. We determined publication status and time from approval to full publication in the medical literature at 2 and 5 y by searching PubMed and other databases through 01 August 2006. We then evaluated trial characteristics associated with publication. We identified 909 trials supporting 90 approved drugs in the FDA reviews, of which 43% (394/909) were published. Among the subset of trials described in the FDA-approved drug label and classified as “pivotal trials” for our analysis, 76% (257/340) were published. In multivariable logistic regression for all trials 5 y postapproval, likelihood of publication correlated with statistically significant results (odds ratio [OR] 3.03, 95% confidence interval [CI] 1.78–5.17); larger sample sizes (OR 1.33 per 2-fold increase in sample size, 95% CI 1.17–1.52); and pivotal status (OR 5.31, 95% CI 3.30–8.55). In multivariable logistic regression for only the pivotal trials 5 y postapproval, likelihood of publication correlated with statistically significant results (OR 2.96, 95% CI 1.24–7.06) and larger sample sizes (OR 1.47 per 2-fold increase in sample size, 95% CI 1.15–1.88). Statistically significant results and larger sample sizes were also predictive of publication at 2 y postapproval and in multivariable Cox proportional models for all trials and the subset of pivotal trials.
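The “OR per 2-fold increase in sample size” arises from entering log2(sample size) as a predictor in the logistic model, so that exponentiating its coefficient yields the odds ratio associated with a doubling of sample size. A sketch of that kind of model fitted to synthetic data (this assumes the statsmodels library and is not the authors' code or dataset):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "significant": rng.integers(0, 2, n),  # 1 = statistically significant result
        "pivotal": rng.integers(0, 2, n),      # 1 = pivotal trial
        "log2_n": np.log2(rng.integers(20, 2000, n)),
    })
    # Synthetic outcome loosely favouring significant, pivotal, larger trials.
    logit = -3 + 1.0 * df["significant"] + 1.5 * df["pivotal"] + 0.3 * df["log2_n"]
    df["published"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X = sm.add_constant(df[["significant", "pivotal", "log2_n"]])
    fit = sm.Logit(df["published"], X).fit(disp=False)
    print(np.exp(fit.params))  # odds ratios; log2_n row = OR per doubling of sample size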
Conclusions
Over half of all supporting trials for FDA-approved drugs remained unpublished ≥ 5 y after approval. Pivotal trials and trials with statistically significant results and larger sample sizes are more likely to be published. Selective reporting of trial results exists for commonly marketed drugs. Our data provide a baseline for evaluating publication bias as the new FDA Amendments Act comes into force mandating basic results reporting of clinical trials.
Ida Sim and colleagues investigate the publication status and publication bias of trials submitted to the US Food and Drug Administration (FDA) for a wide variety of approved drugs.
Editors' Summary
Background.
Before a new drug becomes available for the treatment of a specific human disease, its benefits and harms are carefully studied, first in the laboratory and in animals, and then in several types of clinical trials. In the most important of these trials—so-called “pivotal” clinical trials—the efficacy and safety of the new drug and of a standard treatment are compared by giving groups of patients the different treatments and measuring several predefined “outcomes.” These outcomes indicate whether the new drug is more effective than the standard treatment and whether it has any other effects on the patients' health and daily life. All this information is then submitted by the sponsor of the new drug (usually a pharmaceutical company) to the government body responsible for drug approval—in the US, this is the Food and Drug Administration (FDA).
Why Was This Study Done?
After a drug receives FDA approval, information about the clinical trials supporting the FDA's decision is included in the FDA “Summary Basis of Approval” and/or on the drug label. In addition, some clinical trials are described in medical journals. Ideally, all the clinical information that leads to a drug's approval should be publicly available to help clinicians make informed decisions about how to treat their patients. A full-length publication in a medical journal is the primary way that clinical trial results are communicated to the scientific community and the public. Unfortunately, drug sponsors sometimes publish the results only of trials where their drug performed well; as a consequence, trials where the drug did no better than the standard treatment or where it had unwanted side effects remain unpublished. Publication bias like this provides an inaccurate picture of a drug's efficacy and safety relative to other therapies and may lead to excessive prescribing of newer, more expensive (but not necessarily more effective) treatments. In this study, the researchers investigate whether selective trial reporting is common by evaluating the publication status of trials submitted to the FDA for a wide variety of approved drugs. They also ask which factors affect a trial's chances of publication.
What Did the Researchers Do and Find?
The researchers identified 90 drugs approved by the FDA between 1998 and 2000 by searching the FDA's Center for Drug Evaluation and Research Web site. From the Summary Basis of Approval for each drug, they identified 909 clinical trials undertaken to support these approvals. They then searched the published medical literature up to mid-2006 to determine if and when the results of each trial were published. Although 76% of the pivotal trials had appeared in medical journals, usually within 3 years of FDA approval, only 43% of all the submitted trials had been published. Among all the trials, those with statistically significant results were nearly twice as likely to have been published as those without statistically significant results, and pivotal trials were three times more likely to have been published than nonpivotal trials, 5 years postapproval. In addition, a larger sample size increased the likelihood of publication. Having statistically significant results and larger sample sizes also increased the likelihood of publication of the pivotal trials.
What Do These Findings Mean?
Although the search methods used in this study may have missed some publications, these findings suggest that more than half the clinical trials undertaken to support drug approval remain unpublished 5 years or more after FDA approval. They also reveal selective reporting of results. For example, they show that a pivotal trial in which the new drug does no better than an old drug is less likely to be published than one where the new drug is more effective, a publication bias that could establish an inappropriately favorable record for the new drug in the medical literature. Importantly, these findings provide a baseline for monitoring the effects of the FDA Amendments Act 2007, which was introduced to improve the accuracy and completeness of drug trial reporting. Under this Act, all trials supporting FDA-approved drugs must be registered when they start, and the summary results of all the outcomes declared at trial registration as well as specific details about the trial protocol must be publicly posted within a year of drug approval on the US National Institutes of Health clinical trials site.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050191.
PLoS Medicine recently published an editorial discussing the FDA Amendment Act and what it means for medical journals: The PLoS Medicine Editors (2008) Next Stop, Don't Block the Doors: Opening Up Access to Clinical Trials Results. PLoS Med 5(7): e160
The US Food and Drug Administration provides information about drug approval in the US for consumers and for health care professionals; detailed information about the process by which drugs are approved is on the Web site of the FDA Center for Drug Evaluation and Research (in English and Spanish)
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, background information about clinical trials, and a fact sheet detailing the requirements of the FDA Amendments Act 2007 for trial registration
The World Health Organization's International Clinical Trials Registry Platform is working toward international norms and standards for reporting the findings of clinical trials
doi:10.1371/journal.pmed.0050191
PMCID: PMC2553819  PMID: 18816163
22.  Epidemiology and Reporting Characteristics of Systematic Reviews 
PLoS Medicine  2007;4(3):e78.
Background
Systematic reviews (SRs) have become increasingly popular with a wide range of stakeholders. We set out to capture a representative cross-sectional sample of published SRs and examine them in terms of a broad range of epidemiological, descriptive, and reporting characteristics, including emerging aspects not previously examined.
Methods and Findings
We searched Medline for SRs indexed during November 2004 and written in English. Citations were screened and those meeting our inclusion criteria were retained. Data were collected using a 51-item data collection form designed to assess the epidemiological and reporting details and the bias-related aspects of the reviews. The data were analyzed descriptively. In total 300 SRs were identified, suggesting a current annual publication rate of about 2,500, involving more than 33,700 separate studies including one-third of a million participants. The majority (272 [90.7%]) of SRs were reported in specialty journals. Most reviews (213 [71.0%]) were categorized as therapeutic, and included a median of 16 studies involving 1,112 participants. Funding sources were not reported in more than one-third (122 [40.7%]) of the reviews. Reviews typically searched a median of three electronic databases and two other sources, although only about two-thirds (208 [69.3%]) of them reported the years searched. Most (197/295 [66.8%]) reviews reported information about quality assessment, while few (68/294 [23.1%]) reported assessing for publication bias. A little over half (161/300 [53.7%]) of the SRs reported combining their results statistically, of which most (147/161 [91.3%]) assessed for consistency across studies. Few (53 [17.7%]) SRs reported being updates of previously completed reviews. No review had a registration number. Only half (150 [50.0%]) of the reviews used the term “systematic review” or “meta-analysis” in the title or abstract. There were large differences between Cochrane reviews and non-Cochrane reviews in the quality of reporting several characteristics.
Conclusions
SRs are now produced in large numbers, and our data suggest that the quality of their reporting is inconsistent. This situation might be improved if more widely agreed upon evidence-based reporting guidelines were endorsed and adhered to by authors and journals. These results substantiate the view that readers should not accept SRs uncritically.
Data were collected on the epidemiological, descriptive, and reporting characteristics of recent systematic reviews. A descriptive analysis found inconsistencies in the quality of reporting.
Editors' Summary
Background.
In health care it is important to assess all the evidence available about what causes a disease or the best way to prevent, diagnose, or treat it. Decisions should not be made simply on the basis of—for example—the latest or biggest research study, but after a full consideration of the findings from all the research of good quality that has so far been conducted on the issue in question. This approach is known as “evidence-based medicine” (EBM). A report that is based on a search for studies addressing a clearly defined question, a quality assessment of the studies found, and a synthesis of the research findings, is known as a systematic review (SR). Conducting an SR is itself regarded as a research project and the methods involved can be quite complex. In particular, as with other forms of research, it is important to do everything possible to reduce bias. The leading role in developing the SR concept and the methods that should be used has been played by an international network called the Cochrane Collaboration (see “Additional Information” below), which was launched in 1992. However, SRs are now becoming commonplace. Many articles published in journals and elsewhere are described as being systematic reviews.
Why Was This Study Done?
Since systematic reviews are claimed to be the best source of evidence, it is important that they should be well conducted and that bias should not have influenced the conclusions drawn in the review. Just because the authors of a paper that discusses evidence on a particular topic claim that they have done their review “systematically,” it does not guarantee that their methods have been sound and that their report is of good quality. However, if they have reported details of their methods, then it can help users of the review decide whether they are looking at a review with conclusions they can rely on. The authors of this PLoS Medicine article wanted to find out how many SRs are now being published, where they are being published, and what questions they are addressing. They also wanted to see how well the methods of SRs are being reported.
What Did the Researchers Do and Find?
They picked one month and looked for all the SRs added to the main list of medical literature in that month. They found 300, on a range of topics and in a variety of medical journals. They estimate that about 20% of reviews appearing each year are published by the Cochrane Collaboration. They found many cases in which important aspects of the methods used were not reported. For example, about a third of the SRs did not report how (if at all) the quality of the studies found in the search had been assessed. An important assessment, which analyzes for “publication bias,” was reported as having been done in only about a quarter of the cases. Most of the reporting failures were in the “non-Cochrane” reviews.
What Do These Findings Mean?
The authors concluded that the standards of reporting of SRs vary widely and that readers should, therefore, not accept the conclusions of SRs uncritically. To improve this situation, they urge that guidelines be drawn up regarding how SRs are reported. The writers of SRs and also the journals that publish them should follow these guidelines.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040078.
An editorial discussing this research article and its relevance to medical publishing appears in the same issue of PLoS Medicine
A good source of information on the evidence-based approach to medicine is the James Lind Library
The Web site of the Cochrane Collaboration is a good source of information on systematic reviews. In particular there is a newcomers' guide and information for health care “consumers”. From this Web site, it is also possible to see summaries of the SRs published by the Cochrane Collaboration (readers in some countries can also view the complete SRs free of charge)
Information on the practice of evidence-based medicine is available from the US Agency for Healthcare Research and Quality and the Canadian Agency for Drugs and Technologies in Health
doi:10.1371/journal.pmed.0040078
PMCID: PMC1831728  PMID: 17388659
23.  Towards evidence based medicine for paediatricians 
In order to give the best care to patients and families, paediatricians need to integrate the highest quality scientific evidence with clinical expertise and the opinions of the family.1 Archimedes seeks to assist practising clinicians by providing “evidence based” answers to common questions which are not at the forefront of research but are at the core of practice. In doing this, we are adapting a format which has been successfully developed by Kevin Mackway-Jones and the group at the Emergency Medicine Journal: “BestBets”.
A word of warning. The topic summaries are not systematic reviews, though they are as exhaustive as a practising clinician can produce. They make no attempt to aggregate the data statistically, nor to search the grey, unpublished literature. What Archimedes offers are practical, best evidence based answers to practical, clinical questions.
The format of Archimedes may be familiar. A description of the clinical setting is followed by a structured clinical question. (These aid in focusing the mind, assisting searching,2 and gaining answers.3) A brief report of the search used follows—this has been performed in a hierarchical way, to search for the best quality evidence to answer the question.4 A table provides a summary of the evidence and key points of the critical appraisal. For further information on critical appraisal and the measures of effect (such as number needed to treat, NNT), books by Sackett5 and Moyer6 may help. To pull the information together, a commentary is provided. But to make it all much more accessible, a box provides the clinical bottom lines.
Readers wishing to submit their own questions—with best evidence answers—are encouraged to review those already proposed at www.bestbets.org. If your question still hasn't been answered, feel free to submit your summary according to the Instructions for Authors at www.archdischild.com. Three topics are covered in this issue of the journal:
Does neonatal BCG vaccination protect against tuberculous meningitis?
Does dexamethasone reduce the risk of extubation failure in ventilated children?
Should metformin be prescribed to overweight adolescents in whom dietary/behavioural modifications have not helped?
REFERENCES
1. Moyer VA, Elliott EJ. Preface. In: Moyer VA, Elliott EJ, Davis RL, et al, eds. Evidence based pediatrics and child health, Issue 1. London: BMJ Books, 2000.
2. Richardson WS, Wilson MC, Nishikawa J, et al. The well‐built clinical question: a key to evidence‐based decisions. ACP J Club 1995;123:A12–13.
3. Bergus GR, Randall CS, Sinift SD, et al. Does the structure of clinical questions affect the outcome of curbside consultations with specialty colleagues? Arch Fam Med 2000;9:541–7.
4. http://cebm.jr2.ox.ac.uk/docs/levels.htm (accessed July 2002).
5. Sackett DL, Straus SE, Richardson WS, et al. Evidence‐based medicine. How to practice and teach EBM. San Diego: Harcourt‐Brace, 2000.
6. Moyer VA, Elliott EJ, Davis RL, et al, eds. Evidence based pediatrics and child health, Issue 1. London: BMJ Books, 2000.
How to read your journals
Most people have their journals land, monthly, weekly, or quarterly, on their desk, courtesy of their professional associations. Then they sit, gathering dust and guilt, for a period of time. When the layer of either is too great for comfort (or the desk space is needed for some proper work), the wrapper is removed and the journal scanned. But does how people read reflect their information needs or their entertainment requirements?
It is not uncommon to find people straying from the editorial introduction to the value added sections (like obituaries, Lucina‐like summary pages, and end‐of‐article fillers) rather than face the impenetrable science that sits between them. I think that this is probably unhelpful, and would urge readers to do one more thing before placing the journal in the recycling. Scan the table of contents; if it mentions a systematic review or a randomised trial, then read at least the title and the abstract's conclusions. If you agree, pat yourself warmly on the back for being evidence based and up‐to‐date. If you disagree, ask if it will make any impact on your clinical (or personal) life. If it might, run through the methods and quickly appraise them. Does it supply higher quality evidence than that which you already possess? If it does, it's worth reading. If it doesn't, don't bother too much.
There are innovations that might ease the tedious task of consuming research. The on‐line Précis section of the Archives provides a highly readable version of the contents page to whet one's appetite. Finally, it's worth mentioning that evidence based summary materials (like Archimedes, or Journal Watch) are always worth reading; and if you didn't think that, you wouldn't be here, would you?
PMCID: PMC2082933
24.  Acupuncture and Counselling for Depression in Primary Care: A Randomised Controlled Trial 
PLoS Medicine  2013;10(9):e1001518.
In a randomized controlled trial, Hugh MacPherson and colleagues investigate the effectiveness of acupuncture and counseling compared with usual care alone for the treatment of depression symptoms in primary care settings.
Please see later in the article for the Editors' Summary
Background
Depression is a significant cause of morbidity. Many patients have communicated an interest in non-pharmacological therapies to their general practitioners. Systematic reviews of acupuncture and counselling for depression in primary care have identified limited evidence. The aim of this study was to evaluate acupuncture versus usual care and counselling versus usual care for patients who continue to experience depression in primary care.
Methods and Findings
In a randomised controlled trial, 755 patients with depression (Beck Depression Inventory [BDI-II] score ≥20) were recruited from 27 primary care practices in the North of England. Patients were randomised to one of three arms in a 2∶2∶1 ratio: acupuncture (302), counselling (302), and usual care alone (151). The primary outcome was the difference in mean Patient Health Questionnaire (PHQ-9) scores at 3 months, with secondary analyses over 12 months of follow-up. Analysis was by intention-to-treat.
PHQ-9 data were available for 614 patients at 3 months and 572 patients at 12 months. Patients attended a mean of ten sessions for acupuncture and nine sessions for counselling. Compared to usual care, there was a statistically significant reduction in mean PHQ-9 depression scores at 3 months for acupuncture (−2.46, 95% CI −3.72 to −1.21) and counselling (−1.73, 95% CI −3.00 to −0.45), and over 12 months for acupuncture (−1.55, 95% CI −2.41 to −0.70) and counselling (−1.50, 95% CI −2.43 to −0.58). Differences between acupuncture and counselling were not significant. In terms of limitations, the trial was not designed to separate out specific from non-specific effects. No serious treatment-related adverse events were reported.
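Each headline figure above is a difference in mean PHQ-9 scores with a 95% confidence interval (negative values favouring the intervention); the trial's own estimates came from adjusted intention-to-treat models. An unadjusted sketch of the same kind of comparison, with invented scores and a normal approximation:

    import math

    # Invented 3-month PHQ-9 scores (lower = less depressed).
    acupuncture = [9.0, 12.0, 7.5, 11.0, 8.0, 10.5]
    usual_care  = [12.5, 14.0, 11.0, 13.5, 12.0, 15.0]

    def mean(xs):
        return sum(xs) / len(xs)

    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    diff = mean(acupuncture) - mean(usual_care)  # negative = lower scores with acupuncture
    se = math.sqrt(var(acupuncture) / len(acupuncture) + var(usual_care) / len(usual_care))
    print(diff, (diff - 1.96 * se, diff + 1.96 * se))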
Conclusions
In this randomised controlled trial of acupuncture and counselling for patients presenting with depression, after having consulted their general practitioner in primary care, both interventions were associated with significantly reduced depression at 3 months when compared to usual care alone.
Trial Registration
Controlled-Trials.com ISRCTN63787732
Editors' Summary
Background
Depression (overwhelming sadness and hopelessness) is responsible for a substantial proportion of the global disease burden and is a major cause of suicide. It affects more than 350 million people worldwide and about one in six people will have an episode of depression during their lifetime. Depression is different from everyday mood fluctuations. For people who are clinically depressed, feelings of severe sadness, anxiety, hopelessness, and worthlessness can last for months and years. Affected individuals lose interest in activities they used to enjoy and sometimes have physical symptoms such as disturbed sleep. Clinicians can diagnose depression and determine its severity by asking patients to complete a questionnaire (for example, the Beck Depression Inventory [BDI-II] or the Patient Health Questionnaire 9 [PHQ-9]) about their feelings and symptoms. The answer to each question is given a score and the total score from the questionnaire (“depression rating scale”) indicates the severity of depression. Antidepressant drugs are usually the front-line treatment for depression in primary care.
Why Was This Study Done?
Unfortunately, antidepressants don't work for more than half of patients. Moreover, many patients would like to be offered non-pharmacological treatment options for depression such as acupuncture (a therapy originating in China in which fine needles are inserted into the skin at specific points of the body) and counseling (a “talking therapy” that provides patients with a safe, non-judgmental place to express feelings and emotions and that helps them recognize their capacity for growth and fulfillment). However, it is unclear whether either of these treatments is effective in depression. In this pragmatic randomized controlled trial, the researchers investigate the clinical effectiveness of acupuncture or counseling, compared to usual care, for patients with depression in primary care in northern England. A randomized controlled trial compares outcomes in groups of patients who are assigned to different interventions through the play of chance. A pragmatic trial asks whether the intervention works under real-life conditions: patient selection reflects routine practice and some aspects of the intervention are left to the discretion of the clinician. By contrast, an explanatory trial asks whether an intervention works under ideal conditions and involves a strict protocol for patient selection and treatment.
What Did the Researchers Do and Find?
The researchers recruited 755 patients who had consulted their primary health care provider about depression within the past 5 years and who had a score of at least 20 on the BDI-II (a score defined as moderate-to-severe depression on this rating scale) at the start of the study. Patients were randomized to receive up to 12 weekly sessions of acupuncture plus usual care (302 patients), up to 12 weekly sessions of counseling plus usual care (302 patients), or usual care alone (151 patients). Both the acupuncture and counseling protocols allowed for some individualization of treatment. Usual care, including antidepressants, was available according to need and monitored in all three groups. Compared to usual care alone, there was a significant reduction (a reduction unlikely to have occurred by chance) in the average PHQ-9 scores at both 3 and 6 months for both the acupuncture and counseling interventions. The difference between the mean PHQ-9 scores for acupuncture and counseling was not significant. At 9 and 12 months, because of improvements in the PHQ-9 scores in the usual care group, acupuncture and counseling were no longer significantly better than usual care.
What Do These Findings Mean?
These findings suggest that, compared to usual care alone, both acupuncture and counseling, when provided alongside usual care, offered significant benefits at 3 months to primary care patients with recurring depression. Because this trial was a pragmatic trial, these findings cannot indicate which aspects of acupuncture and counseling are likely to be most or least beneficial. Nevertheless, they do provide an estimate of the overall effects of these complex interventions, an estimate that is of most interest to patients, practitioners, and health care providers. Moreover, because this trial only considered the effect of these interventions on patients with moderate-to-severe depression as classified by the BDI-II, it provides no information about the effectiveness of acupuncture or counseling compared to usual care for patients with mild depression. Importantly, however, these findings suggest that further research into optimal regimens for the treatment of depression with acupuncture and counseling is merited.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001518.
The US National Institute of Mental Health provides information on all aspects of depression (in English and Spanish)
The UK National Health Service Choices website provides detailed information about depression, including personal stories about depression, and information on counseling and acupuncture
The UK charity Mind provides information on depression, on talking treatments, and on complementary and alternative therapies including acupuncture; Mind also includes personal stories about depression on its website
More personal stories about depression are available from Healthtalkonline
MedlinePlus provides links to other resources about depression and about acupuncture (in English and Spanish)
doi:10.1371/journal.pmed.1001518
PMCID: PMC3782410  PMID: 24086114
25.  Facilitating the Recruitment of Minority Ethnic People into Research: Qualitative Case Study of South Asians and Asthma 
PLoS Medicine  2009;6(10):e1000148.
Aziz Sheikh and colleagues report on a qualitative study in the US and the UK to investigate ways to bolster recruitment of South Asians into asthma studies, including making inclusion of diverse populations mandatory.
Background
There is international interest in enhancing recruitment of minority ethnic people into research, particularly in disease areas with substantial ethnic inequalities. A recent systematic review and meta-analysis found that UK South Asians are at a threefold increased risk of hospitalisation for asthma compared with white Europeans. US asthma trials are far more likely to report enrolling minority ethnic people than trials conducted in Europe. We investigated approaches to bolster recruitment of South Asians into UK asthma studies through qualitative research with US and UK researchers, and UK community leaders.
Methods and Findings
Interviews were conducted with 36 researchers (19 UK and 17 US) from diverse disciplinary backgrounds and ten community leaders from a range of ethnic, religious, and linguistic backgrounds, followed by self-completion questionnaires. Interviews were digitally recorded, translated where necessary, and transcribed. The Framework approach was used for analysis. Barriers to ethnic minority participation revolved around five key themes: (i) researchers' own attitudes, which ranged from empathy to antipathy to (in a minority of cases) misgivings about the scientific importance of the question under study; (ii) stereotypes and prejudices about the difficulties in engaging with minority ethnic populations; (iii) the logistical challenges posed by language, cultural differences, and research costs set against the need to demonstrate value for money; (iv) the unique contexts of the two countries; and (v) poorly developed understanding amongst some minority ethnic leaders of what research entails and aims to achieve. US researchers were considerably more positive than their UK counterparts about the importance and logistics of including ethnic minorities, which appeared to a large extent to reflect the longer-term impact of the National Institutes of Health's requirement to include minority ethnic people.
Conclusions
Most researchers and community leaders view the broadening of participation in research as important and are reasonably optimistic about the feasibility of recruiting South Asians into asthma studies provided that the barriers can be overcome. Suggested strategies for improving recruitment in the UK included a considerably improved support structure to provide academics with essential contextual information (e.g., languages of particular importance and contact with local gatekeepers), and the need to ensure that care is taken to engage with minority ethnic communities in ways that are both culturally appropriate and sustainable; ensuring reciprocal benefits was seen as one key way of avoiding gatekeeper fatigue. Although voluntary measures to encourage researchers may have some impact, greater impact might be achieved if UK funding bodies followed the lead of the US National Institutes of Health in requiring recruitment of ethnic minorities. Such a move is, however, likely in the short to medium term to prove unpopular with many UK academics because of the added “hassle” factor of engaging with more diverse populations than many have hitherto been accustomed to.
Editors' Summary
Background
In an ideal world, everyone would have the same access to health care and the same health outcomes (responses to health interventions). However, health inequalities—gaps in health care and in health between different parts of the population—exist in many countries. In particular, people belonging to ethnic minorities in the UK, the US, and elsewhere have poorer health outcomes for several conditions than people belonging to the ethnic majority (ethnicity is defined by social characteristics such as cultural tradition or national origin). For example, in the UK, people whose ancestors came from the Indian subcontinent (also known as South Asians and comprising mainly people of Indian, Pakistani, and Bangladeshi origin) are three times as likely to be admitted to hospital for asthma as white Europeans. The reasons underpinning ethnic health inequalities are complex. Some inequalities may reflect intrinsic differences between groups of people—some ethnic minorities may inherit genes that alter their susceptibility to a specific disease. Other ethnic health inequalities may arise because of differences in socioeconomic status or because different cultural traditions affect the uptake of health care services.
Why Was This Study Done?
Minority ethnic groups are often under-represented in health research, which could limit the generalizability of research findings. That is, an asthma treatment that works well in a trial where all the participants are white Europeans might not be suitable for South Asians. Clinicians might nevertheless use the treatment in all their patients irrespective of their ethnicity and thus inadvertently increase ethnic health inequality. So, how can ethnic minorities be encouraged to enroll in research studies? In this qualitative study, the investigators try to answer this question by talking to US and UK asthma researchers and UK community leaders about how they feel about enrolling ethnic minorities in research studies. The investigators chose to compare the views of US and UK asthma researchers because minority ethnic people are more likely to enroll in US asthma studies than in UK studies, possibly because the US National Institutes of Health (NIH) Revitalization Act of 1993 mandates that all NIH-funded clinical research must include people from ethnic minority groups; there is no similar mandatory policy in the UK.
What Did the Researchers Do and Find?
The investigators interviewed 16 UK and 17 US asthma researchers and three UK social researchers with experience of working with ethnic minorities. They also interviewed ten community leaders from diverse ethnic, religious, and linguistic backgrounds. They then analyzed the interviews using the "Framework" approach, an analytical method in which qualitative data are classified and organized according to key themes and then interpreted. By comparing the data from the UK and US researchers, the investigators identified several barriers to ethnic minority participation in health research, including the attitudes of researchers towards the scientific importance of recruiting ethnic minority people into health research studies, prejudices about the difficulties of including ethnic minorities in health research, and the logistical challenges posed by language and cultural differences. In general, the US researchers were more positive than their UK counterparts about the importance and logistics of including ethnic minorities in health research. Finally, the investigators found that some community leaders had a poor understanding of what research entails and of what it aims to achieve.
What Do These Findings Mean?
These findings reveal a large gap between US and UK researchers in terms of policy, attitudes, practices, and experiences in relation to including ethnic minorities in asthma research. However, they also suggest that most UK researchers and community leaders believe that it is both important and feasible to increase the participation of South Asians in asthma studies. Although some of these findings may have been affected by study participants feeling obliged to give "politically correct" answers, they are likely to be generalizable to other diseases and to other parts of Europe. Given their findings, the researchers warn that a voluntary code of practice that encourages the recruitment of ethnic minority people into health research studies is unlikely to be successful. Instead, they suggest, the best way to increase the representation of ethnic minority people in health research in the UK might be to follow the US lead and introduce a policy that requires their inclusion in such research.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000148.
Families USA, a US nonprofit organization that campaigns for high-quality, affordable health care for all Americans, has information about many aspects of minority health in the US, including an interactive game about minority health issues
The US Agency for Healthcare Research and Quality has a section on minority health
The UK Department of Health provides information on health inequalities and a recent report on the experiences of patients in Black and minority ethnic groups
The UK Parliamentary Office of Science and Technology also has a short article on ethnicity and health
Information on the NIH Revitalization Act 1993 is available
NHS Evidence's Ethnicity and Health collection has a variety of policy, clinical, and research resources on ethnicity and health
doi:10.1371/journal.pmed.1000148
PMCID: PMC2752116  PMID: 19823568
