Results 1-25 (198)
 

1.  Thanks to all those who reviewed for Trials in 2014 
Trials  2015;16:55.
Contributing reviewers
A peer-reviewed journal would not survive without the generous time and insightful comments of the reviewers, whose efforts often go unrecognized. Although final decisions are always editorial, they are greatly facilitated by the deeper technical knowledge, scientific insights, understanding of social consequences, and passion that reviewers bring to our deliberations. For these reasons, the Editors-in-Chief and staff of the journal warmly thank the 610 reviewers whose comments helped to shape Trials, for their invaluable assistance with review of manuscripts for the journal in Volume 15 (2014).
doi:10.1186/s13063-015-0571-y
PMCID: PMC4336513
3.  The National Institutes of Health and guidance for reporting preclinical research 
BMC Medicine  2015;13:34.
The quality of reporting clinical and preclinical research is not optimal. Reporting guidelines can help make reports of research more complete and transparent, thus increasing their value and making them more useful to all readers. Getting reporting guidelines into practice is complex and expensive, and involves several stakeholders, including prospective authors, peer reviewers, journal editors, guideline developers, and implementation scientists. Working together will help ensure their maximum uptake and penetration. We are all responsible for helping to ensure that all research is reported so completely that it is of value to everybody.
Please see related article: http://dx.doi.org/10.1186/s12916-015-0266-y
doi:10.1186/s12916-015-0284-9
PMCID: PMC4332445
Implementation; Preclinical research; Quality of reporting; Reporting guidelines
4.  The science of clinical practice: disease diagnosis or patient prognosis? Evidence about “what is likely to happen” should shape clinical practice 
BMC Medicine  2015;13:20.
Background
Diagnosis is the traditional basis for decision-making in clinical practice. Evidence is often lacking about future benefits and harms of these decisions for patients diagnosed with and without disease. We propose that a model of clinical practice focused on patient prognosis and predicting the likelihood of future outcomes may be more useful.
Discussion
Disease diagnosis can provide crucial information for clinical decisions that influence outcome in serious acute illness. However, the central role of diagnosis in clinical practice is challenged by evidence that it does not always benefit patients and that factors other than disease are important in determining patient outcome. The concept of disease as a dichotomous ‘yes’ or ‘no’ is challenged by the frequent use of diagnostic indicators with continuous distributions, such as blood sugar, which are better understood as contributing information about the probability of a patient’s future outcome. Moreover, many illnesses, such as chronic fatigue, cannot usefully be labelled from a disease-diagnosis perspective. In such cases, a prognostic model provides an alternative framework for clinical practice that extends beyond disease and diagnosis and incorporates a wide range of information to predict future patient outcomes and to guide decisions to improve them. Such information embraces non-disease factors and genetic and other biomarkers which influence outcome.
Summary
Patient prognosis can provide the framework for modern clinical practice to integrate information from the expanding biological, social, and clinical database for more effective and efficient care.
doi:10.1186/s12916-014-0265-4
PMCID: PMC4311412  PMID: 25637245
Clinical decision-making; Contested diagnoses; Diagnosis; Evidence-based medicine; Information; Outcomes of care; Overdiagnosis; Prognosis; Stratified medicine
5.  The natural history of conducting and reporting clinical trials: interviews with trialists 
Trials  2015;16:16.
Background
To investigate the nature of the research process as a whole, factors that might influence the way in which research is carried out, and how researchers ultimately report their findings.
Methods
Semi-structured qualitative telephone interviews with authors of trials, identified from two sources: trials published since 2002 included in Cochrane systematic reviews selected for the ORBIT project; and trial reports randomly sampled from 14,758 indexed on PubMed over the 12-month period from August 2007 to July 2008.
Results
A total of 268 trials were identified for inclusion: 183 published since 2002 and included in the Cochrane systematic reviews selected for the ORBIT project, and 85 randomly selected published trials indexed on PubMed. The response rate from researchers was 21% (38/183) in the former group and 25% (21/85) in the latter group. Overall, 59 trialists were interviewed from the two sources. A number of major but related themes emerged regarding the conduct and reporting of trials: establishment of the research question; identification of outcome variables; use of and adherence to the study protocol; conduct of the research; and reporting and publishing of findings. Our results reveal that, although a substantial proportion of trialists identify outcome variables based on their clinical experience and on knowing experts in the field, there can be insufficient reference to previous research in the planning of a new trial. We also identified problems with trial recruitment: failure to reach the target sample size, over-estimation of recruitment potential, and recruiting clinicians not being in equipoise. We found wide variation in the completeness of protocols in terms of detailing the study rationale, the proposed methods, trial organisation, and ethical considerations.
Conclusion
Our results confirm that the conduct and reporting of some trials can be inadequate. Interviews with researchers identified aspects of clinical research that can be especially challenging: establishing appropriate and relevant outcome variables to measure, use of and adherence to the study protocol, recruiting of study participants and reporting and publishing the study findings. Our trialists considered the prestige and impact factors of academic journals to be the most important criteria for selecting those to which they would submit manuscripts.
Electronic supplementary material
The online version of this article (doi:10.1186/s13063-014-0536-6) contains supplementary material, which is available to authorized users.
doi:10.1186/s13063-014-0536-6
PMCID: PMC4322554  PMID: 25619208
Qualitative; Interviews; Trialists; Research reporting; Recruitment; Trial protocols; Equipoise
6.  Specifying the target difference in the primary outcome for a randomised controlled trial: guidance for researchers 
Trials  2015;16:12.
Background
Central to the design of a randomised controlled trial is the calculation of the number of participants needed. This is typically achieved by specifying a target difference and calculating the corresponding sample size, which provides reassurance that the trial will have the required statistical power (at the planned statistical significance level) to identify whether a difference of a particular magnitude exists. Beyond pure statistical or scientific concerns, it is ethically imperative that an appropriate number of participants should be recruited. Despite the critical role of the target difference for the primary outcome in the design of randomised controlled trials, its determination has received surprisingly little attention. This article provides guidance on the specification of the target difference for the primary outcome in a sample size calculation for a two parallel group randomised controlled trial with a superiority question.
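As a minimal illustration of the calculation described above (a sketch only: the target difference, outcome standard deviation, significance level, and power below are hypothetical, not values recommended by the DELTA guidance), the usual normal-approximation formula for a two parallel group superiority trial with a continuous primary outcome can be written in Python, using scipy for the normal quantiles:

from math import ceil
from scipy.stats import norm

def n_per_arm(delta, sigma, alpha=0.05, power=0.90):
    """Participants per arm needed to detect target difference delta, given outcome SD sigma."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance level
    z_beta = norm.ppf(power)            # required power
    return ceil(2 * sigma**2 * (z_alpha + z_beta)**2 / delta**2)

print(n_per_arm(delta=5.0, sigma=10.0))   # 85 per arm for these illustrative values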
Methods
This work was part of the DELTA (Difference ELicitation in TriAls) project. Draft guidance was developed by the project steering and advisory groups utilising the results of the systematic review and surveys. Findings were circulated and presented to members of the combined group at a face-to-face meeting, along with a proposed outline of the guidance document structure, containing recommendations and reporting items for a trial protocol and report. The guidance was subsequently drafted and circulated for further comment before finalisation.
Results
Guidance on specification of a target difference in the primary outcome for a two group parallel randomised controlled trial was produced. Additionally, a list of reporting items for protocols and trial reports was generated.
Conclusions
Specification of the target difference for the primary outcome is a key component of a randomized controlled trial sample size calculation. There is a need for better justification of the target difference and reporting of its specification.
Electronic supplementary material
The online version of this article (doi:10.1186/s13063-014-0526-8) contains supplementary material, which is available to authorized users.
doi:10.1186/s13063-014-0526-8
PMCID: PMC4302137
Target difference; clinically important difference; sample size; randomised controlled trial; guidance
7.  Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD Statement 
BMC Medicine  2015;13:1.
Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org).
Editors’ note: In order to encourage dissemination of the TRIPOD Statement, this article is freely accessible on the Annals of Internal Medicine Web site (www.annals.org) and will also be published in BJOG, British Journal of Cancer, British Journal of Surgery, BMC Medicine, British Medical Journal, Circulation, Diabetic Medicine, European Journal of Clinical Investigation, European Urology, and Journal of Clinical Epidemiology. The authors jointly hold the copyright of this article. An accompanying Explanation and Elaboration article is freely available only on www.annals.org; Annals of Internal Medicine holds copyright for that article.
doi:10.1186/s12916-014-0241-z
PMCID: PMC4284921  PMID: 25563062
Prediction models; Prognostic; Diagnostic; Model development; Validation; Transparency; Reporting
8.  Multi-Reader Multi-Case Studies Using the Area under the Receiver Operator Characteristic Curve as a Measure of Diagnostic Accuracy: Systematic Review with a Focus on Quality of Data Reporting 
PLoS ONE  2014;9(12):e116018.
Introduction
We examined the design, analysis and reporting in multi-reader multi-case (MRMC) research studies using the area under the receiver operating characteristic curve (ROC AUC) as a measure of diagnostic performance.
Methods
We performed a systematic literature review from 2005 to 2013 inclusive to identify a minimum of 50 studies. Articles of diagnostic test accuracy in humans were identified via their citation of key methodological articles dealing with MRMC ROC AUC. Two researchers in consensus then extracted information from primary articles relating to study characteristics and design, methods for reporting study outcomes, model fitting, model assumptions, presentation of results, and interpretation of findings. Results were summarized and presented with a descriptive analysis.
Results
Sixty-four full papers were retrieved from 475 identified citations, and ultimately 49 articles describing 51 studies were reviewed and extracted. Radiological imaging was the index test in all studies. Most studies focused on lesion detection rather than characterization and used fewer than 10 readers. Only 6 (12%) studies trained readers in advance on the confidence scale used to build the ROC curve. Overall, description of the confidence scores, the ROC curve, and its analysis was often incomplete. For example, 21 (41%) studies presented no ROC curve and only 3 (6%) described the distribution of confidence scores. Of 30 studies presenting curves, only 4 (13%) presented the data points underlying the curve, thereby allowing assessment of extrapolation. The mean change in AUC was 0.05 (−0.05 to 0.28). Non-significant changes in AUC were attributed to underpowering rather than to the diagnostic test failing to improve diagnostic accuracy.
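As a toy illustration of the outcome measure involved (the confidence ratings and reference-standard labels below are hypothetical, not data from any reviewed study), the ROC AUC can be obtained directly from one reader's confidence scores via its Mann-Whitney interpretation, the probability that a randomly chosen diseased case receives a higher score than a non-diseased one:

from itertools import product

def roc_auc(scores, labels):
    """AUC as the proportion of (diseased, non-diseased) pairs ranked correctly; ties count 0.5."""
    pos = [s for s, d in zip(scores, labels) if d == 1]
    neg = [s for s, d in zip(scores, labels) if d == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

confidence = [5, 4, 4, 3, 2, 5, 1, 2, 3, 1]   # hypothetical 1-5 confidence ratings from one reader
disease    = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]   # reference standard (1 = lesion present)
print(roc_auc(confidence, disease))            # 0.9 for this toy reader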
Conclusions
Data reporting in MRMC studies using ROC AUC as an outcome measure is frequently incomplete, hampering understanding of methods and the reliability of results and study conclusions. Authors using this analysis should be encouraged to provide a full description of their methods and results.
doi:10.1371/journal.pone.0116018
PMCID: PMC4277459  PMID: 25541977
9.  Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modelling Studies: The CHARMS Checklist 
PLoS Medicine  2014;11(10):e1001744.
Carl Moons and colleagues provide a checklist and background explanation for critically appraising and extracting data from systematic reviews of prognostic and diagnostic prediction modelling studies.
Please see later in the article for the Editors' Summary
doi:10.1371/journal.pmed.1001744
PMCID: PMC4196729  PMID: 25314315
10.  Assessing new methods of clinical measurement 
doi:10.3399/bjgp09X420905
PMCID: PMC2688042  PMID: 19520023
11.  Linked publications from a single trial: a thread of evidence 
Trials  2014;15(1):369.
doi:10.1186/1745-6215-15-369
PMCID: PMC4183771  PMID: 25248292
12.  Identifying patients with undetected pancreatic cancer in primary care: an independent and external validation of QCancer® (Pancreas) 
The British Journal of General Practice  2013;63(614):e636-e642.
Background
Although pancreatic cancer is relatively rare, its prognosis is very poor and it is a major cause of cancer mortality, ranked fourth in the world; it has one of the worst survival rates of any cancer.
Aim
To evaluate the performance of QCancer® (Pancreas) for predicting the absolute risk of pancreatic cancer in an independent UK cohort of patients, from general practice records.
Design and setting
Prospective cohort study to evaluate the performance of the QCancer (Pancreas) prediction models in 364 UK practices contributing to The Health Improvement Network (THIN) database.
Method
Records were extracted from the THIN database for 2.15 million patients registered with a general practice surgery between 1 January 2000 and 30 June 2008, aged 30–84 years (3.74 million person-years), with 618 pancreatic cancer cases. Pancreatic cancer was defined as incident diagnosis of pancreatic cancer during the 2 years after study entry.
Results
This independent, external validation of QCancer (Pancreas) demonstrated good performance in a large cohort of general practice patients. QCancer (Pancreas) had very good discrimination properties, with areas under the receiver operating characteristic curve of 0.89 and 0.92 for females and males respectively. QCancer (Pancreas) explained 60% and 67% of the variation in females and males respectively. QCancer (Pancreas) over-predicted risk in both females and males, notably in older patients.
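A minimal sketch of the kind of calibration check that underlies an over-prediction finding like the one above: compare observed cases with the sum of model-predicted risks (expected cases) within subgroups. The age-band counts below are hypothetical and are not taken from the THIN data:

def o_over_e(observed_events, expected_events):
    # O/E below 1 means the model predicts more events than actually occur (over-prediction)
    return observed_events / expected_events

# hypothetical age-band summaries: (observed cancer cases, sum of predicted 2-year risks)
for band, (obs, exp) in {"30-59 years": (55, 60.1), "60-84 years": (120, 173.5)}.items():
    print(band, round(o_over_e(obs, exp), 2))   # 0.92 in younger patients, 0.69 in older patients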
Conclusion
QCancer (Pancreas) is potentially useful for identifying undetected cases of pancreatic cancer in primary care in the UK.
doi:10.3399/bjgp13X671623
PMCID: PMC3750803  PMID: 23998844
pancreatic cancer; primary care; risk prediction; validation
13.  The CARE Guidelines: Consensus-based Clinical Case Reporting Guideline Development 
Background:
A case report is a narrative that describes, for medical, scientific, or educational purposes, a medical problem experienced by one or more patients. Case reports written without guidance from reporting standards are insufficiently rigorous to guide clinical practice or to inform clinical study design.
Primary Objective:
Develop, disseminate, and implement systematic reporting guidelines for case reports.
Methods:
We used a three-phase consensus process consisting of (1) premeeting literature review and interviews to generate items for the reporting guidelines, (2) a face-to-face consensus meeting to draft the reporting guidelines, and (3) postmeeting feedback, review, and pilot testing, followed by finalization of the case report guidelines.
Results:
This consensus process involved 27 participants and resulted in a 13-item checklist—a reporting guideline for case reports. The primary items of the checklist are title, key words, abstract, introduction, patient information, clinical findings, timeline, diagnostic assessment, therapeutic interventions, follow-up and outcomes, discussion, patient perspective, and informed consent.
Conclusions:
We believe the implementation of the CARE (CAse REport) guidelines by medical journals will improve the completeness and transparency of published case reports and that the systematic aggregation of information from case reports will inform clinical study design, provide early signals of effectiveness and harms, and improve healthcare delivery.
doi:10.7453/gahmj.2013.008
PMCID: PMC3833570  PMID: 24416692
Case report; case study; EQUATOR Network; patient reports; meaningful use; health research reporting guidelines
15.  Human papillomavirus testing by self-sampling: assessment of accuracy in an unsupervised clinical setting 
Journal of Medical Screening  2007;14(1):34-42.
Objectives: To compare the performance and acceptability of unsupervised self-sampling with clinician sampling for high-risk human papillomavirus (HPV) types for the first time in a UK screening setting.
Setting: Nine hundred and twenty women, from two demographically different centres, attending for routine cervical smear testing.
Methods: Women performed an unsupervised HPV self-test. Immediately afterwards, a doctor or nurse took an HPV test and cervical smear. Women with an abnormality on any test were offered colposcopy.
Results: Twenty-one high-grade and 39 low-grade cervical intraepithelial neoplasias (CINs) were detected. For high-grade disease (CIN2+), the sensitivity of the self HPV test was 81% (95% confidence interval [CI] 60–92), of the clinician HPV test 100% (95% CI 85–100), and of cytology 81% (95% CI 60–92). The sensitivity of both HPV tests to detect high- and low-grade cervical neoplasia was much higher than that of cytology (self-test 77% [95% CI 65–86], clinician test 80% [95% CI 68–88], cytology 48% [95% CI 36–61]). For both high-grade alone, and high and low grades together, the specificity was significantly higher for cytology (greater than 95%) than for either HPV test (between 82% and 87%). The self-test proved highly acceptable to women, who reported that the instructions were easy to understand irrespective of educational level.
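As a sketch of how such proportions and intervals are computed (assuming the 81% sensitivity corresponds to 17 of the 21 CIN2+ cases testing positive on the self HPV test; these counts are inferred from the percentages, not taken from the paper's tables), a Wilson score interval reproduces the reported figures:

from math import sqrt

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a proportion of k events in n trials."""
    p = k / n
    centre = p + z * z / (2 * n)
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (centre - half) / denom, (centre + half) / denom

lo, hi = wilson_ci(17, 21)
print(f"sensitivity {17 / 21:.0%} (95% CI {lo:.0%}-{hi:.0%})")   # 81% (60%-92%)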
Conclusions: Our results suggest that it would be reasonable to offer HPV self-testing to women who are reluctant to attend for cervical smears. This approach should now be directly evaluated among women who have been non-attenders in a cervical screening programme.
doi:10.1258/096914107780154486
PMCID: PMC4109399  PMID: 17362570
16.  The COMET Initiative database: progress and activities from 2011 to 2013 
Trials  2014;15:279.
The Core Outcome Measures in Effectiveness Trials (COMET) Initiative database is an international repository of studies relevant to the development of core outcome sets. By the end of 2013, it included a unique collection of 306 studies. The website is increasingly being used, with more than 12,000 visits in 2013 (a 55% increase over 2012), 8,369 unique visitors (a 53% increase) and 6,844 new visitors (a 48% increase). There has been a rise in visits from outside the United Kingdom, with 2,405 such visits in 2013 (30% of all visits). By December 2013, a total of 4,205 searches had been completed, with 2,139 in 2013 alone.
doi:10.1186/1745-6215-15-279
PMCID: PMC4107994  PMID: 25012001
Core outcome set; Database; Resources
17.  Improving the Transparency of Prognosis Research: The Role of Reporting, Data Sharing, Registration, and Protocols 
PLoS Medicine  2014;11(7):e1001671.
George Peat and colleagues review and discuss current approaches to transparency and published debates and concerns about efforts to standardize prognosis research practice, and make five recommendations.
Please see later in the article for the Editors' Summary
doi:10.1371/journal.pmed.1001671
PMCID: PMC4086727  PMID: 25003600
18.  Evidence for the Selective Reporting of Analyses and Discrepancies in Clinical Trials: A Systematic Review of Cohort Studies of Clinical Trials 
PLoS Medicine  2014;11(6):e1001666.
In a systematic review of cohort studies, Kerry Dwan and colleagues examine the evidence for selective reporting and discrepancies in analyses between journal publications and other documents for clinical trials.
Please see later in the article for the Editors' Summary
Background
Most publications about selective reporting in clinical trials have focussed on outcomes. However, selective reporting of analyses for a given outcome may also affect the validity of findings. If analyses are selected on the basis of the results, reporting bias may occur. The aims of this study were to review and summarise the evidence from empirical cohort studies that assessed discrepant or selective reporting of analyses in randomised controlled trials (RCTs).
Methods and Findings
A systematic review was conducted and included cohort studies that assessed any aspect of the reporting of analyses of RCTs by comparing different trial documents, e.g., protocol compared to trial report, or different sections within a trial publication. The Cochrane Methodology Register, Medline (Ovid), PsycInfo (Ovid), and PubMed were searched on 5 February 2014. Two authors independently selected studies, performed data extraction, and assessed the methodological quality of the eligible studies. Twenty-two studies (containing 3,140 RCTs) published between 2000 and 2013 were included, all of which reported on discrepancies between information given in different sources. Discrepancies were found in statistical analyses (eight studies), composite outcomes (one study), the handling of missing data (three studies), unadjusted versus adjusted analyses (three studies), handling of continuous data (three studies), and subgroup analyses (12 studies). Discrepancy rates varied, ranging from 7% (3/42) to 88% (7/8) in statistical analyses, 46% (36/79) to 82% (23/28) in adjusted versus unadjusted analyses, and 61% (11/18) to 100% (25/25) in subgroup analyses. This review is limited in that none of the included studies investigated the evidence for bias resulting from selective reporting of analyses. It was not possible to combine studies to provide overall summary estimates, and so the results of studies are discussed narratively.
Conclusions
Discrepancies in analyses between publications and other study documentation were common, but reasons for these discrepancies were not discussed in the trial reports. To ensure transparency, protocols and statistical analysis plans need to be published, and investigators should adhere to these or explain discrepancies.
Editors' Summary
Background
In the past, clinicians relied on their own experience when choosing the best treatment for their patients. Nowadays, they turn to evidence-based medicine—the systematic review and appraisal of trials, studies that investigate the benefits and harms of medical treatments in patients. However, evidence-based medicine can guide clinicians only if all the results from clinical trials are published in an unbiased and timely manner. Unfortunately, the results of trials in which a new drug performs better than existing drugs are more likely to be published than those in which the new drug performs badly or has unwanted side effects (publication bias). Moreover, trial outcomes that support the use of a new treatment are more likely to be published than those that do not support its use (outcome reporting bias). Recent initiatives—such as making registration of clinical trials in a trial registry (for example, ClinicalTrials.gov) a prerequisite for publication in medical journals—aim to prevent these biases, which pose a threat to informed medical decision-making.
Why Was This Study Done?
Selective reporting of analyses of outcomes may also affect the validity of clinical trial findings. Sometimes, for example, a trial publication will include a per protocol analysis (which considers only the outcomes of patients who received their assigned treatment) rather than a pre-planned intention-to-treat analysis (which considers the outcomes of all the patients regardless of whether they received their assigned treatment). If the decision to publish the per protocol analysis is based on the results of this analysis being more favorable than those of the intention-to-treat analysis (which more closely resembles “real” life), then “analysis reporting bias” has occurred. In this systematic review, the researchers investigate the selective reporting of analyses and discrepancies in randomized controlled trials (RCTs) by reviewing published studies that assessed selective reporting of analyses in groups (cohorts) of RCTs and discrepancies in analyses of RCTs between different sources (for example, between the protocol in a trial registry and the journal publication) or different sections of a source. A systematic review uses predefined criteria to identify all the research on a given topic.
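As a toy numeric contrast of the two analyses described above (the records below are invented, purely for illustration), the per-protocol estimate can look favourable while the intention-to-treat estimate shows no difference, which is exactly why reporting only the former can bias conclusions:

# each record: (assigned arm, adhered to assigned treatment, favourable outcome)
records = [
    ("new", True, 1), ("new", True, 1), ("new", False, 0), ("new", True, 0),
    ("old", True, 1), ("old", True, 0), ("old", False, 1), ("old", True, 0),
]

def success_rate(rows, arm):
    rows = [r for r in rows if r[0] == arm]
    return sum(r[2] for r in rows) / len(rows)

itt = success_rate(records, "new") - success_rate(records, "old")              # analyse as randomised
adherent = [r for r in records if r[1]]                                        # drop non-adherent participants
per_protocol = success_rate(adherent, "new") - success_rate(adherent, "old")
print(itt, per_protocol)   # 0.0 versus about 0.33 in this invented example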
What Did the Researchers Do and Find?
The researchers identified 22 cohort studies (containing 3,140 RCTs) that were eligible for inclusion in their systematic review. All of these studies reported on discrepancies between the information provided by the RCTs in different places, but none investigated the evidence for analysis reporting bias. Several of the cohort studies reported, for example, that there were discrepancies in the statistical analyses included in the different documents associated with the RCTs included in their analysis. Other types of discrepancies reported by the cohort studies included discrepancies in the reporting of composite outcomes (an outcome in which multiple end points are combined) and in the reporting of subgroup analyses (investigations of outcomes in subgroups of patients that should be predefined in the trial protocol to avoid bias). Discrepancy rates varied among the RCTs according to the types of analyses and cohort studies considered. Thus, whereas in one cohort study discrepancies were present in the statistical test used for the analysis of the primary outcome in only 7% of the included studies, they were present in the subgroup analyses of all the included studies.
What Do These Findings Mean?
These findings indicate that discrepancies in analyses between publications and other study documents such as protocols in trial registries are common. The reasons for these discrepancies in analyses were not discussed in trial reports but may be the result of reporting bias, errors, or legitimate departures from a pre-specified protocol. For example, a statistical analysis that is not specified in the trial protocol may sometimes appear in a publication because the journal requested its inclusion as a condition of publication. The researchers suggest that it may be impossible for systematic reviewers to distinguish between these possibilities simply by looking at the source documentation. Instead, they suggest, it may be necessary for reviewers to contact the trial authors. However, to make selective reporting of analyses more easily detectable, they suggest that protocols and analysis plans should be published and that investigators should be required to stick to these plans or explain any discrepancies when they publish their trial results. Together with other initiatives, this approach should help improve the quality of evidence-based medicine and, as a result, the treatment of patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001666.
Wikipedia has pages on evidence-based medicine, on systematic reviews, and on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials
The Cochrane Collaboration is a global independent network of health practitioners, researchers, patient advocates, and others that aims to promote evidence-informed health decision-making by producing high-quality, relevant, accessible systematic reviews and other synthesized research evidence; the Cochrane Handbook for Systematic Reviews of Interventions describes the preparation of systematic reviews in detail
PLOS Medicine recently launched a Reporting Guidelines Collection, an open-access collection of reporting guidelines, commentary, and related research on guidelines from across PLOS journals that aims to help advance the efficiency, effectiveness, and equitability of the dissemination of biomedical information
doi:10.1371/journal.pmed.1001666
PMCID: PMC4068996  PMID: 24959719
19.  Sharing Individual Participant Data from Clinical Trials: An Opinion Survey Regarding the Establishment of a Central Repository 
PLoS ONE  2014;9(5):e97886.
Background
Calls have been made for increased access to individual participant data (IPD) from clinical trials, to ensure that complete evidence is available. However, despite the obvious benefits, progress towards this is frustratingly slow. In the meantime, many systematic reviews have already collected IPD from clinical trials. We propose that a central repository for these IPD should be established to ensure that these datasets are safeguarded and made available for use by others, building on the strengths and advantages of the collaborative groups that have been brought together in developing the datasets.
Objective
Evaluate the level of support, and identify major issues, for establishing a central repository of IPD.
Design
On-line survey with email reminders.
Participants
71 reviewers affiliated with the Cochrane Collaboration's IPD Meta-analysis Methods Group were invited to participate.
Results
30 (42%) invitees responded: 28 (93%) had been involved in an IPD review and 24 (80%) had been involved in a randomised trial. 25 (83%) agreed that a central repository was a good idea and 25 (83%) agreed that they would provide their IPD for central storage. Several benefits of a central repository were noted: safeguarding and standardisation of data, increased efficiency of IPD meta-analyses, knowledge advancement, and facilitating future clinical, and methodological research. The main concerns were gaining permission from trial data owners, uncertainty about the purpose of the repository, potential resource implications, and increased workload for IPD reviewers. Restricted access requiring approval, data security, anonymisation of data, and oversight committees were highlighted as issues under governance of the repository.
Conclusion
There is support in this community of IPD reviewers, many of whom are also involved in clinical trials, for storing IPD in a central repository. Results from this survey are informing further work on developing a repository of IPD which is currently underway by our group.
doi:10.1371/journal.pone.0097886
PMCID: PMC4038514  PMID: 24874700
20.  Methods for Specifying the Target Difference in a Randomised Controlled Trial: The Difference ELicitation in TriAls (DELTA) Systematic Review 
PLoS Medicine  2014;11(5):e1001645.
Jonathan Cook and colleagues systematically reviewed the literature for methods of determining the target difference for use in calculating the necessary sample size for clinical trials, and discuss which methods are best for various types of trials.
Please see later in the article for the Editors' Summary
Background
Randomised controlled trials (RCTs) are widely accepted as the preferred study design for evaluating healthcare interventions. When the sample size is determined, a (target) difference is typically specified that the RCT is designed to detect. This provides reassurance that the study will be informative, i.e., should such a difference exist, it is likely to be detected with the required statistical precision. The aim of this review was to identify potential methods for specifying the target difference in an RCT sample size calculation.
Methods and Findings
A comprehensive systematic review of medical and non-medical literature was carried out for methods that could be used to specify the target difference for an RCT sample size calculation. The databases searched were MEDLINE, MEDLINE In-Process, EMBASE, the Cochrane Central Register of Controlled Trials, the Cochrane Methodology Register, PsycINFO, Science Citation Index, EconLit, the Education Resources Information Center (ERIC), and Scopus (for in-press publications); the search period was from 1966 or the earliest date covered, to between November 2010 and January 2011. Additionally, textbooks addressing the methodology of clinical trials and International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) tripartite guidelines for clinical trials were also consulted. A narrative synthesis of methods was produced. Studies that described a method that could be used for specifying an important and/or realistic difference were included. The search identified 11,485 potentially relevant articles from the databases searched. Of these, 1,434 were selected for full-text assessment, and a further nine were identified from other sources. Fifteen clinical trial textbooks and the ICH tripartite guidelines were also reviewed. In total, 777 studies were included, and within them, seven methods were identified—anchor, distribution, health economic, opinion-seeking, pilot study, review of the evidence base, and standardised effect size.
Conclusions
A variety of methods are available that researchers can use for specifying the target difference in an RCT sample size calculation. Appropriate methods may vary depending on the aim (e.g., specifying an important difference versus a realistic difference), context (e.g., research question and availability of data), and underlying framework adopted (e.g., Bayesian versus conventional statistical approach). Guidance on the use of each method is given. No single method provides a perfect solution for all contexts.
Editors' Summary
Background
A clinical trial is a research study in which human volunteers are randomized to receive a given intervention or not, and outcomes are measured in both groups to determine the effect of the intervention. Randomized controlled trials (RCTs) are widely accepted as the preferred study design because by randomly assigning participants to groups, any differences between the two groups, other than the intervention under study, are due to chance. To conduct a RCT, investigators calculate how many patients they need to enroll to determine whether the intervention is effective. The number of patients they need to enroll depends on how effective the intervention is expected to be, or would need to be in order to be clinically important. The assumed difference between the two groups is the target difference. A larger target difference generally means that fewer patients need to be enrolled, relative to a smaller target difference. The target difference and number of patients enrolled contribute to the study's statistical precision, and the ability of the study to determine whether the intervention is effective. Selecting an appropriate target difference is important from both a scientific and ethical standpoint.
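As a quick numeric illustration of the relationship described above (the baseline figure of 22 participants per arm is hypothetical), the required sample size scales with the inverse square of the target difference when everything else is held fixed, so halving the target difference roughly quadruples the trial:

base_delta, base_n = 10.0, 22                 # assume 22 per arm suffices for a target difference of 10
for delta in (10.0, 5.0, 2.5):
    n = base_n * (base_delta / delta) ** 2    # required n is proportional to 1 / delta^2
    print(delta, round(n))                    # 22, 88, 352 participants per arm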
Why Was This Study Done?
There are several ways to determine an appropriate target difference. The authors wanted to determine what methods for specifying the target difference are available and when they can be used.
What Did the Researchers Do and Find?
To identify studies that used a method for determining an important and/or realistic difference, the investigators systematically surveyed the research literature. Two reviewers screened each of the abstracts chosen, and a third reviewer was consulted if necessary. The authors identified seven methods to determine target differences. They evaluated the studies to establish similarities and differences of each application. Points about the strengths and limitations of the method and how frequently the method was chosen were also noted.
What Do these Findings Mean?
The study draws attention to an understudied but important part of designing a clinical trial. Enrolling the right number of patients is very important—too few patients and the study may not be able to answer the study question; too many and the study will be more expensive and more difficult to conduct, and will unnecessarily expose more patients to any study risks. The target difference may also be helpful in interpreting the results of the trial. The authors discuss the pros and cons of different ways to calculate target differences and which methods are best for which types of studies, to help inform researchers designing such studies.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001645.
Wikipedia has an entry on sample size determination that discusses the factors that influence sample size calculation, including the target difference and the statistical power of a study (statistical power is the ability of a study to find a difference between treatments when a true difference exists). (Note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages.)
The University of Ottawa has an article that explains how different factors influence the power of a study
doi:10.1371/journal.pmed.1001645
PMCID: PMC4019477  PMID: 24824338
21.  Evaluation of the Cochrane Collaboration’s tool for assessing the risk of bias in randomized trials: focus groups, online survey, proposed recommendations and their implementation 
Systematic Reviews  2014;3:37.
Background
In 2008, the Cochrane Collaboration introduced a tool for assessing the risk of bias in clinical trials included in Cochrane reviews. The risk of bias (RoB) tool is based on narrative descriptions of evidence-based methodological features known to increase the risk of bias in trials.
Methods
To assess the usability of this tool, we conducted an evaluation by means of focus groups, online surveys and a face-to-face meeting. We obtained feedback from a range of stakeholders within The Cochrane Collaboration regarding their experiences with, and perceptions of, the RoB tool and associated guidance materials. We then assessed this feedback in a face-to-face meeting of experts and stakeholders and made recommendations for improvements and further developments of the RoB tool.
Results
The survey attracted 380 responses. Respondents reported taking an average of between 10 and 60 minutes per study to complete their RoB assessments, which 83% deemed acceptable. Most respondents (87% of authors and 95% of editorial staff) thought RoB assessments were an improvement over past approaches to trial quality assessment. Most authors liked the standardized approach (81%) and the ability to provide quotes to support judgements (74%). A third of participants disliked the increased workload and found the wording describing RoB judgements confusing. The RoB domains reported to be the most difficult to assess were incomplete outcome data and selective reporting of outcomes. Authors expressed the need for more guidance on how to incorporate RoB assessments into meta-analyses and review conclusions. Based on this evaluation, recommendations were made for improvements to the RoB tool and the associated guidance. The implementation of these recommendations is currently underway.
Conclusions
Overall, respondents identified positive experiences and perceptions of the RoB tool. Revisions of the tool and associated guidance made in response to this evaluation, and improved provision of training, may improve implementation.
doi:10.1186/2046-4053-3-37
PMCID: PMC4022341  PMID: 24731537
Survey; Focus groups; Bias assessment; Quality assessment; Systematic reviews
22.  The Quality of Reporting Methods and Results in Network Meta-Analyses: An Overview of Reviews and Suggestions for Improvement 
PLoS ONE  2014;9(3):e92508.
Introduction
Some have suggested the quality of reporting of network meta-analyses (a technique used to synthesize information to compare multiple interventions) is sub-optimal. We sought to review information addressing this claim.
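To make the technique concrete, here is a minimal sketch of its simplest building block, the Bucher adjusted indirect comparison (all effect estimates below are hypothetical): given log odds ratios for A versus C and B versus C, the indirect A-versus-B estimate is their difference, with the variances adding.

from math import sqrt, exp

def indirect_comparison(log_or_ac, se_ac, log_or_bc, se_bc):
    """Bucher adjusted indirect comparison of A vs B via a common comparator C."""
    log_or_ab = log_or_ac - log_or_bc
    se_ab = sqrt(se_ac**2 + se_bc**2)
    ci = (exp(log_or_ab - 1.96 * se_ab), exp(log_or_ab + 1.96 * se_ab))
    return exp(log_or_ab), ci

print(indirect_comparison(log_or_ac=-0.50, se_ac=0.20, log_or_bc=-0.10, se_bc=0.25))
# OR about 0.67 for A vs B (95% CI roughly 0.36 to 1.26)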
Objective
To conduct an overview of existing evaluations of quality of reporting in network meta-analyses and indirect treatment comparisons, and to compile a list of topics which may require detailed reporting guidance to enhance future reporting quality.
Methods
An electronic search of Medline and the Cochrane Registry of methodologic studies (January 2004–August 2013) was performed by an information specialist. Studies describing findings from quality of reporting assessments were sought. Screening of abstracts and full texts was performed by two team members. Descriptors related to all aspects of reporting a network meta-analysis were summarized.
Results
We included eight reports exploring the quality of reporting of network meta-analyses. These past reviews found that several aspects of network meta-analyses were inadequately reported, including primary information about literature searching, study selection, and risk of bias evaluations; statement of the underlying assumptions of network meta-analysis, as well as efforts to verify their validity; details of the statistical models used for analyses (for both Bayesian and Frequentist approaches); and completeness of reporting of findings. Approaches for summarizing probability measures were noted as an additional important consideration.
Conclusions
While few studies were identified, several deficiencies in the current reporting of network meta-analyses were observed. These findings reinforce the need to develop reporting guidance for network meta-analyses. Findings from this review will be used to guide next steps in the development of reporting guidance for network meta-analysis in the format of an extension of the PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analysis) Statement.
doi:10.1371/journal.pone.0092508
PMCID: PMC3966807  PMID: 24671099
24.  External validation of multivariable prediction models: a systematic review of methodological conduct and reporting 
Background
Before considering whether to use a multivariable (diagnostic or prognostic) prediction model, it is essential that its performance be evaluated in data that were not used to develop the model (referred to as external validation). We critically appraised the methodological conduct and reporting of external validation studies of multivariable prediction models.
Methods
We conducted a systematic review of articles describing some form of external validation of one or more multivariable prediction models indexed in PubMed core clinical journals published in 2010. Study data were extracted in duplicate on design, sample size, handling of missing data, reference to the original study developing the prediction models and predictive performance measures.
Results
11,826 articles were identified and 78 were included for full review, which described the evaluation of 120 prediction models in participant data that were not used to develop the model. Thirty-three articles described both the development of a prediction model and an evaluation of its performance on a separate dataset, and 45 articles described only the evaluation of an existing published prediction model on another dataset. Fifty-seven percent of the prediction models were presented and evaluated as simplified scoring systems. Sixteen percent of articles failed to report the number of outcome events in the validation datasets. Fifty-four percent of studies made no explicit mention of missing data. Sixty-seven percent did not report evaluating model calibration, whilst most studies evaluated model discrimination. It was often unclear whether the reported performance measures were for the full regression model or for the simplified models.
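A brief sketch of the two performance measures mentioned above, discrimination (the c-statistic) and calibration (the calibration slope, i.e. the coefficient from regressing the observed outcome on the model's linear predictor in the validation data). The data are simulated purely for illustration, with a true slope well below 1 so that the model appears too extreme:

import numpy as np
import statsmodels.api as sm
from scipy.stats import rankdata

rng = np.random.default_rng(0)
lp = rng.normal(size=500)                            # linear predictor of an existing model
y = rng.binomial(1, 1 / (1 + np.exp(-0.6 * lp)))     # validation outcomes simulated with slope 0.6

# calibration slope: well below 1 here, i.e. the model's predictions are too extreme
slope = sm.Logit(y, sm.add_constant(lp)).fit(disp=0).params[1]

# c-statistic (discrimination) via the rank / Mann-Whitney formulation
ranks = rankdata(lp)
n1, n0 = y.sum(), len(y) - y.sum()
c_stat = (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)
print(round(float(slope), 2), round(float(c_stat), 2))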
Conclusions
The vast majority of studies describing some form of external validation of a multivariable prediction model were poorly reported, with key details frequently not presented. The validation studies were characterised by poor design, inappropriate handling and acknowledgement of missing data, and frequent omission of calibration, one of the key performance measures of prediction models, from the publication. It may therefore not be surprising that the overwhelming majority of developed prediction models are not used in practice, when there is a dearth of well-conducted and clearly reported external validation studies describing their performance on independent participant data.
doi:10.1186/1471-2288-14-40
PMCID: PMC3999945  PMID: 24645774
25.  Patients' & Healthcare Professionals' Values Regarding True- & False-Positive Diagnosis when Colorectal Cancer Screening by CT Colonography: Discrete Choice Experiment 
PLoS ONE  2013;8(12):e80767.
Purpose
To establish the relative weighting given by patients and healthcare professionals to gains in diagnostic sensitivity versus loss of specificity when using CT colonography (CTC) for colorectal cancer screening.
Materials and Methods
Following ethical approval and informed consent, 75 patients and 50 healthcare professionals undertook a discrete choice experiment in which they chose between “standard” CTC and “enhanced” CTC that raised diagnostic sensitivity by 10% for either cancer or polyps in exchange for varying levels of specificity. We established the relative increase in false-positive diagnoses that participants traded for an increase in true-positive diagnoses.
Results
Data from 122 participants were analysed. There were 30 (25%) non-traders for the cancer scenario and 20 (16%) for the polyp scenario. For cancer, the 10% gain in sensitivity was traded up to a median 45% (IQR 25 to >85) drop in specificity, equating to 2250 (IQR 1250 to >4250) additional false-positives per additional true-positive cancer, at 0.2% prevalence. For polyps, the figure was 15% (IQR 7.5 to 55), equating to 6 (IQR 3 to 22) additional false-positives per additional true-positive polyp, at 25% prevalence. Tipping points were significantly higher for patients than professionals for both cancer (85 vs 25, p<0.001) and polyps (55 vs 15, p<0.001). Patients were willing to pay significantly more for increased sensitivity for cancer (p = 0.021).
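As a rough check of the arithmetic behind these figures (a back-of-envelope sketch; the published medians come from the traded responses themselves, not from this formula), the number of extra false-positives per extra true-positive follows directly from the sensitivity gain, the specificity loss, and disease prevalence:

def extra_fp_per_extra_tp(sens_gain, spec_loss, prevalence):
    extra_tp = sens_gain * prevalence            # additional true-positives per person screened
    extra_fp = spec_loss * (1 - prevalence)      # additional false-positives per person screened
    return extra_fp / extra_tp

print(extra_fp_per_extra_tp(sens_gain=0.10, spec_loss=0.45, prevalence=0.002))
# about 2,245 extra false-positives per extra true-positive cancer, in line with the ~2,250 quoted above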
Conclusion
When screening for colorectal cancer, patients and professionals believe gains in true-positive diagnoses are worth much more than the negative consequences of a corresponding rise in false-positives. Evaluation of screening tests should account for this.
doi:10.1371/journal.pone.0080767
PMCID: PMC3857178  PMID: 24349014
