1.  Strengthening the reporting of genetic risk prediction studies: the GRIPS statement 
Genome Medicine  2011;3(3):16.
The rapid and continuing progress in gene discovery for complex diseases is fueling interest in the potential application of genetic risk models for clinical and public health practice. The number of studies assessing the predictive ability is steadily increasing, but the quality and completeness of reporting varies. A multidisciplinary workshop sponsored by the Human Genome Epidemiology Network developed a checklist of 25 items recommended for strengthening the reporting of genetic risk prediction studies (the GRIPS statement), building on the principles established by prior reporting guidelines. These recommendations aim to enhance the transparency of study reporting, and thereby to improve the synthesis and application of information from multiple studies that might differ in design, conduct, or analysis. A detailed Explanation and Elaboration document is published at http://www.plosmedicine.org.
doi:10.1186/gm230
PMCID: PMC3092101  PMID: 21410995
2.  Strengthening the reporting of genetic risk prediction studies: the GRIPS statement 
European Journal of Epidemiology  2011;26(4):255-259.
The rapid and continuing progress in gene discovery for complex diseases is fueling interest in the potential application of genetic risk models for clinical and public health practice. The number of studies assessing the predictive ability is steadily increasing, but the quality and completeness of reporting varies. A multidisciplinary workshop sponsored by the Human Genome Epidemiology Network developed a checklist of 25 items recommended for strengthening the reporting of Genetic RIsk Prediction Studies (GRIPS), building on the principles established by prior reporting guidelines. These recommendations aim to enhance the transparency of study reporting, and thereby to improve the synthesis and application of information from multiple studies that might differ in design, conduct, or analysis. A detailed Explanation and Elaboration document is published.
doi:10.1007/s10654-011-9552-y
PMCID: PMC3088799  PMID: 21431409
Genetic; Risk prediction; Methodology; Guidelines; Reporting
3.  Strengthening the reporting of genetic risk prediction studies (GRIPS): explanation and elaboration 
European Journal of Epidemiology  2011;26(4):313-337.
The rapid and continuing progress in gene discovery for complex diseases is fuelling interest in the potential application of genetic risk models for clinical and public health practice. The number of studies assessing the predictive ability is steadily increasing, but they vary widely in completeness of reporting and apparent quality. Transparent reporting of the strengths and weaknesses of these studies is important to facilitate the accumulation of evidence on genetic risk prediction. A multidisciplinary workshop sponsored by the Human Genome Epidemiology Network developed a checklist of 25 items recommended for strengthening the reporting of Genetic RIsk Prediction Studies (GRIPS), building on the principles established by prior reporting guidelines. These recommendations aim to enhance the transparency, quality and completeness of study reporting, and thereby to improve the synthesis and application of information from multiple studies that might differ in design, conduct or analysis.
doi:10.1007/s10654-011-9551-z
PMCID: PMC3088812  PMID: 21424820
Genetic; Risk prediction; Methodology; Guidelines; Reporting
4.  Strengthening the Reporting of Genetic Risk Prediction Studies (GRIPS): Explanation and Elaboration 
European journal of epidemiology  2011;26(4):313-337.
The rapid and continuing progress in gene discovery for complex diseases is fuelling interest in the potential application of genetic risk models for clinical and public health practice. The number of studies assessing the predictive ability is steadily increasing, but they vary widely in completeness of reporting and apparent quality. Transparent reporting of the strengths and weaknesses of these studies is important to facilitate the accumulation of evidence on genetic risk prediction. A multidisciplinary workshop sponsored by the Human Genome Epidemiology Network developed a checklist of 25 items recommended for strengthening the reporting of Genetic RIsk Prediction Studies (GRIPS), building on the principles established by prior reporting guidelines. These recommendations aim to enhance the transparency, quality and completeness of study reporting, and thereby to improve the synthesis and application of information from multiple studies that might differ in design, conduct or analysis.
doi:10.1007/s10654-011-9551-z
PMCID: PMC3088812  PMID: 21424820
5.  Strengthening the reporting of genetic risk prediction studies: the GRIPS statement 
The rapid and continuing progress in gene discovery for complex diseases is fueling interest in the potential application of genetic risk models for clinical and public health practice. The number of studies assessing the predictive ability is steadily increasing, but the quality and completeness of reporting varies. A multidisciplinary workshop sponsored by the Human Genome Epidemiology Network developed a checklist of 25 items recommended for strengthening the reporting of Genetic RIsk Prediction Studies, building on the principles established by previous reporting guidelines. These recommendations aim to enhance the transparency of study reporting, and thereby to improve the synthesis and application of information from multiple studies that might differ in design, conduct, or analysis. A detailed Explanation and Elaboration document is published on the EJHG website.
doi:10.1038/ejhg.2011.25
PMCID: PMC3172920  PMID: 21407265
6.  Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD Statement 
BMC Medicine  2015;13:1.
Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org).
Editors’ note: In order to encourage dissemination of the TRIPOD Statement, this article is freely accessible on the Annals of Internal Medicine Web site (www.annals.org) and will be also published in BJOG, British Journal of Cancer, British Journal of Surgery, BMC Medicine, British Medical Journal, Circulation, Diabetic Medicine, European Journal of Clinical Investigation, European Urology, and Journal of Clinical Epidemiology. The authors jointly hold the copyright of this article. An accompanying Explanation and Elaboration article is freely available only on www.annals.org; Annals of Internal Medicine holds copyright for that article.
doi:10.1186/s12916-014-0241-z
PMCID: PMC4284921  PMID: 25563062
Prediction models; Prognostic; Diagnostic; Model development; Validation; Transparency; Reporting
7.  Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) 
Circulation  2015;131(2):211-219.
Background—
Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed.
Methods—
The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors.
Results—
The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document.
Conclusions—
To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org).
doi:10.1161/CIRCULATIONAHA.114.014508
PMCID: PMC4297220  PMID: 25561516
diagnosis; epidemiology; prognosis; research design; risk; statistics
8.  Reporting and Methods in Clinical Prediction Research: A Systematic Review 
PLoS Medicine  2012;9(5):e1001221.
Walter Bouwmeester and colleagues investigated the reporting and methods of prediction studies in 2008, in six high-impact general medical journals, and found that the majority of prediction studies do not follow current methodological recommendations.
Background
We investigated the reporting and methods of prediction studies, focusing on aims, designs, participant selection, outcomes, predictors, statistical power, statistical methods, and predictive performance measures.
Methods and Findings
We used a full hand search to identify all prediction studies published in 2008 in six high impact general medical journals. We developed a comprehensive item list to systematically score conduct and reporting of the studies, based on recent recommendations for prediction research. Two reviewers independently scored the studies. We retrieved 71 papers for full text review: 51 were predictor finding studies, 14 were prediction model development studies, three addressed an external validation of a previously developed model, and three reported on a model's impact on participant outcome. Study design was unclear in 15% of studies, and a prospective cohort was used in most studies (60%). Descriptions of the participants and definitions of predictor and outcome were generally good. Despite many recommendations against doing so, continuous predictors were often dichotomized (32% of studies). The number of events per predictor as a measure of statistical power could not be determined in 67% of the studies; of the remainder, 53% had fewer than the commonly recommended value of ten events per predictor. Methods for a priori selection of candidate predictors were described in most studies (68%). A substantial number of studies relied on a p-value cut-off of p<0.05 to select predictors in the multivariable analyses (29%). Predictive model performance measures, i.e., calibration and discrimination, were reported in 12% and 27% of studies, respectively.
Conclusions
The majority of prediction studies in high impact journals do not follow current methodological recommendations, limiting their reliability and applicability.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
There are often times in our lives when we would like to be able to predict the future. Is the stock market going to go up, for example, or will it rain tomorrow? Being able to predict future health is also important, both to patients and to physicians, and there is an increasing body of published clinical “prediction research.” Diagnostic prediction research investigates the ability of variables or test results to predict the presence or absence of a specific diagnosis. So, for example, one recent study compared the ability of two imaging techniques to diagnose pulmonary embolism (a blood clot in the lungs). Prognostic prediction research investigates the ability of various markers to predict future outcomes such as the risk of a heart attack. Both types of prediction research can investigate the predictive properties of patient characteristics, single variables, tests, or markers, or combinations of variables, tests, or markers (multivariable studies). Both types of prediction research can also include studies that build multivariable prediction models to guide patient management (model development), that test the performance of models (validation), or that quantify the effect of using a prediction model on patient and physician behaviors and outcomes (impact assessment).
Why Was This Study Done?
With the increase in prediction research, there is an increased interest in the methodology of this type of research because poorly done or poorly reported prediction research is likely to have limited reliability and applicability and will, therefore, be of little use in patient management. In this systematic review, the researchers investigate the reporting and methods of prediction studies by examining the aims, design, participant selection, definition and measurement of outcomes and candidate predictors, statistical power and analyses, and performance measures included in multivariable prediction research articles published in 2008 in several general medical journals. In a systematic review, researchers identify all the studies undertaken on a given topic using a predefined set of criteria and systematically analyze the reported methods and results of these studies.
What Did the Researchers Do and Find?
The researchers identified all the multivariable prediction studies meeting their predefined criteria that were published in 2008 in six high impact general medical journals by browsing through all the issues of the journals (a hand search). They then scored the methods and reporting of each study using a comprehensive item list based on recent recommendations for the conduct of prediction research (for example, the reporting recommendations for tumor marker prognostic studies—the REMARK guidelines). Of 71 retrieved studies, 51 were predictor finding studies, 14 were prediction model development studies, three externally validated an existing model, and three reported on a model's impact on participant outcome. Study design, participant selection, definitions of outcomes and predictors, and predictor selection were generally well reported, but other methodological and reporting aspects of the studies were suboptimal. For example, despite many recommendations, continuous predictors were often dichotomized. That is, rather than using the measured value of a variable in a prediction model (for example, blood pressure in a cardiovascular disease prediction model), measurements were frequently assigned to two broad categories. Similarly, many of the studies failed to adequately estimate the sample size needed to minimize bias in predictor effects, and few of the model development papers quantified and validated the proposed model's predictive performance.
What Do These Findings Mean?
These findings indicate that, in 2008, most of the prediction research published in high impact general medical journals failed to follow current guidelines for the conduct and reporting of clinical prediction studies. Because the studies examined here were published in high impact medical journals, they are likely to be representative of the higher quality studies published in 2008. However, reporting standards may have improved since 2008, and the conduct of prediction research may actually be better than this analysis suggests because the length restrictions that are often applied to journal articles may account for some of the reporting omissions. Nevertheless, despite some encouraging findings, the researchers conclude that the poor reporting and poor methods they found in many published prediction studies are a cause for concern and are likely to limit the reliability and applicability of this type of clinical research.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001221.
The EQUATOR Network is an international initiative that seeks to improve the reliability and value of medical research literature by promoting transparent and accurate reporting of research studies; its website includes information on a wide range of reporting guidelines including the REMARK recommendations (in English and Spanish)
A video of a presentation by Doug Altman, one of the researchers of this study, on improving the reporting standards of the medical evidence base, is available
The Cochrane Prognosis Methods Group provides additional information on the methodology of prognostic research
doi:10.1371/journal.pmed.1001221
PMCID: PMC3358324  PMID: 22629234
9.  The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement: Guidelines for Reporting Observational Studies 
PLoS Medicine  2007;4(10):e296.
Much biomedical research is observational. The reporting of such research is often inadequate, which hampers the assessment of its strengths and weaknesses and of a study's generalisability. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Initiative developed recommendations on what should be included in an accurate and complete report of an observational study. We defined the scope of the recommendations to cover three main study designs: cohort, case-control, and cross-sectional studies. We convened a 2-day workshop in September 2004, with methodologists, researchers, and journal editors to draft a checklist of items. This list was subsequently revised during several meetings of the coordinating group and in e-mail discussions with the larger group of STROBE contributors, taking into account empirical evidence and methodological considerations. The workshop and the subsequent iterative process of consultation and revision resulted in a checklist of 22 items (the STROBE Statement) that relate to the title, abstract, introduction, methods, results, and discussion sections of articles. 18 items are common to all three study designs and four are specific for cohort, case-control, or cross-sectional studies. A detailed Explanation and Elaboration document is published separately and is freely available on the Web sites of PLoS Medicine, Annals of Internal Medicine, and Epidemiology. We hope that the STROBE Statement will contribute to improving the quality of reporting of observational studies.
This paper describes the recommendations of The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Initiative on what should be included in an accurate and complete report of an observational study.
doi:10.1371/journal.pmed.0040296
PMCID: PMC2020495  PMID: 17941714
10.  Differences in Reporting of Analyses in Internal Company Documents Versus Published Trial Reports: Comparisons in Industry-Sponsored Trials in Off-Label Uses of Gabapentin 
PLoS Medicine  2013;10(1):e1001378.
Using documents obtained through litigation, S. Swaroop Vedula and colleagues compared internal company documents regarding industry-sponsored trials of off-label uses of gabapentin with the published trial reports and find discrepancies in reporting of analyses.
Background
Details about the type of analysis (e.g., intent to treat [ITT]) and definitions (i.e., criteria for including participants in the analysis) are necessary for interpreting a clinical trial's findings. Our objective was to compare the description of types of analyses and criteria for including participants in the publication (i.e., what was reported) with descriptions in the corresponding internal company documents (i.e., what was planned and what was done). Trials were for off-label uses of gabapentin sponsored by Pfizer and Parke-Davis, and documents were obtained through litigation.
Methods and Findings
For each trial, we compared internal company documents (protocols, statistical analysis plans, and research reports, all unpublished), with publications. One author extracted data and another verified, with a third person verifying discordant items and a sample of the rest. Extracted data included the number of participants randomized and analyzed for efficacy, and types of analyses for efficacy and safety and their definitions (i.e., criteria for including participants in each type of analysis). We identified 21 trials, 11 of which were published randomized controlled trials, and that provided the documents needed for planned comparisons. For three trials, there was disagreement on the number of randomized participants between the research report and publication. Seven types of efficacy analyses were described in the protocols, statistical analysis plans, and publications, including ITT and six others. The protocol or publication described ITT using six different definitions, resulting in frequent disagreements between the two documents (i.e., different numbers of participants were included in the analyses).
Conclusions
Descriptions of analyses conducted did not agree between internal company documents and what was publicly reported. Internal company documents provide extensive documentation of methods planned and used, and trial findings, and should be publicly accessible. Reporting standards for randomized controlled trials should recommend transparent descriptions and definitions of analyses performed and which study participants are excluded.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
To be credible, published research must present an unbiased, transparent, and accurate description of the study methods and findings so that readers can assess all relevant information to make informed decisions about the impact of any conclusions. Therefore, research publications should conform to universally adopted guidelines and checklists. Studies to establish whether a treatment is effective, termed randomized controlled trials (RCTs), are checked against a comprehensive set of guidelines: the robustness of trial protocols is assessed through the Standard Protocol Items for Randomized Trials (SPIRIT), and the Consolidated Standards of Reporting Trials (CONSORT) statement (which was constructed and agreed by a meeting of journal editors in 1996, and has been updated over the years) includes a 25-point checklist that covers all of the key points in reporting RCTs.
Why Was This Study Done?
Although the CONSORT statement has helped improve transparency in the reporting of the methods and findings from RCTs, the statement does not define how certain types of analyses should be conducted and which patients should be included in the analyses, for example, in an intention-to-treat analysis (in which all participants are included in the data analysis of the group to which they were assigned, whether or not they completed the intervention given to the group). So in this study, the researchers used internal company documents released in the course of litigation against the pharmaceutical company Pfizer regarding the drug gabapentin, to compare between the internal and published reports the reporting of the numbers of participants, the description of the types of analyses, and the definitions of each type of analysis. The reports involved studies of gabapentin used for medical reasons not approved for marketing by the US Food and Drug Administration, known as “off-label” uses.
What Did the Researchers Do and Find?
The researchers identified trials sponsored by Pfizer relating to four off-label uses of gabapentin and examined the internal company protocols, statistical analysis plans, research reports, and the main publications related to each trial. The researchers then compared the numbers of participants randomized and analyzed for the main (primary) outcome and the type of analysis for efficacy and safety in both the internal research report and the trial publication. The researchers identified 21 trials, 11 of which were published RCTs that had the associated documents necessary for comparison.
The researchers found that in three out of ten trials there were differences in the internal research report and the main publication regarding the number of randomized participants. Furthermore, in six out of ten trials, the researchers were unable to compare the internal research report with the main publication for the number of participants analyzed for efficacy, because the research report either did not describe the primary outcome or did not describe the type of analysis. Overall, the researchers found that seven different types of efficacy analyses were described in the protocols, statistical analysis plans, and publications, including intention-to-treat analysis. However, the protocol or publication used six different descriptions for the intention-to-treat analysis, resulting in several important differences between the internal and published documents about the number of patients included in the analysis.
What Do These Findings Mean?
These findings from a sample of industry-sponsored trials on the off-label use of gabapentin suggest that when compared to the internal research reports, the trial publications did not always accurately reflect what was actually done in the trial. Therefore, the trial publication could not be considered to be an accurate and transparent record of the numbers of participants randomized and analyzed for efficacy. These findings support the need for further revisions of the CONSORT statement, such as including explicit statements about the criteria used to define each type of analysis and the numbers of participants excluded from each type of analysis. Further guidance is also needed to ensure consistent terminology for types of analysis. Of course, these revisions will improve reporting only if authors and journals adhere to them. These findings also highlight the need for all individual patient data to be made accessible to readers of the published article.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001378.
For more information, see the CONSORT statement website
The EQUATOR Network website is a resource center for the good reporting of health research studies and has more information about the SPIRIT initiative and the CONSORT statement
doi:10.1371/journal.pmed.1001378
PMCID: PMC3558476  PMID: 23382656
11.  Reporting Guidelines for Survey Research: An Analysis of Published Guidance and Reporting Practices 
PLoS Medicine  2011;8(8):e1001069.
Carol Bennett and colleagues review the evidence and find that there is limited guidance and no consensus on the optimal reporting of survey research.
Background
Research needs to be reported transparently so readers can critically assess the strengths and weaknesses of the design, conduct, and analysis of studies. Reporting guidelines have been developed to inform reporting for a variety of study designs. The objective of this study was to identify whether there is a need to develop a reporting guideline for survey research.
Methods and Findings
We conducted a three-part project: (1) a systematic review of the literature (including “Instructions to Authors” from the top five journals of 33 medical specialties and top 15 general and internal medicine journals) to identify guidance for reporting survey research; (2) a systematic review of evidence on the quality of reporting of surveys; and (3) a review of reporting of key quality criteria for survey research in 117 recently published reports of self-administered surveys. Fewer than 7% of medical journals (n = 165) provided guidance to authors on survey research despite a majority having published survey-based studies in recent years. We identified four published checklists for conducting or reporting survey research, none of which were validated. We identified eight previous reviews of survey reporting quality, which focused on issues of non-response and accessibility of questionnaires. Our own review of 117 published survey studies revealed that many items were poorly reported: few studies provided the survey or core questions (35%), reported the validity or reliability of the instrument (19%), defined the response rate (25%), discussed the representativeness of the sample (11%), or identified how missing data were handled (11%).
Conclusions
There is limited guidance and no consensus regarding the optimal reporting of survey research. The majority of key reporting criteria are poorly reported in peer-reviewed survey research articles. Our findings highlight the need for clear and consistent reporting guidelines specific to survey research.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Surveys, or questionnaires, are an essential component of many types of research, including health research, and usually gather information by asking a sample of people questions on a specific topic and then generalizing the results to a larger population. Surveys are especially important when addressing topics that are difficult to assess using other approaches and usually rely on self-reporting, for example of self-reported behaviors such as eating habits, and of satisfaction, beliefs, knowledge, attitudes, and opinions. However, the methods used in conducting survey research can significantly affect the reliability, validity, and generalizability of study results, and without clear reporting of the methods used in surveys, it is difficult or impossible to assess these characteristics and therefore to have confidence in the findings.
Why Was This Study Done?
This uncertainty in other forms of research has given rise to Reporting Guidelines—evidence-based, validated tools that aim to improve the reporting quality of health research. The STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) Statement includes cross-sectional studies, which often involve surveys. But not all surveys are epidemiological, and STROBE does not include methods' and results' reporting characteristics that are unique to surveys. Therefore, the researchers conducted this study to help determine whether there is a need for a reporting guideline for health survey research.
What Did the Researchers Do and Find?
The researchers identified previous relevant guidance for survey research, and evidence on the quality of reporting of survey research, by: reviewing current guidance for reporting survey research in the “Instructions to Authors” of leading medical journals and in the published literature; conducting a systematic review of evidence on the quality of reporting of surveys; identifying key quality criteria for the conduct of survey research; and finally, assessing how these criteria are currently reported in a review of recently published reports of self-administered surveys.
The researchers found that 154 of the 165 journals searched (93.3%) did not provide any guidance on survey reporting, even though the majority (81.8%) had published survey research. Only three of the 11 journals that provided some guidance gave more than one directive or statement. Five papers and one Internet site provided guidance on the reporting of survey research, but none used validated measures or explicit methods for development. The researchers identified eight papers that addressed the quality of reporting of some aspect of survey research: the reporting of response rates; the reporting of non-response analyses; and the degree to which authors make their survey instrument available to readers. In their review of 117 published survey studies, the researchers found that many items were poorly reported: few studies provided the survey or core questions (35%), reported the validity or reliability of the instrument (19%), defined the response rate (25%), discussed the representativeness of the sample (11%), or identified how missing data were handled (11%). Furthermore, three-quarters of papers (88 [75%]) did not include any information on consent procedures for research participants, and one-third (40 [34%]) did not report whether the study had received research ethics board review.
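As a quick sanity check, the percentages quoted in this summary follow directly from the raw counts it reports. A minimal sketch (all counts taken from the summary itself):

```python
# Re-derive the percentages quoted in the summary from the raw counts.
journals_searched = 165       # journals whose "Instructions to Authors" were reviewed
journals_no_guidance = 154    # journals giving no guidance on survey reporting
surveys_reviewed = 117        # published survey studies reviewed
papers_no_consent_info = 88   # papers with no information on consent procedures
papers_no_ethics_info = 40    # papers not reporting ethics-board review status


def pct(part, whole):
    """Percentage of `part` within `whole`."""
    return 100 * part / whole


print(f"no guidance: {pct(journals_no_guidance, journals_searched):.1f}%")       # 93.3%
print(f"no consent info: {pct(papers_no_consent_info, surveys_reviewed):.0f}%")  # 75%
print(f"no ethics info: {pct(papers_no_ethics_info, surveys_reviewed):.0f}%")    # 34%
```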
What Do These Findings Mean?
Overall, these results show that guidance is limited and consensus lacking about the optimal reporting of survey research, and they highlight the need for a well-developed reporting guideline specifically for survey research—possibly an extension of the guideline for observational studies in epidemiology (STROBE)—that will provide the structure to ensure more complete reporting and allow clearer review and interpretation of the results from surveys.
Additional Information
Please access these web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001069.
More than 100 reporting guidelines covering a broad spectrum of research types are indexed on the EQUATOR Network web site
More information about STROBE is available on the STROBE Statement web site
doi:10.1371/journal.pmed.1001069
PMCID: PMC3149080  PMID: 21829330
12.  The PRISMA Statement for Reporting Systematic Reviews and Meta-Analyses of Studies That Evaluate Health Care Interventions: Explanation and Elaboration 
PLoS Medicine  2009;6(7):e1000100.
Alessandro Liberati and colleagues present an Explanation and Elaboration of the PRISMA Statement, updated guidelines for the reporting of systematic reviews and meta-analyses.
Systematic reviews and meta-analyses are essential to summarize evidence relating to efficacy and safety of health care interventions accurately and reliably. The clarity and transparency of these reports, however, is not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users.
Since the development of the QUOROM (QUality Of Reporting Of Meta-analysis) Statement—a reporting guideline published in 1999—there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realizing these issues, an international group that included experienced authors and methodologists developed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) as an evolution of the original QUOROM guideline for systematic reviews and meta-analyses of evaluations of health care interventions.
The PRISMA Statement consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items deemed essential for transparent reporting of a systematic review. In this Explanation and Elaboration document, we explain the meaning and rationale for each checklist item. For each item, we include an example of good reporting and, where possible, references to relevant empirical studies and methodological literature. The PRISMA Statement, this document, and the associated Web site (http://www.prisma-statement.org/) should be helpful resources to improve reporting of systematic reviews and meta-analyses.
doi:10.1371/journal.pmed.1000100
PMCID: PMC2707010  PMID: 19621070
13.  How Evidence-Based Are the Recommendations in Evidence-Based Guidelines? 
PLoS Medicine  2007;4(8):e250.
Background
Treatment recommendations for the same condition from different guideline bodies often disagree, even when the same randomized controlled trial (RCT) evidence is cited. Guideline appraisal tools focus on methodology and quality of reporting, but not on the nature of the supporting evidence. This study was done to evaluate the quality of the evidence (based on consideration of its internal validity, clinical relevance, and applicability) underlying therapy recommendations in evidence-based clinical practice guidelines.
Methods and Findings
A cross-sectional analysis of cardiovascular risk management recommendations was performed for three different conditions (diabetes mellitus, dyslipidemia, and hypertension) from three pan-national guideline panels (from the United States, Canada, and Europe). Of the 338 treatment recommendations in these nine guidelines, 231 (68%) cited RCT evidence but only 105 (45%) of these RCT-based recommendations were based on high-quality evidence. RCT-based evidence was downgraded most often because of reservations about the applicability of the RCT to the populations specified in the guideline recommendation (64/126 cases, 51%) or because the RCT reported surrogate outcomes (59/126 cases, 47%).
Conclusions
The results of internally valid RCTs may not be applicable to the populations, interventions, or outcomes specified in a guideline recommendation and therefore should not always be assumed to provide high-quality evidence for therapy recommendations.
From an analysis of cardiovascular risk-management recommendations in guidelines produced by pan-national panels, McAlister and colleagues concluded that fewer than half were based on high-quality evidence.
Editors' Summary
Background.
Until recently, doctors largely relied on their own experience to choose the best treatment for their patients. Faced with a patient with high blood pressure (hypertension), for example, the doctor had to decide whether to recommend lifestyle changes or to prescribe drugs to reduce the blood pressure. If he or she chose the latter, he or she then had to decide which drug to prescribe, set a target blood pressure, and decide how long to wait before changing the prescription if this target was not reached. But, over the past decade, numerous clinical practice guidelines have been produced by governmental bodies and medical associations to help doctors make treatment decisions like these. For each guideline, experts have searched the medical literature for the current evidence about the diagnosis and treatment of a disease, evaluated the quality of that evidence, and then made recommendations based on the best evidence available.
Why Was This Study Done?
The recommendations made in different clinical practice guidelines vary, in part because they are based on evidence of varying quality. To help clinicians decide which recommendations to follow, some guidelines indicate the strength of their recommendations by grading them, based on the methods used to collect the underlying evidence. Thus, a randomized clinical trial (RCT)—one in which patients are randomly allocated to different treatments without the patient or clinician knowing the allocation—provides higher-quality evidence than a nonrandomized trial. Similarly, internally valid trials—in which the differences between patient groups are solely due to their different treatments and not to other aspects of the trial—provide high-quality evidence. However, grading schemes rarely consider the size of studies and whether they have focused on clinical or so-called “surrogate” measures. (For example, an RCT of a treatment to reduce heart or circulation [“cardiovascular”] problems caused by high blood pressure might have death rate as a clinical measure; a surrogate endpoint would be blood pressure reduction.) Most guidelines also do not consider how generalizable (applicable) the results of a trial are to the populations, interventions, and outcomes specified in the guideline recommendation. In this study, the researchers have investigated the quality of the evidence underlying recommendations for cardiovascular risk management in nine evidence-based clinical practice guidelines using these additional criteria.
What Did the Researchers Do and Find?
The researchers extracted the recommendations for managing cardiovascular risk from the current US, Canadian, and European guidelines for the management of diabetes, abnormal blood lipid levels (dyslipidemia), and hypertension. They graded the quality of evidence for each recommendation using the Canadian Hypertension Education Program (CHEP) grading scheme, which considers the type of study, its internal validity, its clinical relevance, and how generally applicable the evidence is considered to be. Of 338 evidence-based recommendations, two-thirds were based on evidence collected in internally valid RCTs, but only half of these RCT-based recommendations were based on high-quality evidence. The evidence underlying 64 of the guideline recommendations failed to achieve a high CHEP grade because the RCT data were collected in a population of people with different characteristics from those covered by the guideline. For example, a recommendation to use spironolactone to reduce blood pressure in people with hypertension was based on an RCT in which the participants initially had congestive heart failure with normal blood pressure. Another 59 recommendations were downgraded because they were based on evidence from RCTs that had not focused on clinical measures of effectiveness.
What Do These Findings Mean?
These findings indicate that although most of the recommendations for cardiovascular risk management therapies in the selected guidelines were based on evidence collected in internally valid RCTs, less than one-third were based on high-quality evidence applicable to the populations, treatments, and outcomes specified in guideline recommendations. A limitation of this study is that it analyzed a subset of recommendations in only a few guidelines. Nevertheless, the findings serve to warn clinicians that evidence-based guidelines are not necessarily based on high-quality evidence. In addition, they emphasize the need to make the evidence base underlying guideline recommendations more transparent by using an extended grading system like the CHEP scheme. If this were done, the researchers suggest, it would help clinicians apply guideline recommendations appropriately to their individual patients.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040250.
• Wikipedia contains pages on evidence-based medicine and on clinical practice guidelines (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
• The National Guideline Clearinghouse provides information on US national guidelines
• The Guidelines International Network promotes the systematic development and application of clinical practice guidelines
• Information is available on the Canadian Hypertension Education Program (CHEP) (in French and English)
• See information on the Grading of Recommendations Assessment, Development and Evaluation (GRADE) working group, an organization that has developed a grading scheme similar to the CHEP scheme (in English, Spanish, French, German, and Italian)
doi:10.1371/journal.pmed.0040250
PMCID: PMC1939859  PMID: 17683197
14.  The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration 
Systematic reviews and meta-analyses are essential to summarise evidence relating to efficacy and safety of healthcare interventions accurately and reliably. The clarity and transparency of these reports, however, are not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users.
Since the development of the QUOROM (quality of reporting of meta-analysis) statement—a reporting guideline published in 1999—there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realising these issues, an international group that included experienced authors and methodologists developed PRISMA (preferred reporting items for systematic reviews and meta-analyses) as an evolution of the original QUOROM guideline for systematic reviews and meta-analyses of evaluations of health care interventions.
The PRISMA statement consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items deemed essential for transparent reporting of a systematic review. In this explanation and elaboration document, we explain the meaning and rationale for each checklist item. For each item, we include an example of good reporting and, where possible, references to relevant empirical studies and methodological literature. The PRISMA statement, this document, and the associated website (www.prisma-statement.org/) should be helpful resources to improve reporting of systematic reviews and meta-analyses.
doi:10.1136/bmj.b2700
PMCID: PMC2714672  PMID: 19622552
15.  SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials 
High-quality protocols facilitate proper conduct, reporting, and external review of clinical trials. However, the completeness of trial protocols is often inadequate. To help improve the content and quality of protocols, an international group of stakeholders developed the SPIRIT 2013 Statement (Standard Protocol Items: Recommendations for Interventional Trials). The SPIRIT Statement provides guidance in the form of a checklist of recommended items to include in a clinical trial protocol.
This SPIRIT 2013 Explanation and Elaboration paper provides important information to promote full understanding of the checklist recommendations. For each checklist item, we provide a rationale and detailed description; a model example from an actual protocol; and relevant references supporting its importance. We strongly recommend that this explanatory paper be used in conjunction with the SPIRIT Statement. A website of resources is also available (www.spirit-statement.org).
The SPIRIT 2013 Explanation and Elaboration paper, together with the Statement, should help with the drafting of trial protocols. Complete documentation of key trial elements can facilitate transparency and protocol review for the benefit of all stakeholders.
doi:10.1136/bmj.e7586
PMCID: PMC3541470  PMID: 23303884
16.  Reporting recommendations for tumor marker prognostic studies (REMARK): explanation and elaboration 
BMC Medicine  2012;10:51.
Background
The Reporting Recommendations for Tumor Marker Prognostic Studies (REMARK) checklist consists of 20 items to report for published tumor marker prognostic studies. It was developed to address widespread deficiencies in the reporting of such studies. In this paper we expand on the REMARK checklist to enhance its use and effectiveness through better understanding of the intent of each item and why the information is important to report.
Methods
REMARK recommends including a transparent and full description of research goals and hypotheses, subject selection, specimen and assay considerations, marker measurement methods, statistical design and analysis, and study results. Each checklist item is explained and accompanied by published examples of good reporting, and relevant empirical evidence of the quality of reporting. We give prominence to discussion of the 'REMARK profile', a suggested tabular format for summarizing key study details.
Summary
The paper provides a comprehensive overview to educate on good reporting and provide a valuable reference for the many issues to consider when designing, conducting, and analyzing tumor marker studies and prognostic studies in medicine in general.
To encourage dissemination of the Reporting Recommendations for Tumor Marker Prognostic Studies (REMARK): Explanation and Elaboration, this article has also been published in PLoS Medicine.
doi:10.1186/1741-7015-10-51
PMCID: PMC3362748  PMID: 22642691
17.  STrengthening the REporting of Genetic Association studies (STREGA) – an extension of the STROBE statement 
Making sense of rapidly evolving evidence on genetic associations is crucial to making genuine advances in human genomics and the eventual integration of this information in the practice of medicine and public health. Assessment of the strengths and weaknesses of this evidence, and hence the ability to synthesize it, has been limited by inadequate reporting of results. The STrengthening the REporting of Genetic Association studies (STREGA) initiative builds on the STrengthening the Reporting of OBservational Studies in Epidemiology (STROBE) Statement and provides additions to 12 of the 22 items on the STROBE checklist. The additions concern population stratification, genotyping errors, modelling haplotype variation, Hardy–Weinberg equilibrium, replication, selection of participants, rationale for choice of genes and variants, treatment effects in studying quantitative traits, statistical methods, relatedness, reporting of descriptive and outcome data, and the volume of data, issues that are important to consider in genetic association studies. The STREGA recommendations do not prescribe or dictate how a genetic association study should be designed, but seek to enhance the transparency of its reporting, regardless of choices made during design, conduct or analysis.
doi:10.1111/j.1365-2362.2009.02125.x
PMCID: PMC2730482  PMID: 19297801
Epidemiology; gene-disease associations; gene-environment interaction; genetics; genome-wide association; meta-analysis; reporting recommendations; systematic review
18.  Strengthening the reporting of genetic association studies (STREGA): an extension of the STROBE statement 
Making sense of rapidly evolving evidence on genetic associations is crucial to making genuine advances in human genomics and the eventual integration of this information in the practice of medicine and public health. Assessment of the strengths and weaknesses of this evidence, and hence the ability to synthesize it, has been limited by inadequate reporting of results. The STrengthening the REporting of Genetic Association studies (STREGA) initiative builds on the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement and provides additions to 12 of the 22 items on the STROBE checklist. The additions concern population stratification, genotyping errors, modeling haplotype variation, Hardy–Weinberg equilibrium, replication, selection of participants, rationale for choice of genes and variants, treatment effects in studying quantitative traits, statistical methods, relatedness, reporting of descriptive and outcome data, and the volume of data, issues that are important to consider in genetic association studies. The STREGA recommendations do not prescribe or dictate how a genetic association study should be designed but seek to enhance the transparency of its reporting, regardless of choices made during design, conduct, or analysis.
doi:10.1007/s10654-008-9302-y
PMCID: PMC2764094  PMID: 19189221
Gene–disease associations; Genetics; Gene–environment interaction; Systematic review; Meta analysis; Reporting recommendations; Epidemiology; Genome-wide association
19.  Uses and misuses of the STROBE statement: bibliographic study 
BMJ Open  2011;1(1):e000048.
Objectives
Appropriate reporting is central to the application of findings from research to clinical practice. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) recommendations consist of a checklist of 22 items that provide guidance on the reporting of cohort, case–control and cross-sectional studies, in order to facilitate critical appraisal and interpretation of results. STROBE was published in October 2007 in several journals including The Lancet, BMJ, Annals of Internal Medicine and PLoS Medicine. Within the framework of the revision of the STROBE recommendations, the authors examined the context and circumstances in which the STROBE statement was used in the past.
Design
The authors searched the Web of Science database in August 2010 for articles which cited STROBE and examined a random sample of 100 articles using a standardised, piloted data extraction form. The use of STROBE in observational studies and systematic reviews (including meta-analyses) was classified as appropriate or inappropriate. The use of STROBE to guide the reporting of observational studies was considered appropriate. Inappropriate uses included the use of STROBE as a tool to assess the methodological quality of studies or as a guideline on how to design and conduct studies.
Results
The authors identified 640 articles that cited STROBE. In the random sample of 100 articles, about half were observational studies (32%) or systematic reviews (19%). Comments, editorials and letters accounted for 15%, methodological articles for 8%, and recommendations and narrative reviews for 26% of articles. Of the 32 observational studies, 26 (81%) made appropriate use of STROBE, and three uses (10%) were considered inappropriate. Among 19 systematic reviews, 10 (53%) used STROBE inappropriately as a tool to assess study quality.
Conclusions
The STROBE reporting recommendations are frequently used inappropriately in systematic reviews and meta-analyses as an instrument to assess the methodological quality of observational studies.
Article summary
Article focus
Appropriate reporting is central to the proper application of findings from clinical research in clinical practice.
The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) recommendations aim to provide guidance to authors on how to improve the reporting of observational studies to facilitate critical appraisal and interpretation of results.
We examined the reasons for citing STROBE and found that most observational studies used STROBE as a reporting guideline, while about half of systematic reviews used STROBE as a tool to assess the methodological quality of the studies.
Key messages
Our study provides further evidence that authors of systematic reviews inappropriately use reporting guidelines to assess methodological study quality. Given how common this misuse of STROBE is, we discuss its possible reasons and potential pitfalls.
Strengths and limitations of this study
We conducted a systematic review of the literature to address a relevant and insufficiently discussed issue concerning misuses of reporting guidelines. One of the main concerns of such misuse is the potential introduction of bias into systematic reviews and meta-analyses.
A limitation of our findings is the fact that we included only articles which cited STROBE. This may have resulted in a selection bias, since some researchers may use STROBE in their study and mention it in their manuscript but do not formally cite it.
doi:10.1136/bmjopen-2010-000048
PMCID: PMC3191404  PMID: 22021739
Rheumatology; public health; rehabilitation medicine; epidemiology; systematic reviews; prognostic studies; statistics; research designs in field of test evaluations; heterogeneity; bias; diagnostic accuracy; HIV/AIDS; metaanalysis; social medicine; reporting guideline; methodological study; STROBE; methodological quality; quality assessment
20.  Risk Models to Predict Chronic Kidney Disease and Its Progression: A Systematic Review 
PLoS Medicine  2012;9(11):e1001344.
A systematic review of risk prediction models conducted by Justin Echouffo-Tcheugui and Andre Kengne examines the evidence base for prediction of chronic kidney disease risk and its progression, and the suitability of such models for clinical use.
Background
Chronic kidney disease (CKD) is common, and associated with increased risk of cardiovascular disease and end-stage renal disease, which are potentially preventable through early identification and treatment of individuals at risk. Although risk factors for occurrence and progression of CKD have been identified, their utility for CKD risk stratification through prediction models remains unclear. We critically assessed risk models to predict CKD and its progression, and evaluated their suitability for clinical use.
Methods and Findings
We systematically searched MEDLINE and Embase (1 January 1980 to 20 June 2012). Dual review was conducted to identify studies that reported on the development, validation, or impact assessment of a model constructed to predict the occurrence/presence of CKD or progression to advanced stages. Data were extracted on study characteristics, risk predictors, discrimination, calibration, and reclassification performance of models, as well as validation and impact analyses. We included 26 publications reporting on 30 CKD occurrence prediction risk scores and 17 CKD progression prediction risk scores. The vast majority of CKD risk models had acceptable-to-good discriminatory performance (area under the receiver operating characteristic curve > 0.70) in the derivation sample. Calibration was less commonly assessed, but overall was found to be acceptable. Only eight CKD occurrence and five CKD progression risk models have been externally validated, displaying modest-to-acceptable discrimination. Whether novel biomarkers of CKD (circulatory or genetic) can improve prediction largely remains unclear, and impact studies of CKD prediction models have not yet been conducted. Limitations of risk models include the lack of ethnic diversity in derivation samples, and the scarcity of validation studies. The review is limited by the lack of an agreed-on system for rating prediction models, and the difficulty of assessing publication bias.
Conclusions
The development and clinical application of renal risk scores is in its infancy; however, the discriminatory performance of existing tools is acceptable. The effect of using these models in practice is still to be explored.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Chronic kidney disease (CKD)—the gradual loss of kidney function—is increasingly common worldwide. In the US, for example, about 26 million adults have CKD, and millions more are at risk of developing the condition. Throughout life, small structures called nephrons inside the kidneys filter waste products and excess water from the blood to make urine. If the nephrons stop working because of injury or disease, the rate of blood filtration decreases, and dangerous amounts of waste products such as creatinine build up in the blood. Symptoms of CKD, which rarely occur until the disease is very advanced, include tiredness, swollen feet and ankles, puffiness around the eyes, and frequent urination, especially at night. There is no cure for CKD, but progression of the disease can be slowed by controlling high blood pressure and diabetes, both of which cause CKD, and by adopting a healthy lifestyle. The same interventions also reduce the chances of CKD developing in the first place.
Why Was This Study Done?
CKD is associated with an increased risk of end-stage renal disease, which is treated with dialysis or by kidney transplantation (renal replacement therapies), and of cardiovascular disease. These life-threatening complications are potentially preventable through early identification and treatment of CKD, but most people present with advanced disease. Early identification would be particularly useful in developing countries, where renal replacement therapies are not readily available and resources for treating cardiovascular problems are limited. One way to identify people at risk of a disease is to use a “risk model.” Risk models are constructed by testing the ability of different combinations of risk factors that are associated with a specific disease to identify those individuals in a “derivation sample” who have the disease. The model is then validated on an independent group of people. In this systematic review (a study that uses predefined criteria to identify all the research on a given topic), the researchers critically assess the ability of existing CKD risk models to predict the occurrence of CKD and its progression, and evaluate their suitability for clinical use.
What Did the Researchers Do and Find?
The researchers identified 26 publications reporting on 30 risk models for CKD occurrence and 17 risk models for CKD progression that met their predefined criteria. The risk factors most commonly included in these models were age, sex, body mass index, diabetes status, systolic blood pressure, serum creatinine, protein in the urine, and serum albumin or total protein. Nearly all the models had acceptable-to-good discriminatory performance (a measure of how well a model separates people who have a disease from people who do not have the disease) in the derivation sample. Not all the models had been calibrated (assessed for whether the average predicted risk within a group matched the proportion that actually developed the disease), but in those that had been assessed calibration was good. Only eight CKD occurrence and five CKD progression risk models had been externally validated; discrimination in the validation samples was modest-to-acceptable. Finally, very few studies had assessed whether adding extra variables to CKD risk models (for example, genetic markers) improved prediction, and none had assessed the impact of adopting CKD risk models on the clinical care and outcomes of patients.
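The two performance measures described above can be made concrete with a small, purely illustrative example (hypothetical predicted risks and outcomes, not data from any study in this review): discrimination is the probability that a randomly chosen case is assigned a higher predicted risk than a randomly chosen non-case (the area under the ROC curve), and calibration is the agreement between mean predicted risk and the observed event rate in a group.

```python
# Hypothetical predicted risks and outcomes for eight people (1 = developed CKD).
labels = [1, 1, 1, 0, 0, 0, 0, 0]
risks = [0.8, 0.6, 0.4, 0.5, 0.3, 0.2, 0.2, 0.1]

# Discrimination: AUC as the proportion of case/non-case pairs in which the
# case has the higher predicted risk (ties score one half).
cases = [r for r, y in zip(risks, labels) if y == 1]
noncases = [r for r, y in zip(risks, labels) if y == 0]
n_pairs = len(cases) * len(noncases)
auc = sum(1.0 if c > n else 0.5 if c == n else 0.0
          for c in cases for n in noncases) / n_pairs

# Calibration (in its crudest form): mean predicted risk vs. observed rate.
mean_predicted = sum(risks) / len(risks)
observed_rate = sum(labels) / len(labels)

print(f"AUC = {auc:.2f}")  # 0.93: good discrimination on these toy data
print(f"mean predicted = {mean_predicted:.2f}, observed = {observed_rate:.2f}")
```

In practice, discrimination would be reported on an independent validation sample and calibration assessed within strata of predicted risk (for example, deciles), but the quantities being compared are the ones computed here.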
What Do These Findings Mean?
These findings suggest that the development and clinical application of CKD risk models is still in its infancy. Specifically, these findings indicate that the existing models need to be better calibrated and need to be externally validated in different populations (most of the models were tested only in predominantly white populations) before they are incorporated into guidelines. The impact of their use on clinical outcomes also needs to be assessed before their widespread use is recommended. Such research is worthwhile, however, because of the potential public health and clinical applications of well-designed risk models for CKD. Such models could be used to identify segments of the population that would benefit most from screening for CKD, for example. Moreover, risk communication to patients could motivate them to adopt a healthy lifestyle and to adhere to prescribed medications, and the use of models for predicting CKD progression could help clinicians tailor disease-modifying therapies to individual patient needs.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001344.
This study is further discussed in a PLOS Medicine Perspective by Maarten Taal
The US National Kidney and Urologic Diseases Information Clearinghouse provides information about all aspects of kidney disease; the US National Kidney Disease Education Program provides resources to help improve the understanding, detection, and management of kidney disease (in English and Spanish)
The UK National Health Service Choices website provides information for patients on chronic kidney disease, including some personal stories
The US National Kidney Foundation, a not-for-profit organization, provides information about chronic kidney disease (in English and Spanish)
The not-for-profit UK National Kidney Federation provides support and information for patients with kidney disease and for their carers, including a selection of patient experiences of kidney disease
World Kidney Day, a joint initiative between the International Society of Nephrology and the International Federation of Kidney Foundations, aims to raise awareness about kidneys and kidney disease
doi:10.1371/journal.pmed.1001344
PMCID: PMC3502517  PMID: 23185136
21.  Evaluating the Quality of Research into a Single Prognostic Biomarker: A Systematic Review and Meta-analysis of 83 Studies of C-Reactive Protein in Stable Coronary Artery Disease 
PLoS Medicine  2010;7(6):e1000286.
In a systematic review and meta-analysis of 83 prognostic studies of C-reactive protein in coronary disease, Hemingway and colleagues find substantial biases, preventing them from drawing clear conclusions relating to the use of this marker in clinical practice.
Background
Systematic evaluations of the quality of research on a single prognostic biomarker are rare. We sought to evaluate the quality of prognostic research evidence for the association of C-reactive protein (CRP) with fatal and nonfatal events among patients with stable coronary disease.
Methods and Findings
We searched MEDLINE (1966 to 2009) and EMBASE (1980 to 2009) and selected prospective studies of patients with stable coronary disease, reporting a relative risk for the association of CRP with death and nonfatal cardiovascular events. We included 83 studies, reporting 61,684 patients and 6,485 outcome events. No study reported a prespecified statistical analysis protocol; only two studies reported the time elapsed (in months or years) between initial presentation of symptomatic coronary disease and inclusion in the study. Studies reported a median of seven items (of 17) from the REMARK reporting guidelines, with no evidence of change over time.
The pooled relative risk for the top versus bottom third of CRP distribution was 1.97 (95% confidence interval [CI] 1.78–2.17), with substantial heterogeneity (I² = 79.5). Only 13 studies adjusted for conventional risk factors (age, sex, smoking, obesity, diabetes, and low-density lipoprotein [LDL] cholesterol) and these had a relative risk of 1.65 (95% CI 1.39–1.96), I² = 33.7. Studies reported ten different ways of comparing CRP values, with weaker relative risks for those based on continuous measures. Adjusting for publication bias (for which there was strong evidence, Egger's p<0.001) using a validated method reduced the relative risk to 1.19 (95% CI 1.13–1.25). Only two studies reported a measure of discrimination (c-statistic). In 20 studies the detection rate for subsequent events could be calculated and was 31% for a 10% false positive rate, and the calculated pooled c-statistic was 0.61 (0.57–0.66).
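The pooling machinery behind figures like these can be sketched briefly: standard meta-analysis combines log relative risks with inverse-variance weights, and quantifies between-study heterogeneity with Cochran's Q and the derived I² statistic. The study values below are hypothetical, not taken from the CRP review.

```python
import math

def pool_log_rr(rrs, cis):
    """Fixed-effect pooled RR from per-study RRs and their 95% CIs.

    Returns (pooled RR, I-squared as a percentage).
    """
    logs = [math.log(rr) for rr in rrs]
    # Standard error recovered from the 95% CI width on the log scale
    ses = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for lo, hi in cis]
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    # Cochran's Q: weighted squared deviations from the pooled estimate
    q = sum(w * (l - pooled) ** 2 for w, l in zip(weights, logs))
    df = len(rrs) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return math.exp(pooled), i2

# Three hypothetical studies: RR with (lower, upper) 95% CI
rr, i2 = pool_log_rr([1.8, 2.4, 1.5], [(1.3, 2.5), (1.6, 3.6), (1.1, 2.0)])
print(round(rr, 2), round(i2, 1))  # pooled RR with moderate heterogeneity
```

An I² near 80, as reported for the unadjusted CRP estimate, indicates that most of the variation between studies reflects genuine heterogeneity rather than sampling error, which is one reason the pooled figure alone cannot support clinical recommendations.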
Conclusion
Multiple types of reporting bias, and publication bias, make the magnitude of any independent association between CRP and prognosis among patients with stable coronary disease sufficiently uncertain that no clinical practice recommendations can be made. Publication of prespecified statistical analytic protocols and prospective registration of studies, among other measures, might help improve the quality of prognostic biomarker research.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Coronary artery disease is the leading cause of death among adults in developed countries. With age, fatty deposits called atherosclerotic plaques coat the walls of the arteries, the vessels that carry blood to the body's organs. Because they narrow the arteries, atherosclerotic plaques restrict blood flow. If plaques form in the arteries that feed the heart, the result is coronary artery disease, the symptoms of which include shortness of breath and chest pains (angina). If these symptoms only occur during exertion, the condition is called stable coronary artery disease. Coronary artery disease can cause potentially fatal heart attacks (myocardial infarctions). A heart attack occurs when a plaque ruptures and a blood clot completely blocks the artery, thereby killing part of the heart. Smoking, high blood pressure, high blood levels of cholesterol (a type of fat), diabetes, and being overweight are risk factors for coronary artery disease. Treatments for the condition include lifestyle changes and medications that lower blood pressure and blood cholesterol. Narrowed arteries can also be widened using a device called a stent or surgically bypassed.
Why Was This Study Done?
Clinicians can predict whether a patient with coronary artery disease is likely to have a heart attack by considering their risk factors. They then use this “prognosis” to help them manage the patient. To provide further help for clinicians, researchers are trying to identify prognostic biomarkers (molecules whose blood levels indicate how a disease might develop) for coronary artery disease. However, before a biomarker can be used clinically, it must be properly validated and there are concerns that there is insufficient high quality evidence to validate many biomarkers. In this systematic review and meta-analysis, the researchers ask whether the evidence for an association between blood levels of C-reactive protein (CRP, an inflammatory protein) and subsequent fatal and nonfatal events affecting the heart and circulation (cardiovascular events) among patients with stable coronary artery disease supports the routine measurement of CRP as recommended in clinical practice guidelines. A systematic review uses predefined criteria to identify all the research on a given topic; a meta-analysis is a statistical method for combining the results of several studies.
What Did the Researchers Do and Find?
The researchers identified 83 studies that investigated the association between CRP levels measured in people with coronary artery disease and subsequent cardiovascular events. Their examination of these studies revealed numerous reporting and publication shortcomings. For example, none of the studies reported a prespecified statistical analysis protocol, yet analyses should be prespecified to avoid the choice of analytical method biasing the study's results. Furthermore, on average, the studies only reported seven of the 17 recommended items in the REMARK reporting guidelines, which were designed to improve the reporting quality of tumor biomarker prognostic studies. The meta-analysis revealed that patients with a CRP level in the top third of the distribution were nearly twice as likely to have a cardiovascular event as patients with a CRP in the bottom third of the distribution (a relative risk of 1.97). However, the outcomes varied considerably between studies (heterogeneity) and there was strong evidence for publication bias—most published studies were small and smaller studies were more likely to report higher relative risks. Adjustment for publication bias reduced the relative risk associated with high CRP levels to 1.19. Finally, nearly all the studies failed to calculate whether CRP measurements discriminated between patients likely and unlikely to have a subsequent cardiovascular event.
What Do These Findings Mean?
These findings suggest that, because of multiple types of reporting and publication bias, the size of the association between CRP levels and prognosis among patients with stable coronary artery disease is extremely uncertain. They also suggest that CRP measurements are unlikely to add anything to the prognostic discrimination achieved by considering blood pressure and other standard clinical factors among this patient group. Thus, the researchers suggest, the recommendation that CRP measurements should be used in the management of patients with stable coronary artery disease ought to be removed from clinical practice guidelines. More generally, these findings increase concerns about the quality of research into prognostic biomarkers and highlight areas that need to be changed, the most fundamental of which is the need to preregister studies on prognostic biomarkers and their analytic protocols.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000286.
The MedlinePlus Encyclopedia has pages on coronary artery disease and C-reactive protein (in English and Spanish)
MedlinePlus provides links to other sources of information on heart disease
The American Heart Association provides information for patients and caregivers on all aspects of cardiovascular disease, including information on the role of C-reactive protein in heart disease
Information is available from the British Heart Foundation on heart disease and keeping the heart healthy
Wikipedia has pages on biomarkers and on C-reactive protein (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The EQUATOR network is a resource center for good reporting of health research studies
doi:10.1371/journal.pmed.1000286
PMCID: PMC2879408  PMID: 20532236
22.  A systematic scoping review of adherence to reporting guidelines in health care literature 
Background
Reporting guidelines have been available for the past 17 years since the inception of the Consolidated Standards of Reporting Trials statement in 1996. These guidelines were developed to improve the quality of reporting of studies in medical literature. Despite the widespread availability of these guidelines, the quality of reporting of medical literature has remained suboptimal. In this study, we assess the current adherence practice to reporting guidelines; determine key factors associated with better adherence to these guidelines; and provide recommendations to enhance adherence to reporting guidelines for future studies.
Methods
We undertook a systematic scoping review of systematic reviews of adherence to reporting guidelines across different clinical areas and study designs. We searched four electronic databases (Cumulative Index to Nursing and Allied Health Literature, Web of Science, Embase, and Medline) from January 1996 to September 2012. Studies were included if they addressed adherence to one of the following guidelines: Consolidated Standards of Reporting Trials (CONSORT), Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), Quality of Reporting of Meta-analysis (QUOROM), Transparent Reporting of Evaluations with Nonrandomized Designs (TREND), Meta-analysis Of Observational Studies in Epidemiology (MOOSE) and Strengthening the Reporting of Observational Studies in Epidemiology (STROBE). A protocol for this study was devised. A literature search, data extraction, and quality assessment were performed independently by two authors in duplicate. This study reporting follows the PRISMA guidelines.
Results
Our search retrieved 5159 titles, of which 50 were eligible. Overall, 86.0% of studies reported suboptimal levels of adherence to reporting guidelines. Factors associated with better adherence included journal impact factor and endorsement of guidelines, publication date, funding source, multisite studies, pharmacological interventions and larger studies.
Conclusion
Reporting guidelines in the clinical literature are important to improve the standards of reporting of clinical studies; however, adherence to these guidelines remains suboptimal. Action is therefore needed to enhance the adherence to these standards. Strategies to enhance adherence include journal editorial policies endorsing these guidelines.
doi:10.2147/JMDH.S43952
PMCID: PMC3649856  PMID: 23671390
scoping; systematic; review; adherence; reporting; guidelines
23.  The “Child Health Evidence Week” and GRADE grid may aid transparency in the deliberative process of guideline development 
Journal of Clinical Epidemiology  2012;65(9-10):962-969.
Objective
To explore the evidence translation process during a 1-week national guideline development workshop (“Child Health Evidence Week”) in Kenya.
Study Design and Setting
Nonparticipant observational study of the discussions of a multidisciplinary guideline development panel in Kenya. Discussions were aided by GRADE (Grading of Recommendations Assessment, Development, and Evaluation) grid.
Results
Three key thematic categories emerged: 1) “referral to other evidence to support or refute the proposed recommendations;” 2) “assessment of the presented research evidence;” and 3) “assessment of the local applicability of evidence.” The types of evidence cited included research evidence and anecdotal evidence based on clinician experiences. Assessment of the research evidence revealed important challenges in the translation of evidence into recommendations, including absence of evidence, low quality or inconclusive evidence, inadequate reporting of key features of the management under consideration, and differences in panelists’ interpretation of the research literature. A broad range of factors with potential to affect local applicability of evidence were discussed.
Conclusion
The process of the “Child Health Evidence Week” combined with the GRADE grid may aid transparency in the deliberative process of guideline development, and provide a mechanism for comprehensive assessment, documentation, and reporting of multiple factors that influence the quality and applicability of guideline recommendations.
doi:10.1016/j.jclinepi.2012.03.004
PMCID: PMC3413881  PMID: 22742914
Clinical practice guidelines; Evidence; Knowledge translation; Transparency; GRADE; Pediatrics
24.  Inclusion of Ethical Issues in Dementia Guidelines: A Thematic Text Analysis 
PLoS Medicine  2013;10(8):e1001498.
Background
Clinical practice guidelines (CPGs) aim to improve professionalism in health care. However, current CPG development manuals fail to address how to include ethical issues in a systematic and transparent manner. The objective of this study was to assess the representation of ethical issues in general CPGs on dementia care.
Methods and Findings
To identify national CPGs on dementia care, five databases of guidelines were searched and national psychiatric associations were contacted in August 2011 and in June 2013. A framework for the assessment of the identified CPGs' ethical content was developed on the basis of a prior systematic review of ethical issues in dementia care. Thematic text analysis and a 4-point rating score were employed to assess how ethical issues were addressed in the identified CPGs. Twelve national CPGs were included. Thirty-one ethical issues in dementia care were identified by the prior systematic review. The proportion of these 31 ethical issues that were explicitly addressed by each CPG ranged from 22% to 77%, with a median of 49.5%. National guidelines differed substantially with respect to (a) which ethical issues were represented, (b) whether ethical recommendations were included, (c) whether justifications or citations were provided to support recommendations, and (d) to what extent the ethical issues were explained.
Conclusions
Ethical issues were inconsistently addressed in national dementia guidelines, with some guidelines including most and some including few ethical issues. Guidelines should address ethical issues and how to deal with them to help the medical profession understand how to approach care of patients with dementia, and for patients, their relatives, and the general public, all of whom might seek information and advice in national guidelines. There is a need for further research to specify how detailed ethical issues and their respective recommendations can and should be addressed in dementia guidelines.
Please see later in the article for the Editors' Summary
Editors’ Summary
Background
In the past, doctors tended to rely on their own experience to choose the best treatment for their patients. Faced with a patient with dementia (a brain disorder that affects short-term memory and the ability to carry out normal daily activities), for example, a doctor would use his/her own experience to help decide whether the patient should remain at home or would be better cared for in a nursing home. Similarly, the doctor might have to decide whether antipsychotic drugs might be necessary to reduce behavioral or psychological symptoms such as restlessness or shouting. However, over the past two decades, numerous evidence-based clinical practice guidelines (CPGs) have been produced by governmental bodies and medical associations that aim to improve standards of clinical competence and professionalism in health care. During the development of each guideline, experts search the medical literature for the current evidence about the diagnosis and treatment of a disease, evaluate the quality of that evidence, and then make recommendations based on the best evidence available.
Why Was This Study Done?
Currently, CPG development manuals do not address how to include ethical issues in CPGs. A health-care professional is ethical if he/she behaves in accordance with the accepted principles of right and wrong that govern the medical profession. More specifically, medical professionalism is based on a set of binding ethical principles—respect for patient autonomy, beneficence, non-maleficence (the "do no harm" principle), and justice. In particular, CPG development manuals do not address disease-specific ethical issues (DSEIs), clinical ethical situations that are relevant to the management of a specific disease. So, for example, a DSEI that arises in dementia care is the conflict between the ethical principles of non-maleficence and patient autonomy (freedom-to-move-at-will). Thus, healthcare professionals may have to decide to physically restrain a patient with dementia to prevent the patient doing harm to him- or herself or to someone else. Given the lack of guidance on how to address ethical issues in CPG development manuals, in this thematic text analysis, the researchers assess the representation of ethical issues in CPGs on general dementia care. Thematic text analysis uses a framework for the assessment of qualitative data (information that is word-based rather than number-based) that involves pinpointing, examining, and recording patterns (themes) among the available data.
What Did the Researchers Do and Find?
The researchers identified 12 national CPGs on dementia care by searching guideline databases and by contacting national psychiatric associations. They developed a framework for the assessment of the ethical content in these CPGs based on a previous systematic review of ethical issues in dementia care. Of the 31 DSEIs included by the researchers in their analysis, the proportion that were explicitly addressed by each CPG ranged from 22% (Switzerland) to 77% (USA); on average the CPGs explicitly addressed half of the DSEIs. Four DSEIs—adequate consideration of advance directives in decision making, usage of GPS and other monitoring techniques, covert medication, and dealing with suicidal thinking—were not addressed in at least 11 of the CPGs. The inclusion of recommendations on how to deal with DSEIs ranged from 10% of DSEIs covered in the Swiss CPG to 71% covered in the US CPG. Overall, national guidelines differed substantially with respect to which ethical issues were included, whether ethical recommendations were included, whether justifications or citations were provided to support recommendations, and to what extent the ethical issues were clearly explained.
What Do These Findings Mean?
These findings show that national CPGs on dementia care already address clinical ethical issues but that the extent to which the spectrum of DSEIs is considered varies widely within and between CPGs. They also indicate that recommendations on how to deal with DSEIs often lack the evidence that health-care professionals use to justify their clinical decisions. The researchers suggest that this situation can and should be improved, although more research is needed to determine how ethical issues and recommendations should be addressed in dementia guidelines. A more systematic and transparent inclusion of DSEIs in CPGs for dementia (and for other conditions) would further support the concept of medical professionalism as a core element of CPGs, note the researchers, but is also important for patients and their relatives who might turn to national CPGs for information and guidance at a stressful time of life.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001498.
Wikipedia contains a page on clinical practice guidelines (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The US National Guideline Clearinghouse provides information on national guidelines, including CPGs for dementia
The Guidelines International Network promotes the systematic development and application of clinical practice guidelines
The American Medical Association provides information about medical ethics; the British Medical Association provides information on all aspects of ethics and includes an essential tool kit that introduces common ethical problems and practical ways to deal with them
The UK National Health Service Choices website provides information about dementia, including a personal story about dealing with dementia
MedlinePlus provides links to additional resources about dementia and about Alzheimer's disease, a specific type of dementia (in English and Spanish)
The UK Nuffield Council on Bioethics provides the report Dementia: ethical issues and additional information on the public consultation on ethical issues in dementia care
doi:10.1371/journal.pmed.1001498
PMCID: PMC3742442  PMID: 23966839
25.  Genome-Wide Association Studies, Field Synopses, and the Development of the Knowledge Base on Genetic Variation and Human Diseases 
American Journal of Epidemiology  2009;170(3):269-279.
Genome-wide association studies (GWAS) have led to a rapid increase in available data on common genetic variants and phenotypes and numerous discoveries of new loci associated with susceptibility to common complex diseases. Integrating the evidence from GWAS and candidate gene studies depends on concerted efforts in data production, online publication, database development, and continuously updated data synthesis. Here the authors summarize current experience and challenges on these fronts, which were discussed at a 2008 multidisciplinary workshop sponsored by the Human Genome Epidemiology Network. Comprehensive field synopses that integrate many reported gene-disease associations have been systematically developed for several fields, including Alzheimer's disease, schizophrenia, bladder cancer, coronary heart disease, preterm birth, and DNA repair genes in various cancers. The authors summarize insights from these field synopses and discuss remaining unresolved issues—especially in the light of evidence from GWAS, for which they summarize empirical P-value and effect-size data on 223 discovered associations for binary outcomes (142 with P < 10⁻⁷). They also present a vision of collaboration that builds reliable cumulative evidence for genetic associations with common complex diseases and a transparent, distributed, authoritative knowledge base on genetic variation and human health. As a next step in the evolution of Human Genome Epidemiology reviews, the authors invite investigators to submit field synopses for possible publication in the American Journal of Epidemiology.
doi:10.1093/aje/kwp119
PMCID: PMC2714948  PMID: 19498075
association; database; encyclopedias; epidemiologic methods; genome, human; genome-wide association study; genomics; meta-analysis
