1.  A Unifying Framework for Evaluating the Predictive Power of Genetic Variants Based on the Level of Heritability Explained 
PLoS Genetics  2010;6(12):e1001230.
An increasing number of genetic variants have been identified for many complex diseases. However, it is controversial whether risk prediction based on genomic profiles will be useful clinically. Appropriate statistical measures to evaluate the performance of genetic risk prediction models are required. Previous studies have mainly focused on the use of the area under the receiver operating characteristic (ROC) curve, or AUC, to judge the predictive value of genetic tests. However, AUC has its limitations and should be complemented by other measures. In this study, we develop a novel unifying statistical framework that connects a large variety of predictive indices together. We showed that, given the overall disease probability and the level of variance in total liability (or heritability) explained by the genetic variants, we can estimate analytically a large variety of prediction metrics, for example the AUC, the mean risk difference between cases and non-cases, the net reclassification improvement (ability to reclassify people into high- and low-risk categories), the proportion of cases explained by a specific percentile of population at the highest risk, the variance of predicted risks, and the risk at any percentile. We also demonstrate how to construct graphs to visualize the performance of risk models, such as the ROC curve, the density of risks, and the predictiveness curve (disease risk plotted against risk percentile). The results from simulations match very well with our theoretical estimates. Finally we apply the methodology to nine complex diseases, evaluating the predictive power of genetic tests based on known susceptibility variants for each trait.
Author Summary
Recently many genetic variants have been established for diseases, and the findings have raised hope for risk prediction based on genomic profiles. However, we need to have proper statistical measures to assess the usefulness of such tests. In this study, we developed a statistical framework which enables us to evaluate many predictive indices analytically. It is based on the liability threshold model, which postulates a latent liability that is normally distributed. Affected individuals are assumed to have a liability exceeding a certain threshold. We demonstrated that, given the overall disease probability and variance in liability explained by the genetic markers, we can compute a variety of predictive indices. An example is the area under the receiver operating characteristic (ROC) curve, or AUC, which is very commonly employed. However, the limitations of AUC are often ignored, and we proposed complementing it with other indices. We have therefore also computed other metrics like the average difference in risks between cases and non-cases, the ability of reclassification into high- and low-risk categories, and the proportion of cases accounted for by a certain percentile of population at the highest risk. We also derived how to construct graphs showing the risk distribution in population.
doi:10.1371/journal.pgen.1001230
PMCID: PMC2996330  PMID: 21151957
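The liability threshold model described above can be checked by simulation: draw a genetic liability with variance h² and a residual with variance 1 − h², label individuals whose total liability exceeds the prevalence threshold as cases, and estimate the AUC of the genetic score. This is a minimal stdlib-only sketch of that idea, not the authors' code; the parameter values are illustrative.

```python
import math
import random

def simulate_auc(h2=0.3, prevalence=0.1, n=200_000, seed=1):
    """Estimate the AUC of a genetic score under the liability
    threshold model: liability = genetic + residual, both normal,
    case if liability exceeds the threshold set by prevalence."""
    rng = random.Random(seed)

    def norm_cdf(x):
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    # find threshold T with P(liability > T) = prevalence by bisection
    lo, hi = -10.0, 10.0
    while hi - lo > 1e-12:
        mid = (lo + hi) / 2
        if 1.0 - norm_cdf(mid) > prevalence:
            lo = mid
        else:
            hi = mid
    threshold = (lo + hi) / 2

    cases, controls = [], []
    for _ in range(n):
        g = rng.gauss(0.0, math.sqrt(h2))        # genetic liability
        e = rng.gauss(0.0, math.sqrt(1.0 - h2))  # residual liability
        (cases if g + e > threshold else controls).append(g)

    # AUC = P(random case's genetic score > random control's),
    # estimated from random case/control pairs
    pairs, wins = 100_000, 0.0
    for _ in range(pairs):
        c, u = rng.choice(cases), rng.choice(controls)
        wins += 1.0 if c > u else (0.5 if c == u else 0.0)
    return wins / pairs
```

For h² = 0.3 and 10% prevalence this lands in the mid-0.7s, in line with the paper's point that the AUC is bounded by the heritability explained.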
2.  Assessment of Clinical Validity of a Breast Cancer Risk Model Combining Genetic and Clinical Information 
Background
The Gail model is widely used for the assessment of risk of invasive breast cancer based on recognized clinical risk factors. In recent years, a substantial number of single-nucleotide polymorphisms (SNPs) associated with breast cancer risk have been identified. However, it remains unclear how to effectively integrate clinical and genetic risk factors for risk assessment.
Methods
Seven SNPs associated with breast cancer risk were selected from the literature and genotyped in white non-Hispanic women in a nested case–control cohort of 1664 case patients and 1636 control subjects within the Women’s Health Initiative Clinical Trial. SNP risk scores were computed based on previously published odds ratios assuming a multiplicative model. Combined risk scores were calculated by multiplying Gail risk estimates by the SNP risk scores. The independence of Gail risk and SNP risk was evaluated by logistic regression. Calibration of relative risks was evaluated using the Hosmer–Lemeshow test. The performance of the combined risk scores was evaluated using receiver operating characteristic curves. The net reclassification improvement (NRI) was used to assess improvement in classification of women into low (<1.5%), intermediate (1.5%–2%), and high (>2%) categories of 5-year risk. All tests of statistical significance were two-sided.
Results
The SNP risk score was nearly independent of Gail risk. There was good agreement between predicted and observed SNP relative risks. In the analysis for receiver operating characteristic curves, the combined risk score was more discriminating, with area under the curve of 0.594 compared with area under the curve of 0.557 for Gail risk alone (P < .001). Classification also improved for 5.6% of case patients and 2.9% of control subjects, showing an NRI value of 0.085 (P = 1.0 × 10−5). Focusing on women with intermediate Gail risk resulted in an improved NRI of 0.195 (P = 8.6 × 10−5).
Conclusions
Combining validated common genetic risk factors with clinical risk factors resulted in modest improvement in classification of breast cancer risks in white non-Hispanic postmenopausal women. Classification performance was further improved by focusing on women at intermediate risk.
doi:10.1093/jnci/djq388
PMCID: PMC2970578  PMID: 20956782
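The multiplicative SNP score used in this study multiplies per-allele odds ratios across loci and is then combined with the Gail estimate by simple multiplication, relying on the demonstrated near-independence of the two components. The sketch below illustrates the arithmetic only; the odds ratios, allele frequencies, and genotypes are made-up examples, not the seven SNPs from the study, and the normalization (population mean under Hardy-Weinberg equilibrium) is one common convention.

```python
def snp_risk_score(genotypes, odds_ratios, risk_allele_freqs):
    """Multiplicative SNP score: product of per-allele odds ratios,
    each normalized by its population-average contribution (assuming
    Hardy-Weinberg equilibrium) so the average score is ~1."""
    score = 1.0
    for g, or_, p in zip(genotypes, odds_ratios, risk_allele_freqs):
        score *= or_ ** g  # g = 0, 1, or 2 copies of the risk allele
        # expected per-SNP factor in the population under HWE
        mean = (1 - p) ** 2 + 2 * p * (1 - p) * or_ + p * p * or_ ** 2
        score /= mean
    return score

# combined risk = clinical (Gail) estimate x normalized SNP score,
# assuming independence of the two components (illustrative numbers)
gail_5yr_risk = 0.016  # hypothetical 1.6% 5-year risk
combined = gail_5yr_risk * snp_risk_score(
    [1, 0, 2], [1.2, 1.1, 1.3], [0.3, 0.4, 0.2]
)
```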
3.  Risk Models to Predict Chronic Kidney Disease and Its Progression: A Systematic Review 
PLoS Medicine  2012;9(11):e1001344.
A systematic review of risk prediction models conducted by Justin Echouffo-Tcheugui and Andre Kengne examines the evidence base for prediction of chronic kidney disease risk and its progression, and suitability of such models for clinical use.
Background
Chronic kidney disease (CKD) is common, and associated with increased risk of cardiovascular disease and end-stage renal disease, which are potentially preventable through early identification and treatment of individuals at risk. Although risk factors for occurrence and progression of CKD have been identified, their utility for CKD risk stratification through prediction models remains unclear. We critically assessed risk models to predict CKD and its progression, and evaluated their suitability for clinical use.
Methods and Findings
We systematically searched MEDLINE and Embase (1 January 1980 to 20 June 2012). Dual review was conducted to identify studies that reported on the development, validation, or impact assessment of a model constructed to predict the occurrence/presence of CKD or progression to advanced stages. Data were extracted on study characteristics, risk predictors, discrimination, calibration, and reclassification performance of models, as well as validation and impact analyses. We included 26 publications reporting on 30 CKD occurrence prediction risk scores and 17 CKD progression prediction risk scores. The vast majority of CKD risk models had acceptable-to-good discriminatory performance (area under the receiver operating characteristic curve >0.70) in the derivation sample. Calibration was less commonly assessed, but overall was found to be acceptable. Only eight CKD occurrence and five CKD progression risk models have been externally validated, displaying modest-to-acceptable discrimination. Whether novel biomarkers of CKD (circulatory or genetic) can improve prediction largely remains unclear, and impact studies of CKD prediction models have not yet been conducted. Limitations of risk models include the lack of ethnic diversity in derivation samples, and the scarcity of validation studies. The review is limited by the lack of an agreed-on system for rating prediction models, and the difficulty of assessing publication bias.
Conclusions
The development and clinical application of renal risk scores is in its infancy; however, the discriminatory performance of existing tools is acceptable. The effect of using these models in practice is still to be explored.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Chronic kidney disease (CKD)—the gradual loss of kidney function—is increasingly common worldwide. In the US, for example, about 26 million adults have CKD, and millions more are at risk of developing the condition. Throughout life, small structures called nephrons inside the kidneys filter waste products and excess water from the blood to make urine. If the nephrons stop working because of injury or disease, the rate of blood filtration decreases, and dangerous amounts of waste products such as creatinine build up in the blood. Symptoms of CKD, which rarely occur until the disease is very advanced, include tiredness, swollen feet and ankles, puffiness around the eyes, and frequent urination, especially at night. There is no cure for CKD, but progression of the disease can be slowed by controlling high blood pressure and diabetes, both of which cause CKD, and by adopting a healthy lifestyle. The same interventions also reduce the chances of CKD developing in the first place.
Why Was This Study Done?
CKD is associated with an increased risk of end-stage renal disease, which is treated with dialysis or by kidney transplantation (renal replacement therapies), and of cardiovascular disease. These life-threatening complications are potentially preventable through early identification and treatment of CKD, but most people present with advanced disease. Early identification would be particularly useful in developing countries, where renal replacement therapies are not readily available and resources for treating cardiovascular problems are limited. One way to identify people at risk of a disease is to use a “risk model.” Risk models are constructed by testing the ability of different combinations of risk factors that are associated with a specific disease to identify those individuals in a “derivation sample” who have the disease. The model is then validated on an independent group of people. In this systematic review (a study that uses predefined criteria to identify all the research on a given topic), the researchers critically assess the ability of existing CKD risk models to predict the occurrence of CKD and its progression, and evaluate their suitability for clinical use.
What Did the Researchers Do and Find?
The researchers identified 26 publications reporting on 30 risk models for CKD occurrence and 17 risk models for CKD progression that met their predefined criteria. The risk factors most commonly included in these models were age, sex, body mass index, diabetes status, systolic blood pressure, serum creatinine, protein in the urine, and serum albumin or total protein. Nearly all the models had acceptable-to-good discriminatory performance (a measure of how well a model separates people who have a disease from people who do not have the disease) in the derivation sample. Not all the models had been calibrated (assessed for whether the average predicted risk within a group matched the proportion that actually developed the disease), but in those that had been assessed calibration was good. Only eight CKD occurrence and five CKD progression risk models had been externally validated; discrimination in the validation samples was modest-to-acceptable. Finally, very few studies had assessed whether adding extra variables to CKD risk models (for example, genetic markers) improved prediction, and none had assessed the impact of adopting CKD risk models on the clinical care and outcomes of patients.
What Do These Findings Mean?
These findings suggest that the development and clinical application of CKD risk models is still in its infancy. Specifically, these findings indicate that the existing models need to be better calibrated and need to be externally validated in different populations (most of the models were tested only in predominantly white populations) before they are incorporated into guidelines. The impact of their use on clinical outcomes also needs to be assessed before their widespread use is recommended. Such research is worthwhile, however, because of the potential public health and clinical applications of well-designed risk models for CKD. Such models could be used to identify segments of the population that would benefit most from screening for CKD, for example. Moreover, risk communication to patients could motivate them to adopt a healthy lifestyle and to adhere to prescribed medications, and the use of models for predicting CKD progression could help clinicians tailor disease-modifying therapies to individual patient needs.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001344.
This study is further discussed in a PLOS Medicine Perspective by Maarten Taal
The US National Kidney and Urologic Diseases Information Clearinghouse provides information about all aspects of kidney disease; the US National Kidney Disease Education Program provides resources to help improve the understanding, detection, and management of kidney disease (in English and Spanish)
The UK National Health Service Choices website provides information for patients on chronic kidney disease, including some personal stories
The US National Kidney Foundation, a not-for-profit organization, provides information about chronic kidney disease (in English and Spanish)
The not-for-profit UK National Kidney Federation provides support and information for patients with kidney disease and for their carers, including a selection of patient experiences of kidney disease
World Kidney Day, a joint initiative between the International Society of Nephrology and the International Federation of Kidney Foundations, aims to raise awareness about kidneys and kidney disease
doi:10.1371/journal.pmed.1001344
PMCID: PMC3502517  PMID: 23185136
4.  Inflammatory Markers and Poor Outcome after Stroke: A Prospective Cohort Study and Systematic Review of Interleukin-6 
PLoS Medicine  2009;6(9):e1000145.
In a prospective cohort study of patient outcomes following stroke, William Whiteley and colleagues find that markers of inflammatory response are associated with poor outcomes. However, addition of these markers to existing prognostic models does not improve outcome prediction.
Background
The objective of this study was to determine whether: (a) markers of acute inflammation (white cell count, glucose, interleukin-6, C-reactive protein, and fibrinogen) are associated with poor outcome after stroke and (b) the addition of markers to previously validated prognostic models improves prediction of poor outcome.
Methods and Findings
We prospectively recruited patients between 2002 and 2005. Clinicians assessed patients and drew blood for inflammatory markers. Patients were followed up by postal questionnaire for poor outcome (a score of >2 on the modified Rankin Scale) and death through the General Register Office (Scotland) at 6 mo. We performed a systematic review of the literature and meta-analysis of the association between interleukin-6 and poor outcome after stroke to place our study in the context of previous research. We recruited 844 patients; mortality data were available in 844 (100%) and functional outcome in 750 (89%). After appropriate adjustment, the odds ratios for the association of markers and poor outcome (comparing the upper and the lower third) were interleukin-6, 3.1 (95% CI: 1.9–5.0); C-reactive protein, 1.9 (95% CI: 1.2–3.1); fibrinogen, 1.5 (95% CI: 1.0–2.36); white cell count, 2.1 (95% CI: 1.3–3.4); and glucose 1.3 (95% CI: 0.8–2.1). The results for interleukin-6 were similar to other studies. However, the addition of inflammatory marker levels to validated prognostic models did not materially improve model discrimination, calibration, or reclassification for prediction of poor outcome after stroke.
Conclusions
Raised levels of markers of the acute inflammatory response after stroke are associated with poor outcomes. However, the addition of these markers to a previously validated stroke prognostic model did not improve the prediction of poor outcome. Whether inflammatory markers are useful in prediction of recurrent stroke or other vascular events is a separate question, which requires further study.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Every year, 15 million people have a stroke. In the US alone, someone has a stroke every 40 seconds and someone dies from a stroke every 3–4 minutes. Stroke occurs when the blood supply to the brain is suddenly interrupted by a blood clot blocking a blood vessel in the brain (ischemic stroke, the commonest type of stroke) or by a blood vessel in the brain bursting (hemorrhagic stroke). Deprived of the oxygen normally carried to them by the blood, the brain cells near the blockage die. The symptoms of stroke depend on which part of the brain is damaged but include sudden weakness or paralysis along one side of the body, vision loss in one or both eyes, and confusion or trouble speaking or understanding speech. Anyone experiencing these symptoms should seek medical assistance immediately because prompt treatment can limit the damage to the brain. Risk factors for stroke include age (three-quarters of strokes occur in people over 65 years old), high blood pressure, and heart disease.
Why Was This Study Done?
Many people are left with permanent disabilities after a stroke. An accurate way to predict the likely long-term outcome (prognosis) for individual patients would help clinicians manage their patients and help relatives and patients come to terms with their changed circumstances. Clinicians can get some idea of their patients' likely outcomes by assessing six simple clinical variables. These include the ability to lift both arms and awareness of the present situation. But could the inclusion of additional variables improve the predictive power of this simple prognostic model? There is some evidence that high levels in the blood of inflammatory markers (for example, interleukin-6 and C-reactive protein) are associated with poor outcomes after stroke—inflammation is the body's response to infection and to damage. In this prospective cohort study, the researchers investigate whether inflammatory markers are associated with poor outcome after stroke and whether the addition of these markers to the six-variable prognostic model improves its predictive power. Prospective cohort studies enroll a group of participants and follow their subsequent progress.
What Did the Researchers Do and Find?
The researchers recruited 844 patients who had had a stroke (mainly mild ischemic strokes) in Edinburgh. Each patient was assessed soon after the stroke by a clinician and blood was taken for the measurement of inflammatory markers. Six months after the stroke, the patient or their relatives completed a postal questionnaire that assessed their progress. Information about patient deaths was obtained from the General Register Office for Scotland. Dependency on others for the activities of daily life or dying was recorded as a poor outcome. In their statistical analysis of these data, the researchers found that raised levels of several inflammatory markers increased the likelihood of a poor outcome. For example, after allowing for age and other factors, individuals with interleukin-6 levels in the upper third of the measured range were three times as likely to have a poor outcome as patients with interleukin-6 levels in the bottom third of the range. A systematic search of the literature revealed that previous studies that had looked at the potential association between interleukin-6 levels and outcome after stroke had found similar results. Finally, the researchers found that the addition of inflammatory marker levels to the six-variable prognostic model did not substantially improve its ability to predict outcome after stroke for this cohort of patients.
What Do These Findings Mean?
These findings provide additional support for the idea that increased levels of inflammatory markers are associated with a poor outcome after stroke. However, because patients with infections were not excluded from the study, infection may be responsible for part of the observed association. Importantly, these findings also show that although the inclusion of inflammatory markers in the six variable prognostic model slightly improves its ability to predict outcome, the magnitude of this improvement is too small to warrant the use of these markers in routine practice. Whether the measurement of inflammatory markers might be useful in the prediction of recurrent stroke—at least a quarter of people who survive a stroke will have another one within 5 years—requires further study.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000145.
This study is further discussed in a PLoS Medicine Perspective by Len Kritharides
The US National Institute of Neurological Disorders and Stroke provides information about all aspects of stroke (in English and Spanish); the Know Stroke site provides educational materials about stroke prevention, treatment, and rehabilitation (in English and Spanish)
The Internet Stroke Center provides detailed information about stroke for patients, families and health professionals (in English and Spanish)
The UK National Health Service also provides information for patients and their families about stroke (in several languages)
MedlinePlus provides links to further resources and advice about stroke (in English and Spanish)
The six simple variable model for prediction of death or disability after stroke is available here: http://dcnapp1.dcn.ed.ac.uk/scope/
doi:10.1371/journal.pmed.1000145
PMCID: PMC2730573  PMID: 19901973
5.  Potential Impact of Adding Genetic Markers to Clinical Parameters in Predicting Prostate Biopsy Outcomes in Men Following an Initial Negative Biopsy: Findings from the REDUCE Trial 
European urology  2012;62(6):953-961.
Background
Several germline single nucleotide polymorphisms (SNPs) have been consistently associated with prostate cancer (PCa) risk.
Objective
To determine whether there is an improvement in PCa risk prediction by adding these SNPs to existing predictors of PCa.
Design, setting, and participants
Subjects included men in the placebo arm of the randomized Reduction by Dutasteride of Prostate Cancer Events (REDUCE) trial in whom germline DNA was available. All men had an initial negative prostate biopsy and underwent study-mandated biopsies at 2 yr and 4 yr. Predictive performance of baseline clinical parameters and/or a genetic score based on 33 established PCa risk-associated SNPs was evaluated.
Outcome measurements and statistical analysis
Area under the receiver operating characteristic curves (AUC) were used to compare different models with different predictors. Net reclassification improvement (NRI) and decision curve analysis (DCA) were used to assess changes in risk prediction by adding genetic markers.
Results and limitations
Among 1654 men, genetic score was a significant predictor of positive biopsy, even after adjusting for known clinical variables and family history (p = 3.41 × 10−8). The AUC for the genetic score exceeded that of any other PCa predictor at 0.59. Adding the genetic score to the best clinical model improved the AUC from 0.62 to 0.66 (p < 0.001), reclassified PCa risk in 33% of men (NRI: 0.10; p = 0.002), resulted in higher net benefit from DCA, and decreased the number of biopsies needed to detect the same number of PCa instances. The benefit of adding the genetic score was greatest among men at intermediate risk (25th percentile to 75th percentile). Similar results were found for high-grade (Gleason score ≥7) PCa. A major limitation of this study was its focus on white patients only.
Conclusions
Adding genetic markers to current clinical parameters may improve PCa risk prediction. The improvement is modest but may be helpful for better determining the need for repeat prostate biopsy. The clinical impact of these results requires further study.
doi:10.1016/j.eururo.2012.05.006
PMCID: PMC3568765  PMID: 22652152
Prostate cancer; Genetics; AUC; Detection rate; Reclassification; SNPs; Prospective study; Clinical trial
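The decision curve analysis (DCA) cited in this record compares models by "net benefit" at a chosen risk threshold: true positives gained per subject, penalized by false positives weighted by the odds of the threshold. A minimal sketch of that calculation, with made-up toy data rather than REDUCE trial values:

```python
def net_benefit(y_true, risk_scores, threshold):
    """Net benefit at risk threshold pt:
    TP/N - (FP/N) * pt / (1 - pt),
    where subjects with risk >= pt are treated (biopsied)."""
    n = len(y_true)
    tp = sum(1 for y, r in zip(y_true, risk_scores)
             if r >= threshold and y == 1)
    fp = sum(1 for y, r in zip(y_true, risk_scores)
             if r >= threshold and y == 0)
    return tp / n - (fp / n) * threshold / (1 - threshold)
```

A model whose curve of net benefit across plausible thresholds dominates the alternatives (and the "biopsy everyone" / "biopsy no one" strategies) is preferred; this is the sense in which adding the genetic score "resulted in higher net benefit from DCA."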
6.  Lipoprotein Metabolism Indicators Improve Cardiovascular Risk Prediction 
PLoS ONE  2014;9(3):e92840.
Background
Cardiovascular disease risk increases when lipoprotein metabolism is dysfunctional. We have developed a computational model able to derive indicators of lipoprotein production, lipolysis, and uptake processes from a single lipoprotein profile measurement. This is the first study to investigate whether lipoprotein metabolism indicators can improve cardiovascular risk prediction and therapy management.
Methods and Results
We calculated lipoprotein metabolism indicators for 1981 subjects (145 cases, 1836 controls) from the Framingham Heart Study offspring cohort in which NMR lipoprotein profiles were measured. We applied a statistical learning algorithm using a support vector machine to select conventional risk factors and lipoprotein metabolism indicators that contributed to predicting risk for general cardiovascular disease. Risk prediction was quantified by the change in the Area-Under-the-ROC-Curve (ΔAUC) and by risk reclassification (Net Reclassification Improvement (NRI) and Integrated Discrimination Improvement (IDI)). Two VLDL lipoprotein metabolism indicators (VLDLE and VLDLH) improved cardiovascular risk prediction. We added these indicators to a multivariate model with the best performing conventional risk markers. Our method significantly improved both CVD prediction and risk reclassification.
Conclusions
Two calculated VLDL metabolism indicators significantly improved cardiovascular risk prediction. These indicators may help to reduce prescription of unnecessary cholesterol-lowering medication, reducing costs and possible side-effects. For clinical application, further validation is required.
doi:10.1371/journal.pone.0092840
PMCID: PMC3965475  PMID: 24667559
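The Integrated Discrimination Improvement (IDI) reported here is the gain in "discrimination slope": the difference in mean predicted risk between events and non-events, compared between the baseline and the extended model. A short illustrative implementation (toy numbers, not Framingham data):

```python
def discrimination_slope(risks, events):
    """Mean predicted risk among events minus mean among non-events."""
    ev = [r for r, e in zip(risks, events) if e]
    ne = [r for r, e in zip(risks, events) if not e]
    return sum(ev) / len(ev) - sum(ne) / len(ne)

def idi(old_risks, new_risks, events):
    """Integrated Discrimination Improvement: how much the new model
    widens the gap in predicted risk between events and non-events."""
    return (discrimination_slope(new_risks, events)
            - discrimination_slope(old_risks, events))
```

A positive IDI means the extended model (here, conventional factors plus the VLDL indicators) assigns events systematically higher, and non-events systematically lower, predicted risks than the baseline model.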
7.  Three New Genetic Loci (R1210C in CFH, Variants in COL8A1 and RAD51B) Are Independently Related to Progression to Advanced Macular Degeneration 
PLoS ONE  2014;9(1):e87047.
Objectives
To assess the independent impact of new genetic variants on conversion to advanced stages of AMD, controlling for established risk factors, and to determine the contribution of genes in predictive models.
Methods
In this prospective longitudinal study of 2765 individuals, 777 subjects progressed to neovascular disease (NV) or geographic atrophy (GA) in either eye over 12 years. Recently reported genetic loci were assessed for their independent effects on incident advanced AMD after controlling for 6 established loci in 5 genes, and demographic, behavioral, and macular characteristics. New variants which remained significantly related to progression were then added to a final multivariate model to assess their independent effects. The contribution of genes to risk models was assessed using reclassification tables by determining risk within cross-classified quintiles for alternative models.
Results
Three new genetic variants were significantly related to progression: rare variant R1210C in CFH (hazard ratio (HR) 2.5, 95% confidence interval [CI] 1.2–5.3, P = 0.01), and common variants in genes COL8A1 (HR 2.0, 95% CI 1.1–3.5, P = 0.02) and RAD51B (HR 0.8, 95% CI 0.60–0.97, P = 0.03). The area under the curve statistic (AUC) was significantly higher for the 9 gene model (0.884) vs the 0 gene model (0.873), P = 0.01. AUCs for the 9 vs 6 gene models were not significantly different, but reclassification analyses indicated significant added information for more genes, with adjusted odds ratios (OR) for progression within 5 years per one quintile increase in risk score of 2.7, P<0.001 for the 9 vs 6 loci model, and OR 3.5, P<0.001 for the 9 vs. 0 gene model. Similar results were seen for NV and GA.
Conclusions
Rare variant CFH R1210C and common variants in COL8A1 and RAD51B plus six genes in previous models contribute additional predictive information for advanced AMD beyond macular and behavioral phenotypes.
doi:10.1371/journal.pone.0087047
PMCID: PMC3909074  PMID: 24498017
8.  Predictive Value of Updating Framingham Risk Scores with Novel Risk Markers in the U.S. General Population 
PLoS ONE  2014;9(2):e88312.
Background
According to population-based cohort studies CT coronary calcium score (CTCS), carotid intima-media thickness (cIMT), high-sensitivity C- reactive protein (CRP), and ankle-brachial index (ABI) are promising novel risk markers for improving cardiovascular risk assessment. Their impact in the U.S. general population is however uncertain. Our aim was to estimate the predictive value of four novel cardiovascular risk markers for the U.S. general population.
Methods and Findings
Risk profiles, CRP and ABI data of 3,736 asymptomatic subjects aged 40 or older from the National Health and Nutrition Examination Survey (NHANES) 2003–2004 exam were used along with predicted CTCS and cIMT values. For each subject, we calculated 10-year cardiovascular risks with and without each risk marker. Event rates adjusted for competing risks were obtained by microsimulation. We assessed the impact of updated 10-year risk scores by reclassification and C-statistics. In the study population (mean age 56±11 years, 48% male), 70% (80%) were at low (<10%), 19% (14%) at intermediate (≥10–<20%), and 11% (6%) at high (≥20%) 10-year CVD (CHD) risk. Net reclassification improvement was highest after updating 10-year CVD risk with CTCS: 0.10 (95%CI 0.02–0.19). The C-statistic for 10-year CVD risk increased from 0.82 by 0.02 (95%CI 0.01–0.03) with CTCS. Reclassification occurred most often in those at intermediate risk: with CTCS, 36% (38%) moved to low and 22% (30%) to high CVD (CHD) risk. Improvements with other novel risk markers were limited.
Conclusions
Only CTCS appeared to have significant incremental predictive value in the U.S. general population, especially in those at intermediate risk. In future research, cost-effectiveness analyses should be considered for evaluating novel cardiovascular risk assessment strategies.
doi:10.1371/journal.pone.0088312
PMCID: PMC3928195  PMID: 24558385
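The categorical net reclassification improvement (NRI) used by this study and several of the records above credits a model for moving cases up and non-events down across predefined risk categories. A minimal sketch of the standard formula, with illustrative toy data:

```python
def categorical_nri(old_cat, new_cat, events):
    """Categorical NRI:
    (P(up|event) - P(down|event)) + (P(down|non-event) - P(up|non-event)).
    old_cat/new_cat: risk-category index per subject (higher = riskier);
    events: 1 for an observed event, 0 otherwise."""
    up_e = down_e = up_ne = down_ne = 0
    n_e = sum(events)
    n_ne = len(events) - n_e
    for old, new, e in zip(old_cat, new_cat, events):
        if new > old:
            up_e += e
            up_ne += 1 - e
        elif new < old:
            down_e += e
            down_ne += 1 - e
    return (up_e - down_e) / n_e + (down_ne - up_ne) / n_ne
```

A positive NRI, as reported here for CT coronary calcium score, indicates that the updated risk score reclassifies subjects in the clinically correct direction more often than not.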
9.  Are Markers of Inflammation More Strongly Associated with Risk for Fatal Than for Nonfatal Vascular Events? 
PLoS Medicine  2009;6(6):e1000099.
In a secondary analysis of a randomized trial comparing pravastatin versus placebo for the prevention of coronary and cerebral events in an elderly at-risk population, Naveed Sattar and colleagues find that inflammatory markers may be more strongly associated with risk of fatal vascular events than nonfatal vascular events.
Background
Circulating inflammatory markers may more strongly relate to risk of fatal versus nonfatal cardiovascular disease (CVD) events, but robust prospective evidence is lacking. We tested whether interleukin (IL)-6, C-reactive protein (CRP), and fibrinogen more strongly associate with fatal compared to nonfatal myocardial infarction (MI) and stroke.
Methods and Findings
In the Prospective Study of Pravastatin in the Elderly at Risk (PROSPER), baseline inflammatory markers in up to 5,680 men and women aged 70–82 y were related to risk for endpoints: nonfatal CVD (i.e., nonfatal MI and nonfatal stroke [n = 672]), fatal CVD (n = 190), death from other CV causes (n = 38), and non-CVD mortality (n = 300), over 3.2-y follow-up. Elevations in baseline IL-6 levels were significantly (p = 0.0009; competing risks model analysis) more strongly associated with fatal CVD (hazard ratio [HR] for 1 log unit increase in IL-6 1.75, 95% confidence interval [CI] 1.44–2.12) than with risk of nonfatal CVD (1.17, 95% CI 1.04–1.31), in analyses adjusted for treatment allocation. The findings were consistent in a fully adjusted model. These broad trends were similar for CRP and, to a lesser extent, for fibrinogen. The results were also similar in placebo and statin recipients (i.e., no interaction). The C-statistic for fatal CVD using traditional risk factors was significantly (+0.017; p<0.0001) improved by inclusion of IL-6 but not so for nonfatal CVD events (p = 0.20).
Conclusions
In PROSPER, inflammatory markers, in particular IL-6 and CRP, are more strongly associated with risk of fatal vascular events than nonfatal vascular events. These novel observations may have important implications for better understanding the aetiology of CVD mortality, and have potential clinical relevance.
Please see later in the article for Editors' Summary
Editors' Summary
Background
Cardiovascular disease (CVD)—disease that affects the heart and/or the blood vessels—is a common cause of death in developed countries. In the USA, for example, the leading cause of death is coronary heart disease (CHD), a CVD in which narrowing of the heart's blood vessels by “atherosclerotic plaques” (fatty deposits that build up with age) slows the blood supply to the heart and may eventually cause a heart attack (myocardial infarction). Other types of CVD include stroke (in which atherosclerotic plaques interrupt the brain's blood supply) and heart failure (a condition in which the heart cannot pump enough blood to the rest of the body). Smoking, high blood pressure, high blood levels of cholesterol (a type of fat), having diabetes, and being overweight all increase a person's risk of developing CVD. Tools such as the “Framingham risk calculator” take these and other risk factors into account to assess an individual's overall risk of CVD, which can be reduced by taking drugs to reduce blood pressure or cholesterol levels (for example, pravastatin) and by making lifestyle changes.
Why Was This Study Done?
Inflammation (an immune response to injury) in the walls of blood vessels is thought to play a role in the development of atherosclerotic plaques. Consistent with this idea, several epidemiological studies (investigations of the causes and distribution of disease in populations) have shown that people with high circulating levels of markers of inflammation such as interleukin-6 (IL-6), C-reactive protein (CRP), and fibrinogen are more likely to have a stroke or a heart attack (a CVD event) than people with low levels of these markers. Although these studies have generally lumped together fatal and nonfatal CVD events, some evidence suggests that circulating inflammatory markers may be more strongly associated with fatal than with nonfatal CVD events. If this is the case, the mechanisms that lead to fatal and nonfatal CVD events may be subtly different and knowing about these differences could improve both the prevention and treatment of CVD. In this study, the researchers investigate this possibility using data collected in the Prospective Study of Pravastatin in the Elderly at Risk (PROSPER; a trial that examined pravastatin's effect on CVD development among 70–82 year olds with pre-existing CVD or an increased risk of CVD because of smoking, high blood pressure, or diabetes).
What Did the Researchers Do and Find?
The researchers used several statistical models to examine the association between baseline levels of IL-6, CRP, and fibrinogen in the trial participants and nonfatal CVD events (nonfatal heart attacks and nonfatal strokes), fatal CVD events, death from other types of CVD, and deaths from other causes during 3.2 years of follow-up. Increased levels of all three inflammatory markers were more strongly associated with fatal CVD than with nonfatal CVD after adjustment for treatment allocation and for other established CVD risk factors, but this pattern was strongest for IL-6. Thus, a one-unit increase in the log of IL-6 levels increased the risk of fatal CVD by about three-quarters (hazard ratio 1.75) but increased the risk of nonfatal CVD by significantly less. The researchers also investigated whether including these inflammatory markers in tools designed to predict an individual's CVD risk could improve the tool's ability to distinguish between individuals with a high and low risk. The addition of IL-6 to established risk factors, they report, increased this discriminatory ability for fatal CVD but not for nonfatal CVD.
What Do These Findings Mean?
These findings indicate that, at least for the elderly at-risk patients who were included in PROSPER, inflammatory markers are more strongly associated with the risk of a fatal heart attack or stroke than with nonfatal CVD events. These findings need to be confirmed in younger populations and larger studies also need to be done to discover whether the same association holds when fatal heart attacks and fatal strokes are considered separately. Nevertheless, the present findings suggest that inflammation may specifically help to promote the development of serious, potentially fatal CVD and should stimulate improved research into the use of inflammation markers to predict risk of deaths from CVD.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000099.
The MedlinePlus Encyclopedia has pages on coronary heart disease, stroke, and atherosclerosis (in English and Spanish)
MedlinePlus provides links to many other sources of information on heart diseases, vascular diseases, and stroke (in English and Spanish)
Information for patients and caregivers is provided by the American Heart Association on all aspects of cardiovascular disease, including information on inflammation and heart disease
Information is available from the British Heart Foundation on heart disease and keeping the heart healthy
More information about PROSPER is available on the Web site of the Vascular Biochemistry Department of the University of Glasgow
doi:10.1371/journal.pmed.1000099
PMCID: PMC2694359  PMID: 19554082
10.  Utility of genetic and non-genetic risk factors in prediction of type 2 diabetes: Whitehall II prospective cohort study 
Objectives To assess the performance of a panel of common single nucleotide polymorphisms (genotypes) associated with type 2 diabetes in distinguishing incident cases of future type 2 diabetes (discrimination), and to examine the effect of adding genetic information to previously validated non-genetic (phenotype based) models developed to estimate the absolute risk of type 2 diabetes.
Design Workplace-based prospective cohort study with three 5-yearly medical screenings.
Participants 5535 initially healthy people (mean age 49 years; 33% women), of whom 302 developed new onset type 2 diabetes over 10 years.
Outcome measures Non-genetic variables included in two established risk models—the Cambridge type 2 diabetes risk score (age, sex, drug treatment, family history of type 2 diabetes, body mass index, smoking status) and the Framingham offspring study type 2 diabetes risk score (age, sex, parental history of type 2 diabetes, body mass index, high density lipoprotein cholesterol, triglycerides, fasting glucose)—and 20 single nucleotide polymorphisms associated with susceptibility to type 2 diabetes. Cases of incident type 2 diabetes were defined on the basis of a standard oral glucose tolerance test, self report of a doctor’s diagnosis, or the use of anti-diabetic drugs.
Results A genetic score based on the number of risk alleles carried (range 0-40; area under receiver operating characteristic curve 0.54, 95% confidence interval 0.50 to 0.58) and a genetic risk function in which carriage of risk alleles was weighted according to the summary odds ratios of their effect from meta-analyses of genetic studies (area under receiver operating characteristic curve 0.55, 0.51 to 0.59) did not effectively discriminate cases of diabetes. The Cambridge risk score (area under curve 0.72, 0.69 to 0.76) and the Framingham offspring risk score (area under curve 0.78, 0.75 to 0.82) led to better discrimination of cases than did genotype-based tests. Adding genetic information to phenotype-based risk models did not improve discrimination; it provided only a small improvement in model calibration and a modest net reclassification improvement of about 5% when added to the Cambridge risk score, but not when added to the Framingham offspring risk score.
Conclusion The phenotype based risk models provided greater discrimination for type 2 diabetes than did models based on 20 common independently inherited diabetes risk alleles. The addition of genotypes to phenotype based risk models produced only minimal improvement in accuracy of risk estimation assessed by recalibration and, at best, a minor net reclassification improvement. The major translational application of the currently known common, small effect genetic variants influencing susceptibility to type 2 diabetes is likely to come from the insight they provide on causes of disease and potential therapeutic targets.
doi:10.1136/bmj.b4838
PMCID: PMC2806945  PMID: 20075150
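The two kinds of genetic score compared in the Whitehall II study above (an unweighted count of risk alleles, and a score weighting each allele by the log of its summary odds ratio) and the AUC used to evaluate them can be sketched as follows. This is a minimal illustration, not the study's actual 20-SNP panel: the odds ratios and genotypes below are made-up values.

```python
import math

def auc(case_scores, control_scores):
    """AUC via the Mann-Whitney statistic: the probability that a
    randomly chosen case scores higher than a randomly chosen control."""
    wins = ties = 0
    for c in case_scores:
        for n in control_scores:
            if c > n:
                wins += 1
            elif c == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(case_scores) * len(control_scores))

# Hypothetical per-SNP summary odds ratios (illustrative, not from the study).
odds_ratios = [1.10, 1.15, 1.20, 1.12, 1.25]
weights = [math.log(r) for r in odds_ratios]

def unweighted_score(genotype):
    """Count of risk alleles carried (0/1/2 per SNP)."""
    return sum(genotype)

def weighted_score(genotype):
    """Each risk allele weighted by its log odds ratio."""
    return sum(g * w for g, w in zip(genotype, weights))

cases = [[2, 1, 2, 1, 2], [1, 1, 2, 0, 1], [2, 2, 1, 1, 1]]
controls = [[0, 1, 1, 0, 1], [1, 2, 1, 1, 0], [0, 0, 1, 1, 2]]

print(auc([unweighted_score(g) for g in cases],
          [unweighted_score(g) for g in controls]))
```

An AUC near 0.5 means the score barely separates cases from controls, which is what the study observed for both genetic scores (0.54 and 0.55).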
11.  Biomarker Profiling by Nuclear Magnetic Resonance Spectroscopy for the Prediction of All-Cause Mortality: An Observational Study of 17,345 Persons 
PLoS Medicine  2014;11(2):e1001606.
In this study, Würtz and colleagues conducted high-throughput profiling of blood specimens in two large population-based cohorts in order to identify biomarkers for all-cause mortality and enhance risk prediction. The authors found that biomarker profiling improved prediction of the short-term risk of death from all causes above established risk factors. However, further investigations are needed to clarify the biological mechanisms and the utility of these biomarkers to guide screening and prevention.
Please see later in the article for the Editors' Summary
Background
Early identification of ambulatory persons at high short-term risk of death could benefit targeted prevention. To identify biomarkers for all-cause mortality and enhance risk prediction, we conducted high-throughput profiling of blood specimens in two large population-based cohorts.
Methods and Findings
106 candidate biomarkers were quantified by nuclear magnetic resonance spectroscopy of non-fasting plasma samples from a random subset of the Estonian Biobank (n = 9,842; age range 18–103 y; 508 deaths during a median of 5.4 y of follow-up). Biomarkers for all-cause mortality were examined using stepwise proportional hazards models. Significant biomarkers were validated and incremental predictive utility assessed in a population-based cohort from Finland (n = 7,503; 176 deaths during 5 y of follow-up). Four circulating biomarkers predicted the risk of all-cause mortality among participants from the Estonian Biobank after adjusting for conventional risk factors: alpha-1-acid glycoprotein (hazard ratio [HR] 1.67 per 1–standard deviation increment, 95% CI 1.53–1.82, p = 5×10−31), albumin (HR 0.70, 95% CI 0.65–0.76, p = 2×10−18), very-low-density lipoprotein particle size (HR 0.69, 95% CI 0.62–0.77, p = 3×10−12), and citrate (HR 1.33, 95% CI 1.21–1.45, p = 5×10−10). All four biomarkers were predictive of cardiovascular mortality, as well as death from cancer and other nonvascular diseases. One in five participants in the Estonian Biobank cohort with a biomarker summary score within the highest percentile died during the first year of follow-up, indicating prominent systemic reflections of frailty. The biomarker associations all replicated in the Finnish validation cohort. Including the four biomarkers in a risk prediction score improved risk assessment for 5-y mortality (increase in C-statistics 0.031, p = 0.01; continuous reclassification improvement 26.3%, p = 0.001).
Conclusions
Biomarker associations with cardiovascular, nonvascular, and cancer mortality suggest novel systemic connectivities across seemingly disparate morbidities. The biomarker profiling improved prediction of the short-term risk of death from all causes above established risk factors. Further investigations are needed to clarify the biological mechanisms and the utility of these biomarkers for guiding screening and prevention.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
A biomarker is a biological molecule found in blood, body fluids, or tissues that may signal an abnormal process, a condition, or a disease. The level of a particular biomarker may indicate a patient's risk of disease, or likely response to a treatment. For example, cholesterol levels are measured to assess the risk of heart disease. Most current biomarkers are used to test an individual's risk of developing a specific condition. There are none that accurately assess whether a person is at risk of ill health generally, or likely to die soon from a disease. Early and accurate identification of people who appear healthy but in fact have an underlying serious illness would provide valuable opportunities for preventative treatment.
While most tests measure the levels of a specific biomarker, there are some technologies that allow blood samples to be screened for a wide range of biomarkers. These include nuclear magnetic resonance (NMR) spectroscopy and mass spectrometry. These tools have the potential to be used to screen the general population for a range of different biomarkers.
Why Was This Study Done?
Identifying new biomarkers that provide insight into the risk of death from all causes could be an important step in linking different diseases and assessing patient risk. In this study, the authors used NMR spectroscopy to screen patient samples for biomarkers that accurately predict the risk of death in the general population, rather than amongst people already known to be ill.
What Did the Researchers Do and Find?
The researchers studied two large groups of people, one in Estonia and one in Finland. Both countries have set up health registries that collect and store blood samples and health records over many years. The registries include large numbers of people who are representative of the wider population.
The researchers first tested blood samples from a representative subset of the Estonian group, testing 9,842 samples in total. They looked at 106 different biomarkers in each sample using NMR spectroscopy. They also looked at the health records of this group and found that 508 people died during the follow-up period after the blood sample was taken, the majority from heart disease, cancer, and other diseases. Using statistical analysis, they looked for any links between the levels of different biomarkers in the blood and people's short-term risk of dying. They found that the levels of four biomarkers—plasma albumin, alpha-1-acid glycoprotein, very-low-density lipoprotein (VLDL) particle size, and citrate—appeared to accurately predict short-term risk of death. They repeated this study with the Finnish group, this time with 7,503 individuals (176 of whom died during the five-year follow-up period after giving a blood sample) and found similar results.
The researchers carried out further statistical analyses to take into account other known factors that might have contributed to the risk of life-threatening illness. These included factors such as age, weight, tobacco and alcohol use, cholesterol levels, and pre-existing illness, such as diabetes and cancer. The association between the four biomarkers and short-term risk of death remained the same even when controlling for these other factors.
The analysis also showed that combining the test results for all four biomarkers, to produce a biomarker score, provided a more accurate measure of risk than any of the biomarkers individually. This biomarker score also proved to be the strongest predictor of short-term risk of dying in the Estonian group. Individuals with a biomarker score in the top 20% had a risk of dying within five years that was 19 times greater than that of individuals with a score in the bottom 20% (288 versus 15 deaths).
What Do These Findings Mean?
This study suggests that there are four biomarkers in the blood—alpha-1-acid glycoprotein, albumin, VLDL particle size, and citrate—that can be measured by NMR spectroscopy to assess whether otherwise healthy people are at short-term risk of dying from heart disease, cancer, and other illnesses. However, further validation of these findings is still required, and additional studies should examine the biomarker specificity and associations in settings closer to clinical practice. The combined biomarker score appears to be a more accurate predictor of risk than tests for more commonly known risk factors. Identifying individuals who are at high risk using these biomarkers might help to target preventative medical treatments to those with the greatest need.
However, there are several limitations to this study. As an observational study, it provides evidence of only a correlation between a biomarker score and ill health. It does not identify any underlying causes. Other factors, not detectable by NMR spectroscopy, might be the true cause of serious health problems and would provide a more accurate assessment of risk. Nor does this study identify what kinds of treatment might prove successful in reducing the risks. Therefore, more research is needed to determine whether testing for these biomarkers would provide any clinical benefit.
There were also some technical limitations to the study. NMR spectroscopy does not detect as many biomarkers as mass spectrometry, which might therefore identify further biomarkers for a more accurate risk assessment. In addition, because both study groups were northern European, it is not yet known whether the results would be the same in other ethnic groups or populations with different lifestyles.
In spite of these limitations, the fact that the same four biomarkers are associated with a short-term risk of death from a variety of diseases does suggest that similar underlying mechanisms are taking place. This observation points to some potentially valuable areas of research to understand precisely what's contributing to the increased risk.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001606
The US National Institute of Environmental Health Sciences has information on biomarkers
The US Food and Drug Administration has a Biomarker Qualification Program to help researchers in identifying and evaluating new biomarkers
Further information on the Estonian Biobank is available
The Computational Medicine Research Team of the University of Oulu and the University of Bristol have a webpage that provides further information on high-throughput biomarker profiling by NMR spectroscopy
doi:10.1371/journal.pmed.1001606
PMCID: PMC3934819  PMID: 24586121
12.  Analysis of Biomarker Data: logs, odds ratios and ROC curves 
Current opinion in HIV and AIDS  2010;5(6):473-479.
Purpose of review
We discuss two data analysis issues for studies that use binary clinical outcomes (whether or not an event occurred): the choice of an appropriate scale and transformation when biomarkers are evaluated as explanatory factors in logistic regression; and assessing the ability of biomarkers to improve prediction accuracy for event risk.
Recent findings
Biomarkers with skewed distributions should be transformed before they are included as continuous covariates in logistic regression models. The utility of new biomarkers may be assessed by measuring the improvement in predicting event risk after adding the biomarkers to an existing model. The area under the receiver operating characteristic (ROC) curve (C-statistic) is often cited; it was developed for a different purpose, however, and may not address the clinically relevant questions. Measures of risk reclassification and risk prediction accuracy may be more appropriate.
Summary
The appropriate analysis of biomarkers depends on the research question. Odds ratios obtained from logistic regression describe associations of biomarkers with clinical events; failure to accurately transform the markers, however, may result in misleading estimates. Whilst the C-statistic is often used to assess the ability of new biomarkers to improve the prediction of event risk, other measures may be more suitable.
doi:10.1097/COH.0b013e32833ed742
PMCID: PMC3157029  PMID: 20978390
biomarker analysis; odds ratio; ROC curve; risk prediction accuracy; C-statistic
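The review's point about scale can be made concrete: when a skewed biomarker enters a logistic model on the natural-log scale, the fitted coefficient is an effect per log-unit, and it is often clearer to re-express the resulting odds ratio per doubling of the raw marker value. A minimal sketch, using an assumed illustrative coefficient rather than a value from any study in this listing:

```python
import math

# Assumed coefficient per 1 natural-log-unit increase in the biomarker
# (illustrative value only).
beta_per_log_unit = 0.56

# Odds ratio per log-unit, and re-expressed per doubling of the raw marker:
# doubling the marker adds ln(2) on the log scale.
or_per_log_unit = math.exp(beta_per_log_unit)
or_per_doubling = math.exp(beta_per_log_unit * math.log(2))

print(round(or_per_log_unit, 2), round(or_per_doubling, 2))
```

Reporting the odds ratio per doubling keeps the interpretation on a scale clinicians can picture, while the model itself is still fitted on the transformed covariate.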
13.  Ankle Brachial Index Combined with Framingham Risk Score to Predict Cardiovascular Events and Mortality: A Meta-analysis 
Context
Prediction models to identify healthy individuals at high risk of cardiovascular disease have limited accuracy. A low ankle brachial index is an indicator of atherosclerosis and has the potential to improve prediction.
Objective
To determine if the ankle brachial index provides information on the risk of cardiovascular events and mortality independently of the Framingham Risk Score and can improve risk prediction.
Data Sources
Relevant studies were identified by collaborators. A search of MEDLINE (1950 to February 2008) and EMBASE (1980 to February 2008) was conducted using common text words for the term ‘ABI’ combined with text words and Medical Subject Headings to capture prospective cohort designs. Reference lists and conference proceedings were reviewed, and experts were contacted, to identify additional published and unpublished studies.
Study Selection
Studies were included if (1) participants were derived from a general population; (2) the ankle brachial index was measured at baseline; and (3) subjects were followed up to detect total and cardiovascular mortality.
Data Extraction
Pre-specified data on subjects in each selected study were extracted into a combined dataset, and an individual participant data meta-analysis was conducted on subjects who had no previous history of coronary heart disease.
Results
Sixteen population cohort studies fulfilling the inclusion criteria were included. During 480,325 person-years of follow-up of 24,955 men and 23,339 women, the risk of death by ankle brachial index had a reverse J-shaped distribution, with a normal (low-risk) ankle brachial index of 1.11 to 1.40. The 10-year cardiovascular mortality (95% CI) in men with a low ankle brachial index (≤0.90) was 18.7% (13.3% to 24.1%) and with a normal ankle brachial index (1.11 to 1.40) was 4.4% (3.2% to 5.7%), hazard ratio (95% CI) 4.2 (3.5 to 5.4). Corresponding mortalities in women were 12.6% (6.2% to 19.0%) and 4.1% (2.2% to 6.1%), hazard ratio 3.5 (2.4 to 5.1). The hazard ratios remained elevated after adjusting for the Framingham Risk Score: 2.9 (2.3 to 3.7) for men and 3.0 (2.0 to 4.4) for women. A low ankle brachial index (≤0.90) was associated with approximately twice the 10-year total mortality, cardiovascular mortality, and major coronary event rate compared with the overall rate in each Framingham category. Inclusion of the ankle brachial index in cardiovascular risk stratification using the Framingham Risk Score would result in reclassification of the risk category and modification of treatment recommendations in approximately 19% of men and 36% of women.
Conclusion
Measurement of the ankle brachial index may improve the accuracy of cardiovascular risk prediction beyond the Framingham Risk Score. Development and validation of a new risk equation incorporating the ankle brachial index is warranted.
doi:10.1001/jama.300.2.197
PMCID: PMC2932628  PMID: 18612117
14.  Gene-Lifestyle Interaction and Type 2 Diabetes: The EPIC InterAct Case-Cohort Study 
PLoS Medicine  2014;11(5):e1001647.
In this study, Wareham and colleagues quantified the combined effects of genetic and lifestyle factors on risk of T2D in order to inform strategies for prevention. The authors found that the relative effect of a type 2 diabetes genetic risk score is greater in younger and leaner participants, and the high absolute risk associated with obesity at any level of genetic risk highlights the importance of universal rather than targeted approaches to lifestyle intervention.
Please see later in the article for the Editors' Summary
Background
Understanding of the genetic basis of type 2 diabetes (T2D) has progressed rapidly, but the interactions between common genetic variants and lifestyle risk factors have not been systematically investigated in studies with adequate statistical power. Therefore, we aimed to quantify the combined effects of genetic and lifestyle factors on risk of T2D in order to inform strategies for prevention.
Methods and Findings
The InterAct study includes 12,403 incident T2D cases and a representative sub-cohort of 16,154 individuals from a cohort of 340,234 European participants with 3.99 million person-years of follow-up. We studied the combined effects of an additive genetic T2D risk score and modifiable and non-modifiable risk factors using Prentice-weighted Cox regression and random effects meta-analysis methods. The effect of the genetic score was significantly greater in younger individuals (p for interaction  = 1.20×10−4). Relative genetic risk (per standard deviation [4.4 risk alleles]) was also larger in participants who were leaner, both in terms of body mass index (p for interaction  = 1.50×10−3) and waist circumference (p for interaction  = 7.49×10−9). Examination of absolute risks by strata showed the importance of obesity for T2D risk. The 10-y cumulative incidence of T2D rose from 0.25% to 0.89% across extreme quartiles of the genetic score in normal weight individuals, compared to 4.22% to 7.99% in obese individuals. We detected no significant interactions between the genetic score and sex, diabetes family history, physical activity, or dietary habits assessed by a Mediterranean diet score.
Conclusions
The relative effect of a T2D genetic risk score is greater in younger and leaner participants. However, this sub-group is at low absolute risk and would not be a logical target for preventive interventions. The high absolute risk associated with obesity at any level of genetic risk highlights the importance of universal rather than targeted approaches to lifestyle intervention.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Worldwide, more than 380 million people currently have diabetes, and the condition is becoming increasingly common. Diabetes is characterized by high levels of glucose (sugar) in the blood. Blood sugar levels are usually controlled by insulin, a hormone released by the pancreas after meals (digestion of food produces glucose). In people with type 2 diabetes (the commonest type of diabetes), blood sugar control fails because the fat and muscle cells that normally respond to insulin by removing excess sugar from the blood become less responsive to insulin. Type 2 diabetes can often initially be controlled with diet and exercise (lifestyle changes) and with antidiabetic drugs such as metformin and sulfonylureas, but patients may eventually need insulin injections to control their blood sugar levels. Long-term complications of diabetes, which include an increased risk of heart disease and stroke, reduce the life expectancy of people with diabetes by about ten years compared to people without diabetes.
Why Was This Study Done?
Type 2 diabetes is thought to originate from the interplay between genetic and lifestyle factors. But although rapid progress is being made in understanding the genetic basis of type 2 diabetes, it is not known whether the consequences of adverse lifestyles (for example, being overweight and/or physically inactive) differ according to an individual's underlying genetic risk of diabetes. It is important to investigate this question to inform strategies for prevention. If, for example, obese individuals with a high level of genetic risk have a higher risk of developing diabetes than obese individuals with a low level of genetic risk, then preventative strategies that target lifestyle interventions to obese individuals with a high genetic risk would be more effective than strategies that target all obese individuals. In this case-cohort study, researchers from the InterAct consortium quantify the combined effects of genetic and lifestyle factors on the risk of type 2 diabetes. A case-cohort study measures exposure to potential risk factors in a group (cohort) of people and compares the occurrence of these risk factors in people who later develop the disease with those who remain disease free.
What Did the Researchers Do and Find?
The InterAct study involves 12,403 middle-aged individuals who developed type 2 diabetes after enrollment (incident cases) into the European Prospective Investigation into Cancer and Nutrition (EPIC) and a sub-cohort of 16,154 EPIC participants. The researchers calculated a genetic type 2 diabetes risk score for most of these individuals by determining which of 49 gene variants associated with type 2 diabetes each person carried, and collected baseline information about exposure to lifestyle risk factors for type 2 diabetes. They then used various statistical approaches to examine the combined effects of the genetic risk score and lifestyle factors on diabetes development. The effect of the genetic score was greater in younger individuals than in older individuals and greater in leaner participants than in participants with larger amounts of body fat. The absolute risk of type 2 diabetes, expressed as the ten-year cumulative incidence of type 2 diabetes (the percentage of participants who developed diabetes over a ten-year period) increased with increasing genetic score in normal weight individuals from 0.25% in people with the lowest genetic risk scores to 0.89% in those with the highest scores; in obese people, the ten-year cumulative incidence rose from 4.22% to 7.99% with increasing genetic risk score.
What Do These Findings Mean?
These findings show that in this middle-aged cohort, the relative association with type 2 diabetes of a genetic risk score comprised of a large number of gene variants is greatest in individuals who are younger and leaner at baseline. This finding may in part reflect the methods used to originally identify gene variants associated with type 2 diabetes, and future investigations that include other genetic variants, other lifestyle factors, and individuals living in other settings should be undertaken to confirm this finding. Importantly, however, this study shows that young, lean individuals with a high genetic risk score have a low absolute risk of developing type 2 diabetes. Thus, this sub-group of individuals is not a logical target for preventative interventions. Rather, suggest the researchers, the high absolute risk of type 2 diabetes associated with obesity at any level of genetic risk highlights the importance of universal rather than targeted approaches to lifestyle intervention.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001647.
The US National Diabetes Information Clearinghouse provides information about diabetes for patients, health-care professionals and the general public, including detailed information on diabetes prevention (in English and Spanish)
The UK National Health Service Choices website provides information for patients and carers about type 2 diabetes and about living with diabetes; it also provides people's stories about diabetes
The charity Diabetes UK provides detailed information for patients and carers in several languages, including information on healthy lifestyles for people with diabetes
The UK-based non-profit organization Healthtalkonline has interviews with people about their experiences of diabetes
The Genetic Landscape of Diabetes is published by the US National Center for Biotechnology Information
More information on the InterAct study is available
MedlinePlus provides links to further resources and advice about diabetes and diabetes prevention (in English and Spanish)
doi:10.1371/journal.pmed.1001647
PMCID: PMC4028183  PMID: 24845081
15.  Statistical methods for assessment of added usefulness of new biomarkers 
The discovery and development of new biomarkers continues to be an exciting and promising field. Improvement of prediction of risk of developing disease is one of the key motivations in these pursuits. Appropriate statistical measures are necessary for drawing meaningful conclusions about the clinical usefulness of these new markers. In this review, we present several novel metrics proposed to serve this purpose. We use reclassification tables constructed based on clinically meaningful disease risk categories to discuss the concepts of calibration, risk separation, risk discrimination, and risk classification accuracy. We discuss the notion that the net reclassification improvement is a simple yet informative way to summarize information contained in risk reclassification tables. In the absence of meaningful risk categories, we suggest a ‘category-less’ version of the net reclassification improvement and integrated discrimination improvement as metrics to summarize the incremental value of new biomarkers. We also suggest that predictiveness curves be preferred to receiver-operating-characteristic curves as visual descriptors of a statistical model’s ability to separate predicted probabilities of disease events. Reporting of standard metrics, including measures of relative risk and the C statistic, is still recommended. These concepts are illustrated with a risk prediction example using data from the Framingham Heart Study.
doi:10.1515/CCLM.2010.340
PMCID: PMC3155999  PMID: 20716010
reclassification; risk prediction; NRI; IDI; calibration; discrimination
16.  Circulating Mitochondrial DNA in Patients in the ICU as a Marker of Mortality: Derivation and Validation 
PLoS Medicine  2013;10(12):e1001577.
In this paper, Choi and colleagues analyzed levels of mitochondrial DNA in two prospective observational cohort studies and found that increased mtDNA levels are associated with ICU mortality and improve risk prediction in medical ICU patients. The data suggest that mtDNA could serve as a viable plasma biomarker in medical ICU patients.
Background
Mitochondrial DNA (mtDNA) is a critical activator of inflammation and the innate immune system. However, mtDNA level has not been tested for its role as a biomarker in the intensive care unit (ICU). We hypothesized that circulating cell-free mtDNA levels would be associated with mortality and improve risk prediction in ICU patients.
Methods and Findings
Analyses of mtDNA levels were performed on blood samples obtained from two prospective observational cohort studies of ICU patients (the Brigham and Women's Hospital Registry of Critical Illness [BWH RoCI, n = 200] and Molecular Epidemiology of Acute Respiratory Distress Syndrome [ME ARDS, n = 243]). mtDNA levels in plasma were assessed by measuring the copy number of the NADH dehydrogenase 1 gene using quantitative real-time PCR. Medical ICU patients with an elevated mtDNA level (≥3,200 copies/µl plasma) had increased odds of dying within 28 d of ICU admission in both the BWH RoCI (odds ratio [OR] 7.5, 95% CI 3.6–15.8, p = 1×10−7) and ME ARDS (OR 8.4, 95% CI 2.9–24.2, p = 9×10−5) cohorts, while no evidence for association was noted in non-medical ICU patients. The addition of an elevated mtDNA level improved the net reclassification index (NRI) of 28-d mortality among medical ICU patients when added to clinical models in both the BWH RoCI (NRI 79%, standard error 14%, p<1×10−4) and ME ARDS (NRI 55%, standard error 20%, p = 0.007) cohorts. In the BWH RoCI cohort, those with an elevated mtDNA level had an increased risk of death, even in analyses limited to patients with sepsis or acute respiratory distress syndrome. Study limitations include the lack of data elucidating the precise pathological roles of mtDNA in the patients, and the limited number of measurements for some of the biomarkers.
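Odds ratios of the kind reported above come from a 2×2 comparison of elevated versus non-elevated biomarker against 28-day death. A hedged sketch of the standard estimate with a Wald 95% confidence interval (the counts in the test are illustrative, not the study data):

```python
import math

# Illustrative odds ratio from a 2x2 table with a Wald 95% CI on the log scale.
# a: exposed deaths, b: exposed survivors, c: unexposed deaths, d: unexposed survivors.

def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

The confidence interval is computed on the log-odds scale and exponentiated, which is why published intervals such as 3.6–15.8 are asymmetric around the point estimate.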
Conclusions
Increased mtDNA levels are associated with ICU mortality, and inclusion of mtDNA level improves risk prediction in medical ICU patients. Our data suggest that mtDNA could serve as a viable plasma biomarker in medical ICU patients.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Intensive care units (ICUs, also known as critical care units) are specialist hospital wards that provide care for people with life-threatening injuries and illnesses. In the US alone, more than 5 million people are admitted to ICUs every year. Different types of ICUs treat different types of problems. Medical ICUs treat patients who, for example, have been poisoned or who have a serious infection such as sepsis (blood poisoning) or severe pneumonia (inflammation of the lungs); trauma ICUs treat patients who have sustained a major injury; cardiac ICUs treat patients who have heart problems; and surgical ICUs treat complications arising from operations. Patients admitted to ICUs require constant medical attention and support from a team of specially trained nurses and physicians to prevent organ injury and to keep their bodies functioning. Monitors, intravenous tubes (to supply essential fluids, nutrients, and drugs), breathing machines, catheters (to drain urine), and other equipment also help to keep ICU patients alive.
Why Was This Study Done?
Although many patients admitted to ICUs recover, others do not. ICU specialists use scoring systems (algorithms) based on clinical signs and physiological measurements to predict their patients' likely outcomes. For example, the APACHE II scoring system uses information on heart and breathing rates, temperature, levels of salts in the blood, and other signs and physiological measurements collected during the first 24 hours in the ICU to predict the patient's risk of death. Existing scoring systems are not perfect, however, and “biomarkers” (molecules in bodily fluids that provide information about a disease state) are needed to improve risk prediction for ICU patients. Here, the researchers investigate whether levels of circulating cell-free mitochondrial DNA (mtDNA) are associated with ICU deaths and whether these levels can be used as a biomarker to improve risk prediction in ICU patients. Mitochondria are cellular structures that produce energy. Levels of mtDNA in the plasma (the liquid part of blood) increase in response to trauma and infection. Moreover, mtDNA activates molecular processes that lead to inflammation and organ injury.
What Did the Researchers Do and Find?
The researchers measured mtDNA levels in the plasma of patients enrolled in two prospective observational cohort studies that monitored the outcomes of ICU patients. In the Brigham and Women's Hospital Registry of Critical Illness study, blood was taken from 200 patients within 24 hours of admission into the hospital's medical ICU. In the Molecular Epidemiology of Acute Respiratory Distress Syndrome study (acute respiratory distress syndrome is a life-threatening inflammatory reaction to lung damage or infection), blood was taken from 243 patients within 48 hours of admission into medical and non-medical ICUs at two other US hospitals. Patients admitted to medical ICUs with a raised mtDNA level (3,200 or more copies of a specific mitochondrial gene per microliter of plasma) had a 7- to 8-fold increased risk of dying within 28 days of admission compared to patients with mtDNA levels of less than 3,200 copies/µl plasma. There was no evidence of an association between raised mtDNA levels and death among patients admitted to non-medical ICUs. The addition of an elevated mtDNA level to a clinical model for risk prediction that included the APACHE II score and biomarkers that are already used to predict ICU outcomes improved the net reclassification index (an indicator of the improvement in risk prediction algorithms offered by new biomarkers) of 28-day mortality among medical ICU patients in both studies.
What Do These Findings Mean?
These findings indicate that raised mtDNA plasma levels are associated with death in medical ICUs and show that, among patients in medical ICUs, measurement of mtDNA plasma levels can improve the prediction of the risk of death from the APACHE II scoring system, even when commonly measured biomarkers are taken into account. These findings do not indicate whether circulating cell-free mtDNA increased because of the underlying severity of illness or whether mtDNA actively contributes to the disease process in medical ICU patients. Moreover, they do not provide any evidence that raised mtDNA levels are associated with an increased risk of death among non-medical (mainly surgical) ICU patients. These findings need to be confirmed in additional patients, but given the relative ease and rapidity of mtDNA measurement, the determination of circulating cell-free mtDNA levels could be a valuable addition to the assessment of patients admitted to medical ICUs.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001577.
The UK National Health Service Choices website provides information about intensive care
The Society of Critical Care Medicine provides information for professionals, families, and patients about all aspects of intensive care
MedlinePlus provides links to other resources about intensive care (in English and Spanish)
The UK charity ICUsteps supports patients and their families through recovery from critical illness; its booklet Intensive Care: A Guide for Patients and Families is available in English and ten other languages; its website includes patient experiences and relative experiences of treatment in ICUs
Wikipedia has a page on ICU scoring systems (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
doi:10.1371/journal.pmed.1001577
PMCID: PMC3876981  PMID: 24391478
17.  Red Blood Cell Transfusion and Mortality in Trauma Patients: Risk-Stratified Analysis of an Observational Study 
PLoS Medicine  2014;11(6):e1001664.
Using a large multicentre cohort, Pablo Perel and colleagues evaluate the association of red blood cell transfusion with mortality according to the predicted risk of death for trauma patients.
Please see later in the article for the Editors' Summary
Background
Haemorrhage is a common cause of death in trauma patients. Although transfusions are extensively used in the care of bleeding trauma patients, there is uncertainty about the balance of risks and benefits and how this balance depends on the baseline risk of death. Our objective was to evaluate the association of red blood cell (RBC) transfusion with mortality according to the predicted risk of death.
Methods and Findings
A secondary analysis of the CRASH-2 trial (which originally evaluated the effect of tranexamic acid on mortality in trauma patients) was conducted. The trial included 20,127 trauma patients with significant bleeding from 274 hospitals in 40 countries. We evaluated the association of RBC transfusion with mortality in four strata of predicted risk of death: <6%, 6%–20%, 21%–50%, and >50%. For this analysis the exposure considered was RBC transfusion, and the main outcome was death from all causes at 28 days. A total of 10,227 patients (50.8%) received at least one transfusion. We found strong evidence that the association of transfusion with all-cause mortality varied according to the predicted risk of death (p-value for interaction <0.0001). Transfusion was associated with an increase in all-cause mortality among patients with <6% and 6%–20% predicted risk of death (odds ratio [OR] 5.40, 95% CI 4.08–7.13, p<0.0001, and OR 2.31, 95% CI 1.96–2.73, p<0.0001, respectively), but with a decrease in all-cause mortality in patients with >50% predicted risk of death (OR 0.59, 95% CI 0.47–0.74, p<0.0001). Transfusion was associated with an increase in fatal and non-fatal vascular events (OR 2.58, 95% CI 2.05–3.24, p<0.0001). The risk associated with RBC transfusion was significantly increased for all the predicted risk of death categories, but the relative increase was higher for those with the lowest (<6%) predicted risk of death (p-value for interaction <0.0001). As this was an observational study, the results could have been affected by different types of confounding. In addition, we could not consider haemoglobin in our analysis. In sensitivity analyses, excluding patients who died early; conducting propensity score analysis adjusting by use of platelets, fresh frozen plasma, and cryoprecipitate; and adjusting for country produced results that were similar.
Conclusions
The association of transfusion with all-cause mortality appears to vary according to the predicted risk of death. Transfusion may reduce mortality in patients at high risk of death but increase mortality in those at low risk. The effect of transfusion in low-risk patients should be further tested in a randomised trial.
Trial registration
www.ClinicalTrials.gov NCT01746953
Editors' Summary
Background
Trauma—a serious injury to the body caused by violence or an accident—is a major global health problem. Every year, injuries caused by traffic collisions, falls, blows, and other traumatic events kill more than 5 million people (9% of annual global deaths). Indeed, for people between the ages of 5 and 44 years, injuries are among the top three causes of death in many countries. Trauma sometimes kills people through physical damage to the brain and other internal organs, but hemorrhage (serious uncontrolled bleeding) is responsible for 30%–40% of trauma-related deaths. Consequently, early trauma care focuses on minimizing hemorrhage (for example, by using compression to stop bleeding) and on restoring blood circulation after blood loss (health-care professionals refer to this as resuscitation). Red blood cell (RBC) transfusion is often used for the management of patients with trauma who are bleeding; other resuscitation products include isotonic saline and solutions of human blood proteins.
Why Was This Study Done?
Although RBC transfusion can save the lives of patients with trauma who are bleeding, there is considerable uncertainty regarding the balance of risks and benefits associated with this procedure. RBC transfusion, which is an expensive intervention, is associated with several potential adverse effects, including allergic reactions and infections. Moreover, blood supplies are limited, and the risks from transfusion are high in low- and middle-income countries, where most trauma-related deaths occur. In this study, which is a secondary analysis of data from a trial (CRASH-2) that evaluated the effect of tranexamic acid (which stops excessive bleeding) in patients with trauma, the researchers test the hypothesis that RBC transfusion may have a beneficial effect among patients at high risk of death following trauma but a harmful effect among those at low risk of death.
What Did the Researchers Do and Find?
The CRASH-2 trial included 20,127 patients with trauma and major bleeding treated in 274 hospitals in 40 countries. In their risk-stratified analysis, the researchers investigated the effect of RBC transfusion on CRASH-2 participants with a predicted risk of death (estimated using a validated model that included clinical variables such as heart rate and blood pressure) on admission to hospital of less than 6%, 6%–20%, 21%–50%, or more than 50%. That is, the researchers compared death rates among patients in each stratum of predicted risk of death who received a RBC transfusion with death rates among patients who did not receive a transfusion. Half the patients received at least one transfusion. Transfusion was associated with an increase in all-cause mortality at 28 days after trauma among patients with a predicted risk of death of less than 6% or of 6%–20%, but with a decrease in all-cause mortality among patients with a predicted risk of death of more than 50%. In absolute figures, compared to no transfusion, RBC transfusion was associated with 5.1 more deaths per 100 patients in the patient group with the lowest predicted risk of death but with 11.9 fewer deaths per 100 patients in the group with the highest predicted risk of death.
What Do These Findings Mean?
These findings show that RBC transfusion is associated with an increase in all-cause deaths among patients with trauma and major bleeding with a low predicted risk of death, but with a reduction in all-cause deaths among patients with a high predicted risk of death. In other words, these findings suggest that the effect of RBC transfusion on all-cause mortality may vary according to whether a patient with trauma has a high or low predicted risk of death. However, because the participants in the CRASH-2 trial were not randomly assigned to receive a RBC transfusion, it is not possible to conclude that receiving a RBC transfusion actually increased the death rate among patients with a low predicted risk of death. It might be that the patients with this level of predicted risk of death who received a transfusion shared other unknown characteristics (confounders) that were actually responsible for their increased death rate. Thus, to provide better guidance for clinicians caring for patients with trauma and hemorrhage, the hypothesis that RBC transfusion could be harmful among patients with trauma with a low predicted risk of death should be prospectively evaluated in a randomised controlled trial.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001664.
This study is further discussed in a PLOS Medicine Perspective by Druin Burch
The World Health Organization provides information on injuries and on violence and injury prevention (in several languages)
The US Centers for Disease Control and Prevention has information on injury and violence prevention and control
The National Trauma Institute, a US-based non-profit organization, provides information about hemorrhage after trauma and personal stories about surviving trauma
The UK National Health Service Choices website provides information about blood transfusion, including a personal story about transfusion after a serious road accident
The US National Heart, Lung, and Blood Institute also provides detailed information about blood transfusions
MedlinePlus provides links to further resources on injuries, bleeding, and blood transfusion (in English and Spanish)
More information is available about CRASH-2 (in several languages)
doi:10.1371/journal.pmed.1001664
PMCID: PMC4060995  PMID: 24937305
18.  Combining Information from Common Type 2 Diabetes Risk Polymorphisms Improves Disease Prediction 
PLoS Medicine  2006;3(10):e374.
Background
A limited number of studies have assessed the risk of common diseases when combining information from several predisposing polymorphisms. In most cases, individual polymorphisms only moderately increase risk (~20%), and they are thought to be unhelpful in assessing individuals' risk clinically. The value of analyzing multiple alleles simultaneously is not well studied. This is often because, for any given disease, very few common risk alleles have been confirmed.
Methods and Findings
Three common variants (Lys23 of KCNJ11, Pro12 of PPARG, and the T allele at rs7903146 of TCF7L2) have been shown to predispose to type 2 diabetes mellitus across many large studies. Risk allele frequencies ranged from 0.30 to 0.88 in controls. To assess the combined effect of multiple susceptibility alleles, we genotyped these variants in a large case-control study (3,668 controls versus 2,409 cases). Individual allele odds ratios (ORs) ranged from 1.14 (95% confidence interval [CI], 1.05 to 1.23) to 1.48 (95% CI, 1.36 to 1.60). We found no evidence of gene-gene interaction, and the risks of multiple alleles were consistent with a multiplicative model. Each additional risk allele increased the odds of type 2 diabetes by 1.28 (95% CI, 1.21 to 1.35) times. Participants with all six risk alleles had an OR of 5.71 (95% CI, 1.15 to 28.3) compared to those with no risk alleles. The 8.1% of participants that were double-homozygous for the risk alleles at TCF7L2 and Pro12Ala had an OR of 3.16 (95% CI, 2.22 to 4.50), compared to 4.3% with no TCF7L2 risk alleles and either no or one Glu23Lys or Pro12Ala risk alleles.
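Under the multiplicative model described above, the odds ratio for carrying k risk alleles relative to none is simply the per-allele odds ratio raised to the k-th power. A minimal sketch (my own illustration of the model, not the authors' code):

```python
# Multiplicative (log-additive) allele model: each extra risk allele
# multiplies the odds of disease by the same per-allele factor.

def combined_or(or_per_allele, n_alleles):
    return or_per_allele ** n_alleles

# With the reported per-allele OR of 1.28, carrying all six risk alleles
# implies roughly 1.28**6, about 4.4 -- within the wide confidence interval
# (1.15 to 28.3) of the directly estimated OR of 5.71 for six alleles vs none.
```

The closeness of 1.28^6 to the directly estimated six-allele OR is what "consistent with a multiplicative model" means in practice.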
Conclusions
Combining information from several known common risk polymorphisms allows the identification of population subgroups with markedly differing risks of developing type 2 diabetes compared to those obtained using single polymorphisms. This approach may have a role in future preventative measures for common, polygenic diseases.
Combining information from several known common risk polymorphisms allows the identification of subgroups of the population with markedly differing risks of developing type 2 diabetes.
Editors' Summary
Background.
Diabetes is an important and increasingly common global health problem; the World Health Organization has estimated that about 170 million people currently have diabetes worldwide. One particular form, type 2 diabetes, develops when cells in the body become unable to respond to a hormone called insulin. Insulin is normally released by the pancreas and controls the ability of body cells to take in glucose (sugar). Therefore, when cells become insensitive to insulin as in people with type 2 diabetes, glucose levels in the body are not well controlled and may become dangerously high in the blood. These high levels can have long-term damaging effects on various organs in the body, particularly the eyes, nerves, heart, and kidneys. There are many different factors that affect whether someone is likely to develop type 2 diabetes. These factors can be broadly grouped into two categories: environmental and genetic. Environmental factors such as obesity, a diet high in sugar, and a sedentary lifestyle are all risk factors for developing type 2 diabetes in later life. Genetically, a number of variants in many different genes may affect the risk of developing the disease. Generally, these gene variants are common in human populations but each gene variant only mildly increases the risk that a person possessing it will get type 2 diabetes.
Why Was This Study Done?
The investigators performing this study wanted to understand how different gene variants combine to affect an individual's risk of getting type 2 diabetes. That is, if a person carries many different variants, does their overall risk increase a lot or only a little?
What Did the Researchers Do and Find?
First, the researchers surveyed the published reports to identify those gene variants for which there was strong evidence of an association with type 2 diabetes. They found mutations in three genes that had been shown reproducibly to be associated with type 2 diabetes in different studies: PPARG (whose product is involved in regulation of fat tissue), KCNJ11 (whose product is involved in insulin production), and TCF7L2 (whose product is thought to be involved in controlling sugar levels). Then, they compared two groups of white people in the UK: 2,409 people with type 2 diabetes (“cases”), and 3,668 people from the general population (“controls”). The researchers compared the two groups to see which individuals possessed which gene variants, and did statistical testing to work out to what extent having particular combinations of the gene variants affected an individual's chance of being a “case” versus a “control.” Their results showed that in the groups studied, having an ever-increasing number of gene variants increased the risk of developing diabetes. The risk that someone with none of the gene variants would develop type 2 diabetes was about 2%, while the chance for someone with all gene variants was about 10%.
What Do These Findings Mean?
These results show that the risk of developing type 2 diabetes is greater if an individual possesses all of the gene variants that were examined in this study. The analysis also suggests that using information on all three variants, rather than just one, is likely to be more accurate in predicting future risk. How this genetic information should be used alongside other well-known preventative measures such as altered lifestyle requires further study.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030374.
NHS Direct patient information on diabetes
National Diabetes Information Clearinghouse information on type 2 diabetes
World Health Organization Diabetes Programme
Centers for Disease Control Diabetes Public Health Resource
doi:10.1371/journal.pmed.0030374
PMCID: PMC1584415  PMID: 17020404
19.  Critical appraisal of CRP measurement for the prediction of coronary heart disease events: new data and systematic review of 31 prospective cohorts 
Background Non-uniform reporting of relevant relationships and metrics hampers critical appraisal of the clinical utility of C-reactive protein (CRP) measurement for prediction of later coronary events.
Methods We evaluated the predictive performance of CRP in the Northwick Park Heart Study (NPHS-II) and the Edinburgh Artery Study (EAS) comparing discrimination by area under the ROC curve (AUC), calibration and reclassification. We set the findings in the context of a systematic review of published studies comparing different available and imputed measures of prediction. Risk estimates per-quantile of CRP were pooled using a random effects model to infer the shape of the CRP-coronary event relationship.
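The per-quantile pooling under a random effects model could, for example, use the DerSimonian-Laird estimator; the review does not specify which estimator was used, so the sketch below is one plausible reading, with illustrative inputs:

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling of per-quantile
# log risk estimates. y: study-level log relative risks; v: their variances.

def dersimonian_laird(y, v):
    k = len(y)
    w = [1 / vi for vi in v]                       # fixed-effect weights
    sw = sum(w)
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sw
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    # method-of-moments between-study variance, truncated at zero
    tau2 = max(0.0, (q - (k - 1)) / (sw - sum(wi ** 2 for wi in w) / sw))
    w_star = [1 / (vi + tau2) for vi in v]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    return pooled, tau2
```

When the studies agree exactly, the heterogeneity estimate tau2 collapses to zero and the pooled value reduces to the inverse-variance fixed-effect mean.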
Results NPHS-II and EAS (3441 individuals, 309 coronary events): CRP alone provided modest discrimination for coronary heart disease (AUC 0.61 and 0.62 in NPHS-II and EAS, respectively) and only modest improvement in the discrimination of a Framingham-based risk score (FRS) (increment in AUC 0.04 and –0.01, respectively). Risk models based on FRS alone and FRS + CRP were both well calibrated and the net reclassification improvement (NRI) was 8.5% in NPHS-II and 8.8% in EAS with four risk categories, falling to 4.9% and 3.0% for a 10-year coronary disease risk threshold of 15%. Systematic review (31 prospective studies, 84 063 individuals, 11 252 coronary events): pooled inferred values for the AUC for CRP alone were 0.59 (0.57, 0.61), 0.59 (0.57, 0.61) and 0.57 (0.54, 0.61) for studies of <5, 5–10 and >10 years of follow-up, respectively. Evidence from 13 studies (7201 cases) indicated that CRP did not consistently improve performance of the Framingham risk score when assessed by discrimination, with AUC increments in the range 0–0.15. Evidence from six studies (2430 cases) showed that CRP provided statistically significant but quantitatively small improvement in calibration of models based on established risk factors in some but not all studies. The wide overlap of CRP values among people who later suffered events and those who did not appeared to be explained by the consistently log-normal distribution of CRP and a graded continuous increment in coronary risk across the whole range of values without a threshold, such that a large proportion of events occurred among the many individuals with near average levels of CRP.
Conclusions CRP does not perform better than the Framingham risk equation for discrimination. The improvement in risk stratification or reclassification from addition of CRP to models based on established risk factors is small and inconsistent. Guidance on the clinical use of CRP measurement in the prediction of coronary events may require updating in light of this large comparative analysis.
doi:10.1093/ije/dyn217
PMCID: PMC2639366  PMID: 18930961
C-reactive protein; prediction; coronary heart disease; primary prevention; risk stratification
20.  Reduced Glomerular Filtration Rate and Its Association with Clinical Outcome in Older Patients at Risk of Vascular Events: Secondary Analysis 
PLoS Medicine  2009;6(1):e1000016.
Background
Reduced glomerular filtration rate (GFR) is associated with increased cardiovascular risk in young and middle aged individuals. Associations with cardiovascular disease and mortality in older people are less clearly established. We aimed to determine the predictive value of the GFR for mortality and morbidity using data from the 5,804 participants randomized in the Prospective Study of Pravastatin in the Elderly at Risk (PROSPER).
Methods and Findings
Glomerular filtration rate was estimated (eGFR) using the Modification of Diet in Renal Disease equation and was categorized into the ranges 20–40, 40–50, 50–60, and ≥ 60 ml/min/1.73 m2. Baseline risk factors were analysed by category of eGFR, with and without adjustment for other risk factors. The associations between baseline eGFR and morbidity and mortality outcomes, accrued after an average of 3.2 y, were investigated using Cox proportional hazard models adjusting for traditional risk factors. We tested for evidence of an interaction between the benefit of statin treatment and baseline eGFR status. Age, low-density lipoprotein (LDL) and high-density lipoprotein (HDL) cholesterol, C-reactive protein (CRP), body mass index, fasting glucose, female sex, and histories of hypertension and vascular disease were associated with eGFR (p = 0.001 or less) after adjustment for other risk factors. Low eGFR was independently associated with risk of all cause mortality, vascular mortality, and other noncancer mortality and with fatal and nonfatal coronary and heart failure events (hazard ratios adjusted for CRP and other risk factors (95% confidence intervals [CIs]) for eGFR < 40 ml/min/1.73m2 relative to eGFR ≥ 60 ml/min/1.73m2 respectively: 2.04 (1.48–2.80), 2.37 (1.53–3.67), 3.52 (1.78–6.96), 1.64 (1.18–2.27), 3.31 (2.03–5.41)). There were no nominally statistically significant interactions (p < 0.05) between randomized treatment allocation and eGFR for clinical outcomes, with the exception of the outcome of coronary heart disease death or nonfatal myocardial infarction (p = 0.021), with the interaction suggesting increased benefit of statin treatment in subjects with impaired GFRs.
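For reference, eGFR by the MDRD equation is computed from serum creatinine, age, sex, and ethnicity. The sketch below uses the commonly cited four-variable coefficients; the paper does not state which re-expression of the equation it used, so treat this as illustrative only:

```python
# Hedged sketch: four-variable MDRD Study equation with the commonly cited
# coefficients (186, -1.154, -0.203, 0.742, 1.212). The exact coefficient set
# used in PROSPER is an assumption here, not taken from the abstract.

def mdrd_egfr(serum_creatinine_mg_dl, age_years, female, black):
    egfr = 186.0 * serum_creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr  # ml/min/1.73 m^2
```

The negative exponents make eGFR fall as creatinine or age rises, which is why elderly cohorts such as PROSPER concentrate in the lower eGFR categories.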
Conclusions
We have established that, in an elderly population over the age of 70 y, impaired GFR is associated with female sex, with presence of vascular disease, and with levels of other risk factors that would be associated with increased risk of vascular disease. Further, impaired GFR is independently associated with significant levels of increased risk of all cause mortality and fatal vascular events and with composite fatal and nonfatal coronary and heart failure outcomes. Our analyses of the benefits of statin treatment in relation to baseline GFR suggest that there is no reason to exclude elderly patients with impaired renal function from treatment with a statin.
Using data from the PROSPER trial, Ian Ford and colleagues investigate whether reduced glomerular filtration rate is associated with cardiovascular and mortality risk among elderly people.
Editors' Summary
Background.
Cardiovascular disease (CVD)—disease that affects the heart and/or the blood vessels—is a common cause of death in developed countries. In the USA, for example, the single leading cause of death is coronary heart disease, a CVD in which narrowing of the heart's blood vessels slows or stops the blood supply to the heart and eventually causes a heart attack. Other types of CVD include stroke (in which narrowing of the blood vessels interrupts the brain's blood supply) and heart failure (a condition in which the heart can no longer pump enough blood to the rest of the body). Many factors increase the risk of developing CVD, including high blood pressure (hypertension), high blood cholesterol, having diabetes, smoking, and being overweight. Tools such as the “Framingham risk calculator” assess an individual's overall CVD risk by taking these and other risk factors into account. CVD risk can be minimized by taking drugs to reduce blood pressure or cholesterol levels (for example, pravastatin) and by making lifestyle changes.
Why Was This Study Done?
Another potential risk factor for CVD is impaired kidney (renal) function. In healthy people, the kidneys filter waste products and excess fluid out of the blood. A reduced “estimated glomerular filtration rate” (eGFR), which indicates impaired renal function, is associated with increased CVD in young and middle-aged people and increased all-cause and cardiovascular death in people who have vascular disease. But is reduced eGFR also associated with CVD and death in older people? If it is, it would be worth encouraging elderly people with reduced eGFR to avoid other CVD risk factors. In this study, the researchers determine the predictive value of eGFR for all-cause and vascular mortality (deaths caused by CVD) and for incident vascular events (a first heart attack, stroke, or heart failure) using data from the Prospective Study of Pravastatin in the Elderly at Risk (PROSPER). This clinical trial examined pravastatin's effects on CVD development among 70–82 year olds with pre-existing vascular disease or an increased risk of CVD because of smoking, hypertension, or diabetes.
What Did the Researchers Do and Find?
The trial participants were divided into four groups based on their eGFR at the start of the study. The researchers then investigated the association between baseline CVD risk factors and baseline eGFR and between baseline eGFR and vascular events and deaths that occurred during the 3-year study. Several established CVD risk factors were associated with a reduced eGFR after allowing for other risk factors. In addition, people with a low eGFR (between 20 and 40 units) were twice as likely to die from any cause as people with an eGFR above 60 units (the normal eGFR for a young person is 100 units; eGFR decreases with age) and more than three times as likely to have nonfatal coronary heart disease or heart failure. A low eGFR also increased the risk of vascular mortality, other noncancer deaths, and fatal coronary heart disease and heart failure. Finally, pravastatin treatment reduced coronary heart disease deaths and nonfatal heart attacks most effectively among participants with the greatest degree of eGFR impairment.
What Do These Findings Mean?
These findings suggest that, in elderly people, impaired renal function is associated with levels of established CVD risk factors that increase the risk of vascular disease. They also suggest that impaired kidney function increases the risk of all-cause mortality, fatal vascular events, and fatal and nonfatal coronary heart disease and heart failure. Because the study participants were carefully chosen for inclusion in PROSPER, these findings may not be generalizable to all elderly people with vascular disease or vascular disease risk factors. Nevertheless, increased efforts should probably be made to encourage elderly people with reduced eGFR and other vascular risk factors to make lifestyle changes to reduce their overall CVD risk. Finally, although the effect of statins in elderly patients with renal dysfunction needs to be examined further, these findings suggest that this group of patients should benefit at least as much from statins as elderly patients with healthy kidneys.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000016.
The MedlinePlus Encyclopedia has pages on coronary heart disease, stroke, and heart failure (in English and Spanish)
MedlinePlus provides links to many other sources of information on heart disease, vascular disease, and stroke (in English and Spanish)
The US National Institute of Diabetes and Digestive and Kidney Diseases provides information on how the kidneys work and what can go wrong with them, including a list of links to further information about kidney disease
The American Heart Association provides information on all aspects of cardiovascular disease for patients, caregivers, and professionals (in several languages)
More information about PROSPER is available on the Web site of the Vascular Biochemistry Department of the University of Glasgow
doi:10.1371/journal.pmed.1000016
PMCID: PMC2628400  PMID: 19166266
21.  Prognostic value, clinical effectiveness and cost-effectiveness of high sensitivity C-reactive protein as a marker in primary prevention of major cardiac events 
Background
In a substantial proportion of patients (≥ 25%) with coronary heart disease (CHD), a myocardial infarction or sudden cardiac death without prior symptoms is the first manifestation of disease. The use of new risk predictors for CHD, such as high-sensitivity C-reactive protein (hs-CRP), in addition to established risk factors could improve prediction of CHD. As a consequence of the altered risk assessment, modified preventive actions could reduce the number of cardiac deaths and non-fatal myocardial infarctions.
Research question
Does the additional information gained by measuring hs-CRP in asymptomatic patients lead to a clinically relevant improvement in risk prediction compared with prediction based on traditional risk factors alone, and is this cost-effective?
Methods
A literature search of the electronic databases of the German Institute of Medical Documentation and Information (DIMDI) was conducted. Selection, data extraction, assessment of study quality, and synthesis of information were conducted according to the methods of evidence-based medicine.
Results
Eight publications on predictive value, one publication on clinical efficacy, and three health-economic evaluations were included. In the seven study populations of the prediction studies, elevated CRP levels were almost always associated with a higher risk of cardiovascular events, non-fatal myocardial infarction or cardiac death, and severe cardiovascular events. The effect estimates (odds ratio [OR], relative risk [RR], hazard ratio [HR]), once adjusted for traditional risk factors, demonstrated a moderate, independent association between hs-CRP and cardiac and cardiovascular events, falling in the range of 0.7 to 2.47. In six of the seven studies, a moderate increase in the area under the curve (AUC) could be detected by adding hs-CRP as a predictor to regression models alongside established risk factors, though in three cases this increase was not statistically significant. The difference in the AUC between the models with and without hs-CRP fell between 0.00 and 0.023, with a median of 0.003. A decision-analytic modeling study reported a gain in life expectancy for statin therapy in populations with elevated hs-CRP levels and normal lipid levels as compared to statin therapy for those with elevated lipid levels (approximately 6.6 months gained for 58-year-olds). Two decision-analytic models (three publications) on cost-effectiveness reported incremental cost-effectiveness ratios between Euro 8,700 and 50,000 per life year gained for the German context and between US$52,000 and 708,000 for the US context. The empirical input data for the models are highly uncertain.
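The incremental predictive value summarized above is typically expressed as the difference in AUC between a risk model with and without the new marker. The following sketch is purely illustrative: the risk scores and outcomes are invented, not data from the review. It computes the AUC via the Mann-Whitney statistic and takes the increment as a simple difference.

```python
# Illustrative only: invented risk scores for seven people, from a model
# with traditional factors only and a model with a hypothetical added marker.

def auc(scores, labels):
    """AUC as the probability that a randomly chosen case outranks a
    randomly chosen non-case (Mann-Whitney; ties count one half)."""
    cases = [s for s, y in zip(scores, labels) if y]
    ctrls = [s for s, y in zip(scores, labels) if not y]
    wins = sum((c > n) + 0.5 * (c == n) for c in cases for n in ctrls)
    return wins / (len(cases) * len(ctrls))

labels = [1, 1, 1, 0, 0, 0, 0]
base_score = [0.30, 0.20, 0.60, 0.25, 0.10, 0.40, 0.15]  # traditional factors only
ext_score = [0.35, 0.40, 0.65, 0.25, 0.10, 0.30, 0.15]   # with the added marker
delta_auc = auc(ext_score, labels) - auc(base_score, labels)
print(delta_auc)  # 1.00 - 0.75 = 0.25 on this toy data
```

In the reviewed studies the observed increments were far smaller (0.00 to 0.023), which is one reason their statistical significance was often marginal.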
Conclusion
There is insufficient evidence to support measuring hs-CRP values, in addition to the traditional risk factors, during global risk assessment for coronary artery disease (CAD) or cardiovascular disease. The additional measurement of the hs-CRP level does increase the incremental predictive value of risk prediction, but it has not yet been clarified whether this increase is clinically relevant, i.e., whether it would result in a reduction of cardiovascular morbidity and mortality.
For people at medium cardiovascular risk (5 to 20% over ten years), additional measurement of hs-CRP seems most likely to be clinically relevant in supporting the decision as to whether or not to initiate additional statin therapy for primary prevention.
Statin therapy can reduce the occurrence of cardiovascular events in asymptomatic individuals with normal lipid levels and elevated hs-CRP levels. However, this alone does not provide evidence of a clinical benefit of hs-CRP screening. The cost-effectiveness of general hs-CRP screening, as well as of screening only among those with normal lipid levels, remains unknown at present.
doi:10.3205/hta000068
PMCID: PMC3011282  PMID: 21289893
22.  Repeat Bone Mineral Density Screening and Prediction of Hip and Major Osteoporotic Fracture 
IMPORTANCE
Screening for osteoporosis with bone mineral density (BMD) is recommended for older adults. It is unclear whether repeating a BMD screening test improves fracture risk assessment.
OBJECTIVES
To determine whether changes in BMD after 4 years provide additional information on fracture risk beyond baseline BMD and to quantify the change in fracture risk classification after a second BMD measure.
DESIGN, SETTING, AND PARTICIPANTS
Population-based cohort study involving 310 men and 492 women from the Framingham Osteoporosis Study with 2 measures of femoral neck BMD taken from 1987 through 1999.
MAIN OUTCOMES AND MEASURES
Risk of hip or major osteoporotic fracture through 2009 or 12 years following the second BMD measure.
RESULTS
Mean age was 74.8 years. The mean (SD) BMD change was −0.6% (1.8%) per year. During a median follow-up of 9.6 years, 76 participants experienced an incident hip fracture and 113 experienced a major osteoporotic fracture. Annual percent BMD change per SD decrease was associated with risk of hip fracture (hazard ratio [HR], 1.43 [95% CI, 1.16 to 1.78]) and major osteoporotic fracture (HR, 1.21 [95% CI, 1.01 to 1.45]) after adjusting for baseline BMD. At 10 years’ follow-up, a 1-SD decrease in annual percent BMD change, compared with the mean BMD change, was associated with 3.9 excess hip fractures per 100 persons. In receiver operating characteristic (ROC) curve analyses, the area under the curve (AUC) was 0.71 (95% CI, 0.65 to 0.78) for the baseline BMD model compared with 0.68 (95% CI, 0.62 to 0.75) for the BMD percent change model. The addition of BMD change to a model with baseline BMD did not meaningfully improve performance (AUC, 0.72 [95% CI, 0.66 to 0.79]). Using the net reclassification index, a second BMD measure increased the proportion of participants reclassified as high risk of hip fracture by 3.9% (95% CI, −2.2% to 9.9%) and decreased the proportion reclassified as low risk by 2.2% (95% CI, −4.5% to 0.1%).
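The event component of the net reclassification index used above can be sketched in a few lines. The function and the five toy fracture cases below are invented for illustration; they are not the study's data or code.

```python
def nri_events(baseline_cat, updated_cat, is_event, high="high"):
    """Among people who had the event: proportion moved up into the
    high-risk category minus proportion moved down out of it."""
    pairs = [(b, u) for b, u, e in zip(baseline_cat, updated_cat, is_event) if e]
    up = sum(1 for b, u in pairs if b != high and u == high)
    down = sum(1 for b, u in pairs if b == high and u != high)
    return (up - down) / len(pairs)

# Five hypothetical fracture cases, classified before and after a second measure.
before = ["low", "low", "high", "low", "high"]
after_ = ["high", "low", "high", "low", "low"]
events = [True] * 5
print(nri_events(before, after_, events))  # 1 up, 1 down out of 5 -> 0.0
```

Note that a full NRI also includes the mirror-image term computed among non-events (movements down count as improvements there); this sketch shows only the event half.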
CONCLUSIONS AND RELEVANCE
In untreated men and women of mean age 75 years, a second BMD measure after 4 years did not meaningfully improve the prediction of hip or major osteoporotic fracture. Repeating a BMD measure within 4 years to improve fracture risk stratification may not be necessary in adults of this age who are untreated for osteoporosis.
doi:10.1001/jama.2013.277817
PMCID: PMC3903386  PMID: 24065012
23.  A Six-Gene Signature Predicts Survival of Patients with Localized Pancreatic Ductal Adenocarcinoma 
PLoS Medicine  2010;7(7):e1000307.
Jen Jen Yeh and colleagues developed and validated a six-gene signature in patients with pancreatic ductal adenocarcinoma that may be used to better stage the disease in these patients and assist in treatment decisions.
Background
Pancreatic ductal adenocarcinoma (PDAC) remains a lethal disease. For patients with localized PDAC, surgery is the best option, but with a median survival of less than 2 years and a difficult and prolonged postoperative course for most, there is an urgent need to better identify patients who have the most aggressive disease.
Methods and Findings
We analyzed the gene expression profiles of primary tumors from patients with localized compared to metastatic disease and identified a six-gene signature associated with metastatic disease. We evaluated the prognostic potential of this signature in a training set of 34 patients with localized and resected PDAC and selected a cut-point associated with outcome using X-tile. We then applied this cut-point to an independent test set of 67 patients with localized and resected PDAC and found that our signature was independently predictive of survival and superior to established clinical prognostic factors such as grade, tumor size, and nodal status, with a hazard ratio of 4.1 (95% confidence interval [CI] 1.7–10.0). Patients classified as high risk by the six-gene signature had a 1-year survival rate of 55%, compared to 91% in the low-risk group.
Conclusions
Our six-gene signature may be used to better stage PDAC patients, to assist in the difficult decision of whether to proceed to surgery, and to select patients whose tumor biology may benefit most from neoadjuvant therapy. The signature should be investigated in prospective patient cohorts and, if confirmed, its potential as a biomarker should be evaluated in future PDAC clinical trials. Genes in this signature, or the pathways that they fall into, may represent new therapeutic targets.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Pancreatic cancer kills nearly a quarter of a million people every year. It begins when a cell in the pancreas (an organ lying behind the stomach that produces digestive enzymes and hormones such as insulin, which controls blood sugar levels) acquires genetic changes that allow it to grow uncontrollably and to spread around the body (metastasize). Nearly all pancreatic cancers are “pancreatic ductal adenocarcinomas” (PDACs)—tumors that start in the cells that line the tubes in the pancreas that take digestive juices to the gut. Because PDAC rarely causes any symptoms early in its development, it has already metastasized in about half of patients before it is diagnosed. Consequently, the average survival time after a diagnosis of PDAC is only 5–8 months. At present, the only chance for cure is surgical removal (resection) of the tumor, part of the pancreas, and other nearby digestive organs. The operation that is needed for the majority of patients—the Whipple procedure—is only possible in the fifth of patients whose tumor is found while it is still small enough to be resectable. Even with postoperative chemotherapy, these patients live, on average, for only 23 months after surgery, possibly because they have micrometastases at the time of their operation.
Why Was This Study Done?
Despite this poor overall outcome, about a quarter of patients with resectable PDAC survive for more than 5 years after surgery. Might some patients, therefore, have a less aggressive form of PDAC determined by the biology of the primary (original) tumor? If this is the case, it would be useful to be able to stratify patients according to the aggressiveness of their disease so that patients with very aggressive disease could be given chemotherapy before surgery (neoadjuvant therapy) to kill any micrometastases. At present neoadjuvant therapy is given to patients with locally advanced, unresectable tumors. In this study, the researchers compare gene expression patterns in primary tumor samples collected from patients with localized PDAC and from patients with metastatic PDAC between 1999 and 2007 to try to identify molecular markers that distinguish between more and less aggressive PDACs.
What Did the Researchers Do and Find?
The researchers identified a six-gene signature that was associated with metastatic disease using a molecular biology approach called microarray hybridization and a statistical method called significance analysis of microarrays to analyze gene expression patterns in primary tumor samples from 15 patients with localized PDAC and 15 patients with metastatic disease. Next, they used a training set of tumor samples from another 34 patients with localized and resected PDAC, microarray hybridization, and a graphical method called X-tile to select a combination of expression levels of the six genes that discriminated optimally between high-risk (aggressive) and low-risk (less aggressive) tumors on the basis of patient survival (a “cut-point”). When the researchers applied this cut-point to an independent set of 67 tumor samples from patients with localized and resected PDAC, they found that 42 patients had high-risk tumors. These patients had an average survival time of 15 months; 55% of them were alive a year after surgery. The remaining 25 patients, who had low-risk tumors, had an average survival time of 49 months and 91% of them were alive a year after resection.
What Do These Findings Mean?
These and other findings identify a six-gene signature that can predict outcomes in patients with localized, resectable PDAC better than, and independently of, established clinical markers of outcome. If the predictive ability of this signature can be confirmed in additional patients, it could be used to help patients make decisions about their treatment. For example, for a patient wondering whether to risk the Whipple procedure (2%–6% of patients die during this operation and more than 50% have serious postoperative complications), the knowledge that their tumor was low risk might help them decide to have the operation. Conversely, a patient in poor health with a high-risk tumor might decide to spare themselves the trauma of major surgery. The six-gene signature might also help clinicians decide which patients would benefit most from neoadjuvant therapy. Finally, the genes in this signature, or the biological pathways in which they participate, might represent new therapeutic targets for the treatment of PDAC.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000307.
The US National Cancer Institute provides information for patients and health professionals about all aspects of pancreatic cancer (in English and Spanish), including a booklet for patients
The American Cancer Society also provides detailed information about pancreatic cancer
The UK National Health Service and Cancer Research UK include information for patients on pancreatic cancer on their Web sites
MedlinePlus provides links to further resources on pancreatic cancer (in English and Spanish)
Cure Pancreatic Cancer provides information about scientific and medical research related to the diagnosis, treatment, cure, and prevention of pancreatic cancer
Pancreatic Cancer Action Network is a US organization that supports research, patient support, community outreach, and advocacy for a cure for pancreatic cancer
doi:10.1371/journal.pmed.1000307
PMCID: PMC2903589  PMID: 20644708
24.  Social Relationships and Mortality Risk: A Meta-analytic Review 
PLoS Medicine  2010;7(7):e1000316.
In a meta-analysis, Julianne Holt-Lunstad and colleagues find that individuals' social relationships have as much influence on mortality risk as other well-established risk factors for mortality, such as smoking.
Background
The quality and quantity of individuals' social relationships has been linked not only to mental health but also to both morbidity and mortality.
Objectives
This meta-analytic review was conducted to determine the extent to which social relationships influence risk for mortality, which aspects of social relationships are most highly predictive, and which factors may moderate the risk.
Data Extraction
Data were extracted on several participant characteristics, including cause of mortality, initial health status, and pre-existing health conditions, as well as on study characteristics, including length of follow-up and type of assessment of social relationships.
Results
Across 148 studies (308,849 participants), the random effects weighted average effect size was OR = 1.50 (95% CI 1.42 to 1.59), indicating a 50% increased likelihood of survival for participants with stronger social relationships. This finding remained consistent across age, sex, initial health status, cause of death, and follow-up period. Significant differences were found across the type of social measurement evaluated (p<0.001); the association was strongest for complex measures of social integration (OR = 1.91; 95% CI 1.63 to 2.23) and lowest for binary indicators of residential status (living alone versus with others) (OR = 1.19; 95% CI 0.99 to 1.44).
Conclusions
The influence of social relationships on risk for mortality is comparable with well-established risk factors for mortality.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Humans are naturally social. Yet, the modern way of life in industrialized countries is greatly reducing the quantity and quality of social relationships. Many people in these countries no longer live in extended families or even near each other. Instead, they often live on the other side of the country or even across the world from their relatives. Many also delay getting married and having children. Likewise, more and more people of all ages in developed countries are living alone, and loneliness is becoming increasingly common. In the UK, according to a recent survey by the Mental Health Foundation, 10% of people often feel lonely, a third have a close friend or relative who they think is very lonely, and half think that people are getting lonelier in general. Similarly, across the Atlantic, over the past two decades there has been a three-fold increase in the number of Americans who say they have no close confidants. There is reason to believe that people are becoming more socially isolated.
Why Was This Study Done?
Some experts think that social isolation is bad for human health. They point to a 1988 review of five prospective studies (investigations in which the characteristics of a population are determined and then the population is followed to see whether any of these characteristics are associated with specific outcomes) that showed that people with fewer social relationships die earlier on average than those with more social relationships. But, even though many prospective studies of mortality (death) have included measures of social relationships since that first review, the idea that a lack of social relationships is a risk factor for death is still not widely recognized by health organizations and the public. In this study, therefore, the researchers undertake a systematic review and meta-analysis of the relevant literature to determine the extent to which social relationships influence mortality risk and which aspects of social relationships are most predictive of mortality. A systematic review uses predefined criteria to identify all the research on a given topic; a meta-analysis uses statistical methods to combine the results of several studies.
What Did the Researchers Do and Find?
The researchers identified 148 prospective studies that provided data on individuals' mortality as a function of social relationships and extracted an “effect size” from each study. An effect size quantifies the size of a difference between two groups—here, the difference in the likelihood of death between groups that differ in terms of their social relationships. The researchers then used a statistical method called “random effects modeling” to calculate the average effect size of the studies expressed as an odds ratio (OR)—the ratio of the chances of an event happening in one group to the chances of the same event happening in the second group. They report that the average OR was 1.5. That is, people with stronger social relationships had a 50% increased likelihood of survival compared with those with weaker social relationships. Put another way, an OR of 1.5 means that by the time half of a hypothetical sample of 100 people has died, there will be five more people alive with stronger social relationships than people with weaker social relationships. Importantly, the researchers also report that social relationships were more predictive of the risk of death in studies that considered complex measurements of social integration than in studies that considered simple evaluations such as marital status.
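Random-effects pooling of effect sizes can be sketched concretely. A common estimator of the between-study variance is DerSimonian-Laird; whether the authors used exactly this variant is not stated here, and the three studies' odds ratios and standard errors below are invented for illustration, not data from the meta-analysis.

```python
import math

def dersimonian_laird(log_or, se):
    """Pool log odds ratios using DerSimonian-Laird random-effects weights."""
    w = [1 / s ** 2 for s in se]                       # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_or)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_or))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(log_or) - 1)) / c)       # between-study variance
    w_re = [1 / (s ** 2 + tau2) for s in se]           # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_or)) / sum(w_re)
    return math.exp(pooled)

# Three hypothetical studies (invented ORs and standard errors of the log OR).
ors = [1.4, 1.6, 1.5]
ses = [0.10, 0.15, 0.08]
print(round(dersimonian_laird([math.log(o) for o in ors], ses), 2))  # -> 1.48
```

Pooling is done on the log scale because log odds ratios are approximately normally distributed; the result is exponentiated back to an OR at the end.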
What Do These Findings Mean?
These findings indicate that the influence of social relationships on the risk of death is comparable with well-established risk factors for mortality such as smoking and alcohol consumption and exceeds the influence of other risk factors such as physical inactivity and obesity. Furthermore, the overall effect of social relationships on mortality reported in this meta-analysis might be an underestimate, because many of the studies used simple single-item measures of social isolation rather than a complex measurement. Although further research is needed to determine exactly how social relationships can be used to reduce mortality risk, physicians, health professionals, educators, and the media should now acknowledge that social relationships influence the health outcomes of adults and should take social relationships as seriously as other risk factors that affect mortality, the researchers conclude.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000316.
The Mental Health America Live Your Life Well page includes information about how social relationships improve both mental and physical health
The Mental Health Foundation, a UK charity, has information on loneliness and mental health; its report “The Lonely Society?” can be downloaded from this page
The Mayo Clinic has information on social support as a way to manage stress
The Pew Research Foundation has information on technology and social isolation
Wikipedia has a page on social isolation (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
doi:10.1371/journal.pmed.1000316
PMCID: PMC2910600  PMID: 20668659
25.  Reporting and Methods in Clinical Prediction Research: A Systematic Review 
PLoS Medicine  2012;9(5):e1001221.
Walter Bouwmeester and colleagues investigated the reporting and methods of prediction studies in 2008, in six high-impact general medical journals, and found that the majority of prediction studies do not follow current methodological recommendations.
Background
We investigated the reporting and methods of prediction studies, focusing on aims, designs, participant selection, outcomes, predictors, statistical power, statistical methods, and predictive performance measures.
Methods and Findings
We used a full hand search to identify all prediction studies published in 2008 in six high impact general medical journals. We developed a comprehensive item list to systematically score conduct and reporting of the studies, based on recent recommendations for prediction research. Two reviewers independently scored the studies. We retrieved 71 papers for full text review: 51 were predictor finding studies, 14 were prediction model development studies, three addressed an external validation of a previously developed model, and three reported on a model's impact on participant outcome. Study design was unclear in 15% of studies, and a prospective cohort was used in most studies (60%). Descriptions of the participants and definitions of predictor and outcome were generally good. Despite many recommendations against doing so, continuous predictors were often dichotomized (32% of studies). The number of events per predictor as a measure of statistical power could not be determined in 67% of the studies; of the remainder, 53% had fewer than the commonly recommended value of ten events per predictor. Methods for a priori selection of candidate predictors were described in most studies (68%). A substantial number of studies relied on a p-value cut-off of p<0.05 to select predictors in the multivariable analyses (29%). Predictive model performance measures, i.e., calibration and discrimination, were reported in 12% and 27% of studies, respectively.
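The events-per-predictor measure of statistical power mentioned above is a simple ratio checked against a rule of thumb. The numbers below are invented to illustrate the check, not drawn from the reviewed studies.

```python
# Toy events-per-variable (EPV) check against the common "at least ten
# outcome events per candidate predictor" rule of thumb; numbers are invented.
def events_per_variable(n_events, n_predictors):
    return n_events / n_predictors

epv = events_per_variable(n_events=80, n_predictors=12)
print(round(epv, 2), "adequate" if epv >= 10 else "below the rule of thumb")
```

By this convention, a study with 80 events and 12 candidate predictors (EPV about 6.7) would fall short, like the 53% of assessable studies in the review.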
Conclusions
The majority of prediction studies in high impact journals do not follow current methodological recommendations, limiting their reliability and applicability.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
There are often times in our lives when we would like to be able to predict the future. Is the stock market going to go up, for example, or will it rain tomorrow? Being able to predict future health is also important, both to patients and to physicians, and there is an increasing body of published clinical “prediction research.” Diagnostic prediction research investigates the ability of variables or test results to predict the presence or absence of a specific diagnosis. So, for example, one recent study compared the ability of two imaging techniques to diagnose pulmonary embolism (a blood clot in the lungs). Prognostic prediction research investigates the ability of various markers to predict future outcomes such as the risk of a heart attack. Both types of prediction research can investigate the predictive properties of patient characteristics, single variables, tests, or markers, or combinations of variables, tests, or markers (multivariable studies). Both types of prediction research can also include studies that build multivariable prediction models to guide patient management (model development), that test the performance of models (validation), or that quantify the effect of using a prediction model on patient and physician behaviors and outcomes (impact assessment).
Why Was This Study Done?
With the increase in prediction research, there is an increased interest in the methodology of this type of research because poorly done or poorly reported prediction research is likely to have limited reliability and applicability and will, therefore, be of little use in patient management. In this systematic review, the researchers investigate the reporting and methods of prediction studies by examining the aims, design, participant selection, definition and measurement of outcomes and candidate predictors, statistical power and analyses, and performance measures included in multivariable prediction research articles published in 2008 in several general medical journals. In a systematic review, researchers identify all the studies undertaken on a given topic using a predefined set of criteria and systematically analyze the reported methods and results of these studies.
What Did the Researchers Do and Find?
The researchers identified all the multivariable prediction studies meeting their predefined criteria that were published in 2008 in six high impact general medical journals by browsing through all the issues of the journals (a hand search). They then scored the methods and reporting of each study using a comprehensive item list based on recent recommendations for the conduct of prediction research (for example, the reporting recommendations for tumor marker prognostic studies—the REMARK guidelines). Of 71 retrieved studies, 51 were predictor finding studies, 14 were prediction model development studies, three externally validated an existing model, and three reported on a model's impact on participant outcome. Study design, participant selection, definitions of outcomes and predictors, and predictor selection were generally well reported, but other methodological and reporting aspects of the studies were suboptimal. For example, despite many recommendations, continuous predictors were often dichotomized. That is, rather than using the measured value of a variable in a prediction model (for example, blood pressure in a cardiovascular disease prediction model), measurements were frequently assigned to two broad categories. Similarly, many of the studies failed to adequately estimate the sample size needed to minimize bias in predictor effects, and few of the model development papers quantified and validated the proposed model's predictive performance.
What Do These Findings Mean?
These findings indicate that, in 2008, most of the prediction research published in high impact general medical journals failed to follow current guidelines for the conduct and reporting of clinical prediction studies. Because the studies examined here were published in high impact medical journals, they are likely to be representative of the higher quality studies published in 2008. However, reporting standards may have improved since 2008, and the conduct of prediction research may actually be better than this analysis suggests because the length restrictions that are often applied to journal articles may account for some of the reporting omissions. Nevertheless, despite some encouraging findings, the researchers conclude that the poor reporting and poor methods they found in many published prediction studies are a cause for concern and are likely to limit the reliability and applicability of this type of clinical research.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001221.
The EQUATOR Network is an international initiative that seeks to improve the reliability and value of medical research literature by promoting transparent and accurate reporting of research studies; its website includes information on a wide range of reporting guidelines including the REMARK recommendations (in English and Spanish)
A video of a presentation by Doug Altman, one of the researchers of this study, on improving the reporting standards of the medical evidence base, is available
The Cochrane Prognosis Methods Group provides additional information on the methodology of prognostic research
doi:10.1371/journal.pmed.1001221
PMCID: PMC3358324  PMID: 22629234
