1.  A Unifying Framework for Evaluating the Predictive Power of Genetic Variants Based on the Level of Heritability Explained 
PLoS Genetics  2010;6(12):e1001230.
An increasing number of genetic variants have been identified for many complex diseases. However, it is controversial whether risk prediction based on genomic profiles will be useful clinically. Appropriate statistical measures to evaluate the performance of genetic risk prediction models are required. Previous studies have mainly focused on the use of the area under the receiver operating characteristic (ROC) curve, or AUC, to judge the predictive value of genetic tests. However, AUC has its limitations and should be complemented by other measures. In this study, we develop a novel unifying statistical framework that connects a large variety of predictive indices together. We show that, given the overall disease probability and the level of variance in total liability (or heritability) explained by the genetic variants, we can estimate analytically a large variety of prediction metrics, for example, the AUC, the mean risk difference between cases and non-cases, the net reclassification improvement (ability to reclassify people into high- and low-risk categories), the proportion of cases explained by a specific percentile of population at the highest risk, the variance of predicted risks, and the risk at any percentile. We also demonstrate how to construct graphs to visualize the performance of risk models, such as the ROC curve, the density of risks, and the predictiveness curve (disease risk plotted against risk percentile). The results from simulations match very well with our theoretical estimates. Finally, we apply the methodology to nine complex diseases, evaluating the predictive power of genetic tests based on known susceptibility variants for each trait.
Author Summary
Recently many genetic variants have been established for diseases, and the findings have raised hope for risk prediction based on genomic profiles. However, we need to have proper statistical measures to assess the usefulness of such tests. In this study, we developed a statistical framework that enables us to evaluate many predictive indices analytically. It is based on the liability threshold model, which postulates a latent liability that is normally distributed. Affected individuals are assumed to have a liability exceeding a certain threshold. We demonstrated that, given the overall disease probability and variance in liability explained by the genetic markers, we can compute a variety of predictive indices. An example is the area under the receiver operating characteristic (ROC) curve, or AUC, which is very commonly employed. However, the limitations of AUC are often ignored, and we proposed complementing it with other indices. We have therefore also computed other metrics, such as the average difference in risks between cases and non-cases, the ability of reclassification into high- and low-risk categories, and the proportion of cases accounted for by a certain percentile of the population at the highest risk. We also derived how to construct graphs showing the risk distribution in the population.
doi:10.1371/journal.pgen.1001230
PMCID: PMC2996330  PMID: 21151957
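The liability threshold framework summarized above can be illustrated with a short simulation: given only the disease prevalence and the share of liability variance explained by the genotyped variants, the AUC of the genetic score follows. This is an illustrative sketch, not the paper's analytic derivation; the function name, seed, and sample size are our own choices.

```python
import numpy as np

def simulated_auc(prevalence, h2_liability, n=200_000, seed=0):
    """Liability threshold model: total liability is a genetic component
    (variance h2_liability) plus a residual component (variance
    1 - h2_liability); individuals above the population threshold are cases."""
    rng = np.random.default_rng(seed)
    g = rng.normal(0.0, np.sqrt(h2_liability), n)        # genetic score
    e = rng.normal(0.0, np.sqrt(1.0 - h2_liability), n)  # residual liability
    liability = g + e
    case = liability > np.quantile(liability, 1.0 - prevalence)
    # AUC = P(genetic score of a random case exceeds that of a random
    # control), estimated via the Mann-Whitney rank-sum statistic.
    ranks = g.argsort().argsort() + 1.0
    n_case, n_ctrl = case.sum(), (~case).sum()
    return (ranks[case].sum() - n_case * (n_case + 1) / 2) / (n_case * n_ctrl)
```

As the abstract describes, discrimination is governed by heritability explained: at 10% prevalence, a score explaining 20% of liability variance yields a modest AUC, while one explaining 80% is highly discriminating.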
2.  Analysis of Biomarker Data: logs, odds ratios and ROC curves 
Current opinion in HIV and AIDS  2010;5(6):473-479.
Purpose of review
We discuss two data analysis issues for studies that use binary clinical outcomes (whether or not an event occurred): the choice of an appropriate scale and transformation when biomarkers are evaluated as explanatory factors in logistic regression; and assessing the ability of biomarkers to improve prediction accuracy for event risk.
Recent findings
Biomarkers with skewed distributions should be transformed before they are included as continuous covariates in logistic regression models. The utility of new biomarkers may be assessed by measuring the improvement in predicting event risk after adding the biomarkers to an existing model. The area under the receiver operating characteristic (ROC) curve (C-statistic) is often cited; it was developed for a different purpose, however, and may not address the clinically relevant questions. Measures of risk reclassification and risk prediction accuracy may be more appropriate.
Summary
The appropriate analysis of biomarkers depends on the research question. Odds ratios obtained from logistic regression describe associations of biomarkers with clinical events; failure to accurately transform the markers, however, may result in misleading estimates. Whilst the C-statistic is often used to assess the ability of new biomarkers to improve the prediction of event risk, other measures may be more suitable.
doi:10.1097/COH.0b013e32833ed742
PMCID: PMC3157029  PMID: 20978390
biomarker analysis; odds ratio; ROC curve; risk prediction accuracy; C-statistic
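The review's first recommendation, transforming skewed biomarkers before entering them as continuous covariates in logistic regression, can be sketched with simulated data. The biomarker here is hypothetical (log-normal, with the event's log-odds linear in the log of the marker), and the use of scikit-learn is our illustration, not the authors' analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(1)
n = 5_000

# Hypothetical right-skewed biomarker: log-normal, with the event's
# log-odds linear in log(marker) (slope 1.0, intercept -2.0).
log_marker = rng.normal(size=n)
marker = np.exp(log_marker)
p_event = 1.0 / (1.0 + np.exp(-(-2.0 + log_marker)))
event = rng.binomial(1, p_event)

# Fit the marker on its raw (misspecified) and log-transformed scales.
raw = LogisticRegression().fit(marker[:, None], event)
logged = LogisticRegression().fit(log_marker[:, None], event)

loss_raw = log_loss(event, raw.predict_proba(marker[:, None])[:, 1])
loss_logged = log_loss(event, logged.predict_proba(log_marker[:, None])[:, 1])
```

On the log scale the fitted slope recovers the generating value of about 1.0 and the model fits better (lower log-loss); on the raw scale the odds ratio per unit of marker is misleading, which is the review's point.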
3.  Lipoprotein Metabolism Indicators Improve Cardiovascular Risk Prediction 
PLoS ONE  2014;9(3):e92840.
Background
Cardiovascular disease risk increases when lipoprotein metabolism is dysfunctional. We have developed a computational model able to derive indicators of lipoprotein production, lipolysis, and uptake processes from a single lipoprotein profile measurement. This is the first study to investigate whether lipoprotein metabolism indicators can improve cardiovascular risk prediction and therapy management.
Methods and Results
We calculated lipoprotein metabolism indicators for 1981 subjects (145 cases, 1836 controls) from the Framingham Heart Study offspring cohort in which NMR lipoprotein profiles were measured. We applied a statistical learning algorithm using a support vector machine to select conventional risk factors and lipoprotein metabolism indicators that contributed to predicting risk for general cardiovascular disease. Risk prediction was quantified by the change in the area under the ROC curve (ΔAUC) and by risk reclassification (net reclassification improvement [NRI] and integrated discrimination improvement [IDI]). Two VLDL lipoprotein metabolism indicators (VLDLE and VLDLH) improved cardiovascular risk prediction. We added these indicators to a multivariate model with the best performing conventional risk markers. Our method significantly improved both CVD prediction and risk reclassification.
Conclusions
Two calculated VLDL metabolism indicators significantly improved cardiovascular risk prediction. These indicators may help to reduce prescription of unnecessary cholesterol-lowering medication, reducing costs and possible side-effects. For clinical application, further validation is required.
doi:10.1371/journal.pone.0092840
PMCID: PMC3965475  PMID: 24667559
4.  Assessment of Clinical Validity of a Breast Cancer Risk Model Combining Genetic and Clinical Information 
Background
The Gail model is widely used for the assessment of risk of invasive breast cancer based on recognized clinical risk factors. In recent years, a substantial number of single-nucleotide polymorphisms (SNPs) associated with breast cancer risk have been identified. However, it remains unclear how to effectively integrate clinical and genetic risk factors for risk assessment.
Methods
Seven SNPs associated with breast cancer risk were selected from the literature and genotyped in white non-Hispanic women in a nested case–control cohort of 1664 case patients and 1636 control subjects within the Women’s Health Initiative Clinical Trial. SNP risk scores were computed based on previously published odds ratios assuming a multiplicative model. Combined risk scores were calculated by multiplying Gail risk estimates by the SNP risk scores. The independence of Gail risk and SNP risk was evaluated by logistic regression. Calibration of relative risks was evaluated using the Hosmer–Lemeshow test. The performance of the combined risk scores was evaluated using receiver operating characteristic curves. The net reclassification improvement (NRI) was used to assess improvement in classification of women into low (<1.5%), intermediate (1.5%–2%), and high (>2%) categories of 5-year risk. All tests of statistical significance were two-sided.
Results
The SNP risk score was nearly independent of Gail risk. There was good agreement between predicted and observed SNP relative risks. In the analysis for receiver operating characteristic curves, the combined risk score was more discriminating, with area under the curve of 0.594 compared with area under the curve of 0.557 for Gail risk alone (P < .001). Classification also improved for 5.6% of case patients and 2.9% of control subjects, showing an NRI value of 0.085 (P = 1.0 × 10⁻⁵). Focusing on women with intermediate Gail risk resulted in an improved NRI of 0.195 (P = 8.6 × 10⁻⁵).
Conclusions
Combining validated common genetic risk factors with clinical risk factors resulted in modest improvement in classification of breast cancer risks in white non-Hispanic postmenopausal women. Classification performance was further improved by focusing on women at intermediate risk.
doi:10.1093/jnci/djq388
PMCID: PMC2970578  PMID: 20956782
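The category-based NRI reported in this study has a simple form: the net fraction of case patients reclassified upward plus the net fraction of control subjects reclassified downward. A minimal sketch follows, assuming the study's 5-year risk cutoffs of 1.5% and 2%; the function name is ours, not from the paper.

```python
import numpy as np

def categorical_nri(risk_old, risk_new, event, cutoffs=(0.015, 0.02)):
    """Categorical net reclassification improvement: net proportion of
    cases moving to a higher risk category plus net proportion of
    non-cases moving to a lower one. Default cutoffs mirror the study's
    5-year risk categories (<1.5%, 1.5%-2%, >2%)."""
    cat_old = np.digitize(risk_old, cutoffs)
    cat_new = np.digitize(risk_new, cutoffs)
    event = np.asarray(event, dtype=bool)
    up = cat_new > cat_old
    down = cat_new < cat_old
    nri_cases = up[event].mean() - down[event].mean()
    nri_controls = down[~event].mean() - up[~event].mean()
    return nri_cases + nri_controls
```

For example, if half of the cases move up a category and half of the controls move down, with no movement in the wrong direction, the NRI is 0.5 + 0.5 = 1.0.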
5.  Risk Models to Predict Chronic Kidney Disease and Its Progression: A Systematic Review 
PLoS Medicine  2012;9(11):e1001344.
A systematic review of risk prediction models conducted by Justin Echouffo-Tcheugui and Andre Kengne examines the evidence base for prediction of chronic kidney disease risk and its progression, and suitability of such models for clinical use.
Background
Chronic kidney disease (CKD) is common, and associated with increased risk of cardiovascular disease and end-stage renal disease, which are potentially preventable through early identification and treatment of individuals at risk. Although risk factors for occurrence and progression of CKD have been identified, their utility for CKD risk stratification through prediction models remains unclear. We critically assessed risk models to predict CKD and its progression, and evaluated their suitability for clinical use.
Methods and Findings
We systematically searched MEDLINE and Embase (1 January 1980 to 20 June 2012). Dual review was conducted to identify studies that reported on the development, validation, or impact assessment of a model constructed to predict the occurrence/presence of CKD or progression to advanced stages. Data were extracted on study characteristics, risk predictors, discrimination, calibration, and reclassification performance of models, as well as validation and impact analyses. We included 26 publications reporting on 30 CKD occurrence prediction risk scores and 17 CKD progression prediction risk scores. The vast majority of CKD risk models had acceptable-to-good discriminatory performance (area under the receiver operating characteristic curve >0.70) in the derivation sample. Calibration was less commonly assessed, but overall was found to be acceptable. Only eight CKD occurrence and five CKD progression risk models have been externally validated, displaying modest-to-acceptable discrimination. Whether novel biomarkers of CKD (circulatory or genetic) can improve prediction largely remains unclear, and impact studies of CKD prediction models have not yet been conducted. Limitations of risk models include the lack of ethnic diversity in derivation samples, and the scarcity of validation studies. The review is limited by the lack of an agreed-on system for rating prediction models, and the difficulty of assessing publication bias.
Conclusions
The development and clinical application of renal risk scores is in its infancy; however, the discriminatory performance of existing tools is acceptable. The effect of using these models in practice is still to be explored.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Chronic kidney disease (CKD)—the gradual loss of kidney function—is increasingly common worldwide. In the US, for example, about 26 million adults have CKD, and millions more are at risk of developing the condition. Throughout life, small structures called nephrons inside the kidneys filter waste products and excess water from the blood to make urine. If the nephrons stop working because of injury or disease, the rate of blood filtration decreases, and dangerous amounts of waste products such as creatinine build up in the blood. Symptoms of CKD, which rarely occur until the disease is very advanced, include tiredness, swollen feet and ankles, puffiness around the eyes, and frequent urination, especially at night. There is no cure for CKD, but progression of the disease can be slowed by controlling high blood pressure and diabetes, both of which cause CKD, and by adopting a healthy lifestyle. The same interventions also reduce the chances of CKD developing in the first place.
Why Was This Study Done?
CKD is associated with an increased risk of end-stage renal disease, which is treated with dialysis or by kidney transplantation (renal replacement therapies), and of cardiovascular disease. These life-threatening complications are potentially preventable through early identification and treatment of CKD, but most people present with advanced disease. Early identification would be particularly useful in developing countries, where renal replacement therapies are not readily available and resources for treating cardiovascular problems are limited. One way to identify people at risk of a disease is to use a “risk model.” Risk models are constructed by testing the ability of different combinations of risk factors that are associated with a specific disease to identify those individuals in a “derivation sample” who have the disease. The model is then validated on an independent group of people. In this systematic review (a study that uses predefined criteria to identify all the research on a given topic), the researchers critically assess the ability of existing CKD risk models to predict the occurrence of CKD and its progression, and evaluate their suitability for clinical use.
What Did the Researchers Do and Find?
The researchers identified 26 publications reporting on 30 risk models for CKD occurrence and 17 risk models for CKD progression that met their predefined criteria. The risk factors most commonly included in these models were age, sex, body mass index, diabetes status, systolic blood pressure, serum creatinine, protein in the urine, and serum albumin or total protein. Nearly all the models had acceptable-to-good discriminatory performance (a measure of how well a model separates people who have a disease from people who do not have the disease) in the derivation sample. Not all the models had been calibrated (assessed for whether the average predicted risk within a group matched the proportion that actually developed the disease), but in those that had been assessed calibration was good. Only eight CKD occurrence and five CKD progression risk models had been externally validated; discrimination in the validation samples was modest-to-acceptable. Finally, very few studies had assessed whether adding extra variables to CKD risk models (for example, genetic markers) improved prediction, and none had assessed the impact of adopting CKD risk models on the clinical care and outcomes of patients.
What Do These Findings Mean?
These findings suggest that the development and clinical application of CKD risk models is still in its infancy. Specifically, these findings indicate that the existing models need to be better calibrated and need to be externally validated in different populations (most of the models were tested only in predominantly white populations) before they are incorporated into guidelines. The impact of their use on clinical outcomes also needs to be assessed before their widespread use is recommended. Such research is worthwhile, however, because of the potential public health and clinical applications of well-designed risk models for CKD. Such models could be used to identify segments of the population that would benefit most from screening for CKD, for example. Moreover, risk communication to patients could motivate them to adopt a healthy lifestyle and to adhere to prescribed medications, and the use of models for predicting CKD progression could help clinicians tailor disease-modifying therapies to individual patient needs.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001344.
This study is further discussed in a PLOS Medicine Perspective by Maarten Taal
The US National Kidney and Urologic Diseases Information Clearinghouse provides information about all aspects of kidney disease; the US National Kidney Disease Education Program provides resources to help improve the understanding, detection, and management of kidney disease (in English and Spanish)
The UK National Health Service Choices website provides information for patients on chronic kidney disease, including some personal stories
The US National Kidney Foundation, a not-for-profit organization, provides information about chronic kidney disease (in English and Spanish)
The not-for-profit UK National Kidney Federation provides support and information for patients with kidney disease and for their carers, including a selection of patient experiences of kidney disease
World Kidney Day, a joint initiative between the International Society of Nephrology and the International Federation of Kidney Foundations, aims to raise awareness about kidneys and kidney disease
doi:10.1371/journal.pmed.1001344
PMCID: PMC3502517  PMID: 23185136
6.  Inflammatory Markers and Poor Outcome after Stroke: A Prospective Cohort Study and Systematic Review of Interleukin-6 
PLoS Medicine  2009;6(9):e1000145.
In a prospective cohort study of patient outcomes following stroke, William Whiteley and colleagues find that markers of inflammatory response are associated with poor outcomes. However, addition of these markers to existing prognostic models does not improve outcome prediction.
Background
The objective of this study was to determine whether: (a) markers of acute inflammation (white cell count, glucose, interleukin-6, C-reactive protein, and fibrinogen) are associated with poor outcome after stroke and (b) the addition of markers to previously validated prognostic models improves prediction of poor outcome.
Methods and Findings
We prospectively recruited patients between 2002 and 2005. Clinicians assessed patients and drew blood for inflammatory markers. Patients were followed up by postal questionnaire for poor outcome (a score of >2 on the modified Rankin Scale) and death through the General Register Office (Scotland) at 6 mo. We performed a systematic review of the literature and meta-analysis of the association between interleukin-6 and poor outcome after stroke to place our study in the context of previous research. We recruited 844 patients; mortality data were available in 844 (100%) and functional outcome in 750 (89%). After appropriate adjustment, the odds ratios for the association of markers and poor outcome (comparing the upper and the lower third) were interleukin-6, 3.1 (95% CI: 1.9–5.0); C-reactive protein, 1.9 (95% CI: 1.2–3.1); fibrinogen, 1.5 (95% CI: 1.0–2.36); white cell count, 2.1 (95% CI: 1.3–3.4); and glucose, 1.3 (95% CI: 0.8–2.1). The results for interleukin-6 were similar to other studies. However, the addition of inflammatory marker levels to validated prognostic models did not materially improve model discrimination, calibration, or reclassification for prediction of poor outcome after stroke.
Conclusions
Raised levels of markers of the acute inflammatory response after stroke are associated with poor outcomes. However, the addition of these markers to a previously validated stroke prognostic model did not improve the prediction of poor outcome. Whether inflammatory markers are useful in prediction of recurrent stroke or other vascular events is a separate question, which requires further study.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Every year, 15 million people have a stroke. In the US alone, someone has a stroke every 40 seconds and someone dies from a stroke every 3–4 minutes. Stroke occurs when the blood supply to the brain is suddenly interrupted by a blood clot blocking a blood vessel in the brain (ischemic stroke, the commonest type of stroke) or by a blood vessel in the brain bursting (hemorrhagic stroke). Deprived of the oxygen normally carried to them by the blood, the brain cells near the blockage die. The symptoms of stroke depend on which part of the brain is damaged but include sudden weakness or paralysis along one side of the body, vision loss in one or both eyes, and confusion or trouble speaking or understanding speech. Anyone experiencing these symptoms should seek medical assistance immediately because prompt treatment can limit the damage to the brain. Risk factors for stroke include age (three-quarters of strokes occur in people over 65 years old), high blood pressure, and heart disease.
Why Was This Study Done?
Many people are left with permanent disabilities after a stroke. An accurate way to predict the likely long-term outcome (prognosis) for individual patients would help clinicians manage their patients and help relatives and patients come to terms with their changed circumstances. Clinicians can get some idea of their patients' likely outcomes by assessing six simple clinical variables. These include the ability to lift both arms and awareness of the present situation. But could the inclusion of additional variables improve the predictive power of this simple prognostic model? There is some evidence that high levels in the blood of inflammatory markers (for example, interleukin-6 and C-reactive protein) are associated with poor outcomes after stroke—inflammation is the body's response to infection and to damage. In this prospective cohort study, the researchers investigate whether inflammatory markers are associated with poor outcome after stroke and whether the addition of these markers to the six-variable prognostic model improves its predictive power. Prospective cohort studies enroll a group of participants and follow their subsequent progress.
What Did the Researchers Do and Find?
The researchers recruited 844 patients who had had a stroke (mainly mild ischemic strokes) in Edinburgh. Each patient was assessed soon after the stroke by a clinician and blood was taken for the measurement of inflammatory markers. Six months after the stroke, the patient or their relatives completed a postal questionnaire that assessed their progress. Information about patient deaths was obtained from the General Register Office for Scotland. Dependency on others for the activities of daily life or dying was recorded as a poor outcome. In their statistical analysis of these data, the researchers found that raised levels of several inflammatory markers increased the likelihood of a poor outcome. For example, after allowing for age and other factors, individuals with interleukin-6 levels in the upper third of the measured range were three times as likely to have a poor outcome as patients with interleukin-6 levels in the bottom third of the range. A systematic search of the literature revealed that previous studies that had looked at the potential association between interleukin-6 levels and outcome after stroke had found similar results. Finally, the researchers found that the addition of inflammatory marker levels to the six-variable prognostic model did not substantially improve its ability to predict outcome after stroke for this cohort of patients.
What Do These Findings Mean?
These findings provide additional support for the idea that increased levels of inflammatory markers are associated with a poor outcome after stroke. However, because patients with infections were not excluded from the study, infection may be responsible for part of the observed association. Importantly, these findings also show that although the inclusion of inflammatory markers in the six-variable prognostic model slightly improves its ability to predict outcome, the magnitude of this improvement is too small to warrant the use of these markers in routine practice. Whether the measurement of inflammatory markers might be useful in the prediction of recurrent stroke—at least a quarter of people who survive a stroke will have another one within 5 years—requires further study.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000145.
This study is further discussed in a PLoS Medicine Perspective by Len Kritharides
The US National Institute of Neurological Disorders and Stroke provides information about all aspects of stroke (in English and Spanish); the Know Stroke site provides educational materials about stroke prevention, treatment, and rehabilitation (in English and Spanish)
The Internet Stroke Center provides detailed information about stroke for patients, families and health professionals (in English and Spanish)
The UK National Health Service also provides information for patients and their families about stroke (in several languages)
MedlinePlus provides links to further resources and advice about stroke (in English and Spanish)
The six simple variable model for prediction of death or disability after stroke is available here: http://dcnapp1.dcn.ed.ac.uk/scope/
doi:10.1371/journal.pmed.1000145
PMCID: PMC2730573  PMID: 19901973
7.  Potential Impact of Adding Genetic Markers to Clinical Parameters in Predicting Prostate Biopsy Outcomes in Men Following an Initial Negative Biopsy: Findings from the REDUCE Trial 
European urology  2012;62(6):953-961.
Background
Several germline single nucleotide polymorphisms (SNPs) have been consistently associated with prostate cancer (PCa) risk.
Objective
To determine whether there is an improvement in PCa risk prediction by adding these SNPs to existing predictors of PCa.
Design, setting, and participants
Subjects included men in the placebo arm of the randomized Reduction by Dutasteride of Prostate Cancer Events (REDUCE) trial in whom germline DNA was available. All men had an initial negative prostate biopsy and underwent study-mandated biopsies at 2 yr and 4 yr. Predictive performance of baseline clinical parameters and/or a genetic score based on 33 established PCa risk-associated SNPs was evaluated.
Outcome measurements and statistical analysis
Area under the receiver operating characteristic curves (AUC) were used to compare different models with different predictors. Net reclassification improvement (NRI) and decision curve analysis (DCA) were used to assess changes in risk prediction by adding genetic markers.
Results and limitations
Among 1654 men, genetic score was a significant predictor of positive biopsy, even after adjusting for known clinical variables and family history (p = 3.41 × 10⁻⁸). At 0.59, the AUC for the genetic score exceeded that of any other PCa predictor. Adding the genetic score to the best clinical model improved the AUC from 0.62 to 0.66 (p < 0.001), reclassified PCa risk in 33% of men (NRI: 0.10; p = 0.002), resulted in higher net benefit from DCA, and decreased the number of biopsies needed to detect the same number of PCa instances. The benefit of adding the genetic score was greatest among men at intermediate risk (25th percentile to 75th percentile). Similar results were found for high-grade (Gleason score ≥7) PCa. A major limitation of this study was its focus on white patients only.
Conclusions
Adding genetic markers to current clinical parameters may improve PCa risk prediction. The improvement is modest but may be helpful for better determining the need for repeat prostate biopsy. The clinical impact of these results requires further study.
doi:10.1016/j.eururo.2012.05.006
PMCID: PMC3568765  PMID: 22652152
Prostate cancer; Genetics; AUC; Detection rate; Reclassification; SNPs; Prospective study; Clinical trial
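A genetic score of the kind used here, built from published per-allele odds ratios under a multiplicative (log-additive) model, is the weighted sum of risk-allele counts. A hedged sketch follows; the trial's 33 SNP odds ratios are not reproduced, so the weights below are placeholders for illustration only.

```python
import numpy as np

def genetic_risk_score(genotypes, odds_ratios):
    """Log-additive (multiplicative) genetic score: each SNP contributes
    its risk-allele count (0, 1, or 2) times the log of its published
    per-allele odds ratio. `genotypes` is an (n_subjects, n_snps) array."""
    return np.asarray(genotypes) @ np.log(odds_ratios)

# Placeholder odds ratios for three hypothetical SNPs (not the trial's 33).
ors = np.array([1.10, 1.25, 1.40])
scores = genetic_risk_score([[0, 1, 2], [2, 0, 0]], ors)
```

Exponentiating the score (or a difference of scores) recovers the combined odds ratio implied by the multiplicative model.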
8.  Three New Genetic Loci (R1210C in CFH, Variants in COL8A1 and RAD51B) Are Independently Related to Progression to Advanced Macular Degeneration 
PLoS ONE  2014;9(1):e87047.
Objectives
To assess the independent impact of new genetic variants on conversion to advanced stages of AMD, controlling for established risk factors, and to determine the contribution of genes in predictive models.
Methods
In this prospective longitudinal study of 2765 individuals, 777 subjects progressed to neovascular disease (NV) or geographic atrophy (GA) in either eye over 12 years. Recently reported genetic loci were assessed for their independent effects on incident advanced AMD after controlling for 6 established loci in 5 genes, and demographic, behavioral, and macular characteristics. New variants which remained significantly related to progression were then added to a final multivariate model to assess their independent effects. The contribution of genes to risk models was assessed using reclassification tables by determining risk within cross-classified quintiles for alternative models.
Results
Three new genetic variants were significantly related to progression: rare variant R1210C in CFH (hazard ratio [HR] 2.5, 95% confidence interval [CI] 1.2–5.3, P = 0.01), and common variants in genes COL8A1 (HR 2.0, 95% CI 1.1–3.5, P = 0.02) and RAD51B (HR 0.8, 95% CI 0.60–0.97, P = 0.03). The area under the curve statistic (AUC) was significantly higher for the 9-gene model (0.884) than for the 0-gene model (0.873), P = 0.01. AUCs for the 9- vs 6-gene models were not significantly different, but reclassification analyses indicated significant added information for more genes, with adjusted odds ratios (OR) for progression within 5 years per one-quintile increase in risk score of 2.7, P<0.001 for the 9- vs 6-loci model, and OR 3.5, P<0.001 for the 9- vs 0-gene model. Similar results were seen for NV and GA.
Conclusions
Rare variant CFH R1210C and common variants in COL8A1 and RAD51B plus six genes in previous models contribute additional predictive information for advanced AMD beyond macular and behavioral phenotypes.
doi:10.1371/journal.pone.0087047
PMCID: PMC3909074  PMID: 24498017
9.  Predictive Value of Updating Framingham Risk Scores with Novel Risk Markers in the U.S. General Population 
PLoS ONE  2014;9(2):e88312.
Background
According to population-based cohort studies, CT coronary calcium score (CTCS), carotid intima-media thickness (cIMT), high-sensitivity C-reactive protein (CRP), and ankle-brachial index (ABI) are promising novel risk markers for improving cardiovascular risk assessment. Their impact in the U.S. general population is, however, uncertain. Our aim was to estimate the predictive value of four novel cardiovascular risk markers for the U.S. general population.
Methods and Findings
Risk profiles, CRP and ABI data of 3,736 asymptomatic subjects aged 40 or older from the National Health and Nutrition Examination Survey (NHANES) 2003–2004 exam were used along with predicted CTCS and cIMT values. For each subject, we calculated 10-year cardiovascular risks with and without each risk marker. Event rates adjusted for competing risks were obtained by microsimulation. We assessed the impact of updated 10-year risk scores by reclassification and C-statistics. In the study population (mean age 56±11 years, 48% male), 70% (80%) were at low (<10%), 19% (14%) at intermediate (≥10–<20%), and 11% (6%) at high (≥20%) 10-year CVD (CHD) risk. Net reclassification improvement was highest after updating 10-year CVD risk with CTCS: 0.10 (95% CI 0.02–0.19). The C-statistic for 10-year CVD risk increased from 0.82 by 0.02 (95% CI 0.01–0.03) with CTCS. Reclassification occurred most often in those at intermediate risk: with CTCS, 36% (38%) moved to low and 22% (30%) to high CVD (CHD) risk. Improvements with other novel risk markers were limited.
Conclusions
Only CTCS appeared to have significant incremental predictive value in the U.S. general population, especially in those at intermediate risk. In future research, cost-effectiveness analyses should be considered for evaluating novel cardiovascular risk assessment strategies.
doi:10.1371/journal.pone.0088312
PMCID: PMC3928195  PMID: 24558385
10.  Are Markers of Inflammation More Strongly Associated with Risk for Fatal Than for Nonfatal Vascular Events? 
PLoS Medicine  2009;6(6):e1000099.
In a secondary analysis of a randomized trial comparing pravastatin versus placebo for the prevention of coronary and cerebral events in an elderly at-risk population, Naveed Sattar and colleagues find that inflammatory markers may be more strongly associated with risk of fatal vascular events than nonfatal vascular events.
Background
Circulating inflammatory markers may more strongly relate to risk of fatal versus nonfatal cardiovascular disease (CVD) events, but robust prospective evidence is lacking. We tested whether interleukin (IL)-6, C-reactive protein (CRP), and fibrinogen more strongly associate with fatal compared to nonfatal myocardial infarction (MI) and stroke.
Methods and Findings
In the Prospective Study of Pravastatin in the Elderly at Risk (PROSPER), baseline inflammatory markers in up to 5,680 men and women aged 70–82 y were related to risk for endpoints: nonfatal CVD (i.e., nonfatal MI and nonfatal stroke [n = 672]), fatal CVD (n = 190), death from other CV causes (n = 38), and non-CVD mortality (n = 300), over 3.2-y follow-up. Elevations in baseline IL-6 levels were significantly (p = 0.0009; competing risks model analysis) more strongly associated with fatal CVD (hazard ratio [HR] for 1 log unit increase in IL-6 1.75, 95% confidence interval [CI] 1.44–2.12) than with risk of nonfatal CVD (1.17, 95% CI 1.04–1.31), in analyses adjusted for treatment allocation. The findings were consistent in a fully adjusted model. These broad trends were similar for CRP and, to a lesser extent, for fibrinogen. The results were also similar in placebo and statin recipients (i.e., no interaction). The C-statistic for fatal CVD using traditional risk factors was significantly (+0.017; p<0.0001) improved by inclusion of IL-6 but not so for nonfatal CVD events (p = 0.20).
Conclusions
In PROSPER, inflammatory markers, in particular IL-6 and CRP, are more strongly associated with risk of fatal vascular events than nonfatal vascular events. These novel observations may have important implications for a better understanding of the aetiology of CVD mortality, and have potential clinical relevance.
Please see later in the article for Editors' Summary
Editors' Summary
Background
Cardiovascular disease (CVD)—disease that affects the heart and/or the blood vessels—is a common cause of death in developed countries. In the USA, for example, the leading cause of death is coronary heart disease (CHD), a CVD in which narrowing of the heart's blood vessels by “atherosclerotic plaques” (fatty deposits that build up with age) slows the blood supply to the heart and may eventually cause a heart attack (myocardial infarction). Other types of CVD include stroke (in which atherosclerotic plaques interrupt the brain's blood supply) and heart failure (a condition in which the heart cannot pump enough blood to the rest of the body). Smoking, high blood pressure, high blood levels of cholesterol (a type of fat), having diabetes, and being overweight all increase a person's risk of developing CVD. Tools such as the “Framingham risk calculator” take these and other risk factors into account to assess an individual's overall risk of CVD, which can be reduced by taking drugs to reduce blood pressure or cholesterol levels (for example, pravastatin) and by making lifestyle changes.
Why Was This Study Done?
Inflammation (an immune response to injury) in the walls of blood vessels is thought to play a role in the development of atherosclerotic plaques. Consistent with this idea, several epidemiological studies (investigations of the causes and distribution of disease in populations) have shown that people with high circulating levels of markers of inflammation such as interleukin-6 (IL-6), C-reactive protein (CRP), and fibrinogen are more likely to have a stroke or a heart attack (a CVD event) than people with low levels of these markers. Although these studies have generally lumped together fatal and nonfatal CVD events, some evidence suggests that circulating inflammatory markers may be more strongly associated with fatal than with nonfatal CVD events. If this is the case, the mechanisms that lead to fatal and nonfatal CVD events may be subtly different and knowing about these differences could improve both the prevention and treatment of CVD. In this study, the researchers investigate this possibility using data collected in the Prospective Study of Pravastatin in the Elderly at Risk (PROSPER; a trial that examined pravastatin's effect on CVD development among 70–82 year olds with pre-existing CVD or an increased risk of CVD because of smoking, high blood pressure, or diabetes).
What Did the Researchers Do and Find?
The researchers used several statistical models to examine the association between baseline levels of IL-6, CRP, and fibrinogen in the trial participants and nonfatal CVD events (nonfatal heart attacks and nonfatal strokes), fatal CVD events, death from other types of CVD, and deaths from other causes during 3.2 years of follow-up. Increased levels of all three inflammatory markers were more strongly associated with fatal CVD than with nonfatal CVD after adjustment for treatment allocation and for other established CVD risk factors but this pattern was strongest for IL-6. Thus, a unit increase in the log of IL-6 levels increased the risk of fatal CVD by about three-quarters but increased the risk of nonfatal CVD by significantly less. The researchers also investigated whether including these inflammatory markers in tools designed to predict an individual's CVD risk could improve the tool's ability to distinguish between individuals with a high and low risk. The addition of IL-6 to established risk factors, they report, increased this discriminatory ability for fatal CVD but not for nonfatal CVD.
What Do These Findings Mean?
These findings indicate that, at least for the elderly at-risk patients who were included in PROSPER, inflammatory markers are more strongly associated with the risk of a fatal heart attack or stroke than with nonfatal CVD events. These findings need to be confirmed in younger populations and larger studies also need to be done to discover whether the same association holds when fatal heart attacks and fatal strokes are considered separately. Nevertheless, the present findings suggest that inflammation may specifically help to promote the development of serious, potentially fatal CVD and should stimulate improved research into the use of inflammation markers to predict risk of deaths from CVD.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000099.
The MedlinePlus Encyclopedia has pages on coronary heart disease, stroke, and atherosclerosis (in English and Spanish)
MedlinePlus provides links to many other sources of information on heart diseases, vascular diseases, and stroke (in English and Spanish)
Information for patients and caregivers is provided by the American Heart Association on all aspects of cardiovascular disease, including information on inflammation and heart disease
Information is available from the British Heart Foundation on heart disease and keeping the heart healthy
More information about PROSPER is available on the Web site of the Vascular Biochemistry Department of the University of Glasgow
doi:10.1371/journal.pmed.1000099
PMCID: PMC2694359  PMID: 19554082
11.  An assessment of the relationship between clinical utility and predictive ability measures and the impact of mean risk in the population 
Background
Measures of clinical utility (net benefit and event free life years) have been recommended in the assessment of a new predictor in a risk prediction model. However, it is not clear how they relate to the measures of predictive ability and reclassification, such as the c-statistic and Net Reclassification Improvement (NRI), or how these measures are affected by differences in mean risk between populations when a fixed cutpoint to define high risk is assumed.
Methods
We examined the relationship between measures of clinical utility (net benefit, event free life years) and predictive ability (c-statistic, binary c-statistic, continuous NRI(0), NRI with two cutpoints, binary NRI) using simulated data and the Framingham dataset.
Results
In the analysis of simulated data, the addition of a new predictor tended to result in more people being treated when the mean risk was less than the cutpoint, and fewer people being treated for mean risks beyond the cutpoint. The reclassification and clinical utility measures showed similar relationships with mean risk when the mean risk was less than the cutpoint and the baseline model was not strong. However, when the mean risk was greater than the cutpoint, or the baseline model was strong, the reclassification and clinical utility measures diverged in their relationship with mean risk.
Although the risk of CVD was lower for women compared to men in the Framingham dataset, the measures of predictive ability, reclassification and clinical utility were all larger for women. The difference in these results was, in part, due to the larger hazard ratio associated with the additional risk predictor (systolic blood pressure) for women.
Conclusion
Measures such as the c-statistic and the measures of reclassification do not capture the consequences of implementing different prediction models. We do not recommend their use in evaluating which new predictors may be clinically useful in a particular population. We recommend that a measure such as net benefit or EFLY is calculated and, where appropriate, the measure is weighted to account for differences in the distribution of risks between the study population and the population in which the new predictors will be implemented.
doi:10.1186/1471-2288-14-86
PMCID: PMC4105158  PMID: 24989719
Biomarkers; Net reclassification improvement (NRI); Area under curve (AUC); Net benefit; Event free life years (EFLY); Risk assessment; Prediction
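The net benefit measure recommended above has a simple closed form: the true-positive rate among the population minus the false-positive rate weighted by the odds of the risk cutpoint. A minimal sketch (function name and toy data are illustrative, not from the study):

```python
import numpy as np

def net_benefit(y_true, risk, threshold):
    """Net benefit of treating everyone whose predicted risk exceeds
    `threshold` (decision-curve form): NB = TP/n - FP/n * t/(1 - t)."""
    y_true = np.asarray(y_true, dtype=bool)
    treat = np.asarray(risk) >= threshold
    n = len(y_true)
    tp = np.sum(treat & y_true)   # events correctly flagged high risk
    fp = np.sum(treat & ~y_true)  # non-events flagged high risk
    return tp / n - fp / n * threshold / (1 - threshold)

# Toy check: a model is useful at a 20% high-risk cutpoint only if its
# net benefit beats both treat-none (NB = 0) and treat-all.
rng = np.random.default_rng(0)
risk = rng.uniform(0, 1, 2000)
y = rng.uniform(0, 1, 2000) < risk  # simulated well-calibrated risks
print(net_benefit(y, risk, 0.20), net_benefit(y, np.ones_like(risk), 0.20))
```

Unlike the c-statistic, this quantity changes with the event rate of the population it is computed in, which is the paper's motivation for weighting it to the target population.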
12.  Net Reclassification Indices for Evaluating Risk-Prediction Instruments: A Critical Review 
Epidemiology (Cambridge, Mass.)  2014;25(1):114-121.
Net reclassification indices have recently become popular statistics for measuring the prediction increment of new biomarkers. We review the various types of net reclassification indices and their correct interpretations. We evaluate the advantages and disadvantages of quantifying the prediction increment with these indices. For pre-defined risk categories, we relate net reclassification indices to existing measures of the prediction increment. We also consider statistical methodology for constructing confidence intervals for net reclassification indices and evaluate the merits of hypothesis testing based on such indices. We recommend that investigators using net reclassification indices should report them separately for events (cases) and nonevents (controls). When there are two risk categories, the components of net reclassification indices are the same as the changes in the true-positive and false-positive rates. We advocate use of true- and false-positive rates and suggest it is more useful for investigators to retain the existing, descriptive terms. When there are three or more risk categories, we recommend against net reclassification indices because they do not adequately account for clinically important differences in shifts among risk categories. The category-free net reclassification index is a new descriptive device designed to avoid pre-defined risk categories. However, it suffers from many of the same problems as other measures such as the area under the receiver operating characteristic curve. In addition, the category-free index can mislead investigators by overstating the incremental value of a biomarker, even in independent validation data. When investigators want to test a null hypothesis of no prediction increment, the well-established tests for coefficients in the regression model are superior to the net reclassification index. 
If investigators want to use net reclassification indices, confidence intervals should be calculated using bootstrap methods rather than published variance formulas. The preferred single-number summary of the prediction increment is the improvement in net benefit.
doi:10.1097/EDE.0000000000000018
PMCID: PMC3918180  PMID: 24240655
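The review's observation that, with two risk categories, the NRI components are exactly the changes in the true- and false-positive rates can be verified numerically. A hedged sketch with invented classifications (all names are illustrative):

```python
import numpy as np

def two_category_nri(y, old_high, new_high):
    """Event and nonevent components of the two-category NRI.
    `old_high`/`new_high` are high-risk classifications under the
    old and new models."""
    y = np.asarray(y, dtype=bool)
    old_high = np.asarray(old_high, dtype=bool)
    new_high = np.asarray(new_high, dtype=bool)
    up = new_high & ~old_high    # moved up to high risk
    down = ~new_high & old_high  # moved down to low risk
    nri_events = up[y].mean() - down[y].mean()        # equals delta TPR
    nri_nonevents = down[~y].mean() - up[~y].mean()   # equals -delta FPR
    return nri_events, nri_nonevents

y        = [1, 1, 1, 0, 0, 0, 0, 0]
old_high = [1, 0, 0, 1, 1, 0, 0, 0]
new_high = [1, 1, 0, 1, 0, 0, 0, 0]
ev, ne = two_category_nri(y, old_high, new_high)
# Event NRI equals the change in true-positive rate (1/3 here);
# nonevent NRI equals minus the change in false-positive rate (1/5 here).
print(ev, ne)
```

This is why the authors suggest simply reporting the true- and false-positive rate changes under their familiar names rather than as an NRI.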
13.  Critical appraisal of CRP measurement for the prediction of coronary heart disease events: new data and systematic review of 31 prospective cohorts 
Background Non-uniform reporting of relevant relationships and metrics hampers critical appraisal of the clinical utility of C-reactive protein (CRP) measurement for prediction of later coronary events.
Methods We evaluated the predictive performance of CRP in the Northwick Park Heart Study (NPHS-II) and the Edinburgh Artery Study (EAS) comparing discrimination by area under the ROC curve (AUC), calibration and reclassification. We set the findings in the context of a systematic review of published studies comparing different available and imputed measures of prediction. Risk estimates per-quantile of CRP were pooled using a random effects model to infer the shape of the CRP-coronary event relationship.
Results NPHS-II and EAS (3441 individuals, 309 coronary events): CRP alone provided modest discrimination for coronary heart disease (AUC 0.61 and 0.62 in NPHS-II and EAS, respectively) and only modest improvement in the discrimination of a Framingham-based risk score (FRS) (increment in AUC 0.04 and –0.01, respectively). Risk models based on FRS alone and FRS + CRP were both well calibrated, and the net reclassification improvement (NRI) was 8.5% in NPHS-II and 8.8% in EAS with four risk categories, falling to 4.9% and 3.0% for a 10-year coronary disease risk threshold of 15%. Systematic review (31 prospective studies, 84 063 individuals, 11 252 coronary events): pooled inferred values for the AUC for CRP alone were 0.59 (0.57, 0.61), 0.59 (0.57, 0.61) and 0.57 (0.54, 0.61) for studies of <5, 5–10 and >10 years of follow-up, respectively. Evidence from 13 studies (7201 cases) indicated that CRP did not consistently improve performance of the Framingham risk score when assessed by discrimination, with AUC increments in the range 0–0.15. Evidence from six studies (2430 cases) showed that CRP provided statistically significant but quantitatively small improvement in calibration of models based on established risk factors in some but not all studies. The wide overlap of CRP values between people who later suffered events and those who did not appeared to be explained by the consistently log-normal distribution of CRP and a graded, continuous increment in coronary risk across the whole range of values without a threshold, such that a large proportion of events occurred among the many individuals with near-average levels of CRP.
Conclusions CRP does not perform better than the Framingham risk equation for discrimination. The improvement in risk stratification or reclassification from addition of CRP to models based on established risk factors is small and inconsistent. Guidance on the clinical use of CRP measurement in the prediction of coronary events may require updating in light of this large comparative analysis.
doi:10.1093/ije/dyn217
PMCID: PMC2639366  PMID: 18930961
C-reactive protein; prediction; coronary heart disease; primary prevention; risk stratification
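The pooled AUCs of roughly 0.6 for CRP alone correspond to a small standardized mean difference in log-CRP between cases and controls. Under a binormal model on the log scale (an assumption for illustration, not the paper's method), the AUC has a closed form:

```python
import math

def binormal_auc(mu_cases, mu_controls, sd_cases, sd_controls):
    """AUC when the marker is normal (e.g. log-CRP) in both cases and
    controls: AUC = Phi((mu1 - mu0) / sqrt(sd1^2 + sd0^2))."""
    z = (mu_cases - mu_controls) / math.sqrt(sd_cases**2 + sd_controls**2)
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# An AUC near 0.6, as pooled for CRP alone, corresponds to a
# case-control difference in log-CRP of only about 0.36 SD (illustrative).
print(binormal_auc(0.36, 0.0, 1.0, 1.0))
```

A difference this small implies heavily overlapping distributions, consistent with the authors' point that many events occur among people with near-average CRP.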
14.  Repeat Bone Mineral Density Screening and Prediction of Hip and Major Osteoporotic Fracture 
IMPORTANCE
Screening for osteoporosis with bone mineral density (BMD) is recommended for older adults. It is unclear whether repeating a BMD screening test improves fracture risk assessment.
OBJECTIVES
To determine whether changes in BMD after 4 years provide additional information on fracture risk beyond baseline BMD and to quantify the change in fracture risk classification after a second BMD measure.
DESIGN, SETTING, AND PARTICIPANTS
Population-based cohort study involving 310 men and 492 women from the Framingham Osteoporosis Study with 2 measures of femoral neck BMD taken from 1987 through 1999.
MAIN OUTCOMES AND MEASURES
Risk of hip or major osteoporotic fracture through 2009, or up to 12 years following the second BMD measure.
RESULTS
Mean age was 74.8 years. The mean (SD) BMD change was −0.6% per year (1.8%). Over a median follow-up of 9.6 years, 76 participants experienced an incident hip fracture and 113 participants experienced a major osteoporotic fracture. Annual percent BMD change per SD decrease was associated with risk of hip fracture (hazard ratio [HR], 1.43 [95% CI, 1.16 to 1.78]) and major osteoporotic fracture (HR, 1.21 [95% CI, 1.01 to 1.45]) after adjusting for baseline BMD. At 10 years’ follow-up, 1 SD decrease in annual percent BMD change compared with the mean BMD change was associated with 3.9 excess hip fractures per 100 persons. In receiver operating characteristic (ROC) curve analyses, the area under the curve (AUC) was 0.71 (95% CI, 0.65 to 0.78) for the baseline BMD model compared with 0.68 (95% CI, 0.62 to 0.75) for the BMD percent change model, and the addition of BMD change to a model with baseline BMD did not meaningfully improve performance (AUC, 0.72 [95% CI, 0.66 to 0.79]). Using the net reclassification index, a second BMD measure increased the proportion of participants reclassified as high risk of hip fracture by 3.9% (95% CI, −2.2% to 9.9%), whereas it changed the proportion classified as low risk by −2.2% (95% CI, −4.5% to 0.1%).
CONCLUSIONS AND RELEVANCE
In untreated men and women of mean age 75 years, a second BMD measure after 4 years did not meaningfully improve the prediction of hip or major osteoporotic fracture. Repeating a BMD measure within 4 years to improve fracture risk stratification may not be necessary in adults this age untreated for osteoporosis.
doi:10.1001/jama.2013.277817
PMCID: PMC3903386  PMID: 24065012
15.  Utility of genetic and non-genetic risk factors in prediction of type 2 diabetes: Whitehall II prospective cohort study 
Objectives To assess the performance of a panel of common single nucleotide polymorphisms (genotypes) associated with type 2 diabetes in distinguishing incident cases of future type 2 diabetes (discrimination), and to examine the effect of adding genetic information to previously validated non-genetic (phenotype based) models developed to estimate the absolute risk of type 2 diabetes.
Design Workplace based prospective cohort study with three 5-yearly medical screenings.
Participants 5535 initially healthy people (mean age 49 years; 33% women), of whom 302 developed new onset type 2 diabetes over 10 years.
Outcome measures Non-genetic variables included in two established risk models—the Cambridge type 2 diabetes risk score (age, sex, drug treatment, family history of type 2 diabetes, body mass index, smoking status) and the Framingham offspring study type 2 diabetes risk score (age, sex, parental history of type 2 diabetes, body mass index, high density lipoprotein cholesterol, triglycerides, fasting glucose)—and 20 single nucleotide polymorphisms associated with susceptibility to type 2 diabetes. Cases of incident type 2 diabetes were defined on the basis of a standard oral glucose tolerance test, self report of a doctor’s diagnosis, or the use of anti-diabetic drugs.
Results A genetic score based on the number of risk alleles carried (range 0-40; area under receiver operating characteristic curve 0.54, 95% confidence interval 0.50 to 0.58) and a genetic risk function in which carriage of risk alleles was weighted according to the summary odds ratios of their effect from meta-analyses of genetic studies (area under receiver operating characteristic curve 0.55, 0.51 to 0.59) did not effectively discriminate cases of diabetes. The Cambridge risk score (area under curve 0.72, 0.69 to 0.76) and the Framingham offspring risk score (area under curve 0.78, 0.75 to 0.82) led to better discrimination of cases than did genotype based tests. Adding genetic information to phenotype based risk models did not improve discrimination; it provided only a small improvement in model calibration and a modest net reclassification improvement of about 5% when added to the Cambridge risk score, but not when added to the Framingham offspring risk score.
Conclusion The phenotype based risk models provided greater discrimination for type 2 diabetes than did models based on 20 common independently inherited diabetes risk alleles. The addition of genotypes to phenotype based risk models produced only minimal improvement in accuracy of risk estimation assessed by recalibration and, at best, a minor net reclassification improvement. The major translational application of the currently known common, small effect genetic variants influencing susceptibility to type 2 diabetes is likely to come from the insight they provide on causes of disease and potential therapeutic targets.
doi:10.1136/bmj.b4838
PMCID: PMC2806945  PMID: 20075150
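The two genetic scores compared in this study, a simple risk-allele count and a score weighted by the log odds ratios, can be sketched on simulated genotypes. All parameters below are invented for illustration, and the AUC is computed via the rank-sum identity rather than the study's method:

```python
import numpy as np

def auc(scores, y):
    """AUC via the rank-sum (Mann-Whitney) identity: the proportion of
    case/control pairs in which the case has the higher score."""
    y = np.asarray(y, dtype=bool)
    pos, neg = np.asarray(scores)[y], np.asarray(scores)[~y]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

rng = np.random.default_rng(1)
n_snps, n = 20, 3000
maf = rng.uniform(0.1, 0.5, n_snps)              # minor allele frequencies
odds_ratio = rng.uniform(1.05, 1.3, n_snps)      # small per-allele effects
geno = rng.binomial(2, maf, size=(n, n_snps))    # risk-allele counts 0/1/2

# Simulate disease status from a logistic model on the true log(OR)s
logit = geno @ np.log(odds_ratio) - 2.0
y = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

unweighted = geno.sum(axis=1)                    # allele-count score
weighted = geno @ np.log(odds_ratio)             # log-OR-weighted score
print(auc(unweighted, y), auc(weighted, y))
```

With effects this small, both scores yield AUCs only modestly above 0.5, mirroring the weak discrimination (0.54 to 0.55) reported for the 20-SNP panels.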
16.  Genomic Predictors for Recurrence Patterns of Hepatocellular Carcinoma: Model Derivation and Validation 
PLoS Medicine  2014;11(12):e1001770.
In this study, Lee and colleagues develop a genomic predictor that can identify patients at high risk for late recurrence of hepatocellular carcinoma (HCC) and provide new biomarkers for risk stratification.
Background
Typically observed more than 2 y after surgical resection, late recurrence is a major challenge in the management of hepatocellular carcinoma (HCC). We aimed to develop a genomic predictor that can identify patients at high risk for late recurrence and assess its clinical implications.
Methods and Findings
Systematic analysis of gene expression data from human liver undergoing hepatic injury and regeneration revealed a 233-gene signature that was significantly associated with late recurrence of HCC. Using this signature, we developed a prognostic predictor that can identify patients at high risk of late recurrence, and tested and validated the robustness of the predictor in patients (n = 396) who underwent surgery between 1990 and 2011 at four centers (210 recurrences during a median of 3.7 y of follow-up). In multivariate analysis, this signature was the strongest risk factor for late recurrence (hazard ratio, 2.2; 95% confidence interval, 1.3–3.7; p = 0.002). In contrast, our previously developed tumor-derived 65-gene risk score was significantly associated with early recurrence (p = 0.005) but not with late recurrence (p = 0.7). In multivariate analysis, the 65-gene risk score was the strongest risk factor for very early recurrence (<1 y after surgical resection) (hazard ratio, 1.7; 95% confidence interval, 1.1–2.6; p = 0.01). The potential significance of STAT3 activation in late recurrence was predicted by gene network analysis and validated later. We also developed and validated 4- and 20-gene predictors from the full 233-gene predictor. The main limitation of the study is that most of the patients in our study were hepatitis B virus–positive. Further investigations are needed to test our prediction models in patients with different etiologies of HCC, such as hepatitis C virus.
Conclusions
Two independently developed predictors reflected well the differences between early and late recurrence of HCC at the molecular level and provided new biomarkers for risk stratification.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Primary liver cancer—a tumor that starts when a liver cell acquires genetic changes that allow it to grow uncontrollably—is the second-leading cause of cancer-related deaths worldwide, killing more than 600,000 people annually. If hepatocellular cancer (HCC; the most common type of liver cancer) is diagnosed in its early stages, it can be treated by surgically removing part of the liver (resection), by liver transplantation, or by local ablation, which uses an electric current to destroy the cancer cells. Unfortunately, the symptoms of HCC, which include weight loss, tiredness, and jaundice (yellowing of the skin and eyes), are vague and rarely appear until the cancer has spread throughout the liver. Consequently, HCC is rarely diagnosed before the cancer is advanced and untreatable, and has a poor prognosis (likely outcome)—fewer than 5% of patients survive for five or more years after diagnosis. The exact cause of HCC is unclear, but chronic liver (hepatic) injury and inflammation (caused, for example, by infection with hepatitis B virus [HBV] or by alcohol abuse) promote tumor development.
Why Was This Study Done?
Even when it is diagnosed early, HCC has a poor prognosis because it often recurs. Patients treated for HCC can experience two distinct types of tumor recurrence. Early recurrence, which usually happens within the first two years after surgery, arises from the spread of primary cancer cells into the surrounding liver that were left behind during surgery. Late recurrence, which typically happens more than two years after surgery, involves the development of completely new tumors and seems to be the result of chronic liver damage. Because early and late recurrence have different clinical courses, it would be useful to be able to predict which patients are at high risk of which type of recurrence. Given that injury, inflammation, and regeneration seem to prime the liver for HCC development, might the gene expression patterns associated with these conditions serve as predictive markers for the identification of patients at risk of late recurrence of HCC? Here, the researchers develop a genomic predictor for the late recurrence of HCC by examining gene expression patterns in tissue samples from livers that were undergoing injury and regeneration.
What Did the Researchers Do and Find?
By comparing gene expression data obtained from liver biopsies taken before and after liver transplantation or resection and recorded in the US National Center for Biotechnology Information Gene Expression Omnibus database, the researchers identified 233 genes whose expression in liver differed before and after liver injury (the hepatic injury and regeneration, or HIR, signature). Statistical analyses indicate that the expression of the HIR signature in archived tissue samples was significantly associated with late recurrence of HCC in three independent groups of patients, but not with early recurrence (a significant association between two variables is one that is unlikely to have arisen by chance). By contrast, a tumor-derived 65-gene signature previously developed by the researchers was significantly associated with early recurrence but not with late recurrence. Notably, as few as four genes from the HIR signature were sufficient to construct a reliable predictor for late recurrence of HCC. Finally, the researchers report that many of the genes in the HIR signature encode proteins involved in inflammation and cell death, but that others encode proteins involved in cellular growth and proliferation such as STAT3, a protein with a well-known role in liver regeneration.
What Do These Findings Mean?
These findings identify a gene expression signature that was significantly associated with late recurrence of HCC in three independent groups of patients. Because most of these patients were infected with HBV, the ability of the HIR signature to predict late occurrence of HCC may be limited to HBV-related HCC and may not be generalizable to HCC related to other causes. Moreover, the predictive ability of the HIR signature needs to be tested in a prospective study in which samples are taken and analyzed at baseline and patients are followed to see whether their HCC recurs; the current retrospective study analyzed stored tissue samples. Importantly, however, the HIR signature associated with late recurrence and the 65-gene signature associated with early recurrence provide new insights into the biological differences between late and early recurrence of HCC at the molecular level. Knowing about these differences may lead to new treatments for HCC and may help clinicians choose the most appropriate treatments for their patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001770.
The US National Cancer Institute provides information about all aspects of cancer, including detailed information for patients and professionals about primary liver cancer (in English and Spanish)
The American Cancer Society also provides information about liver cancer (including information on support programs and services; available in several languages)
The UK National Health Service Choices website provides information about primary liver cancer (including a video about coping with cancer)
Cancer Research UK (a not-for-profit organization) also provides detailed information about primary liver cancer (including information about living with primary liver cancer)
MD Anderson Cancer Center provides information about symptoms, diagnosis, treatment, and prevention of primary liver cancer
MedlinePlus provides links to further resources about liver cancer (in English and Spanish)
doi:10.1371/journal.pmed.1001770
PMCID: PMC4275163  PMID: 25536056
17.  Biomarker Profiling by Nuclear Magnetic Resonance Spectroscopy for the Prediction of All-Cause Mortality: An Observational Study of 17,345 Persons 
PLoS Medicine  2014;11(2):e1001606.
In this study, Würtz and colleagues conducted high-throughput profiling of blood specimens in two large population-based cohorts in order to identify biomarkers for all-cause mortality and enhance risk prediction. The authors found that biomarker profiling improved prediction of the short-term risk of death from all causes above established risk factors. However, further investigations are needed to clarify the biological mechanisms and the utility of these biomarkers to guide screening and prevention.
Please see later in the article for the Editors' Summary
Background
Early identification of ambulatory persons at high short-term risk of death could benefit targeted prevention. To identify biomarkers for all-cause mortality and enhance risk prediction, we conducted high-throughput profiling of blood specimens in two large population-based cohorts.
Methods and Findings
106 candidate biomarkers were quantified by nuclear magnetic resonance spectroscopy of non-fasting plasma samples from a random subset of the Estonian Biobank (n = 9,842; age range 18–103 y; 508 deaths during a median of 5.4 y of follow-up). Biomarkers for all-cause mortality were examined using stepwise proportional hazards models. Significant biomarkers were validated and incremental predictive utility assessed in a population-based cohort from Finland (n = 7,503; 176 deaths during 5 y of follow-up). Four circulating biomarkers predicted the risk of all-cause mortality among participants from the Estonian Biobank after adjusting for conventional risk factors: alpha-1-acid glycoprotein (hazard ratio [HR] 1.67 per 1–standard deviation increment, 95% CI 1.53–1.82, p = 5×10−31), albumin (HR 0.70, 95% CI 0.65–0.76, p = 2×10−18), very-low-density lipoprotein particle size (HR 0.69, 95% CI 0.62–0.77, p = 3×10−12), and citrate (HR 1.33, 95% CI 1.21–1.45, p = 5×10−10). All four biomarkers were predictive of cardiovascular mortality, as well as death from cancer and other nonvascular diseases. One in five participants in the Estonian Biobank cohort with a biomarker summary score within the highest percentile died during the first year of follow-up, indicating prominent systemic reflections of frailty. The biomarker associations all replicated in the Finnish validation cohort. Including the four biomarkers in a risk prediction score improved risk assessment for 5-y mortality (increase in C-statistic 0.031, p = 0.01; continuous reclassification improvement 26.3%, p = 0.001).
Conclusions
Biomarker associations with cardiovascular, nonvascular, and cancer mortality suggest novel systemic connectivities across seemingly disparate morbidities. The biomarker profiling improved prediction of the short-term risk of death from all causes above established risk factors. Further investigations are needed to clarify the biological mechanisms and the utility of these biomarkers for guiding screening and prevention.
Editors' Summary
Background
A biomarker is a biological molecule found in blood, body fluids, or tissues that may signal an abnormal process, a condition, or a disease. The level of a particular biomarker may indicate a patient's risk of disease, or likely response to a treatment. For example, cholesterol levels are measured to assess the risk of heart disease. Most current biomarkers are used to test an individual's risk of developing a specific condition. There are none that accurately assess whether a person is at risk of ill health generally, or likely to die soon from a disease. Early and accurate identification of people who appear healthy but in fact have an underlying serious illness would provide valuable opportunities for preventative treatment.
While most tests measure the levels of a specific biomarker, there are some technologies that allow blood samples to be screened for a wide range of biomarkers. These include nuclear magnetic resonance (NMR) spectroscopy and mass spectrometry. These tools have the potential to be used to screen the general population for a range of different biomarkers.
Why Was This Study Done?
Identifying new biomarkers that provide insight into the risk of death from all causes could be an important step in linking different diseases and assessing patient risk. In this study, the authors used NMR spectroscopy to screen patient samples for biomarkers that accurately predict the risk of death in the general population, rather than among people already known to be ill.
What Did the Researchers Do and Find?
The researchers studied two large groups of people, one in Estonia and one in Finland. Both countries have set up health registries that collect and store blood samples and health records over many years. The registries include large numbers of people who are representative of the wider population.
The researchers first tested blood samples from a representative subset of the Estonian group, testing 9,842 samples in total. They looked at 106 different biomarkers in each sample using NMR spectroscopy. They also looked at the health records of this group and found that 508 people died during the follow-up period after the blood sample was taken, the majority from heart disease, cancer, and other diseases. Using statistical analysis, they looked for any links between the levels of different biomarkers in the blood and people's short-term risk of dying. They found that the levels of four biomarkers—plasma albumin, alpha-1-acid glycoprotein, very-low-density lipoprotein (VLDL) particle size, and citrate—appeared to accurately predict short-term risk of death. They repeated this study with the Finnish group, this time with 7,503 individuals (176 of whom died during the five-year follow-up period after giving a blood sample) and found similar results.
The researchers carried out further statistical analyses to take into account other known factors that might have contributed to the risk of life-threatening illness. These included factors such as age, weight, tobacco and alcohol use, cholesterol levels, and pre-existing illness, such as diabetes and cancer. The association between the four biomarkers and short-term risk of death remained the same even when controlling for these other factors.
The analysis also showed that combining the test results for all four biomarkers, to produce a biomarker score, provided a more accurate measure of risk than any of the biomarkers individually. This biomarker score also proved to be the strongest predictor of short-term risk of dying in the Estonian group. Individuals with a biomarker score in the top 20% had a risk of dying within five years that was 19 times greater than that of individuals with a score in the bottom 20% (288 versus 15 deaths).
What Do These Findings Mean?
This study suggests that there are four biomarkers in the blood—alpha-1-acid glycoprotein, albumin, VLDL particle size, and citrate—that can be measured by NMR spectroscopy to assess whether otherwise healthy people are at short-term risk of dying from heart disease, cancer, and other illnesses. However, further validation of these findings is still required, and additional studies should examine the biomarker specificity and associations in settings closer to clinical practice. The combined biomarker score appears to be a more accurate predictor of risk than tests for more commonly known risk factors. Identifying individuals who are at high risk using these biomarkers might help to target preventative medical treatments to those with the greatest need.
However, there are several limitations to this study. As an observational study, it provides evidence of only a correlation between a biomarker score and ill health. It does not identify any underlying causes. Other factors, not detectable by NMR spectroscopy, might be the true cause of serious health problems and would provide a more accurate assessment of risk. Nor does this study identify what kinds of treatment might prove successful in reducing the risks. Therefore, more research is needed to determine whether testing for these biomarkers would provide any clinical benefit.
There were also some technical limitations to the study. NMR spectroscopy does not detect as many biomarkers as mass spectrometry, which might therefore identify further biomarkers for a more accurate risk assessment. In addition, because both study groups were northern European, it is not yet known whether the results would be the same in other ethnic groups or populations with different lifestyles.
In spite of these limitations, the fact that the same four biomarkers are associated with a short-term risk of death from a variety of diseases does suggest that similar underlying mechanisms are taking place. This observation points to some potentially valuable areas of research to understand precisely what's contributing to the increased risk.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001606
The US National Institute of Environmental Health Sciences has information on biomarkers
The US Food and Drug Administration has a Biomarker Qualification Program to help researchers in identifying and evaluating new biomarkers
Further information on the Estonian Biobank is available
The Computational Medicine Research Team of the University of Oulu and the University of Bristol have a webpage that provides further information on high-throughput biomarker profiling by NMR spectroscopy
doi:10.1371/journal.pmed.1001606
PMCID: PMC3934819  PMID: 24586121
18.  Ankle Brachial Index Combined with Framingham Risk Score to Predict Cardiovascular Events and Mortality: A Meta-analysis 
Context
Prediction models to identify healthy individuals at high risk of cardiovascular disease have limited accuracy. A low ankle brachial index is an indicator of atherosclerosis and has the potential to improve prediction.
Objective
To determine if the ankle brachial index provides information on the risk of cardiovascular events and mortality independently of the Framingham Risk Score and can improve risk prediction.
Data Sources
Relevant studies were identified by collaborators. A search of MEDLINE (1950 to February 2008) and EMBASE (1980 to February 2008) was conducted using common text words for the term ‘ABI’ combined with text words and Medical Subject Headings to capture prospective cohort designs. Reference lists and conference proceedings were reviewed, and experts were contacted, to identify additional published and unpublished studies.
Study Selection
Studies were included if (1) participants were derived from a general population, (2) the ankle brachial index was measured at baseline, and (3) subjects were followed up to detect total and cardiovascular mortality.
Data Extraction
Pre-specified data on subjects in each selected study were extracted into a combined dataset, and an individual participant data meta-analysis was conducted on subjects who had no previous history of coronary heart disease.
Results
Sixteen population cohort studies fulfilling the inclusion criteria were included. During 480,325 person-years of follow-up of 24,955 men and 23,339 women, the risk of death by ankle brachial index had a reverse J-shaped distribution, with a normal (low-risk) ankle brachial index of 1.11 to 1.40. The 10-year cardiovascular mortality (95% CI) in men with a low ankle brachial index (≤0.90) was 18.7% (13.3% to 24.1%) and with a normal ankle brachial index (1.11 to 1.40) was 4.4% (3.2% to 5.7%); hazard ratio (95% CI) 4.2 (3.5 to 5.4). Corresponding mortalities in women were 12.6% (6.2% to 19.0%) and 4.1% (2.2% to 6.1%); hazard ratio 3.5 (2.4 to 5.1). The hazard ratios remained elevated after adjusting for the Framingham Risk Score: 2.9 (2.3 to 3.7) for men and 3.0 (2.0 to 4.4) for women. A low ankle brachial index (≤0.90) was associated with approximately twice the 10-year total mortality, cardiovascular mortality, and major coronary event rate compared with the overall rate in each Framingham category. Inclusion of the ankle brachial index in cardiovascular risk stratification using the Framingham Risk Score would result in reclassification of the risk category and modification of treatment recommendations in approximately 19% of men and 36% of women.
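As a rough consistency check on the figures above (a back-of-envelope calculation under strong assumptions, not the study's method): if hazards are constant over follow-up and censoring is ignored, a 10-year cumulative mortality p implies a hazard proportional to −ln(1 − p), so a crude hazard ratio can be sketched directly from the two mortality percentages:

```python
from math import log

def crude_hazard_ratio(p_exposed, p_reference):
    """Crude HR from cumulative mortality, assuming constant hazards
    and no censoring: h = -ln(1 - p) / t, and t cancels in the ratio."""
    return log(1 - p_exposed) / log(1 - p_reference)

# Men: 18.7% 10-y CV mortality with low ABI vs 4.4% with normal ABI.
print(round(crude_hazard_ratio(0.187, 0.044), 1))  # about 4.6
```

The crude ratio of about 4.6 sits in the same range as the reported hazard ratio of 4.2 (3.5 to 5.4), which was estimated from the individual participant data rather than from the marginal percentages.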
Conclusion
Measurement of the ankle brachial index may improve the accuracy of cardiovascular risk prediction beyond the Framingham Risk Score. Development and validation of a new risk equation incorporating the ankle brachial index is warranted.
doi:10.1001/jama.300.2.197
PMCID: PMC2932628  PMID: 18612117
19.  Self-perceived quality of life predicts mortality risk better than a multi-biomarker panel, but the combination of both does best 
Background
Associations between measures of subjective health and mortality risk have previously been shown. We assessed the impact of a multi-biomarker panel on this association and compared the predictive performance of the two approaches.
Methods
Data from 4,261 individuals aged 20-79 years recruited for the population-based Study of Health in Pomerania were used. During an average 9.7-year follow-up, 456 deaths (10.7%) occurred. Subjective health was assessed by SF-12 derived physical (PCS-12) and mental component summaries (MCS-12), and a single-item self-rated health (SRH) question. We implemented Cox proportional-hazards regression models to investigate the association of subjective health with mortality and to assess the impact of a combination of 10 biomarkers on this association. Variable selection procedures were used to identify a parsimonious set of subjective health measures and biomarkers, whose predictive ability was compared using receiver operating characteristic (ROC) curves, C-statistics, and reclassification methods.
Results
In age- and gender-adjusted Cox models, poor SRH (hazard ratio (HR), 2.07; 95% CI, 1.34-3.20) and low PCS-12 scores (lowest vs. highest quartile: HR, 1.75; 95% CI, 1.31-2.33) were significantly associated with increased risk of all-cause mortality; an association independent of various covariates and biomarkers. Furthermore, selected subjective health measures yielded a significantly higher C-statistic (0.883) compared to the selected biomarker panel (0.872), whereas a combined assessment showed the highest C-statistic (0.887) with a highly significant integrated discrimination improvement of 1.5% (p < 0.01).
Conclusion
Adding biomarker information did not affect the association of subjective health measures with mortality, but significantly improved risk stratification. Thus, a combined assessment of self-reported subjective health and measured biomarkers may be useful to identify high-risk individuals for intensified monitoring.
doi:10.1186/1471-2288-11-103
PMCID: PMC3152941  PMID: 21749697
Health-related quality of life; multiple biomarker panel; all-cause mortality; SF-12; population-based cohort
20.  Comparison of Novel Risk Markers for Improvement in Cardiovascular Risk Assessment in Intermediate Risk Individuals. The Multi-Ethnic Study of Atherosclerosis 
Context
Risk markers including coronary artery calcium (CAC), carotid intima-media thickness (CIMT), ankle-brachial index (ABI), brachial flow-mediated dilation (FMD), high-sensitivity C-reactive protein (hs-CRP), and family history (FH) of coronary heart disease (CHD) have been reported to improve on the Framingham Risk Score (FRS) for prediction of CHD. However, there are no direct comparisons of these markers for risk prediction in a single cohort.
Objective
We compared the improvement in prediction of incident CHD and cardiovascular disease (CVD) afforded by these 6 risk markers among intermediate-risk participants (5% < FRS < 20%) in the Multi-Ethnic Study of Atherosclerosis (MESA).
Design, Setting and Participants
Of 6,814 MESA participants from 6 US field centers, 1,330 were intermediate risk, without diabetes mellitus, and had complete data on all 6 markers. Recruitment spanned July 2000 to September 2002; follow-up extended through May 2011. Probability-weighted Cox proportional hazards models were used to estimate hazard ratios (HRs). Area under the receiver operating characteristic curve (AUC) and net reclassification improvement (NRI) were used to compare incremental contributions of each marker when added to the FRS plus race/ethnicity.
Main Outcome Measures
Incident CHD was defined as MI, angina followed by revascularization, resuscitated cardiac arrest, or CHD death. Incident CVD additionally included stroke or CVD death.
Results
After a median follow-up of 7.6 years (IQR 7.3–7.8 years), 94 CHD and 123 CVD events occurred. CAC, ABI, hs-CRP, and FH were independently associated with incident CHD in multivariable analyses (HR [95% CI]: 2.60 [1.94–3.50], 0.79 [0.66–0.95], 1.28 [1.00–1.64], and 2.18 [1.38–3.42], respectively). CIMT and FMD were not associated with incident CHD in multivariable analyses (HR [95% CI]: 1.17 [0.95–1.45] and 0.95 [0.78–1.14], respectively). Although the addition of the markers individually to the FRS plus race/ethnicity improved the AUC, CAC afforded the largest increment (0.623 vs. 0.784) while FMD afforded the smallest (0.623 vs. 0.639). For incident CHD, the NRI with CAC was 0.659, FMD 0.024, ABI 0.036, CIMT 0.102, FH 0.160, and hs-CRP 0.079. Similar results were obtained for incident CVD.
Conclusion
CAC, ABI, hs-CRP and FH are independent predictors of incident CHD/CVD in intermediate risk individuals. CAC provides superior discrimination and risk reclassification compared with other risk markers.
doi:10.1001/jama.2012.9624
PMCID: PMC4141475  PMID: 22910756
21.  Gene-Lifestyle Interaction and Type 2 Diabetes: The EPIC InterAct Case-Cohort Study 
PLoS Medicine  2014;11(5):e1001647.
In this study, Wareham and colleagues quantified the combined effects of genetic and lifestyle factors on risk of T2D in order to inform strategies for prevention. The authors found that the relative effect of a type 2 diabetes genetic risk score is greater in younger and leaner participants, and the high absolute risk associated with obesity at any level of genetic risk highlights the importance of universal rather than targeted approaches to lifestyle intervention.
Please see later in the article for the Editors' Summary
Background
Understanding of the genetic basis of type 2 diabetes (T2D) has progressed rapidly, but the interactions between common genetic variants and lifestyle risk factors have not been systematically investigated in studies with adequate statistical power. Therefore, we aimed to quantify the combined effects of genetic and lifestyle factors on risk of T2D in order to inform strategies for prevention.
Methods and Findings
The InterAct study includes 12,403 incident T2D cases and a representative sub-cohort of 16,154 individuals from a cohort of 340,234 European participants with 3.99 million person-years of follow-up. We studied the combined effects of an additive genetic T2D risk score and modifiable and non-modifiable risk factors using Prentice-weighted Cox regression and random effects meta-analysis methods. The effect of the genetic score was significantly greater in younger individuals (p for interaction  = 1.20×10−4). Relative genetic risk (per standard deviation [4.4 risk alleles]) was also larger in participants who were leaner, both in terms of body mass index (p for interaction  = 1.50×10−3) and waist circumference (p for interaction  = 7.49×10−9). Examination of absolute risks by strata showed the importance of obesity for T2D risk. The 10-y cumulative incidence of T2D rose from 0.25% to 0.89% across extreme quartiles of the genetic score in normal weight individuals, compared to 4.22% to 7.99% in obese individuals. We detected no significant interactions between the genetic score and sex, diabetes family history, physical activity, or dietary habits assessed by a Mediterranean diet score.
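The contrast between relative and absolute genetic risk that drives the study's conclusion can be read directly off the reported cumulative incidences. Simple arithmetic on the published numbers (not a reanalysis of the data):

```python
# Relative risk and absolute risk difference across extreme quartiles of the
# genetic score, using the reported 10-y cumulative incidences of T2D.
def rr_and_ard(p_top_quartile, p_bottom_quartile):
    """Return (relative risk, absolute risk difference)."""
    return p_top_quartile / p_bottom_quartile, p_top_quartile - p_bottom_quartile

normal_weight = rr_and_ard(0.0089, 0.0025)  # 0.25% -> 0.89%
obese = rr_and_ard(0.0799, 0.0422)          # 4.22% -> 7.99%
print(normal_weight, obese)
```

The relative risk across extreme genetic-score quartiles is larger in normal-weight individuals (about 3.6 vs. about 1.9), but the absolute risk difference is far larger in obese individuals (about 3.8 vs. 0.6 percentage points), which is why the authors argue for universal rather than genetically targeted lifestyle intervention.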
Conclusions
The relative effect of a T2D genetic risk score is greater in younger and leaner participants. However, this sub-group is at low absolute risk and would not be a logical target for preventive interventions. The high absolute risk associated with obesity at any level of genetic risk highlights the importance of universal rather than targeted approaches to lifestyle intervention.
Editors' Summary
Background
Worldwide, more than 380 million people currently have diabetes, and the condition is becoming increasingly common. Diabetes is characterized by high levels of glucose (sugar) in the blood. Blood sugar levels are usually controlled by insulin, a hormone released by the pancreas after meals (digestion of food produces glucose). In people with type 2 diabetes (the commonest type of diabetes), blood sugar control fails because the fat and muscle cells that normally respond to insulin by removing excess sugar from the blood become less responsive to insulin. Type 2 diabetes can often initially be controlled with diet and exercise (lifestyle changes) and with antidiabetic drugs such as metformin and sulfonylureas, but patients may eventually need insulin injections to control their blood sugar levels. Long-term complications of diabetes, which include an increased risk of heart disease and stroke, reduce the life expectancy of people with diabetes by about ten years compared to people without diabetes.
Why Was This Study Done?
Type 2 diabetes is thought to originate from the interplay between genetic and lifestyle factors. But although rapid progress is being made in understanding the genetic basis of type 2 diabetes, it is not known whether the consequences of adverse lifestyles (for example, being overweight and/or physically inactive) differ according to an individual's underlying genetic risk of diabetes. It is important to investigate this question to inform strategies for prevention. If, for example, obese individuals with a high level of genetic risk have a higher risk of developing diabetes than obese individuals with a low level of genetic risk, then preventative strategies that target lifestyle interventions to obese individuals with a high genetic risk would be more effective than strategies that target all obese individuals. In this case-cohort study, researchers from the InterAct consortium quantify the combined effects of genetic and lifestyle factors on the risk of type 2 diabetes. A case-cohort study measures exposure to potential risk factors in a group (cohort) of people and compares the occurrence of these risk factors in people who later develop the disease with those who remain disease free.
What Did the Researchers Do and Find?
The InterAct study involves 12,403 middle-aged individuals who developed type 2 diabetes after enrollment (incident cases) into the European Prospective Investigation into Cancer and Nutrition (EPIC) and a sub-cohort of 16,154 EPIC participants. The researchers calculated a genetic type 2 diabetes risk score for most of these individuals by determining which of 49 gene variants associated with type 2 diabetes each person carried, and collected baseline information about exposure to lifestyle risk factors for type 2 diabetes. They then used various statistical approaches to examine the combined effects of the genetic risk score and lifestyle factors on diabetes development. The effect of the genetic score was greater in younger individuals than in older individuals and greater in leaner participants than in participants with larger amounts of body fat. The absolute risk of type 2 diabetes, expressed as the ten-year cumulative incidence of type 2 diabetes (the percentage of participants who developed diabetes over a ten-year period) increased with increasing genetic score in normal weight individuals from 0.25% in people with the lowest genetic risk scores to 0.89% in those with the highest scores; in obese people, the ten-year cumulative incidence rose from 4.22% to 7.99% with increasing genetic risk score.
What Do These Findings Mean?
These findings show that in this middle-aged cohort, the relative association with type 2 diabetes of a genetic risk score comprised of a large number of gene variants is greatest in individuals who are younger and leaner at baseline. This finding may in part reflect the methods used to originally identify gene variants associated with type 2 diabetes, and future investigations that include other genetic variants, other lifestyle factors, and individuals living in other settings should be undertaken to confirm this finding. Importantly, however, this study shows that young, lean individuals with a high genetic risk score have a low absolute risk of developing type 2 diabetes. Thus, this sub-group of individuals is not a logical target for preventative interventions. Rather, suggest the researchers, the high absolute risk of type 2 diabetes associated with obesity at any level of genetic risk highlights the importance of universal rather than targeted approaches to lifestyle intervention.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001647.
The US National Diabetes Information Clearinghouse provides information about diabetes for patients, health-care professionals and the general public, including detailed information on diabetes prevention (in English and Spanish)
The UK National Health Service Choices website provides information for patients and carers about type 2 diabetes and about living with diabetes; it also provides people's stories about diabetes
The charity Diabetes UK provides detailed information for patients and carers in several languages, including information on healthy lifestyles for people with diabetes
The UK-based non-profit organization Healthtalkonline has interviews with people about their experiences of diabetes
The Genetic Landscape of Diabetes is published by the US National Center for Biotechnology Information
More information on the InterAct study is available
MedlinePlus provides links to further resources and advice about diabetes and diabetes prevention (in English and Spanish)
doi:10.1371/journal.pmed.1001647
PMCID: PMC4028183  PMID: 24845081
22.  Circulating Mitochondrial DNA in Patients in the ICU as a Marker of Mortality: Derivation and Validation 
PLoS Medicine  2013;10(12):e1001577.
In this paper, Choi and colleagues analyzed levels of mitochondrial DNA (mtDNA) in two prospective observational cohort studies and found that increased mtDNA levels are associated with ICU mortality and improve risk prediction in medical ICU patients. The data suggest that mtDNA could serve as a viable plasma biomarker in medical ICU patients.
Background
Mitochondrial DNA (mtDNA) is a critical activator of inflammation and the innate immune system. However, mtDNA level has not been tested for its role as a biomarker in the intensive care unit (ICU). We hypothesized that circulating cell-free mtDNA levels would be associated with mortality and improve risk prediction in ICU patients.
Methods and Findings
Analyses of mtDNA levels were performed on blood samples obtained from two prospective observational cohort studies of ICU patients (the Brigham and Women's Hospital Registry of Critical Illness [BWH RoCI, n = 200] and Molecular Epidemiology of Acute Respiratory Distress Syndrome [ME ARDS, n = 243]). mtDNA levels in plasma were assessed by measuring the copy number of the NADH dehydrogenase 1 gene using quantitative real-time PCR. Medical ICU patients with an elevated mtDNA level (≥3,200 copies/µl plasma) had increased odds of dying within 28 d of ICU admission in both the BWH RoCI (odds ratio [OR] 7.5, 95% CI 3.6–15.8, p = 1×10−7) and ME ARDS (OR 8.4, 95% CI 2.9–24.2, p = 9×10−5) cohorts, while no evidence for association was noted in non-medical ICU patients. The addition of an elevated mtDNA level improved the net reclassification index (NRI) of 28-d mortality among medical ICU patients when added to clinical models in both the BWH RoCI (NRI 79%, standard error 14%, p<1×10−4) and ME ARDS (NRI 55%, standard error 20%, p = 0.007) cohorts. In the BWH RoCI cohort, those with an elevated mtDNA level had an increased risk of death, even in analyses limited to patients with sepsis or acute respiratory distress syndrome. Study limitations include the lack of data elucidating the precise pathological roles of mtDNA in these patients, and the limited numbers of measurements for some of the biomarkers.
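The odds ratios above come from dichotomizing mtDNA at the 3,200 copies/µl threshold, which yields a standard 2×2 table of exposure by outcome. As an illustration only (the counts below are hypothetical and are not the cohort's data), the odds ratio from such a table is:

```python
# Odds ratio from a 2x2 table of dichotomized exposure vs binary outcome.
def odds_ratio(a, b, c, d):
    """a: exposed with event, b: exposed without event,
    c: unexposed with event, d: unexposed without event."""
    return (a * d) / (b * c)

# Hypothetical counts for illustration: deaths/survivors by mtDNA >= 3,200 copies/ul.
print(odds_ratio(10, 40, 5, 145))  # (10/40) / (5/145) = 7.25
```

The reported ORs of 7.5 and 8.4 were further adjusted for clinical covariates, so they are not recoverable from a raw 2×2 table like this one.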
Conclusions
Increased mtDNA levels are associated with ICU mortality, and inclusion of mtDNA level improves risk prediction in medical ICU patients. Our data suggest that mtDNA could serve as a viable plasma biomarker in medical ICU patients.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Intensive care units (ICUs, also known as critical care units) are specialist hospital wards that provide care for people with life-threatening injuries and illnesses. In the US alone, more than 5 million people are admitted to ICUs every year. Different types of ICUs treat different types of problems. Medical ICUs treat patients who, for example, have been poisoned or who have a serious infection such as sepsis (blood poisoning) or severe pneumonia (inflammation of the lungs); trauma ICUs treat patients who have sustained a major injury; cardiac ICUs treat patients who have heart problems; and surgical ICUs treat complications arising from operations. Patients admitted to ICUs require constant medical attention and support from a team of specially trained nurses and physicians to prevent organ injury and to keep their bodies functioning. Monitors, intravenous tubes (to supply essential fluids, nutrients, and drugs), breathing machines, catheters (to drain urine), and other equipment also help to keep ICU patients alive.
Why Was This Study Done?
Although many patients admitted to ICUs recover, others do not. ICU specialists use scoring systems (algorithms) based on clinical signs and physiological measurements to predict their patients' likely outcomes. For example, the APACHE II scoring system uses information on heart and breathing rates, temperature, levels of salts in the blood, and other signs and physiological measurements collected during the first 24 hours in the ICU to predict the patient's risk of death. Existing scoring systems are not perfect, however, and “biomarkers” (molecules in bodily fluids that provide information about a disease state) are needed to improve risk prediction for ICU patients. Here, the researchers investigate whether levels of circulating cell-free mitochondrial DNA (mtDNA) are associated with ICU deaths and whether these levels can be used as a biomarker to improve risk prediction in ICU patients. Mitochondria are cellular structures that produce energy. Levels of mtDNA in the plasma (the liquid part of blood) increase in response to trauma and infection. Moreover, mtDNA activates molecular processes that lead to inflammation and organ injury.
What Did the Researchers Do and Find?
The researchers measured mtDNA levels in the plasma of patients enrolled in two prospective observational cohort studies that monitored the outcomes of ICU patients. In the Brigham and Women's Hospital Registry of Critical Illness study, blood was taken from 200 patients within 24 hours of admission into the hospital's medical ICU. In the Molecular Epidemiology of Acute Respiratory Distress Syndrome study (acute respiratory distress syndrome is a life-threatening inflammatory reaction to lung damage or infection), blood was taken from 243 patients within 48 hours of admission into medical and non-medical ICUs at two other US hospitals. Patients admitted to medical ICUs with a raised mtDNA level (3,200 or more copies of a specific mitochondrial gene per microliter of plasma) had a 7- to 8-fold increased risk of dying within 28 days of admission compared to patients with mtDNA levels of less than 3,200 copies/µl plasma. There was no evidence of an association between raised mtDNA levels and death among patients admitted to non-medical ICUs. The addition of an elevated mtDNA level to a clinical model for risk prediction that included the APACHE II score and biomarkers that are already used to predict ICU outcomes improved the net reclassification index (an indicator of the improvement in risk prediction algorithms offered by new biomarkers) of 28-day mortality among medical ICU patients in both studies.
What Do These Findings Mean?
These findings indicate that raised mtDNA plasma levels are associated with death in medical ICUs and show that, among patients in medical ICUs, measurement of mtDNA plasma levels can improve the prediction of the risk of death from the APACHE II scoring system, even when commonly measured biomarkers are taken into account. These findings do not indicate whether circulating cell-free mtDNA increased because of the underlying severity of illness or whether mtDNA actively contributes to the disease process in medical ICU patients. Moreover, they do not provide any evidence that raised mtDNA levels are associated with an increased risk of death among non-medical (mainly surgical) ICU patients. These findings need to be confirmed in additional patients, but given the relative ease and rapidity of mtDNA measurement, the determination of circulating cell-free mtDNA levels could be a valuable addition to the assessment of patients admitted to medical ICUs.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001577.
The UK National Health Service Choices website provides information about intensive care
The Society of Critical Care Medicine provides information for professionals, families, and patients about all aspects of intensive care
MedlinePlus provides links to other resources about intensive care (in English and Spanish)
The UK charity ICUsteps supports patients and their families through recovery from critical illness; its booklet Intensive Care: A Guide for Patients and Families is available in English and ten other languages; its website includes patient experiences and relative experiences of treatment in ICUs
Wikipedia has a page on ICU scoring systems (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
doi:10.1371/journal.pmed.1001577
PMCID: PMC3876981  PMID: 24391478
23.  Statistical methods for assessment of added usefulness of new biomarkers 
The discovery and development of new biomarkers continues to be an exciting and promising field. Improvement of prediction of risk of developing disease is one of the key motivations in these pursuits. Appropriate statistical measures are necessary for drawing meaningful conclusions about the clinical usefulness of these new markers. In this review, we present several novel metrics proposed to serve this purpose. We use reclassification tables constructed based on clinically meaningful disease risk categories to discuss the concepts of calibration, risk separation, risk discrimination, and risk classification accuracy. We discuss the notion that the net reclassification improvement is a simple yet informative way to summarize information contained in risk reclassification tables. In the absence of meaningful risk categories, we suggest a ‘category-less’ version of the net reclassification improvement and the integrated discrimination improvement as metrics to summarize the incremental value of new biomarkers. We also suggest that predictiveness curves be preferred to receiver-operating-characteristic curves as visual descriptors of a statistical model’s ability to separate predicted probabilities of disease events. Reporting of standard metrics, including measures of relative risk and the c statistic, is still recommended. These concepts are illustrated with a risk prediction example using data from the Framingham Heart Study.
doi:10.1515/CCLM.2010.340
PMCID: PMC3155999  PMID: 20716010
reclassification; risk prediction; NRI; IDI; calibration; discrimination
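The ‘category-less’ net reclassification improvement and the integrated discrimination improvement summarized in this review can be computed directly from two models' predicted probabilities. A minimal pure-Python sketch (function and variable names are ours, not from the review):

```python
def continuous_nri(y, p_old, p_new):
    """Category-less NRI: upward risk moves count as correct for events
    (y == 1), downward moves as correct for non-events (y == 0)."""
    events = [(o, n) for yi, o, n in zip(y, p_old, p_new) if yi == 1]
    nonevents = [(o, n) for yi, o, n in zip(y, p_old, p_new) if yi == 0]
    nri_e = (sum(n > o for o, n in events)
             - sum(n < o for o, n in events)) / len(events)
    nri_ne = (sum(n < o for o, n in nonevents)
              - sum(n > o for o, n in nonevents)) / len(nonevents)
    return nri_e + nri_ne

def idi(y, p_old, p_new):
    """IDI: gain in discrimination slope, i.e. in the difference between
    mean predicted risk among events and among non-events."""
    def slope(p):
        e = [pi for yi, pi in zip(y, p) if yi == 1]
        ne = [pi for yi, pi in zip(y, p) if yi == 0]
        return sum(e) / len(e) - sum(ne) / len(ne)
    return slope(p_new) - slope(p_old)
```

If the new model moves every event's predicted risk up and every non-event's down, the category-less NRI reaches its maximum value of 2.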
24.  How should we evaluate prediction tools? Comparison of three different tools for prediction of seminal vesicle invasion at radical prostatectomy as a test case 
European urology  2012;62(4):590-596.
Background
Statistical prediction tools are increasingly common in contemporary medicine, but there is considerable disagreement about how they should be evaluated. Three tools (the Partin tables, the European Society for Urological Oncology (ESUO) criteria, and the Gallina nomogram) have been proposed for the prediction of seminal vesicle invasion (SVI) in patients with clinically localized prostate cancer. We aimed to determine which of these tools, if any, should be used clinically.
Methods
The independent validation cohort consisted of 2584 patients treated surgically for clinically localized prostate cancer between 2002 and 2007 at one of four North American tertiary-care referral centers. Traditional statistical methods (area under the receiver operating characteristic curve (AUC), calibration plots, the Brier score, sensitivity and specificity, and positive and negative predictive values) and novel statistical methods (risk stratification tables, the net reclassification index, decision curve analysis, and predictiveness curves) quantified the predictive abilities of the three tested models.
Results
Traditional statistical methods (receiver operating characteristic (ROC) plots and Brier scores), as well as two of the novel statistical methods (risk stratification tables and the net reclassification index), could not clearly distinguish between the SVI prediction tools. For example, ROC plots and Brier scores seemed biased against the binary decision tool (the ESUO criteria) and gave discordant results for the continuous predictions of the Partin tables and the Gallina nomogram. The results of the calibration plots were discordant with those of the ROC plots. Conversely, decision curve analysis clearly indicated that the Partin tables represent the ideal strategy for stratifying the risk of SVI.
Conclusions
Based on decision curve analysis results, surgeons should consider using the Partin tables to predict SVI. Decision curve analysis provided clinically meaningful comparisons between predictive models; other statistical methods for evaluation of prediction models gave inconsistent results that were difficult to interpret.
doi:10.1016/j.eururo.2012.04.022
PMCID: PMC3674492  PMID: 22561078
prostate; prostatic neoplasms; prostatectomy; seminal vesicles; algorithms; statistics
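Decision curve analysis, which the study above found gave the clearest comparison, scores each risk threshold by net benefit: true positives minus false positives weighted by the odds of the threshold. A sketch of this standard formula in Python (the example data in the usage note are invented, not from the study):

```python
def net_benefit(y, p, threshold):
    """Net benefit of intervening on everyone whose predicted risk is
    >= threshold: (TP - FP * threshold-odds) / n."""
    n = len(y)
    tp = sum(1 for yi, pi in zip(y, p) if pi >= threshold and yi == 1)
    fp = sum(1 for yi, pi in zip(y, p) if pi >= threshold and yi == 0)
    return tp / n - (threshold / (1.0 - threshold)) * fp / n

def decision_curve(y, p, thresholds):
    """Net benefit of the model versus the treat-all and treat-none
    reference strategies at each threshold."""
    return [(t,
             net_benefit(y, p, t),               # model
             net_benefit(y, [1.0] * len(y), t),  # treat all
             0.0)                                # treat none
            for t in thresholds]
```

A model is clinically useful at a given threshold only if its net benefit exceeds both reference strategies; plotting the three columns against the threshold yields the decision curve.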
25.  Predicting the Risk of Rheumatoid Arthritis and Its Age of Onset through Modelling Genetic Risk Variants with Smoking 
PLoS Genetics  2013;9(9):e1003808.
The improved characterisation of risk factors for rheumatoid arthritis (RA) suggests they could be combined to identify individuals at increased disease risk in whom preventive strategies may be evaluated. We aimed to develop an RA prediction model capable of generating clinically relevant predictive data and to determine if it better predicted younger onset RA (YORA). Our novel modelling approach combined odds ratios for 15 four-digit/10 two-digit HLA-DRB1 alleles, 31 single nucleotide polymorphisms (SNPs) and ever-smoking status in males to determine risk using computer simulation and confidence interval based risk categorisation. Only males were evaluated in our models incorporating smoking, as ever-smoking is a significant risk factor for RA in men but not women. We developed multiple models to evaluate each risk factor's impact on prediction. Each model's ability to discriminate anti-citrullinated protein antibody (ACPA)-positive RA from controls was evaluated in two cohorts: Wellcome Trust Case Control Consortium (WTCCC: 1,516 cases; 1,647 controls); UK RA Genetics Group Consortium (UKRAGG: 2,623 cases; 1,500 controls). HLA and smoking provided the strongest prediction, with good discrimination evidenced by an HLA-smoking model area under the curve (AUC) value of 0.813 in both WTCCC and UKRAGG. SNPs provided minimal prediction (AUC 0.660 WTCCC/0.617 UKRAGG). Whilst high individual risks were identified, with some cases having estimated lifetime risks of 86%, only a minority overall had substantially increased odds for RA. High risks from the HLA model were associated with YORA (P<0.0001); ever-smoking was associated with older onset disease. This latter finding suggests smoking's impact on RA risk manifests later in life. Our modelling demonstrates that combining risk factors provides clinically informative RA prediction; additionally, HLA and smoking status can be used to predict the risk of younger and older onset RA, respectively.
Author Summary
Rheumatoid arthritis (RA) is a common, incurable disease with major individual and health service costs. Preventing its development is therefore an important goal. Being able to predict who will develop RA would allow researchers to look at ways to prevent it. Many factors have been found that increase someone's risk of RA. These are divided into genetic and environmental (such as smoking) factors. The risk of RA associated with each factor has previously been reported. Here, we demonstrate a method that combines these risk factors in a process called “prediction modelling” to estimate someone's lifetime risk of RA. We show that, firstly, our prediction models can identify people with very high risks of RA and, secondly, they can be used to identify people at risk of developing RA at a younger age. Although these findings are an important first step towards preventing RA, as only a minority of people tested had substantially increased disease risks, our models could not be used to screen the general population. Instead, they need testing in people already at risk of RA, such as relatives of affected patients. In this context they could identify sufficient numbers of high-risk people to allow preventive methods to be evaluated.
doi:10.1371/journal.pgen.1003808
PMCID: PMC3778023  PMID: 24068971
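The modelling above combines published odds ratios for HLA alleles, SNPs, and smoking into a single risk estimate. Under the common assumption of independent, multiplicative effects, such a combined score and its discrimination (AUC) can be sketched as follows; the odds-ratio values and factor names below are hypothetical placeholders, not the study's estimates:

```python
import math

def combined_log_odds(carried_factors, odds_ratios):
    """Total shift in log-odds of disease from carrying a set of
    independent risk factors: the sum of log(OR) over carried factors."""
    return sum(math.log(odds_ratios[f]) for f in carried_factors)

def auc(case_scores, control_scores):
    """AUC as the probability that a randomly chosen case outscores a
    randomly chosen control (ties counted as half)."""
    pairs = [(c, k) for c in case_scores for k in control_scores]
    wins = sum(1.0 if c > k else 0.5 if c == k else 0.0 for c, k in pairs)
    return wins / len(pairs)

# Hypothetical odds ratios for illustration only.
ors = {"HLA_risk_allele": 3.0, "ever_smoking": 1.9, "snp_score": 1.2}
score = combined_log_odds(["HLA_risk_allele", "ever_smoking"], ors)
```

Scoring cases and controls this way and comparing the two score distributions with `auc` mirrors how the study's discrimination figures (e.g. AUC 0.813 for the HLA-smoking model) are interpreted: the chance that a random case receives a higher combined score than a random control.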
