The discrimination of a risk prediction model measures that model's ability to distinguish between subjects with and without events. The area under the receiver operating characteristic curve (AUC) is a popular measure of discrimination. However, the AUC has recently been criticized for its insensitivity in model comparisons in which the baseline model has performed well. Thus, 2 other measures have been proposed to capture improvement in discrimination for nested models: the integrated discrimination improvement and the continuous net reclassification improvement. In the present study, the authors use mathematical relations and numerical simulations to quantify the improvement in discrimination offered by candidate markers of different strengths as measured by their effect sizes. They demonstrate that the increase in the AUC depends on the strength of the baseline model, which is true to a lesser degree for the integrated discrimination improvement. On the other hand, the continuous net reclassification improvement depends only on the effect size of the candidate variable and its correlation with other predictors. These measures are illustrated using the Framingham model for incident atrial fibrillation. The authors conclude that the increase in the AUC, integrated discrimination improvement, and net reclassification improvement offer complementary information and thus recommend reporting all 3 alongside measures characterizing the performance of the final model.
area under curve; biomarkers; discrimination; risk assessment; risk factors
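The three measures compared above can be computed directly from the predicted probabilities of a nested and a full model. A minimal sketch in Python, using simulated data with illustrative coefficients (all variable names and effect sizes are hypothetical, not those of the Framingham atrial fibrillation model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
x1 = rng.normal(size=n)          # baseline predictor
x2 = rng.normal(size=n)          # candidate marker, uncorrelated with x1

# Illustrative true model; the nested model omits the candidate marker
lin_old = -2.0 + 1.0 * x1
lin_new = lin_old + 0.5 * x2
y = rng.random(n) < 1 / (1 + np.exp(-lin_new))
p_old = 1 / (1 + np.exp(-lin_old))
p_new = 1 / (1 + np.exp(-lin_new))

def auc(p, y):
    """Mann-Whitney estimate: P(score of a random event > score of a random non-event)."""
    pe, pn = p[y], p[~y]
    comp = (pe[:, None] > pn[None, :]) + 0.5 * (pe[:, None] == pn[None, :])
    return comp.mean()

def idi(p_old, p_new, y):
    """Integrated discrimination improvement: difference in discrimination slopes."""
    return (p_new[y].mean() - p_old[y].mean()) - (p_new[~y].mean() - p_old[~y].mean())

def continuous_nri(p_old, p_new, y):
    """Continuous NRI: net upward movement in events plus net downward movement in non-events."""
    up, down = p_new > p_old, p_new < p_old
    return (up[y].mean() - down[y].mean()) + (down[~y].mean() - up[~y].mean())

print(f"delta AUC = {auc(p_new, y) - auc(p_old, y):.3f}")
print(f"IDI       = {idi(p_old, p_new, y):.3f}")
print(f"cNRI      = {continuous_nri(p_old, p_new, y):.3f}")
```

With a strong baseline predictor, the increase in AUC is small even for a genuinely informative marker, while the continuous NRI stays large, illustrating the dependence on baseline model strength described above.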
Understanding the risk for type 2 diabetes (T2D) early in the life course is important for prevention. Whether genetic information improves prediction models for diabetes from adolescence into adulthood is unknown.
Using data from 1030 participants in the Bogalusa Heart Study, aged 12 to 18 years and followed into middle adulthood, we built Cox models for incident T2D with risk factors assessed in adolescence (demographics, family history, physical examination, and routine biomarkers). Models with and without a 38-single-nucleotide-polymorphism diabetes genotype score were compared by C statistics and continuous net reclassification improvement indices.
Participant mean (± SD) age at baseline was 14.4 ± 1.6 years, and 32% were black. Ninety (8.7%) participants developed T2D over a mean 26.9 ± 5.0 years of follow-up. Genotype score significantly predicted T2D in all models. Hazard ratios ranged from 1.09 per risk allele (95% confidence interval 1.03–1.15) in the basic demographic model to 1.06 (95% confidence interval 1.00–1.13) in the full model. The addition of genotype score did not improve the discrimination of the full clinical model (C statistic 0.756 without and 0.760 with genotype score). In the full model, genotype score yielded only weak improvement in reclassification (net reclassification improvement index 0.261).
Although a genotype score assessed among white and black adolescents is significantly associated with T2D in adulthood, it does not improve prediction over clinical risk factors. Genetic screening for T2D in its current state is not a useful addition to adolescents’ clinical care.
genetic predisposition to disease; diabetes mellitus, type 2; adolescent medicine
Smoking cessation reduces the risks of cardiovascular disease (CVD), but weight gain that follows quitting smoking may weaken the CVD benefit of quitting.
To test the hypothesis that weight gain following smoking cessation does not attenuate the benefits of smoking cessation among people with and without diabetes.
Design, Setting, and Participants
Prospective community-based cohort study using data from the Framingham Offspring Study collected from 1984 to 2011. At each 4-year exam, self-reported smoking status was assessed and categorized as smoker, recent quitter (≤ 4 years), long-term quitter (> 4 years), and non-smoker. Pooled Cox proportional hazards models were used to estimate the association between quitting smoking and 6-year CVD events and to test whether 4-year change in weight following smoking cessation modified the association between smoking cessation and CVD events.
Main outcome measure
Incidence over 6 years of total CVD events, comprising coronary heart disease, cerebrovascular events, peripheral artery disease, and congestive heart failure.
After a mean follow-up of 25 years (SD, 9.6), 631 CVD events occurred among 3251 participants. Median 4-year weight gain was greater for recent quitters without diabetes (2.7 kg; interquartile range [IQR], −0.5 to 6.4) and with diabetes (3.6 kg; IQR, −1.4 to 8.2) than for long-term quitters (0.9 kg; IQR, −1.4 to 3.2 and 0.0 kg; IQR, −3.2 to 3.2, respectively; p<0.001). Among people without diabetes, the age- and sex-adjusted incidence rate of CVD was 5.9/100 person-exams (95% confidence interval [CI], 4.9-7.1) in smokers, 3.2/100 person-exams (95% CI, 2.1-4.5) in recent quitters, 3.1/100 person-exams (95% CI, 2.6-3.7) in long-term quitters, and 2.4/100 person-exams (95% CI, 2.0-3.0) in non-smokers. After adjustment for CVD risk factors, compared with smokers, recent quitters had a hazard ratio (HR) for CVD of 0.47 (95% CI, 0.23-0.94) and long-term quitters had an HR of 0.46 (95% CI, 0.34-0.63); these associations changed only minimally after further adjustment for weight change. Among people with diabetes, point estimates were similar but did not reach statistical significance.
Conclusions and Relevance
In this community-based cohort, smoking cessation was associated with a lower risk of CVD events among participants without diabetes, and the weight gain that occurred following smoking cessation did not modify this association. This supports a net cardiovascular benefit of smoking cessation despite subsequent weight gain.
We sought to examine the relation of galectin-3 (Gal-3), a marker of cardiac fibrosis, with incident heart failure (HF) in the community.
Gal-3 is an emerging prognostic biomarker in HF, and experimental studies suggest that Gal-3 is an important mediator of cardiac fibrosis. Whether elevated Gal-3 concentrations precede the development of HF is unknown.
Gal-3 concentrations were measured in 3,353 participants in the Framingham Offspring Cohort (mean age 59 years, 53% women). The relation of Gal-3 to incident HF was assessed using proportional hazards regression.
Gal-3 was associated with increased left ventricular mass in age- and sex-adjusted analyses (P=0.001); this association was attenuated in multivariable analyses (P=0.06). A total of 166 participants developed incident HF and 468 died during a mean follow-up of 8.1 years. Gal-3 was associated with risk of incident HF (HR 1.28 per 1 standard deviation increase in log-Gal-3, 95% CI 1.14–1.43, P<0.0001), and remained significant after adjustment for clinical variables and B-type natriuretic peptide (HR 1.23, 95% CI 1.04–1.47, P=0.02). Gal-3 was also associated with risk of all-cause mortality (multivariable-adjusted HR 1.15, 95% CI 1.04–1.28, P=0.01). The addition of Gal-3 to clinical factors resulted in negligible changes to the c-statistic and minor improvements in the net reclassification index.
Higher concentration of Gal-3, a marker of cardiac fibrosis, is associated with increased risk of incident HF and mortality. Future studies evaluating the role of Gal-3 in cardiac remodeling may provide further insights into the role of Gal-3 in the pathophysiology of HF.
heart failure; epidemiology; biomarker; prognosis
Data regarding the familial aggregation of left ventricular (LV) geometry and its relations to parental heart failure (HF) are limited.
Methods and Results
We evaluated concordance of LV geometry within 1093 nuclear families in 5758 participants of the Original (parents; N=2351) and Offspring (N=3407) cohorts of the Framingham Heart Study undergoing routine echocardiography in mid-to-late adulthood. LV geometry was categorized based on cohort- and sex-specific 80th percentile cutoffs of LV mass and relative wall thickness (RWT) into normal (both <80th percentile), concentric remodeling (LV mass <80th percentile, RWT >80th percentile), concentric hypertrophy (both >80th percentile), and eccentric hypertrophy (LV mass >80th percentile, RWT <80th percentile). Within nuclear families, LV geometry was concordant among related pairs (parent-child, sibling-sibling) (P=0.0015) but not among unrelated spousal pairs (P=0.60), a finding that remained unchanged after adjusting for clinical covariates known to influence LV remodeling (age, systolic blood pressure, body mass index), excluding individuals with prevalent HF and myocardial infarction, and varying the thresholds for defining LV geometry. The prevalence of abnormal LV geometry was higher in family members of affected individuals, with recurrence risks of 1.4 for both concentric remodeling (95% CI, 1.2–1.7) and eccentric hypertrophy (95% CI, 1.1–1.8), and 3.9 (95% CI, 3.2–4.6) for concentric hypertrophy. In a subset of 1497 offspring, we observed an association between parental HF (N=458) and eccentric hypertrophy in offspring (P<0.0001).
Our investigation of a two-generational community-based sample demonstrates familial aggregation of LV geometry, with the greatest recurrence risk for concentric LV geometry, and establishes an association of eccentric LV geometry with parental HF.
echocardiography; remodeling; risk factors
Common carotid artery (CCA) intima-media thickness (cIMT), a measure of atherosclerosis, varies between peak-systole (PS) and end-diastole (ED). This difference might affect cardiovascular risk assessment.
Materials and methods
IMT measurements of the right and left CCA were synchronized with an electrocardiogram: R-wave for ED and T-wave for PS. IMT was measured in 2930 members of the Framingham Offspring Study. Multivariable regression models were generated with ED-IMT, PS-IMT and change in IMT as dependent variables and Framingham risk factors as independent variables. ED-IMT estimates were compared to the upper quartile of IMT based on normative data obtained at PS.
The average age of our population was 57.9 years. The average difference in IMT during the cardiac cycle was 0.037 mm (95% CI: 0.035–0.038 mm). ED-IMT and PS-IMT had similar associations with Framingham risk factors (total R2 = 0.292 versus 0.275) and were significantly associated with all risk factors. In a fully adjusted multivariable model, a thinner IMT at peak-systole was associated with pulse pressure (p < 0.0001), LDL-cholesterol (p = 0.0064), and age (p = 0.046), but with no other risk factors. Performing ED-IMT measurements while using upper-quartile PS-IMT normative data led to an inappropriate 42.1% increase in the number of individuals placed in the fourth IMT quartile (high cardiovascular risk category).
The difference in IMT between peak-systole and end-diastole is associated with pulse pressure, LDL-cholesterol, and age. In our study, the mean IMT difference during the cardiac cycle led to a 42.1% overestimation of the number of individuals at high risk for cardiovascular disease.
Ultrasonics; Risk Factors; Carotid Arteries; Blood Pressure; systole; diastole
Obesity affects one in three American adult women and is associated with overall mortality and major morbidities. A composite diet index to evaluate total diet quality may better assess the complex relationship between diet and obesity, providing insights for nutrition interventions. The purpose of the present investigation was to determine whether diet quality, defined according to the previously validated Framingham nutritional risk score (FNRS), was associated with the development of overweight or obesity in women. Over 16 years, we followed 590 normal-weight women (BMI < 25 kg/m2), aged 25 to 71 years, of the Framingham Offspring and Spouse Study who presented without CVD, cancer or diabetes at baseline. The nineteen-nutrient FNRS, derived from mean ranks of nutrient intakes from 3-day dietary records, was used to assess nutritional risk. The outcome was development of overweight or obesity (BMI ≥ 25 kg/m2) during follow-up. In a stepwise multiple logistic regression model adjusted for age, physical activity and smoking status, the FNRS was directly related to overweight or obesity (P for trend = 0.009). Women with lower diet quality (i.e. higher nutritional risk scores) were significantly more likely to become overweight or obese (OR 1.76; 95% CI 1.16, 2.69) compared with those with higher diet quality. Diet quality, assessed using a comprehensive composite nutritional risk score, predicted development of overweight or obesity. This finding suggests that overall diet quality should be considered a key component in planning and implementing programmes for obesity risk reduction and treatment recommendations.
Diet quality; Nutritional risk score; Obesity; BMI; Dietary quality index
Heart failure (HF) is a major public health burden worldwide. Of patients presenting with HF, 30–55% have a preserved ejection fraction (HFPEF) rather than a reduced ejection fraction (HFREF). Our objective was to examine discriminating clinical features in new-onset HFPEF vs. HFREF.
Methods and results
Of 712 participants in the Framingham Heart Study (FHS) hospitalized for new-onset HF between 1981 and 2008 (median age 81 years, 53% female), 46% had HFPEF (EF >45%) and 54% had HFREF (EF ≤45%). In multivariable logistic regression, coronary heart disease (CHD), higher heart rate, higher potassium, left bundle branch block, and ischaemic electrocardiographic changes increased the odds of HFREF; female sex and atrial fibrillation increased the odds of HFPEF. In aggregate, these clinical features predicted HF subtype with good discrimination (c-statistic 0.78). Predictors were examined in the Enhanced Feedback for Effective Cardiac Treatment (EFFECT) study. Of 4436 HF patients (median age 75 years, 47% female), 32% had HFPEF and 68% had HFREF. Distinguishing clinical features were consistent between FHS and EFFECT, with comparable discrimination in EFFECT (c-statistic 0.75). In exploratory analyses examining the traits of the intermediate EF group (EF 35–55%), CHD predisposed to a decrease in EF, whereas other clinical traits showed an overlapping spectrum between HFPEF and HFREF.
Multiple clinical characteristics at the time of initial HF presentation differed in participants with HFPEF vs. HFREF. While CHD was clearly associated with a lower EF, overlapping characteristics were observed in the middle of the left ventricular EF range spectrum.
Heart failure; Epidemiology; Risk factors; Ejection fraction
The area under the receiver operating characteristics curve (AUC of ROC) is a widely used measure of discrimination in risk prediction models. Routinely, the Mann–Whitney statistic is used as an estimator of the AUC, while the change in AUC is tested by the DeLong test. However, very often, in settings where the model is developed and tested on the same dataset, the added predictor is statistically significantly associated with the outcome but fails to produce a significant improvement in the AUC. No conclusive resolution exists to explain this finding. In this paper, we will show that the reason lies in the inappropriate application of the DeLong test in the setting of nested models. Using numerical simulations and a theoretical argument based on generalized U-statistics, we show that if the added predictor is not statistically significantly associated with the outcome, the null distribution is non-normal, contrary to the normality assumption of the DeLong test. Our simulations of different scenarios show that the loss of power because of such a misuse of the DeLong test leads to a conservative test for small and moderate effect sizes. This problem does not exist in cases of predictors that are associated with the outcome and for non-nested models. We suggest that for nested models, only the test of association be performed for the new predictors, and if the result is significant, the change in AUC be estimated with an appropriate confidence interval, which can be based on the DeLong approach.
AUC; DeLong test; logistic regression; U-statistics; discrimination; risk prediction
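The workflow recommended above — confidence intervals for the AUC built from the structural components of the Mann–Whitney statistic — can be sketched for a single model; the paired ΔAUC version additionally uses the covariances of the two models' components. A minimal illustration with simulated scores (names and data are hypothetical):

```python
import numpy as np

def delong_auc_ci(p, y, z=1.96):
    """Mann-Whitney AUC with a DeLong-style confidence interval.

    p: predicted scores; y: boolean outcome array (True = event).
    """
    pe, pn = p[y], p[~y]
    # Pairwise comparisons of every event score against every non-event score
    comp = (pe[:, None] > pn[None, :]) + 0.5 * (pe[:, None] == pn[None, :])
    auc = comp.mean()
    v10 = comp.mean(axis=1)   # structural components for events
    v01 = comp.mean(axis=0)   # structural components for non-events
    se = np.sqrt(v10.var(ddof=1) / len(pe) + v01.var(ddof=1) / len(pn))
    return auc, (auc - z * se, auc + z * se)

# Illustrative data: event scores are shifted upward by one standard deviation
rng = np.random.default_rng(1)
y = np.repeat([True, False], [500, 1500])
p = np.concatenate([rng.normal(1.0, 1.0, 500), rng.normal(0.0, 1.0, 1500)])
auc, (lo, hi) = delong_auc_ci(p, y)
print(f"AUC = {auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

For these simulated scores the theoretical AUC is about 0.76; the interval here is a valid single-model interval, which is the setting in which the DeLong variance remains appropriate.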
Cardiovascular risk prediction functions offer an important diagnostic tool for clinicians and patients themselves. They are usually constructed with the use of parametric or semi-parametric survival regression models. It is essential to be able to evaluate the performance of these models, preferably with summaries that offer natural and intuitive interpretations. The concept of discrimination, popular in the logistic regression context, has been extended to survival analysis. However, the extension is not unique. In this paper, we define discrimination in survival analysis as the model’s ability to separate those with longer event-free survival from those with shorter event-free survival within some time horizon of interest. This definition remains consistent with that used in logistic regression, in the sense that it assesses how well the model-based predictions match the observed data. Practical and conceptual examples and numerical simulations are employed to examine four C statistics proposed in the literature to evaluate the performance of survival models. We observe that they differ in the numerical values and aspects of discrimination that they capture. We conclude that the index proposed by Harrell is the most appropriate to capture discrimination described by the above definition. We suggest researchers report which C statistic they are using, provide a rationale for their selection, and be aware that comparing different indices across studies may not be meaningful.
discrimination; risk function; censoring; AUC; concordance
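Harrell's index, recommended above, counts usable pairs (the subject with the shorter observed time experienced the event) in which that subject also received the higher model-based risk. A minimal O(n²) sketch with hypothetical inputs:

```python
def harrell_c(time, event, risk):
    """Harrell's concordance index for right-censored survival data.

    A pair (i, j) is usable when time[i] < time[j] and subject i had an
    event; it is concordant when subject i also has the higher risk.
    """
    num = den = 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i]:
                den += 1
                if risk[i] > risk[j]:
                    num += 1
                elif risk[i] == risk[j]:
                    num += 0.5   # ties in predicted risk count one half
    return num / den

# Perfect ranking: shorter survival always receives the higher predicted risk
print(harrell_c([1, 2, 3, 4], [1, 1, 1, 1], [4, 3, 2, 1]))  # 1.0
```

Pairs in which the earlier time is censored are skipped, which is exactly how censoring limits the set of comparable pairs within the time horizon of interest.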
Multiple studies have identified single-nucleotide polymorphisms (SNPs) that are associated with coronary heart disease (CHD). We examined whether SNPs selected based on predefined criteria will improve CHD risk prediction when added to traditional risk factors (TRFs).
SNPs were selected from the literature based on association with CHD, lack of association with a known CHD risk factor, and successful replication. A genetic risk score (GRS) was constructed based on these SNPs. Cox proportional hazards model was used to calculate CHD risk based on the Atherosclerosis Risk in Communities (ARIC) and Framingham CHD risk scores with and without the GRS.
The GRS was associated with risk for CHD (hazard ratio [HR] = 1.10; 95% confidence interval [CI]: 1.07–1.13). Addition of the GRS to the ARIC risk score significantly improved discrimination, reclassification, and calibration beyond that afforded by TRFs alone in non-Hispanic whites in the ARIC study. The area under the receiver operating characteristic curve (AUC) increased from 0.742 to 0.749 (Δ= 0.007; 95% CI, 0.004–0.013), and the net reclassification index (NRI) was 6.3%. Although the risk estimates for CHD in the Framingham Offspring (HR = 1.12; 95% CI: 1.10–1.14) and Rotterdam (HR = 1.08; 95% CI: 1.02–1.14) Studies were significantly improved by adding the GRS to TRFs, improvements in AUC and NRI were modest.
Addition of a GRS based on direct associations with CHD to TRFs significantly improved discrimination and reclassification in white participants of the ARIC Study, with no significant improvement in the Rotterdam and Framingham Offspring Studies.
Genetics; Risk factors; Coronary disease
New markers may improve prediction of diagnostic and prognostic outcomes. We review various measures to quantify the incremental value of markers over standard, readily available characteristics. Widely used traditional measures include the improvement in model fit or in the area under the receiver operating characteristic (ROC) curve (AUC). New measures include the net reclassification index (NRI) and decision–analytic measures, such as the fraction of true positive classifications penalized for false positive classifications (‘net benefit’, NB).
For illustration we discuss a case study on the presence of residual tumor versus benign tissue in 544 patients with testicular cancer. We assessed 3 tumor markers (AFP, HCG, and LDH) for their incremental value over currently standard clinical predictors. AUC and R2 values suggested adding continuous LDH and AFP whereas NB only favored HCG as a potentially promising marker at a clinically defendable decision threshold of 20% risk. Results based on the NRI fell in the middle, suggesting reclassification potential of all three markers.
We conclude that improvement in standard discrimination measures, which focus on finding variables that might be promising across all decision thresholds, may not detect the most informative markers at a specific threshold of particular clinical relevance. When a marker is intended to support decision making, calculation of the improvement in a decision–analytic measure, such as NB, is preferable over an overall judgment as obtained from the AUC in ROC analysis.
prediction; logistic regression model; performance measures; incremental value
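The net benefit used above at the 20% decision threshold weighs true positives against false positives at the odds of the threshold. A sketch with illustrative data (variable names are hypothetical, not the testicular cancer case study):

```python
import numpy as np

def net_benefit(p, y, t):
    """Net benefit of classifying predicted risk >= t as positive:
    TP/n minus FP/n weighted by the threshold odds t / (1 - t)."""
    pos = p >= t
    n = len(y)
    tp = np.sum(pos & (y == 1)) / n
    fp = np.sum(pos & (y == 0)) / n
    return tp - fp * t / (1 - t)

# A perfect marker attains net benefit equal to the event prevalence
y = np.array([1, 1, 0, 0, 0])
print(net_benefit(y.astype(float), y, 0.2))   # 0.4
```

Evaluating `net_benefit` for two models at the clinically defensible threshold (here 0.2) directly compares their decision-analytic value, whereas the AUC averages over all thresholds, including clinically irrelevant ones.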
Little is known about the familial aggregation of intermittent claudication (IC). Our objective was to examine whether parental IC increased adult offspring risk of IC independent of established cardiovascular risk factors. We evaluated Offspring cohort participants of the Framingham Heart Study (FHS) who were 30 years or older, cardiovascular disease (CVD) free, and had both parents enrolled in the FHS (n = 2970 unique participants, 53% women). Pooled proportional hazards regression was used to examine whether the 12-year risk for incident IC in offspring participants was associated with parental IC, adjusting for age, sex, diabetes, smoking, systolic blood pressure, total cholesterol, high-density lipoprotein (HDL) cholesterol, and antihypertensive and lipid treatment. Among 909 person-exams in the parental IC history group and 5397 person-exams in the no parental IC history group, there were 101 incident IC events (29 with parental IC history, 72 without parental IC history) during follow-up. Age- and sex-adjusted 12-year cumulative incidence rates per 1000 person-years were 5.08 (95% CI: 2.74–7.33) and 2.34 (95% CI: 1.46–3.19) in participants with and without parental IC history, respectively. Parental history of IC significantly increased the risk of incident IC in offspring (multivariable-adjusted hazard ratio 1.81; 95% CI: 1.14–2.88). The hazard ratio was unchanged with adjustment for occurrence of CVD (1.83; 95% CI: 1.15–2.91). In conclusion, IC in parents increases risk for IC in adult offspring independent of established risk factors. These data suggest a genetic component of peripheral artery disease and support future research into genetic causes.
claudication; peripheral artery disease; risk factors; family history
The performance of prediction models can be assessed using a variety of different methods and metrics. Traditional measures for binary and survival outcomes include the Brier score to indicate overall model performance, the concordance (or c) statistic for discriminative ability (or area under the receiver operating characteristic (ROC) curve), and goodness-of-fit statistics for calibration.
Several new measures have recently been proposed that can be seen as refinements of discrimination measures, including variants of the c statistic for survival, reclassification tables, net reclassification improvement (NRI), and integrated discrimination improvement (IDI). Moreover, decision–analytic measures have been proposed, including decision curves to plot the net benefit achieved by making decisions based on model predictions.
We aimed to define the role of these relatively novel approaches in the evaluation of the performance of prediction models. For illustration we present a case study of predicting the presence of residual tumor versus benign tissue in patients with testicular cancer (n=544 for model development, n=273 for external validation).
We suggest that reporting discrimination and calibration will always be important for a prediction model. Decision-analytic measures should be reported if the predictive model is to be used for making clinical decisions. Other measures of performance may be warranted in specific applications, such as reclassification metrics to gain insight into the value of adding a novel predictor to an established model.
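The overall and calibration measures named above have simple closed forms; a sketch assuming predicted probabilities `p` and binary outcomes `y` (illustrative data, not the testicular cancer case study):

```python
import numpy as np

def brier(p, y):
    """Brier score: mean squared error of the predicted probabilities."""
    return np.mean((p - y) ** 2)

def calibration_in_the_large(p, y):
    """Observed minus mean predicted event rate; 0 indicates overall agreement."""
    return np.mean(y) - np.mean(p)

y = np.array([1, 0, 1, 0])
print(brier(y.astype(float), y))                     # 0.0 for perfect predictions
print(brier(np.full(4, 0.5), y))                     # 0.25 for an uninformative model
print(calibration_in_the_large(np.full(4, 0.5), y))  # 0.0: calibrated in the large
```

The Brier score mixes discrimination and calibration into one number, which is why the text recommends reporting the c statistic and a calibration assessment separately alongside it.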
Limited data exist regarding the use of a genetic risk score for predicting risk of incident cardiovascular disease (CVD) in US based samples.
Methods and Results
Using findings from recent GWAS, we constructed genetic risk scores (GRSs): one comprising 13 genetic variants associated with myocardial infarction (MI) or other manifestations of CHD, and one comprising 102 genetic variants associated with CHD or its major risk factors. We also updated the 13-SNP GRS with 16 SNPs recently discovered by GWAS. We estimated the association, discrimination, and risk reclassification of each GRS for incident cardiovascular events and for prevalent coronary artery calcium (CAC).
In analyses adjusted for age, sex, CVD risk factors, and parental history of CVD, the 13-SNP GRS was significantly associated with incident hard CHD (hazard ratio [HR] 1.07, 95% confidence interval [CI] 1.00-1.15, p=0.04), CVD (HR per-allele 1.05, 95% CI 1.01-1.09; p=0.03), and high CAC (defined as >75th age- and sex-specific percentile; odds ratio [OR] per-allele 1.18, 95% CI 1.11-1.26, p=3.4 × 10⁻⁷). The GRS did not improve discrimination for incident CHD or CVD but led to modest improvements in risk reclassification. However, significant improvements in discrimination and risk reclassification were observed for the prediction of high CAC. The addition of 16 newly discovered SNPs to the 13-SNP GRS did not significantly modify these results.
A GRS comprising 13 SNPs associated with coronary disease is an independent predictor of cardiovascular events and of high CAC; it modestly improves risk reclassification for incident CHD and significantly improves discrimination for high CAC. The addition of recently discovered SNPs did not significantly improve the performance of this GRS.
Genetics; single nucleotide polymorphisms; cardiovascular disease; coronary heart disease; risk prediction; reclassification
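Computationally, the genetic risk scores described above are risk-allele counts, optionally weighted by published per-allele effect sizes. A sketch with hypothetical genotypes and illustrative log hazard ratios (not the actual SNPs or weights from the study):

```python
import numpy as np

# Rows: participants; columns: SNPs coded as risk-allele counts (0, 1, 2)
genotypes = np.array([
    [0, 1, 2, 1],
    [2, 0, 1, 1],
])
# Illustrative per-allele log hazard ratios (hypothetical GWAS estimates)
log_hr = np.log([1.10, 1.07, 1.15, 1.05])

unweighted = genotypes.sum(axis=1)   # simple allele-count score
weighted = genotypes @ log_hr        # effect-size-weighted score
print(unweighted)                    # [4 4]
print(weighted)
```

Note that two participants with the same allele count can receive different weighted scores, which is why weighted GRSs are generally preferred when reliable effect estimates are available.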
Net reclassification and integrated discrimination improvements have been proposed as alternatives to the increase in the AUC for evaluating improvement in the performance of risk assessment algorithms introduced by the addition of new phenotypic or genetic markers. In this paper, we demonstrate that in the setting of linear discriminant analysis, under the assumptions of multivariate normality, all three measures can be presented as functions of the squared Mahalanobis distance. This relationship affords an interpretation of the magnitude of these measures in the familiar language of effect size for uncorrelated variables. Furthermore, it allows us to conclude that net reclassification improvement can be viewed as a universal measure of effect size. Our theoretical developments are illustrated with an example based on the Framingham Heart Study risk assessment model for high risk men in primary prevention of cardiovascular disease.
AUC; biomarker; c statistic; model performance; risk prediction; ROC
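The link between discrimination and effect size noted above can be checked by simulation in the one-marker case, where the Mahalanobis distance reduces to the standardized mean difference δ and, under equal-variance normality, AUC = Φ(δ/√2); with several multivariate-normal predictors, δ generalizes to the square root of the squared Mahalanobis distance. An illustrative sketch (δ is an assumed value, not an estimate from the Framingham model):

```python
import numpy as np
from math import erf

rng = np.random.default_rng(2)
n = 200_000
delta = 1.2                              # assumed standardized effect size

# Scores for events are shifted by delta relative to non-events
scores_event = rng.normal(delta, 1.0, n)
scores_none = rng.normal(0.0, 1.0, n)

# Empirical AUC = P(event score > non-event score), from independent pairs
empirical_auc = (scores_event > scores_none).mean()
# Theoretical AUC = Phi(delta / sqrt(2)) = 0.5 * (1 + erf(delta / 2))
theoretical_auc = 0.5 * (1 + erf(delta / 2))
print(round(theoretical_auc, 3))         # 0.802
```

This closed form is what allows the magnitude of the AUC (and, via analogous formulas, the IDI and NRI) to be translated into the familiar language of effect size.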
The discovery and development of new biomarkers continues to be an exciting and promising field. Improvement of prediction of risk of developing disease is one of the key motivations in these pursuits. Appropriate statistical measures are necessary for drawing meaningful conclusions about the clinical usefulness of these new markers. In this review, we present several novel metrics proposed to serve this purpose. We use reclassification tables constructed based on clinically meaningful disease risk categories to discuss the concepts of calibration, risk separation, risk discrimination, and risk classification accuracy. We discuss the notion that the net reclassification improvement is a simple yet informative way to summarize information contained in risk reclassification tables. In the absence of meaningful risk categories, we suggest a ‘category-less’ version of the net reclassification improvement and integrated discrimination improvement as metrics to summarize the incremental value of new biomarkers. We also suggest that predictiveness curves be preferred to receiver-operating-characteristic curves as visual descriptors of a statistical model’s ability to separate predicted probabilities of disease events. Reporting of standard metrics, including measures of relative risk and the c statistic is still recommended. These concepts are illustrated with a risk prediction example using data from the Framingham Heart Study.
reclassification; risk prediction; NRI; IDI; calibration; discrimination
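The category-based net reclassification improvement summarized above can be read directly off a reclassification table; equivalently, it can be computed from risk-category cut points (here the hypothetical cuts 5% and 20%, standing in for clinically meaningful categories):

```python
import numpy as np

def categorical_nri(p_old, p_new, y, cuts):
    """Category-based NRI: net upward reclassification among events plus
    net downward reclassification among non-events."""
    c_old = np.digitize(p_old, cuts)
    c_new = np.digitize(p_new, cuts)
    up, down = c_new > c_old, c_new < c_old
    ev, ne = y == 1, y == 0
    return (up[ev].mean() - down[ev].mean()) + (down[ne].mean() - up[ne].mean())

# Every event moves up a category and every non-event moves down: NRI = 2
p_old = np.array([0.10, 0.10, 0.30, 0.30])
p_new = np.array([0.30, 0.30, 0.10, 0.10])
y = np.array([1, 1, 0, 0])
print(categorical_nri(p_old, p_new, y, [0.05, 0.20]))  # 2.0
```

Setting `cuts` to a fine grid (or replacing the category comparison with a direct comparison of probabilities) recovers the category-less version discussed in the text.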
Impaired vasodilator function is an early manifestation of coronary artery disease and may precede angiographic stenosis. It is unknown whether non-invasive assessment of coronary vasodilator function in patients with suspected or known coronary artery disease (CAD) carries incremental prognostic significance.
Methods and Results
2783 consecutive patients referred for rest/stress PET were followed for a median of 1.4 years (interquartile range: 0.7–3.2 years). The extent and severity of perfusion abnormalities were quantified by visual evaluation of myocardial perfusion images (MPI). Rest and stress myocardial blood flow (MBF) were calculated using factor analysis and a 2-compartment kinetic model, and were used to compute coronary flow reserve (CFR=stress/rest MBF). The primary endpoint was cardiac death. Overall 3-year cardiac mortality was 8.0%. The lowest tertile of CFR (<1.5) was associated with a 5.6-fold increase in the risk of cardiac death (95% CI 2.5–12.4, p<0.0001) compared to the highest tertile. Incorporation of CFR into cardiac death risk assessment models resulted in an increase in the c-index from 0.82 (95% CI 0.78–0.86) to 0.84 (95% CI 0.80–0.87, p=0.02) and in a net reclassification improvement (NRI) of 0.098 (95% CI 0.025–0.180). Addition of CFR resulted in correct reclassification of 34.8% of intermediate risk patients (NRI=0.487, 95% CI 0.262–0.731). Corresponding improvements in risk assessment for mortality from any cause were also demonstrated.
Non-invasive quantitative assessment of coronary vasodilator function using PET is a powerful, independent predictor of cardiac mortality in patients with known or suspected CAD and provides meaningful incremental risk stratification over clinical and gated MPI variables.
coronary disease; blood flow; imaging; atherosclerosis; ischemia
Carotid artery intima-media thickness (IMT) is a marker of cardiovascular disease associated with incident stroke. We study whether IMT rate-of-change is associated with stroke.
Materials and Methods
We studied 5028 white, Chinese, Hispanic, and African-American participants of the Multi-Ethnic Study of Atherosclerosis (MESA) who were free of cardiovascular disease. In this MESA IMT progression study, IMT rate-of-change (mm/year) was the difference in right common carotid artery (CCA) far-wall IMT (mm) divided by the interval between two ultrasound examinations (median interval of 32 months). CCA IMT was measured in a region free of plaque. Cardiovascular risk factors and baseline IMT were determined when IMT rate-of-change was measured. Multivariable Cox proportional hazards models generated hazard ratios (HRs) with cardiovascular risk factors, ethnicity, and education level/income as predictors.
There were 42 first-time strokes during a mean follow-up of 3.22 years (median, 3.0 years). Average age was 64.2 years, and 48% were male. In multivariable models, age (HR: 1.05 per year), systolic blood pressure (HR: 1.02 per mmHg), lower HDL cholesterol levels (HR: 0.96 per mg/dL), and IMT rate-of-change (HR: 1.23 per 0.05 mm/year; 95% CI: 1.02, 1.48) were significantly associated with incident stroke. The upper quartile of IMT rate-of-change had an HR of 2.18 (95% CI: 1.07, 4.46) compared with the lower three quartiles combined.
Common carotid artery IMT progression is associated with incident stroke in this cohort free of prevalent cardiovascular disease and atrial fibrillation at baseline.
Ultrasonography; Risk Factors; Carotid Arteries; Carotid Intima Media Thickness; stroke
Atrial fibrillation (AF) patterns and their relations with long‐term prognosis are uncertain, partly because pattern definitions are challenging to implement in longitudinal data sets. We developed a novel AF classification algorithm and examined AF patterns and outcomes in the community.
Methods and Results
We characterized AF patterns between 1980 and 2005 among Framingham Heart Study participants who survived ≥1 year after diagnosis. We classified participants based on their pattern within the first 2 years after detection as having AF without recurrence, recurrent AF, or sustained AF. We examined associations between AF patterns and 10‐year survival using proportional hazards regression. Among 612 individuals with AF, mean age was 72.5±10.8 years, and 53% were men. Of these, 478 participants had ≥2 electrocardiograms (median, 3; range, 2 to 23) within 2 years after initial AF and were classified as having AF without 2‐year recurrence (n=63, 10%), recurrent AF (n=162, 26%), or sustained AF (n=207, 34%); the remainder (n=46, 8%) were indeterminate. Of 432 classified participants, 363 died, 75 had strokes, and 110 were diagnosed with heart failure during the next 10 years. Relative to individuals without AF recurrence, the multivariable‐adjusted mortality was higher among people with recurrent AF (hazard ratio [HR], 2.04; 95% confidence interval [CI], 1.26 to 3.29) and sustained AF (HR, 2.36; 95% CI, 1.49 to 3.75).
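The classification logic can be sketched as follows. This is one plausible rendering in Python, not the authors' published algorithm: the real algorithm required ≥2 ECGs within 2 years of detection and presumably more detailed rules, whereas here each participant is reduced to the AF status (True/False) of each follow-up ECG.

```python
# Assumed, simplified AF-pattern classifier: `followup_ecgs_af` lists
# whether AF was present on each follow-up ECG within 2 years.
def classify_af_pattern(followup_ecgs_af):
    """Classify as 'sustained', 'recurrent', 'no recurrence',
    or 'indeterminate' when no follow-up ECGs are available."""
    if not followup_ecgs_af:
        return "indeterminate"   # too little information to classify
    if all(followup_ecgs_af):
        return "sustained"       # AF on every follow-up ECG
    if any(followup_ecgs_af):
        return "recurrent"       # AF recurs but sinus rhythm also seen
    return "no recurrence"       # sinus rhythm on all follow-up ECGs
```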
In our community‐based AF sample, only 10% had AF without early‐term (2‐year) recurrence. Compared with individuals without 2‐year AF recurrences, the 10‐year prognosis was worse for individuals with either sustained or recurrent AF. Our proposed AF classification algorithm may be applicable in longitudinal data sets.
atrial fibrillation; heart failure; mortality; pattern; risk; stroke
Salt sensitivity, a trait characterized by a pressor blood pressure (BP) response to increased dietary salt intake, has been associated with higher rates of cardiovascular target organ damage and cardiovascular disease events. Recent experimental studies have highlighted the potential role of the natriuretic peptides and aldosterone in mediating salt sensitivity.
Methods and Results
We evaluated 1575 non-hypertensive Framingham Offspring cohort participants (mean age 55±9 years, 58% women) who underwent routine measurements of circulating aldosterone and N-terminal proatrial natriuretic peptide (NT-ANP) and assessment of dietary sodium intake. Participants were categorized as potentially ‘salt-sensitive’ if their serum aldosterone was >sex-specific median but plasma NT-ANP was ≤sex-specific median value. Dietary sodium intake was categorized as lower versus higher (dichotomized at the sex-specific median). We used multivariable linear regression to relate presence of salt sensitivity (as defined above) to longitudinal changes (Δ) in systolic and diastolic BP on follow-up (median 4 years). Participants who were ‘salt-sensitive’ (N=437) experienced significantly greater increases in BP (Δ systolic, +4.4 and +2.3 mmHg; Δ diastolic, +1.9 and −0.3 mmHg; on a higher versus lower sodium diet, respectively) as compared to the other participants (Δ systolic, +2.8 and +1.0 mmHg; Δ diastolic, +0.5 and −0.2 mmHg; on higher versus lower sodium diet, respectively; p=0.033 and p=0.0127 for differences between groups in Δ systolic and Δ diastolic BP, respectively).
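The categorization rule used here (aldosterone above the sex-specific median with NT-ANP at or below the sex-specific median) is straightforward to express in code. The following Python sketch uses made-up values; variable names and sample data are illustrative, not from the study.

```python
import statistics

def median_by_sex(values, sexes):
    """Sex-specific medians: {sex: median of values in that group}."""
    return {s: statistics.median([v for v, g in zip(values, sexes) if g == s])
            for s in set(sexes)}

def flag_salt_sensitive(aldo, ntanp, sexes):
    """'Salt-sensitive' per the study's definition: aldosterone above the
    sex-specific median AND NT-ANP at or below the sex-specific median."""
    m_aldo = median_by_sex(aldo, sexes)
    m_ntanp = median_by_sex(ntanp, sexes)
    return [a > m_aldo[s] and n <= m_ntanp[s]
            for a, n, s in zip(aldo, ntanp, sexes)]

# Illustrative (made-up) values: aldosterone, NT-ANP, and sex
flags = flag_salt_sensitive([12, 8, 15, 5],
                            [300, 500, 250, 600],
                            ["F", "F", "M", "M"])
```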
Our observational data suggest that higher circulating aldosterone and lower NT-ANP concentrations may be markers of salt sensitivity in the community. Additional studies are warranted to confirm these observations.
salt sensitivity; aldosterone; N-terminal proatrial natriuretic peptide; ANP
Carotid intima-media thickness (IMT) is a marker of cardiovascular disease derived from ultrasound images of the carotid artery. In most outcome studies, human readers identify and trace the key IMT interfaces. We evaluate an alternate approach using automated edge detection.
We studied a subset of 5640 participants (mean age, 61.7 years; 48% men) of the Multi-Ethnic Study of Atherosclerosis (MESA), comprising White, Chinese, Hispanic, and African American participants in the MESA IMT progression study. Manually traced IMT (mt_IMT) and edge-detected IMT (ed_IMT) measurements of the far wall of the common carotid artery (CCA) served as outcome variables for multivariable linear regression models using Framingham cardiovascular risk factors and ethnicity as independent predictors.
Measurements of mt_IMT were obtainable in 99.9% (5633/5640) and of ed_IMT in 98.9% (5579/5640) of individuals. On average, ed_IMT was 0.19 mm larger than mt_IMT. Inter-reader systematic differences (bias) in IMT measurements were apparent for mt_IMT but not for ed_IMT. Based on complete data from 5538 individuals, associations of IMT with risk factors were stronger (p < 0.0001) for mt_IMT (model r2: 19.5%) than for ed_IMT (model r2: 18.5%).
We conclude that this edge-detection process generates IMT values equivalent to manually traced ones since it preserves key associations with cardiovascular risk factors. It also decreases inter-reader bias, potentially making it applicable for use in cardiovascular risk assessment.
Ultrasonography; Risk Factors; Carotid Arteries; Carotid Intima Media Thickness
For modern evidence-based medicine, a well thought-out risk scoring system for predicting the occurrence of a clinical event plays an important role in selecting prevention and treatment strategies. Such an index system is often established based on the subject’s “baseline” genetic or clinical markers via a working parametric or semi-parametric model. To evaluate the adequacy of such a system, C-statistics are routinely used in the medical literature to quantify the capacity of the estimated risk score to discriminate among subjects with different event times. The C-statistic provides a global assessment of a fitted survival model for the continuous event time, rather than focusing on the prediction of t-year survival for a fixed time. When the event time is possibly censored, however, the population parameters corresponding to the commonly used C-statistics may depend on the study-specific censoring distribution. In this article, we present a simple C-statistic without this shortcoming. The new procedure consistently estimates a conventional concordance measure that is free of censoring. We provide a large-sample approximation to the distribution of this estimator for making inferences about the concordance measure. Results from numerical studies suggest that the new procedure performs well in finite samples.
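The core idea of a censoring-free concordance estimator can be sketched in a few lines. The Python below is an assumed, simplified implementation (not the article's code): event-time pairs are weighted by 1/G(t)^2, where G is a Kaplan-Meier estimate of the censoring survival function, and comparisons are truncated at a horizon tau. Ties, variance estimation, and edge cases are omitted.

```python
# Simplified sketch of an inverse-probability-of-censoring-weighted
# (IPCW) concordance estimator. events[i] == 1 means the event was
# observed; events[i] == 0 means the time was censored.
def km_censoring_survival(times, events):
    """Kaplan-Meier curve of the censoring distribution
    (a 'censoring event' here is events[i] == 0); ties handled naively."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    surv, curve = 1.0, []
    at_risk = len(times)
    for i in order:
        if events[i] == 0:
            surv *= (at_risk - 1) / at_risk
        at_risk -= 1
        curve.append((times[i], surv))

    def G(t):  # left-continuous evaluation, G(t-)
        s = 1.0
        for u, v in curve:
            if u < t:
                s = v
            else:
                break
        return s
    return G

def ipcw_cindex(times, events, risks, tau):
    """Weighted fraction of usable pairs (i has an observed event before
    tau and before j's time) in which the higher risk score is concordant."""
    G = km_censoring_survival(times, events)
    num = den = 0.0
    for i in range(len(times)):
        if events[i] != 1 or times[i] >= tau:
            continue
        w = 1.0 / (G(times[i]) ** 2)   # IPCW weight for subject i
        for j in range(len(times)):
            if times[j] > times[i]:
                den += w
                num += w * (risks[i] > risks[j])
    return num / den
```

With uncensored data this reduces to the usual concordance fraction; the weighting matters only when censoring thins out the later follow-up.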
AUC; Cox’s proportional hazards model; Framingham risk score; ROC
Appropriate quantification of the added usefulness offered by new markers included in risk prediction algorithms is a problem of active research and debate. Standard methods, including statistical significance and the C statistic, are useful but not sufficient. The net reclassification improvement (NRI) offers a simple, intuitive way of quantifying the improvement offered by new markers and has been gaining popularity among researchers. However, several aspects of the NRI have not been studied in sufficient detail.
In this paper, we propose a prospective formulation of the NRI that applies immediately to survival and competing-risks data and allows easy weighting with observed or perceived costs. We address the issue of the number and choice of categories and their impact on the NRI. We contrast category-based NRI with a category-free version and conclude that NRIs cannot be compared across studies unless they are defined in the same manner. We discuss the impact of differing event rates when models are applied to different samples, or when definitions of events and durations of follow-up vary between studies. We also show how the NRI can be applied to case-control data. The concepts presented in the paper are illustrated with a Framingham Heart Study example.
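For the binary-outcome case, the category-free (continuous) NRI can be sketched directly from its definition: any increase in predicted risk under the new model counts as upward reclassification, any decrease as downward, and NRI = [P(up|event) - P(down|event)] + [P(down|nonevent) - P(up|nonevent)]. The Python below is a minimal sketch of that formula, not the paper's prospective survival formulation.

```python
# Minimal sketch of the category-free (continuous) NRI for a binary
# outcome; outcomes[i] is 1 for an event, 0 for a nonevent.
def continuous_nri(risk_old, risk_new, outcomes):
    up_e = down_e = up_n = down_n = 0
    n_event = sum(outcomes)
    n_nonevent = len(outcomes) - n_event
    for old, new, y in zip(risk_old, risk_new, outcomes):
        if new > old:           # reclassified upward
            up_e += y
            up_n += 1 - y
        elif new < old:         # reclassified downward
            down_e += y
            down_n += 1 - y
    nri_events = (up_e - down_e) / n_event
    nri_nonevents = (down_n - up_n) / n_nonevent
    return nri_events + nri_nonevents
```

The two components range over [-1, 1] each, so the continuous NRI lies in [-2, 2]; reporting the event and nonevent components separately is often more informative than the sum.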
In conclusion, NRI can be readily calculated for survival, competing risk, and case-control data, is more objective and comparable across studies using the category-free version, and can include relative costs for classifications. We recommend that researchers clearly define and justify the choices they make when choosing NRI for their application.
discrimination; model performance; NRI; risk prediction; biomarker