Cardiovascular disease (CVD) risk assessment tools such as the Framingham Risk Functions, often called Framingham Risk Scores, are commonly used to evaluate CVD risk among individuals in the general population. These functions are multivariate risk algorithms that combine data on CVD risk factors, such as sex, age, systolic blood pressure, total cholesterol level, high-density lipoprotein cholesterol level, smoking behavior, and diabetes status, to produce an estimate of the risk of developing CVD or a component of it (such as coronary heart disease, stroke, peripheral vascular disease, or heart failure) over a fixed period (eg, the next 10 years). These estimates of CVD risk are often major inputs in recommending drug treatments, such as agents to reduce cholesterol level. The Framingham Risk Functions are valid in diverse populations, at times requiring a calibration adjustment for proper applicability. With the realization that individuals with human immunodeficiency virus (HIV) infection often have elevated CVD risk factors, the evaluation of CVD risk for these individuals has become a serious concern. Researchers have recently developed new CVD risk functions specifically for HIV-infected patients and have also examined the extension of existing Framingham Risk Functions to the HIV-infected population. This article first briefly reviews the Framingham Study and risk functions, covering their objectives, their components, the evaluation of their performance, and their transportability and validity in non-Framingham populations. It then reviews the development of CVD risk functions for HIV-infected individuals and comments on the usefulness of extending the Framingham risk equation to the HIV-infected population and on the need to develop more-specific risk prediction equations uniquely tailored to this population.
This study compared heart rate variability (HRV) parameters in youth with and without type 1 diabetes and explored potential contributors of altered HRV.
RESEARCH DESIGN AND METHODS
HRV parameters were measured among 354 youth with type 1 diabetes (mean age 18.8 years, diabetes duration 9.8 years, and mean A1C 8.9%) and 176 youth without diabetes (mean age 19.2 years) participating in the SEARCH CVD study. Multiple linear regression was used to assess the relationship between diabetes status and HRV parameters, adjusting for covariates.
Compared with control subjects, youth with type 1 diabetes had reduced overall HRV (10.09 ms lower SD of NN intervals [SDNN]) and markers of parasympathetic loss (13.5 ms lower root mean square of successive differences of NN intervals [RMSSD] and 5.2 normalized units [n.u.] lower high-frequency [HF] power) with sympathetic override (5.2 n.u. higher low-frequency [LF] power), independent of demographic, anthropometric, and traditional cardiovascular risk factors. Older age, female sex, higher LDL cholesterol and triglyceride levels, and presence of microalbuminuria were independently associated with lower HRV but did not account for the observed differences between youth with and without diabetes. Youth with type 1 diabetes and A1C levels ≥7.5% had significantly worse HRV parameters than control subjects; in youth with optimal glycemic control (A1C <7.5%), however, HRV parameters did not differ significantly from those of control subjects.
Youth with type 1 diabetes have signs of early cardiac autonomic neuropathy: reduced overall HRV and parasympathetic loss with sympathetic override. The main driver of these subclinical abnormalities appears to be hyperglycemia.
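The time-domain measures reported above (SDNN and RMSSD) can be computed directly from a series of NN intervals. The sketch below uses a short hypothetical interval series, not SEARCH CVD data:

```python
import math

def sdnn(nn_ms):
    """Standard deviation of NN intervals (overall HRV), in ms."""
    mean = sum(nn_ms) / len(nn_ms)
    return math.sqrt(sum((x - mean) ** 2 for x in nn_ms) / (len(nn_ms) - 1))

def rmssd(nn_ms):
    """Root mean square of successive NN-interval differences (vagal tone), in ms."""
    diffs = [b - a for a, b in zip(nn_ms, nn_ms[1:])]
    return math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))

# Hypothetical 10-beat NN-interval series (ms); real recordings are much longer.
nn = [812, 845, 790, 860, 830, 805, 870, 820, 840, 815]
print(round(sdnn(nn), 1), round(rmssd(nn), 1))
```

Lower values of both measures, as observed in the diabetic group, indicate reduced overall variability and reduced beat-to-beat (parasympathetic) modulation.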
Direct-to-consumer marketing efforts, such as community-supported agriculture (CSA), have been proposed as a solution for disparities in fruit and vegetable consumption. Evaluations of such efforts have been limited. The objective of this study was to test the feasibility of a CSA intervention to increase household inventory of fruits and vegetables and fruit and vegetable consumption of residents of an underresourced community.
For this randomized, controlled feasibility study, we recruited 50 low-income women with children. Intervention participants (n = 25) were offered 5 educational sessions and a box of fresh produce for 16 weeks; control participants were offered neither the sessions nor the produce delivery. We collected data on participants’ home inventory of fruits and vegetables and on their consumption of fruits and vegetables at baseline (May 2012) and postintervention (August and September 2012).
Of 55 potential participants, 50 were enrolled and 44 were reached for follow-up. We observed a significant increase in the number of foods in the household inventory of fruits and vegetables in the intervention group compared with the control group. The intervention group reported greater increases in fruit and vegetable consumption; however, these increases did not reach statistical significance. Intervention participants picked up produce in 9.2 (standard deviation = 4.58) of 16 weeks; challenges included transportation and work schedules. Most participants (20 of 21) expressed interest in continued participation; all stated a willingness to pay $10 per week, and some were willing to pay as much as $25 per week.
CSA is a feasible approach for providing fresh fruits and vegetables to an underresourced community. Future studies should evaluate the impact of such a program in a larger sample and should take additional steps to facilitate participation.
We compared two protocols for measuring waist circumference (WC) in a sample of youth with diabetes.
Participants were enrolled in the SEARCH for Diabetes in Youth Study (SEARCH). WC was measured at least twice by the National Health and Nutrition Examination Survey (NHANES) protocol and twice by the World Health Organization (WHO) protocol. Method-specific averages were used in these analyses.
Among 6248 participants, the mean NHANES WC (76.3 cm) was greater than the mean WHO WC (71.9 cm). Discrepancies between protocols were greater for females than males, among older participants, and in those with higher body mass index (BMI). In both sexes and four age strata, the WCs using either method were highly correlated with BMI z-score. The within-method differences between the first and second measurements were similar for the two methods.
These analyses do not provide evidence that one of these two methods is more reproducible or is a better indicator of obesity as defined by BMI z-scores.
waist circumference measurement; method comparison; diabetes in youth
Several germline single nucleotide polymorphisms (SNPs) have been consistently associated with prostate cancer (PCa) risk.
To determine whether there is an improvement in PCa risk prediction by adding these SNPs to existing predictors of PCa.
Design, setting, and participants
Subjects included men in the placebo arm of the randomized Reduction by Dutasteride of Prostate Cancer Events (REDUCE) trial in whom germline DNA was available. All men had an initial negative prostate biopsy and underwent study-mandated biopsies at 2 yr and 4 yr. Predictive performance of baseline clinical parameters and/or a genetic score based on 33 established PCa risk-associated SNPs was evaluated.
Outcome measurements and statistical analysis
Areas under the receiver operating characteristic curve (AUC) were used to compare models with different predictors. Net reclassification improvement (NRI) and decision curve analysis (DCA) were used to assess changes in risk prediction on adding genetic markers.
Results and limitations
Among 1654 men, the genetic score was a significant predictor of positive biopsy, even after adjusting for known clinical variables and family history (p = 3.41 × 10⁻⁸). The AUC for the genetic score exceeded that of any other PCa predictor at 0.59. Adding the genetic score to the best clinical model improved the AUC from 0.62 to 0.66 (p < 0.001), reclassified PCa risk in 33% of men (NRI: 0.10; p = 0.002), resulted in higher net benefit from DCA, and decreased the number of biopsies needed to detect the same number of PCa instances. The benefit of adding the genetic score was greatest among men at intermediate risk (25th to 75th percentile). Similar results were found for high-grade (Gleason score ≥7) PCa. A major limitation of this study was its focus on white patients only.
Adding genetic markers to current clinical parameters may improve PCa risk prediction. The improvement is modest but may be helpful for better determining the need for repeat prostate biopsy. The clinical impact of these results requires further study.
Prostate cancer; Genetics; AUC; Detection rate; Reclassification; SNPs; Prospective study; Clinical trial
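As a rough illustration of the approach above, a weighted genetic risk score sums risk-allele counts (0/1/2) weighted by per-SNP log odds ratios, and the AUC can be estimated as a Mann–Whitney statistic. The weights and genotypes below are hypothetical, not the 33 REDUCE-study SNPs:

```python
import math

# Hypothetical per-allele odds ratios for three SNPs (not the published values).
log_or = [math.log(1.3), math.log(1.2), math.log(1.5)]

def genetic_score(genotypes):
    """Weighted sum of risk-allele counts."""
    return sum(w * g for w, g in zip(log_or, genotypes))

def auc(case_scores, control_scores):
    """Mann-Whitney estimate: probability a case outscores a control."""
    wins = sum((c > ctl) + 0.5 * (c == ctl)
               for c in case_scores for ctl in control_scores)
    return wins / (len(case_scores) * len(control_scores))

cases    = [genetic_score(g) for g in [(2, 1, 2), (1, 2, 1), (2, 2, 1)]]
controls = [genetic_score(g) for g in [(0, 1, 0), (1, 0, 1), (2, 0, 2)]]
print(auc(cases, controls))
```

An AUC near 0.59, as reported for the genetic score, means a randomly chosen case outranks a randomly chosen control 59% of the time.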
Smoking cessation reduces the risks of cardiovascular disease (CVD), but weight gain that follows quitting smoking may weaken the CVD benefit of quitting.
To test the hypothesis that weight gain following smoking cessation does not attenuate the benefits of smoking cessation among people with and without diabetes.
Design, Setting, and Participants
Prospective community-based cohort study using data from the Framingham Offspring Study collected from 1984 to 2011. At each 4-year exam, self-reported smoking status was assessed and categorized as smoker, recent quitter (≤ 4 years), long-term quitter (> 4 years), and non-smoker. Pooled Cox proportional hazards models were used to estimate the association between quitting smoking and 6-year CVD events and to test whether 4-year change in weight following smoking cessation modified the association between smoking cessation and CVD events.
Main outcome measure
Incidence over 6 years of total CVD events, comprising coronary heart disease, cerebrovascular events, peripheral artery disease, and congestive heart failure.
After a mean follow-up of 25 years (SD, 9.6), 631 CVD events occurred among 3251 participants. Median 4-year weight gain was greater for recent quitters without diabetes (2.7 kg, interquartile range [IQR] −0.5 to 6.4) and with diabetes (3.6 kg, IQR −1.4 to 8.2) than for long-term quitters (0.9 kg, IQR −1.4 to 3.2 and 0.0 kg, IQR −3.2 to 3.2, respectively, p<0.001). Among people without diabetes, the age- and sex-adjusted incidence rate of CVD was 5.9/100 person-exams (95% confidence interval [CI] 4.9-7.1) in smokers, 3.2/100 person-exams (95% CI 2.1-4.5) in recent quitters, 3.1/100 person-exams (95% CI 2.6-3.7) in long-term quitters, and 2.4/100 person-exams (95% CI 2.0-3.0) in non-smokers. After adjustment for CVD risk factors, compared with smokers, recent quitters had a hazard ratio (HR) for CVD of 0.47 (95% CI, 0.23-0.94) and long-term quitters had an HR of 0.46 (95% CI, 0.34-0.63); these associations changed only minimally after further adjustment for weight change. Among people with diabetes, point estimates were similar but did not reach statistical significance.
Conclusions and Relevance
In this community-based cohort, smoking cessation was associated with a lower risk of CVD events among participants without diabetes, and weight gain that occurred following smoking cessation did not modify this association. This supports a net cardiovascular benefit of smoking cessation despite subsequent weight gain.
Cluster analysis is a valuable tool for exploring the health consequences of consuming different dietary patterns. We used this approach to examine the cross-sectional relationship between dietary patterns and insulin resistance phenotypes, including waist circumference, body mass index (BMI), fasting insulin, 2-h post-challenge insulin, insulin sensitivity index (ISI0,120), HDL cholesterol, triacylglycerol and blood pressure, using data from the fifth examination cycle of the Framingham Offspring Study. Among 2,875 participants without diabetes, we identified four dietary patterns based on the predominant sources of energy: “Fruits, Reduced Fat Dairy and Whole Grains”, “Refined Grains and Sweets”, “Beer”, and “Soda”. After adjusting for multiple comparisons and potential confounders, compared with the “Fruits, Reduced Fat Dairy and Whole Grains” pattern, the “Refined Grains and Sweets” pattern had significantly higher mean waist circumference (92.4 versus 90.5 cm, P=0.008) and BMI (27.3 versus 26.6 kg/m2, P=0.02); the “Soda” pattern had significantly higher mean fasting insulin concentration (31.3 versus 28.0 μU/ml, P≤0.001); and the “Beer” pattern had significantly higher mean HDL cholesterol concentration (1.46 versus 1.31 mmol/l, P<0.001). No associations were observed between dietary patterns and ISI0,120, triacylglycerol, or systolic or diastolic blood pressure. Our findings suggest that consumption of a diet rich in fruits, vegetables, whole grains and reduced fat dairy protects against insulin resistance phenotypes and that displacing these healthy choices with refined grains, high fat dairy, sweet baked foods, candy and sugar-sweetened soda promotes insulin-resistant phenotypes.
Dietary patterns; cluster analysis; insulin resistance phenotypes; Framingham Offspring Study
The discrimination of a risk prediction model measures that model's ability to distinguish between subjects with and without events. The area under the receiver operating characteristic curve (AUC) is a popular measure of discrimination. However, the AUC has recently been criticized for its insensitivity in model comparisons in which the baseline model has performed well. Thus, 2 other measures have been proposed to capture improvement in discrimination for nested models: the integrated discrimination improvement and the continuous net reclassification improvement. In the present study, the authors use mathematical relations and numerical simulations to quantify the improvement in discrimination offered by candidate markers of different strengths as measured by their effect sizes. They demonstrate that the increase in the AUC depends on the strength of the baseline model, which is true to a lesser degree for the integrated discrimination improvement. On the other hand, the continuous net reclassification improvement depends only on the effect size of the candidate variable and its correlation with other predictors. These measures are illustrated using the Framingham model for incident atrial fibrillation. The authors conclude that the increase in the AUC, integrated discrimination improvement, and net reclassification improvement offer complementary information and thus recommend reporting all 3 alongside measures characterizing the performance of the final model.
area under curve; biomarkers; discrimination; risk assessment; risk factors
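The two reclassification-based measures discussed above can be sketched directly from paired predicted probabilities under the baseline and extended (nested) models. The probabilities below are hypothetical illustrations:

```python
def idi(p_base, p_new, events):
    """IDI = (mean gain in predicted risk for events) - (mean gain for non-events)."""
    gain = [n - b for b, n in zip(p_base, p_new)]
    ev = [g for g, e in zip(gain, events) if e]
    ne = [g for g, e in zip(gain, events) if not e]
    return sum(ev) / len(ev) - sum(ne) / len(ne)

def continuous_nri(p_base, p_new, events):
    """NRI(>0): net upward movement for events minus net upward movement for non-events."""
    def net_up(pairs):
        up = sum(n > b for b, n in pairs)
        down = sum(n < b for b, n in pairs)
        return (up - down) / len(pairs)
    ev = [(b, n) for b, n, e in zip(p_base, p_new, events) if e]
    ne = [(b, n) for b, n, e in zip(p_base, p_new, events) if not e]
    return net_up(ev) - net_up(ne)

# Hypothetical predicted risks for six subjects under the two nested models.
p_base = [0.10, 0.20, 0.30, 0.40, 0.15, 0.25]
p_new  = [0.05, 0.30, 0.25, 0.55, 0.10, 0.20]
events = [0, 1, 0, 1, 0, 1]
print(idi(p_base, p_new, events), continuous_nri(p_base, p_new, events))
```

As the abstract notes, the continuous NRI depends only on the direction of risk movement, whereas the IDI also reflects the magnitude of the risk changes.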
Common carotid artery (CCA) intima-media thickness (cIMT), a measure of atherosclerosis, varies between peak-systole (PS) and end-diastole (ED). This difference might affect cardiovascular risk assessment.
Materials and methods
IMT measurements of the right and left CCA were synchronized with an electrocardiogram: R-wave for ED and T-wave for PS. IMT was measured in 2930 members of the Framingham Offspring Study. Multivariable regression models were generated with ED-IMT, PS-IMT and change in IMT as dependent variables and Framingham risk factors as independent variables. ED-IMT estimates were compared to the upper quartile of IMT based on normative data obtained at PS.
The average age of our population was 57.9 years. The average difference in IMT during the cardiac cycle was 0.037 mm (95% CI: 0.035–0.038 mm). ED-IMT and PS-IMT had similar associations with Framingham risk factors (total R² = 0.292 versus 0.275) and were significantly associated with all risk factors. In a fully adjusted multivariable model, a thinner IMT at peak-systole was associated with pulse pressure (p < 0.0001), LDL-cholesterol (p = 0.0064), and age (p = 0.046), but with no other risk factors. Performing ED-IMT measurements while using upper-quartile PS-IMT normative data led to an inappropriate 42.1% increase in the number of individuals placed in the fourth IMT quartile (the high cardiovascular risk category).
The difference in IMT between peak-systole and end-diastole is associated with pulse pressure, LDL-cholesterol, and age. In our study, the mean IMT difference during the cardiac cycle led to a 42.1% overestimation of the number of individuals at high risk for cardiovascular disease.
Ultrasonics; Risk Factors; Carotid Arteries; Blood Pressure; systole; diastole
Insulin resistance is thought to mediate the association between obesity and colorectal neoplasia, but no prior studies have assessed stimulated insulin sensitivity as a risk factor for colorectal neoplasia. This prospective study examined the association between insulin sensitivity measured directly using the frequently sampled intravenous glucose tolerance test (FSIGT) and later risk of colorectal adenomas. Among participants with a range of glucose tolerance levels enrolled in the Insulin Resistance Atherosclerosis Study, colonoscopies were conducted on 600 participants aged ≥ 50 yr, regardless of symptoms, about 10 yr after the first FSIGT and 5 yr after the second. Multiple logistic regression analyses were used. Within this cohort, diabetes was not associated with colorectal adenoma risk [~10 yr prior to colonoscopy adjusted odds ratio (ORadj) 1.00; 95% confidence interval (CI), 0.62–1.62 or ~5 yr prior to colonoscopy ORadj 0.96; 95% CI, 0.62–1.50]. Among non-diabetic participants, insulin sensitivity was not associated with colorectal adenoma risk at either prior study visit [lowest vs. highest insulin sensitivity, ~10 yr prior to colonoscopy ORadj 0.93; 95% CI 0.50–1.71 and ~5 yr prior to colonoscopy ORadj 0.74; 95% CI, 0.38–1.46]. These results suggest that factors other than insulin sensitivity mediate the relationship between obesity and colorectal neoplasia.
Background / Objectives
Diet quality indices are increasingly used in nutrition epidemiology as dietary exposures in relation to health outcomes. However, literature on long-term stability of these indices is limited. We aimed to assess the stability of the validated Framingham Nutritional Risk Score (FNRS) and its component nutrients over 8 years as well as the validity of the follow-up FNRS.
Subjects / Methods
Framingham Offspring/Spouse Study women and men (n=1734) aged 22-76 years were followed over 8 years. Individuals' nutrient intake and nutritional risk scores were assessed using 3-day dietary records administered at baseline (1984-1988) and at follow-up (1992-1996). Agreement between baseline and follow-up FNRS and nutrient intakes was evaluated using the Bland-Altman method; stability was assessed using intra-class correlation (ICC) and weighted Kappa statistics. The effect of diet quality (as assessed by the FNRS) on cardiometabolic risk factors was evaluated using ANCOVA.
Modest changes from baseline (≤15%) were observed in nutrient intake. Stability coefficients for the FNRS (ICC: women=0.49; men=0.46; P<0.0001) and many nutrients (ICC ≥0.3) were moderate. Over half of women and men (58%) remained in the same or a contiguous FNRS quartile from baseline to follow-up, and few (3-4%) shifted by more than one quartile. The FNRS was directly associated with BMI in women (P<0.01) and with HDL-cholesterol among both women (P<0.001) and men (P<0.01).
The FNRS and its constituent nutrients remained relatively stable over 8 years of follow-up. The stability of diet quality has implications for prospective epidemiological investigations.
long-term stability; dietary quality indices; nutrients
Oxidative damage has been implicated in carcinogenesis. We hypothesized that elevated systemic oxidative status would be associated with later occurrence of colorectal adenomatous polyps, a precursor of colorectal cancer.
We examined the prospective association between four systemic markers of oxidative status and colorectal adenomatous polyps within a non-diabetic sub-cohort of the Insulin Resistance Atherosclerosis Study (IRAS) (n=425). Urine samples were collected in 1992–1994, and colorectal adenoma prevalence was assessed in 2002–2004. The oxidative status markers assessed included four F2-isoprostanes (F2-IsoPs) from classes III and IV: iPF2α-III, 2,3-dinor-iPF2α-III (a metabolite of iPF2α-III), iPF2α-VI, and 8,12-iso-iPF2α-VI. All biomarkers were quantified using liquid chromatography–tandem mass spectrometry. Prospective associations were assessed using multivariate logistic regression analysis.
The adjusted ORs (95% CIs) for occurrence of colorectal adenomatous polyps, scaled to 1 SD of the F2-IsoP distribution, were 1.16 (0.88–1.50), 0.88 (0.63–1.17), 1.04 (0.80–1.34), and 1.16 (0.90–1.48) for iPF2α-III, iPF2α-VI, 8,12-iso-iPF2α-VI, and 2,3-dinor-iPF2α-III, respectively.
The lack of association between F2-IsoPs and adenomatous polyps does not support the hypothesis that elevated oxidative status is associated with colorectal adenomatous polyp occurrence during a 10-year period of follow-up.
oxidative stress; biomarkers; F2-isoprostanes; adenomatous polyps; adenoma; colorectal cancer; epidemiology
Obesity affects one in three American adult women and is associated with overall mortality and major morbidities. A composite diet index to evaluate total diet quality may better assess the complex relationship between diet and obesity, providing insights for nutrition interventions. The purpose of the present investigation was to determine whether diet quality, defined according to the previously validated Framingham nutritional risk score (FNRS), was associated with the development of overweight or obesity in women. Over 16 years, we followed 590 normal-weight women (BMI < 25 kg/m2), aged 25 to 71 years, of the Framingham Offspring and Spouse Study who presented without CVD, cancer or diabetes at baseline. The nineteen-nutrient FNRS, derived from mean ranks of nutrient intakes from 3-day dietary records, was used to assess nutritional risk. The outcome was development of overweight or obesity (BMI ≥ 25 kg/m2) during follow-up. In a stepwise multiple logistic regression model adjusted for age, physical activity and smoking status, the FNRS was directly related to overweight or obesity (P for trend = 0·009). Women with lower diet quality (i.e. higher nutritional risk scores) were significantly more likely to become overweight or obese (OR 1·76; 95% CI 1·16, 2·69) compared with those with higher diet quality. Diet quality, assessed using a comprehensive composite nutritional risk score, predicted development of overweight or obesity. This finding suggests that overall diet quality should be considered a key component in planning and implementing programmes for obesity risk reduction and treatment recommendations.
Diet quality; Nutritional risk score; Obesity; BMI; Dietary quality index
Heart failure (HF) is a major public health burden worldwide. Of patients presenting with HF, 30–55% have a preserved ejection fraction (HFPEF) rather than a reduced ejection fraction (HFREF). Our objective was to examine discriminating clinical features in new-onset HFPEF vs. HFREF.
Methods and results
Of 712 participants in the Framingham Heart Study (FHS) hospitalized for new-onset HF between 1981 and 2008 (median age 81 years, 53% female), 46% had HFPEF (EF >45%) and 54% had HFREF (EF ≤45%). In multivariable logistic regression, coronary heart disease (CHD), higher heart rate, higher potassium, left bundle branch block, and ischaemic electrocardiographic changes increased the odds of HFREF; female sex and atrial fibrillation increased the odds of HFPEF. In aggregate, these clinical features predicted HF subtype with good discrimination (c-statistic 0.78). Predictors were examined in the Enhanced Feedback for Effective Cardiac Treatment (EFFECT) study. Of 4436 HF patients (median age 75 years, 47% female), 32% had HFPEF and 68% had HFREF. Distinguishing clinical features were consistent between FHS and EFFECT, with comparable discrimination in EFFECT (c-statistic 0.75). In exploratory analyses examining the traits of the intermediate EF group (EF 35–55%), CHD predisposed to a decrease in EF, whereas other clinical traits showed an overlapping spectrum between HFPEF and HFREF.
Multiple clinical characteristics at the time of initial HF presentation differed in participants with HFPEF vs. HFREF. While CHD was clearly associated with a lower EF, overlapping characteristics were observed in the middle of the left ventricular EF range spectrum.
Heart failure; Epidemiology; Risk factors; Ejection fraction
To determine change in the prevalence of functional limitations and physical disability in community-dwelling elders across three decades.
We studied original participants of the Framingham Study, aged 79 to 88 years, at exam 15 (1977–1979; 177 women, 103 men), exam 20 (1988–1990; 159 women, 98 men), and exam 25 (1997–1999; 174 women, 119 men). Outcomes were self-reported: 1) functional limitation, defined using the Nagi scale, and 2) physical disability, defined using the Rosow-Breslau and Katz scales.
Functional limitations declined across examinations from 74.6% to 60.5% to 37.9% (p<0.001) in women and from 54.2% to 37.8% to 27.8% (p<0.001) in men. Physical disability declined from 74.5% to 48.5% to 34.6% (p<0.001) in women and from 42.3% to 33.3% to 22.8% (p=0.009) in men. Women had a greater decline in disability than men (p=0.03). In women, improvements in functional limitations (p=0.05) were greater from exam 20 to 25, whereas improvements in physical disability (p=0.02) were greater from exam 15 to 20. Improvements in function were constant across the three examinations in men.
Among community-dwelling elders the prevalence of functional limitations and physical disability declined significantly from the 1970s to the 1990s.
functional limitations; physical disability; trends; elders
The area under the receiver operating characteristic curve (AUC of the ROC) is a widely used measure of discrimination in risk prediction models. Routinely, the Mann–Whitney statistic is used as an estimator of the AUC, while the change in AUC is tested by the DeLong test. However, very often, in settings where the model is developed and tested on the same dataset, the added predictor is statistically significantly associated with the outcome but fails to produce a significant improvement in the AUC. No conclusive resolution exists to explain this finding. In this paper, we show that the reason lies in the inappropriate application of the DeLong test in the setting of nested models. Using numerical simulations and a theoretical argument based on generalized U-statistics, we show that if the added predictor is not statistically significantly associated with the outcome, the null distribution is non-normal, contrary to the assumption of the DeLong test. Our simulations of different scenarios show that the resulting loss of power makes such misuse of the DeLong test conservative for small and moderate effect sizes. This problem does not arise for predictors that are associated with the outcome or for non-nested models. We suggest that for nested models, only the test of association be performed for the new predictors; if the result is significant, the change in AUC should be estimated with an appropriate confidence interval, which can be based on the DeLong approach.
AUC; DeLong test; logistic regression; U-statistics; discrimination; risk prediction
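A minimal simulation of the nested-model null setting described above, with one simplifying assumption: the extended model is refit by ordinary least squares (a linear-probability model) rather than logistic regression, which is enough to show the qualitative point. When a fitted pure-noise predictor is added, the in-sample change in AUC piles up at or above zero, so its null distribution is skewed rather than normal:

```python
import math
import random

def auc(scores, y):
    """Mann-Whitney estimate of the AUC."""
    pos = [s for s, e in zip(scores, y) if e]
    neg = [s for s, e in zip(scores, y) if not e]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def ols(X, y):
    """Solve the normal equations X'X b = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)]
         + [sum(r[i] * yi for r, yi in zip(X, y))] for i in range(k)]
    for i in range(k):                      # forward elimination
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            A[j] = [a - f * b for a, b in zip(A[j], A[i])]
    b = [0.0] * k
    for i in reversed(range(k)):            # back substitution
        b[i] = (A[i][k] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

random.seed(1)
n, deltas = 100, []
for _ in range(300):
    x1 = [random.gauss(0, 1) for _ in range(n)]   # baseline predictor with real signal
    z = [random.gauss(0, 1) for _ in range(n)]    # pure-noise candidate predictor
    y = [int(random.random() < 1 / (1 + math.exp(-v))) for v in x1]
    base = auc(x1, y)                             # baseline model: x1 alone
    bhat = ols([[1.0, a, c] for a, c in zip(x1, z)], y)
    ext = [bhat[0] + bhat[1] * a + bhat[2] * c for a, c in zip(x1, z)]
    deltas.append(auc(ext, y) - base)

mean_delta = sum(deltas) / len(deltas)
frac_neg = sum(d < 0 for d in deltas) / len(deltas)
print(mean_delta, frac_neg)
```

Because the in-sample ΔAUC is almost never negative under this null, a test that assumes a symmetric normal null distribution, as the DeLong test does, is miscalibrated in the nested setting.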
Cardiovascular risk prediction functions offer an important diagnostic tool for clinicians and patients themselves. They are usually constructed with the use of parametric or semi-parametric survival regression models. It is essential to be able to evaluate the performance of these models, preferably with summaries that offer natural and intuitive interpretations. The concept of discrimination, popular in the logistic regression context, has been extended to survival analysis. However, the extension is not unique. In this paper, we define discrimination in survival analysis as the model’s ability to separate those with longer event-free survival from those with shorter event-free survival within some time horizon of interest. This definition remains consistent with that used in logistic regression, in the sense that it assesses how well the model-based predictions match the observed data. Practical and conceptual examples and numerical simulations are employed to examine four C statistics proposed in the literature to evaluate the performance of survival models. We observe that they differ in the numerical values and aspects of discrimination that they capture. We conclude that the index proposed by Harrell is the most appropriate to capture discrimination described by the above definition. We suggest researchers report which C statistic they are using, provide a rationale for their selection, and be aware that comparing different indices across studies may not be meaningful.
discrimination; risk function; censoring; AUC; concordance
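Harrell's index, recommended above, can be sketched as the fraction of usable pairs (those in which the subject with shorter follow-up had an observed event) where that subject also received the higher predicted risk. This is a simplified version that skips pairs tied on time; the times, event indicators, and risks below are hypothetical:

```python
def harrell_c(time, event, risk):
    """Harrell's concordance index for right-censored survival data (simplified)."""
    conc = ties = usable = 0
    n = len(time)
    for i in range(n):
        for j in range(i + 1, n):
            # order the pair so subject a has the shorter follow-up time
            a, b = (i, j) if time[i] < time[j] else (j, i)
            if time[a] == time[b] or not event[a]:
                continue                      # pair not usable under censoring
            usable += 1
            if risk[a] > risk[b]:
                conc += 1
            elif risk[a] == risk[b]:
                ties += 1
    return (conc + 0.5 * ties) / usable

# Hypothetical follow-up times (years), event flags (1=event, 0=censored), risks.
time  = [2, 4, 5, 7, 9, 10]
event = [1, 0, 1, 1, 0, 1]
risk  = [0.9, 0.3, 0.7, 0.8, 0.2, 0.4]
print(harrell_c(time, event, risk))
```

A value near 1 indicates that subjects with shorter event-free survival are consistently assigned higher predicted risks, which matches the definition of discrimination adopted in the abstract.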
Because of concerns about the safety and environmental impact of mercury, aneroid sphygmomanometers have replaced mercury-filled devices for blood pressure (BP) measurements. Despite this change, few studies have compared BP measurements between the 2 devices.
The SEARCH for Diabetes in Youth Study conducted a comparison of aneroid and mercury devices among 193 youth with diabetes (48% boys, aged 12.9 ± 3.7 years; 89% type 1). Statistical analyses included estimating Pearson correlation coefficients, Bland-Altman plots, paired t tests, and fitting regression models, both overall and stratified by age (<10 vs ≥10–18 years).
Mean mercury and aneroid systolic and diastolic BPs were highly correlated. For the entire group, there was no significant difference in mean systolic BP using the aneroid device, but there was a −1.53 ± 5.06 mm Hg difference in mean diastolic BP. When stratified by age, a lower diastolic BP (−1.78 ± 5.2 mm Hg) was seen in those ≥10 to 18 years using the aneroid device. No differences in systolic BP were observed, and there were no differences in BP by device in individuals <10 years. Regression analyses did not identify any explanatory variables.
Although a small discrepancy between diastolic BP measurements from aneroid versus mercury devices exists, this variation is unlikely to be clinically significant, suggesting that either device could be used in research or clinical settings.
pediatrics; blood pressure
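The Bland-Altman analysis used in the device comparison above summarizes agreement as the mean paired difference (bias) plus limits of agreement at ±1.96 SD of the differences. The paired mm Hg readings below are hypothetical, not SEARCH data:

```python
import math

def bland_altman(a, b):
    """Return (bias, lower limit, upper limit) for paired measurements a vs b."""
    d = [x - y for x, y in zip(a, b)]
    mean_d = sum(d) / len(d)
    sd = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (len(d) - 1))
    return mean_d, mean_d - 1.96 * sd, mean_d + 1.96 * sd

# Hypothetical paired diastolic BP readings (mm Hg) from the two devices.
aneroid = [68, 72, 65, 80, 74, 70, 77, 69]
mercury = [70, 73, 68, 81, 75, 73, 78, 70]
bias, lo, hi = bland_altman(aneroid, mercury)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```

A small negative bias of roughly the magnitude reported (−1.5 mm Hg) with narrow limits of agreement would support the abstract's conclusion that the discrepancy is unlikely to be clinically significant.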
Levels of four urinary F2-isoprostanes (F2-IsoPs) were examined in a large sample of the Insulin Resistance Atherosclerosis Study (IRAS) multiethnic cohort: 237 African Americans (AAs), 342 non-Hispanic Whites (NHWs), and 275 Hispanic Whites (HWs). The F2-IsoP isomers – iPF2α-III, 2,3-dinor-iPF2α-III, iPF2α-VI, and 8,12-iso-iPF2α-VI – were measured in 854 urine samples using liquid chromatography with tandem mass spectrometry detection. In AAs, levels of all four F2-IsoPs were lower than in NHWs and HWs (p-values < 0.05). When stratified by BMI, this gap was not observed among participants with normal BMI but appeared among overweight participants and widened among obese participants. Examining the slopes of the associations between BMI and F2-IsoPs showed no association between these variables among AAs (p-values > 0.2) and positive associations among Caucasians (p-values < 0.05). Taking into account that positive cross-sectional associations between systemic F2-IsoP levels and BMI have been consistently demonstrated in many study populations, the lack of such an association among AAs reveals a new facet of racial/ethnic differences in obesity-related risk profiles.
F2-isoprostanes; BMI; racial differences; epidemiology
Pompe disease is a rare metabolic myopathy for which disease-specific enzyme replacement therapy (ERT) has been available since 2006. ERT has shown efficacy concerning muscle strength and pulmonary function in adult patients. However, no data on the effect of ERT on the survival of adult patients are currently available. The aim of this study was to assess the effect of ERT on survival in adult patients with Pompe disease.
Data were collected as part of an international observational study conducted between 2002 and 2011, in which patients were followed on an annual basis. Time-dependent Cox’s proportional hazards models were used for univariable and multivariable analyses.
Overall, 283 adult patients with a median age of 48 years (range, 19 to 81 years) were included in the study. Seventy-two percent of patients started ERT at some point during follow-up, and 28% never received ERT. During follow-up (median, 6 years; range, 0.04 to 9 years), 46 patients died, 28 (61%) of whom had never received ERT. After adjustment for age, sex, country of residence, and disease severity (based on wheelchair and ventilator use), ERT was associated with improved survival (hazard ratio for death, 0.41; 95% CI, 0.19 to 0.87).
This prospective study was the first to demonstrate the positive effect of ERT on survival in adults with Pompe disease. Given the relatively recent registration of ERT for Pompe disease, these findings further support its beneficial impact in adult patients.
Pompe disease; Survival; Acid maltase deficiency; Lysosomal storage disease; Glycogen storage disease type II; Enzyme replacement therapy; Alglucosidase alfa
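The adjusted hazard ratio reported above is obtained by exponentiating a Cox regression coefficient, with the 95% CI given by exp(beta ± 1.96·SE). A minimal sketch, using an illustrative coefficient and standard error chosen to roughly reproduce the reported figures (these are not the study's fitted values):

```python
import math

def hr_with_ci(beta, se, z=1.96):
    """Hazard ratio and 95% CI from a Cox log-hazard coefficient."""
    hr = math.exp(beta)
    lo = math.exp(beta - z * se)
    hi = math.exp(beta + z * se)
    return hr, lo, hi

# Illustrative values only, chosen to approximate HR 0.41 (0.19 to 0.87).
hr, lo, hi = hr_with_ci(beta=-0.89, se=0.385)
print(round(hr, 2), round(lo, 2), round(hi, 2))  # 0.41 0.19 0.87
```

Because the CI excludes 1, the association between ERT and survival is statistically significant at the 5% level.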
Stratification of individuals at risk for chronic kidney disease may allow optimization of preventive measures to reduce disease incidence and complications. We sought to develop a risk score that estimates an individual’s absolute risk of incident chronic kidney disease.
Framingham Heart Study participants free of baseline chronic kidney disease, who attended a baseline examination in 1995–1998 and follow-up in 2005–2008, were included in the analysis (n=2,490). Chronic kidney disease was defined as an estimated glomerular filtration rate <60 mL/min/1.73 m2 using the Modification of Diet in Renal Disease (MDRD) equation. Participants were assessed for the development of chronic kidney disease at 10 years of follow-up. Stepwise logistic regression was used to identify chronic kidney disease risk factors, and these were used to construct a risk score predicting 10-year chronic kidney disease risk. Performance characteristics were assessed using calibration and discrimination measures. The final model was externally validated in the bi-ethnic Atherosclerosis Risk in Communities (ARIC) Study (n=1,777).
There were 1,171 men and 1,319 women at baseline, and the mean age was 57.1 years. At follow-up, 9.2% (n=229) had developed chronic kidney disease. Age, diabetes, hypertension, baseline estimated glomerular filtration rate, and albuminuria were independently associated with incident chronic kidney disease (p<0.05), and these covariates were incorporated into a risk function (c-statistic 0.813). In external validation in the ARIC Study, the c-statistic was 0.79 in whites (n=1,353) and 0.75 in blacks (n=424).
Risk stratification for chronic kidney disease is achievable using a risk score derived from clinical factors that are readily accessible in primary care. The utility of this score in identifying individuals in the community at high risk of chronic kidney disease warrants further investigation.
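The c-statistics quoted above measure discrimination: the probability that a randomly chosen participant who developed chronic kidney disease received a higher predicted risk than one who did not. A minimal concordance calculation on hypothetical predicted risks (not the fitted Framingham risk function):

```python
def c_statistic(scores, events):
    """Concordance: fraction of event/non-event pairs in which the event
    case has the higher score, counting ties as half-credit."""
    event_scores = [s for s, e in zip(scores, events) if e]
    nonevent_scores = [s for s, e in zip(scores, events) if not e]
    concordant = 0.0
    for se in event_scores:
        for sn in nonevent_scores:
            if se > sn:
                concordant += 1.0
            elif se == sn:
                concordant += 0.5
    return concordant / (len(event_scores) * len(nonevent_scores))

# Hypothetical 10-year predicted risks and observed CKD outcomes.
risks = [0.90, 0.20, 0.30, 0.40]
ckd   = [1, 0, 1, 0]
print(c_statistic(risks, ckd))  # 0.75
```

A value of 0.5 indicates no better than chance discrimination and 1.0 perfect discrimination, so c-statistics of 0.75–0.81 indicate good separation of future cases from non-cases.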
Little is known about the familial aggregation of intermittent claudication (IC). Our objective was to examine whether parental IC increased adult offspring risk of IC independent of established cardiovascular risk factors. We evaluated Offspring cohort participants of the Framingham Heart Study (FHS) who were 30 years or older, free of cardiovascular disease (CVD), and had both parents enrolled in the FHS (n=2,970 unique participants; 53% women). Pooled proportional hazards regression was used to examine whether the 12-year risk of incident IC in offspring participants was associated with parental IC, adjusting for age, sex, diabetes, smoking, systolic blood pressure, total cholesterol, high-density lipoprotein (HDL) cholesterol, and antihypertensive and lipid treatment. Among 909 person-exams in the parental IC history group and 5,397 person-exams in the no-parental-history group, there were 101 incident IC events (29 with parental IC history, 72 without) during follow-up. Age- and sex-adjusted 12-year cumulative incidence rates per 1,000 person-years were 5.08 (95% CI: 2.74, 7.33) and 2.34 (95% CI: 1.46, 3.19) in participants with and without a parental IC history, respectively. Parental history of IC significantly increased the risk of incident IC in offspring (multivariable-adjusted hazard ratio, 1.81; 95% CI: 1.14, 2.88). The hazard ratio was unchanged with adjustment for occurrence of CVD (1.83; 95% CI: 1.15, 2.91). In conclusion, IC in parents increases the risk of IC in adult offspring independent of established risk factors. These data suggest a genetic component of peripheral artery disease and support future research into its genetic causes.
claudication; peripheral artery disease; risk factors; family history
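The incidence rates above are expressed per 1,000 person-years and are age- and sex-adjusted; a crude (unadjusted) rate is simply the event count divided by accumulated person-time. A minimal sketch with hypothetical person-year denominators (the abstract's adjusted rates cannot be reproduced from the event counts alone):

```python
def crude_rate_per_1000(events, person_years):
    """Crude incidence rate per 1,000 person-years of follow-up."""
    return 1000.0 * events / person_years

# Event counts from the abstract; person-year totals are hypothetical.
print(crude_rate_per_1000(events=29, person_years=6000))   # ~4.83
print(crude_rate_per_1000(events=72, person_years=32000))  # 2.25
```

Person-years, rather than participant counts, form the denominator because follow-up time varies across the pooled person-exams.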