To compare typical age-related changes in activities of daily living (ADL) independence in stroke-free adults to long-term ADL trajectories before and after stroke.
Study Design, Setting, and Participants
Prospective, observational cohort of 18,441 Health and Retirement Study participants who were stroke-free in 1998 and followed through 2008 (average follow-up=7.9 years).
Strokes were assessed with self- or proxy-report of a doctor’s diagnosis and month/year of event. We used logistic regression to compare within-person changes in odds of self-reported independence in 5 ADLs among those who remained stroke free throughout follow-up (n=16,816); those who survived a stroke (n=1,208); and those who had a stroke and did not survive to participate in another interview (n=417). Models were adjusted for demographic and socioeconomic covariates.
Even prior to stroke, those who later developed stroke had significantly lower ADL independence and were experiencing faster losses of independence than similarly aged individuals who remained stroke free. Among those who had a stroke, survivors experienced slower loss of ADL independence than those who did not survive. ADL independence declined at the time of stroke, and decline continued afterwards.
Among adults at risk of stroke, disproportionate ADL limitations emerge well before stroke onset. Excess disability among stroke survivors should not be entirely attributed to effects of acute stroke or quality of acute stroke care. Although there are many possible causal pathways between ADL and stroke, the association may alternatively be non-causal. For example, ADL limitations may be a consequence of stroke risk factors (e.g., diabetes) or early cerebrovascular ischemia.
stroke; activities of daily living; older adults; disability; mortality; longitudinal
To determine whether a polygenic risk score for Alzheimer's disease (AD) predicts dementia probability and memory functioning in non-Hispanic black (NHB) and non-Hispanic white (NHW) participants from a sample not used in previous genome-wide association studies.
Non-Hispanic white and NHB Health and Retirement Study (HRS) participants provided genetic information and either a composite memory score (n = 10,401) or a dementia probability score (n = 7,690). Dementia probability scores were estimated for participants aged 65+ from 2006 to 2010, while memory scores were available for participants aged 50+. We calculated AD genetic risk scores (AD-GRS) based on 10 polymorphisms confirmed to predict AD, weighting alleles by beta coefficients reported in AlzGene meta-analyses. We used pooled logistic regression to estimate the association of the AD-GRS with dementia probability and generalized linear models to estimate its effect on memory score.
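The allele-weighting scheme described above can be sketched in a few lines of code. The SNP names, beta weights, and the rescaling convention below are illustrative placeholders, not the AlzGene meta-analysis values or the exact score construction used in the study:

```python
# Sketch (with made-up weights) of a weighted allele-count genetic risk score.
def genetic_risk_score(allele_counts, betas):
    """Sum risk-allele counts (0, 1, or 2 per SNP) weighted by per-SNP
    meta-analytic beta coefficients, rescaled to a 0-1 range so that a
    0.10 unit change is comparable across score constructions."""
    weighted = sum(betas[snp] * count for snp, count in allele_counts.items())
    max_possible = sum(2 * b for b in betas.values())  # all SNPs homozygous
    return weighted / max_possible

# Hypothetical person carrying one risk allele at each of three SNPs
betas = {"rs0001": 0.30, "rs0002": 0.15, "rs0003": 0.10}
counts = {"rs0001": 1, "rs0002": 1, "rs0003": 1}
print(round(genetic_risk_score(counts, betas), 6))  # 0.5
```

With a score on this 0–1 scale, the odds ratios reported below correspond to a 0.10 unit contrast in the fitted regression.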
Each 0.10 unit change in the AD-GRS was associated with larger relative effects on dementia among NHW aged 65+ (OR = 2.22; 95% CI: 1.79, 2.74; P < 0.001) than NHB (OR = 1.33; 95% CI: 1.00, 1.77; P = 0.047), although additive effect estimates were similar. Each 0.10 unit change in the AD-GRS was associated with a −0.07 (95% CI: −0.09, −0.05; P < 0.001) SD difference in memory score among NHW aged 50+, but no significant difference among NHB (β = −0.01; 95% CI: −0.04, 0.01; P = 0.546). The estimated effect of the GRS was significantly smaller among NHB than NHW (P < 0.05) for both outcomes.
This analysis provides evidence for differential relative effects of the GRS on dementia probability and memory score among NHW and NHB in a new, national data set.
Dementia; genetics; race
Retaining severely impaired individuals poses a major challenge in longitudinal studies of determinants of dementia or memory decline. In the Health and Retirement Study (HRS), participants complete direct memory assessments biennially until they are too impaired to complete the interview. Thereafter, proxy informants, typically spouses, assess the subject’s memory and cognitive function using standardized instruments. Because there is no common scale for direct memory assessments and proxy assessments, proxy reports are often excluded from longitudinal analyses. The Aging and Demographics Memory Study (ADAMS) implemented full neuropsychological exams on a subsample (n=856) of HRS participants, including respondents with direct or proxy cognitive assessments in the prior HRS core interview. Using data from ADAMS, we developed an approach to estimating a dementia probability and a composite memory score based on either proxy or direct assessments in HRS core interviews. The prediction model achieved a c-statistic of 94.3% for DSM diagnosed dementia in the ADAMS sample. We applied these scoring rules to HRS core sample respondents born 1923 or earlier (n=5,483) for biennial assessments 1995-2008. Compared to estimates excluding proxy respondents in the full cohort, incorporating information from proxy respondents increased estimated prevalence of dementia by 12 percentage points in 2008 (average age = 89) and suggested accelerated rates of memory decline over time.
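The c-statistic reported above has a simple rank-based interpretation: the probability that a randomly chosen case receives a higher predicted probability than a randomly chosen non-case, with ties counted as half. A minimal sketch, with invented toy scores and labels:

```python
# Toy-data sketch of the c-statistic (area under the ROC curve).
def c_statistic(scores, labels):
    """scores: predicted probabilities; labels: 1 = dementia, 0 = no dementia.
    Returns the fraction of case/non-case pairs ranked correctly."""
    cases = [s for s, y in zip(scores, labels) if y == 1]
    controls = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for c in cases:
        for k in controls:
            if c > k:
                wins += 1.0      # case ranked above non-case
            elif c == k:
                wins += 0.5      # ties count as half
    return wins / (len(cases) * len(controls))

# One case is mis-ranked below one control: 5 of 6 pairs correct
print(round(c_statistic([0.9, 0.35, 0.4, 0.3, 0.2], [1, 1, 0, 0, 0]), 3))  # 0.833
```

A c-statistic of 94.3%, as in the ADAMS validation, means nearly all case/non-case pairs were ordered correctly by the prediction model.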
Dementia; Memory decline; HRS; Cognitive assessments; Proxy informants
Prior research found that Americans born in 6 southeastern states (the atrial fibrillation [AF] risk zone) had elevated risk of AF-related mortality, but no mechanisms were identified. We hypothesized that the association between AF-related mortality and birth in the AF-risk zone is explained by indicators of childhood social disadvantage or adult risk factors. In 24,323 participants in the US Health and Retirement Study, we found that birth in the AF-risk zone was significantly associated with the hazard of AF-related mortality. Among whites, the relationship was specific to place of birth rather than place of adult residence. Neither paternal education nor subjectively assessed childhood socioeconomic status (SES) predicted AF-related mortality. Conventional childhood and adult cardiovascular risk factors did not explain the association between place of birth and AF-related mortality.
Atrial Fibrillation; Mortality; Residence; Geographic; Lifecourse
To test whether the association between depressive symptoms and cardiovascular disease (CVD) mortality is stronger among blacks than whites. Design, Setting and Participants: 2,638 black and 15,132 white community-dwelling participants in the Health and Retirement Study, a prospective, observational study of a nationally representative sample of U.S. adults age 50+. Average follow-up was 9.2 years. Outcome Measure: Cause of death (per ICD codes) and month of death were identified from National Death Index linkages.
The associations between elevated depressive symptoms and mortality from stroke, ischemic heart disease (IHD), or total CVD were assessed using Cox proportional hazards models to estimate adjusted hazard ratios (HRs). We used interaction terms for race by depressive symptoms to assess effect modification (multiplicative scale).
For both whites and blacks, depressive symptoms were associated with a significantly elevated hazard of total CVD mortality (whites: HR=1.46; 95% CI: 1.33, 1.61; blacks: HR=1.42, 95% CI: 1.10, 1.83). Adjusting for health and socioeconomic covariates, whites with elevated depressive symptoms had a 13% excess hazard of CVD mortality (HR=1.13, 95% CI: 1.03, 1.25) compared to whites without elevated depressive symptoms. The HR in blacks was similar, although the confidence interval included the null (HR=1.12, 95% CI: 0.86, 1.46). The hazard associated with elevated depressive symptoms did not differ significantly by race (p>0.15 for all comparisons). Patterns were similar in analyses restricted to respondents age 65+.
Clinicians should consider the depressive state of either black or white patients as a potential CVD mortality risk factor.
depression; stroke; cardiovascular disease; mortality; race
Many diseases commonly associated with aging are now thought to have social and physiologic antecedents in early life. Understanding how the timing of exposure to early life risk factors influences later-life health may illuminate mechanisms driving adult health inequalities and identify possible points for effective interventions. Recognizing chronic diseases as developing across the lifecourse also has implications for the conduct of research on adult risk factors for disease. We review alternative conceptual models that describe how the timing of risk factor exposure relates to the development of disease. We propose some expansions of lifecourse models to improve their relevance for research on adult chronic disease, using the relationship between education and adult cognitive decline and dementia as an example. We discuss the important implications each of the lifecourse conceptual models has on study design, analysis, and interpretation of research on aging and chronic diseases. We summarize several research considerations implied by the lifecourse framework, including: advantages of analyzing change in function rather than onset of impairment; the pervasive challenge of survivor bias; the importance of controlling for possible confounding by early life conditions; and the likely heterogeneity in responses of adults to treatment.
Lifecourse epidemiology; aging; chronic disease; models; dementia
Although Hispanics are the fastest growing ethnic group in the United States (US), relatively little is known about stroke risk in US Hispanics. We compare stroke incidence and socioeconomic predictors in US- and foreign-born Hispanics to patterns among non-Hispanic whites.
Health and Retirement Study participants aged 50+ free of stroke in 1998 (mean baseline age 66.3 years) were followed through 2008 for self- or proxy-reported first stroke (n=15,784; 1,388 events). We used discrete-time survival analysis to compare stroke incidence among US-born (including those who immigrated before age 7) and foreign-born Hispanics to incidence in non-Hispanic whites. We also examined childhood and adult socioeconomic characteristics as predictors of stroke among Hispanics, comparing effect estimates to those for non-Hispanic whites.
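Discrete-time survival analysis of this kind rests on a person-period data expansion: each participant contributes one row per follow-up interval until stroke or censoring, and onset is then modeled with ordinary logistic regression on the expanded rows. A minimal sketch with hypothetical field names:

```python
# Sketch of the person-period expansion behind discrete-time survival analysis.
def expand_person_periods(subjects):
    """subjects: list of dicts with 'id', 'intervals' (number of follow-up
    waves completed), and 'stroke' (True if the last interval ended in a
    first stroke). Returns one row per person-interval, with 'event' = 1
    only in the interval when stroke occurred."""
    rows = []
    for s in subjects:
        for t in range(1, s["intervals"] + 1):
            event = 1 if (s["stroke"] and t == s["intervals"]) else 0
            rows.append({"id": s["id"], "interval": t, "event": event})
    return rows

cohort = [
    {"id": 1, "intervals": 3, "stroke": True},   # stroke at wave 3
    {"id": 2, "intervals": 2, "stroke": False},  # censored after wave 2
]
rows = expand_person_periods(cohort)
print(len(rows), sum(r["event"] for r in rows))  # 5 1
```

A logistic regression of `event` on covariates (and interval indicators) over these rows yields the discrete-time odds ratios reported below.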
In age- and sex-adjusted models, US-born Hispanics had higher odds of stroke onset than non-Hispanic whites (OR=1.44, 95% CI: 1.08, 1.90), but these differences were attenuated and non-significant in models that controlled for childhood and adulthood socioeconomic factors (OR=1.07; 95% CI: 0.80, 1.42). In contrast, in models adjusted for all demographic and socioeconomic factors, foreign-born Hispanics had significantly lower stroke risk than non-Hispanic whites (OR=0.58, 95% CI: 0.41, 0.81). The impact of socioeconomic predictors on stroke did not differ between Hispanics and whites.
In this longitudinal national cohort, foreign-born Hispanics had lower incidence of stroke than non-Hispanic whites and US-born Hispanics. Findings suggest that foreign-born Hispanics may have a risk factor profile that protects them from stroke as compared to other Americans.
stroke incidence; cardiovascular disease; social disparities; socioeconomic status; Hispanics; immigrants
Background and Purpose
Memory impairment is both a predictor and a consequence of stroke, but memory decline is common even in healthy elderly. We compared the long-term trajectory of memory functioning before and after stroke to memory change in stroke-free elderly.
Health and Retirement Study participants age 50+ (n=17,340) with no stroke history at baseline were interviewed biennially up to 10 years for first self- or proxy-reported stroke (n=1,574). Age-, sex-, and race- adjusted segmented linear regression models were used to compare annual rates of change in a composite memory score before and after stroke among three groups: 1,189 stroke survivors; 385 stroke decedents; and 15,766 cohort members who remained stroke-free.
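A segmented regression of this form can be expressed through piecewise predictors per observation: overall time, a step change at stroke onset, and additional time since stroke. The sketch below uses illustrative variable names and is not the study's actual specification:

```python
# Sketch of the piecewise predictors in a segmented linear regression of
# memory on time before and after stroke (illustrative, not the study model).
def segmented_terms(t, t_stroke):
    """Return predictors for a memory observation at time t (years) for a
    person whose stroke occurs at t_stroke: the pre-stroke time trend, a
    post-stroke indicator (acute decrement at onset), and years elapsed
    since stroke (change in slope after onset)."""
    post = 1 if t >= t_stroke else 0
    return {
        "time": t,                                               # overall trend
        "post_stroke": post,                                     # jump at onset
        "time_since_stroke": (t - t_stroke) if post else 0.0,    # slope change
    }

print(segmented_terms(2.0, 5.0))  # before stroke: no post-stroke terms active
print(segmented_terms(7.0, 5.0))  # two years after stroke
```

The fitted coefficient on `post_stroke` corresponds to the acute decrement at onset, and `time` plus `time_since_stroke` gives the post-stroke slope.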
Before stroke onset, individuals who later survived stroke had a significantly (p<0.001) faster average annual rate of memory decline (-0.143 points/year) than those who remained stroke-free throughout follow-up (-0.101 points/year). Stroke decedents had even faster pre-stroke memory decline (-0.212 points/year). At stroke onset, memory declined by an average of 0.369 points among stroke survivors, comparable to 3.7 years of age-related decline in stroke-free cohort members. Following stroke, memory in stroke survivors continued to decline at -0.142 points/year, similar to their pre-stroke rate (p=0.93). Approximately 50% of the memory difference shortly after stroke between stroke survivors and age-matched stroke-free individuals was attributable to pre-stroke memory.
Although stroke onset induced large decrements in memory, memory differences were apparent years before stroke. Memory declines before stroke, especially among those who did not survive the stroke, were faster than declines among stroke-free adults.
Memory functioning change; memory impairment; stroke
The authors examined the associations of participants’ and their parents’ educational levels with cognitive decline while addressing methodological limitations that might explain inconsistent results in prior work. Residents of Dijon, France (n = 4,480) 65 years of age or older who were enrolled between 1999 and 2001 were assessed using the Isaacs’ verbal fluency test, Benton Visual Retention Test, Trail Making Test B, and Mini-Mental State Examination up to 5 times over 9 years. The authors used random-intercepts mixed models with inverse probability weighting to account for differential survival (conditional on past performance) and quantile regressions to assess bias from measurement floors or ceilings. Higher parental educational levels predicted better average baseline performances for all tests but a faster average decline in score on the Isaacs’ test. Higher participant educational attainment predicted better baseline performances on all tests and slower average declines in Benton Visual Retention Test, Trail Making Test B, and Mini-Mental State Examination scores. Slope differences were generally small, and most were not robust to alternative model specifications. Quantile regressions suggested that ceiling effects might have modestly biased effect estimates, although the direction of this bias might depend on the test instrument. These findings suggest that the possible impacts of educational experiences on cognitive change are small, domain-specific, and potentially incorrectly estimated in conventional analyses because of measurement ceilings.
bias (epidemiology); cognitive disorders/dementia; cognitive reserve; cohort studies
Some evidence suggests that genetic polymorphisms in oxytocin pathway genes influence various social behaviors, but findings thus far have been mixed. Many studies have been based on small samples, and there is a possibility of publication bias. Using data from 2 large U.S. prospective cohorts with over 11,000 individuals, we investigated 88 SNPs in OXTR, AVPR1A, and CD38 in relation to social integration (measured as social connectedness in both binary and continuous forms and being continuously married). After correction for multiple testing, only one SNP in CD38 (rs12644506) was significantly associated with social integration, and that SNP was predictive only when using a dichotomized indicator of social connectedness (adjusted p=0.02), not a continuous measure of social connectedness or the continuously married outcome. A significant gender-heterogeneous effect was identified for one OXTR SNP on dichotomized social connectedness; specifically, the rs4686302 T allele was nominally associated with social connectedness in men, whereas the association direction was opposite in women (adjusted gender heterogeneity p=0.02). Furthermore, the rs53576 A allele was significantly associated with social connectedness only in women, and the effect magnitude was stronger in a dominant genetic model (adjusted p=0.003). In summary, our findings suggest that common genetic variants of OXTR, CD38, and AVPR1A are not associated with social integration as measured in this study using the simplified Berkman-Syme Social Network Index, but these findings and other work hint that effects may be modified by gender or other social experiences. Further work considering genetic pathways in relation to social integration may be more fruitful if these additional factors can be more comprehensively evaluated.
OXTR; CD38; AVPR1A; social integration; sex-specific; candidate gene
As with other instrumental variable (IV) analyses, Mendelian randomization (MR) studies rest on strong assumptions. These assumptions are not routinely systematically evaluated in MR applications, although such evaluation could add to the credibility of MR analyses. In this article, the authors present several methods that are useful for evaluating the validity of an MR study. They apply these methods to a recent MR study that used fat mass and obesity-associated (FTO) genotype as an IV to estimate the effect of obesity on mental disorder. These approaches to evaluating assumptions for valid IV analyses are not fail-safe, in that there are situations where the approaches might either fail to identify a biased IV or inappropriately suggest that a valid IV is biased. Therefore, the authors describe the assumptions upon which the IV assessments rely. The methods they describe are relevant to any IV analysis, regardless of whether it is based on a genetic IV or other possible sources of exogenous variation. Methods that assess the IV assumptions are generally not conclusive, but routinely applying such methods is nonetheless likely to improve the scientific contributions of MR studies.
causality; confounding factors; epidemiologic methods; instrumental variables; Mendelian randomization analysis
We hypothesized that patterns of elevated stroke mortality among those born in the US Stroke Belt (SB) states also prevailed for mortality related to all-cause dementia or Alzheimer disease (AD). Cause-specific mortality rates (contributing cause of death, including underlying-cause cases) in 2000 for US-born African-Americans and whites aged 65–89 were calculated by linking national mortality records with population data based on race, sex, age, and birth state or state of residence in 2000. Birth in a SB state (North Carolina, South Carolina, Georgia, Tennessee, Arkansas, Mississippi, or Alabama) was cross-classified against SB residence at the 2000 Census. Compared to those who were not born in the SB, odds of all-cause dementia mortality were significantly elevated by 29% for African-Americans and 19% for whites born in the SB. These patterns prevailed among individuals who no longer lived in the SB at death. Patterns were similar for AD-related mortality. Some non-SB states were also associated with significant elevations in dementia-related mortality. Dementia mortality rates follow geographic patterns similar to stroke mortality, with elevated rates among those born in the SB. This suggests important roles for geographically patterned childhood exposures in establishing cognitive reserve.
Dementia; cerebrovascular disease/stroke; geography; Stroke Belt; lifecourse; racial disparities
While research has suggested that being married may confer a health advantage, few studies to date have investigated the role of marital status in the development of type 2 diabetes (T2D). We examined whether men who are not married have increased risk of incident T2D in the Health Professionals Follow-up Study. Men (n = 41,378) who were free of T2D in 1986 were followed for ≤22 years with biennial reports of T2D, marital status, and covariates. Cox proportional hazards models were used to compare risk of incident T2D by marital status (married vs. unmarried, and married vs. never married, divorced/separated, or widowed). There were 2,952 cases of incident T2D. Compared to married men, unmarried men had a 16% higher risk of developing T2D (95% CI: 1.04, 1.30), adjusting for age, family history of diabetes, ethnicity, lifestyle, and body mass index (BMI). Relative risks (RRs) for developing T2D differed for divorced/separated (1.09 [95% CI: 0.94, 1.27]), widowed (1.29 [95% CI: 1.06, 1.57]), and never married (1.17 [95% CI: 0.91, 1.52]) men after adjusting for age, family history of diabetes, and ethnicity. Adjusting for lifestyle and BMI, the RR for T2D associated with widowhood was no longer significant (RR: 1.16 [95% CI: 0.95, 1.41]). When allowing for a 2-year lag period between marital status and disease, the RR of T2D for widowers was augmented and borderline significant (RR: 1.24 [95% CI: 1.00, 1.54]) after full adjustment. In conclusion, not being married, and widowhood in particular, was most consistently associated with an increased risk of T2D in men, and this may be mediated, in part, through unfavorable changes in lifestyle, diet, and adiposity.
Moving to Opportunity (MTO) was a social experiment to test how relocation to lower-poverty neighborhoods influences low-income families. Using adolescent data from the 4- to 7-year evaluations (ages 12–19, n=2,829), we applied gender-stratified intent-to-treat and adherence-adjusted linear regression models to test effect modification of MTO intervention effects on adolescent mental health. Low parental education, welfare receipt, unemployment, and never-married status were not significant effect modifiers. Tailoring mobility interventions by these characteristics may not be necessary to alter impact on adolescent mental health. Because parental enrollment in school and teen-parent status adversely modified MTO intervention effects on youth mental health, post-move services that increase guidance and supervision of adolescents may help support post-move adjustment.
adolescent mental health; housing mobility; randomized controlled trial; housing policy; neighborhood effects
Using a longitudinal cohort, we assessed the association between neighborhood disadvantage and incidence of poor health and function in three domains.
Over 4,000 enrollees aged 55–65 in the national Health and Retirement Study were assessed biennially from 1998 through 2006 for incidence of fair/poor self-rated health (SRH), elevated depressive symptoms, and limitations in five basic activities of daily living (disability). Each analysis was restricted to subjects without that condition in 1994 or 1996. Neighborhoods (census tracts, time-updated for moves) were considered disadvantaged if they fell below the 25th percentile on an index comprising 6 socioeconomic status indicators. Repeated-measures logistic regressions, inverse probability weighted to account for individual confounders, selective survival, and loss to follow-up, were used to estimate odds ratios (ORs) for incidence of each outcome in the wave following exposure to a disadvantaged neighborhood.
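The inverse probability weighting used here can be illustrated with a minimal sketch. The retention probabilities below are invented; in practice they come from a fitted model of remaining in the study given covariates:

```python
# Sketch of stabilized inverse probability weights for remaining under
# observation, used to adjust for selective survival and loss to follow-up.
# Probabilities are illustrative placeholders, not fitted values.
def stabilized_weight(p_retained_marginal, p_retained_given_covariates):
    """Up-weight observed subjects who resemble those lost to follow-up:
    weight = marginal retention probability / retention probability
    conditional on the subject's covariate history."""
    return p_retained_marginal / p_retained_given_covariates

# A subject whose covariates predict dropout (low conditional retention
# probability) counts extra in the weighted regression:
print(round(stabilized_weight(0.90, 0.60), 3))  # 1.5
print(round(stabilized_weight(0.90, 0.95), 3))  # 0.947
```

The weighted logistic regression then estimates associations in a pseudo-population where retention is unrelated to the measured covariates.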
After covariate adjustment, neighborhood disadvantage predicted onset of fair/poor SRH (OR=1.32; 95% confidence interval 1.11, 1.57), but not disability (OR=0.98; 0.82, 1.16) or elevated depressive symptoms (OR=0.98; 0.83, 1.16).
Results confirmed prior findings that neighborhood disadvantage predicts SRH in a longitudinal context, but did not support an association between neighborhood disadvantage and onset of disability or elevated depressive symptoms.
Neighborhoods; Self-Rated Health; Activities of Daily Living; Depressive Symptoms; Longitudinal Survey; Inverse Probability Weights; Marginal Structural Models
To examine how different activities performed during employment gaps are associated with later cognitive function and change.
Five cognitive measures were used to indicate cognitive impairment among 18,259 respondents to the Survey of Health, Ageing, and Retirement in Europe (ages 50-73) interviewed in 2004/5 or 2006/7. Using complete employment histories, employment gaps of six months or more between ages 25 and 65 were identified.
Controlling for early-life socioeconomic status, school performance, and education, higher risk of cognitive impairment was associated with employment gaps described as unemployment (Odds Ratio [OR] = 1.18, 95% Confidence Interval [CI]: 1.04, 1.35) or sickness (OR = 1.78, 95% CI: 1.52, 2.09). In contrast, lower risk of cognitive impairment was associated with employment gaps described as training (OR = 0.73, 95% CI: 0.52, 1.01) or maternity (OR = 0.65, 95% CI: 0.57, 0.79). In longitudinal mixed-effects models, training and maternity spells were associated with lower two-year aging-related cognitive decline.
Periods away from work described as unemployment or sickness are associated with lower cognitive function, whereas maternity and training spells are associated with better late-life cognitive function. Both causation and selection mechanisms may explain these findings.
cognition; cognitive reserve; employment status
Understanding how the timing of exposure to the US Stroke Belt (SB) influences stroke risk may illuminate mechanisms underlying the SB phenomenon and factors influencing population stroke rates.
Stroke mortality rates for United States–born black and white people aged 30–80 years were calculated for 1980, 1990, and 2000 for strata defined by birth state, state of adult residence, race, sex, and birth year. Four SB exposure categories were defined: born in a SB state (North Carolina, South Carolina, Georgia, Tennessee, Arkansas, Mississippi, or Alabama) and lived in the SB at adulthood; non-SB born but SB adult residence; SB-born but adult residence outside the SB; and did not live in the SB at birth or in adulthood (reference group). We estimated age-, sex-, and race-adjusted odds ratios for stroke mortality associated with timing of SB exposure.
Elevated stroke mortality was associated with both SB birth and, independently, SB adult residence, with the highest risk among those who lived in the SB at birth and adulthood. Compared to those living outside the SB at birth and adulthood, odds ratios for SB residence at birth and adulthood for black subjects were 1.55 (95% confidence interval 1.28, 1.88) in 1980, 1.47 (1.31, 1.65) in 1990, and 1.34 (1.22, 1.48) in 2000. Comparable odds ratios for white subjects were 1.45 (95% confidence interval 1.33, 1.58), 1.29 (1.21, 1.37), and 1.34 (1.25, 1.44). Patterns were similar for every race, sex, and age subgroup examined.
Stroke Belt birth and adult residence appear to make independent contributions to stroke mortality risk.
CI = confidence interval; SB = Stroke Belt.
To assess whether there are differences in the strength of association with incident stroke for specific periods of life in the Stroke Belt (SB).
The risk of stroke was studied in 24,544 black and white stroke-free participants, aged 45+, in the Reasons for Geographic and Racial Differences in Stroke study, a national population-based cohort enrolled 2003–2007. Incident stroke was defined as first occurrence of stroke over an average 5.8 years of follow-up. Residential histories (city/state) were obtained by questionnaire. SB exposure was quantified by combinations of SB birthplace and current residence and proportion of years in SB during discrete age categories (0–12, 13–18, 19–30, 31–45, last 20 years) and entire life. Proportional hazards models were used to establish association of incident stroke with indices of exposure to SB, adjusted for demographic, socioeconomic (SES), and stroke risk factors.
In the demographic and SES models, risk of stroke was significantly associated with proportion of life in the SB and with all other exposure periods except birth, ages 31–45, and current residence. The strongest association was for the proportion of the entire life in SB. After adjustment for risk factors, the risk of stroke remained significantly associated only with proportion of residence in SB in adolescence (hazard ratio 1.17, 95% confidence interval 1.00–1.37).
Childhood emerged as the most important period of vulnerability to SB residence as a predictor of future stroke. Improvement in childhood health circumstances should be considered as part of long-term health improvement strategies in the SB.
Some prior studies found excess stroke rates among blacks persisted after adjustment for socioeconomic status (SES), fueling speculation regarding racially patterned genetic predispositions to stroke. Prior research was hampered by incomplete SES assessments, without measures of childhood conditions or adult wealth. We assess the role of lifecourse SES in explaining stroke risk and stroke disparities.
Health and Retirement Study participants age 50+ (n=20,661) were followed on average 9.9 years for self- or proxy-reported first stroke (2,175 events). Childhood social conditions (Southern birth state, parental SES, self-reported fair/poor childhood health, and attained height), adult SES (education, income, wealth, and occupational status), and traditional cardiovascular risk factors were used to predict first stroke onset in Cox proportional hazards models.
Blacks had a 48% higher risk of first stroke than whites (HR=1.48; 95% CI: 1.33, 1.65). Childhood conditions predicted stroke risk in both blacks and whites, independently of adult SES. Adjustment for both childhood social conditions and adult SES measures attenuated racial differences to marginal significance (HR=1.13; 95% CI: 1.00, 1.28).
Childhood social conditions predict adult stroke risk in American blacks and whites. Additional adjustment for adult SES, in particular wealth, nearly eliminated the disparity in stroke risk between blacks and whites.
stroke; lifecourse; social class; aged; United States; racial disparities
Few prospective studies have investigated the relationship between spousal cigarette smoking and the risk of incident stroke.
Stroke-free participants in the U.S.-based Health and Retirement Study (HRS) aged ≥50 years and married at baseline (n=16,225) were followed, on average, 9.1 years between 1992 and 2006 for proxy or self-report of first stroke (1,130 events). Participants were stratified by gender and own smoking status (never smokers, former smokers, or current smokers), and the relationship was assessed between the spouse's smoking status and the risk of incident stroke. Analyses were conducted in 2007 with Cox proportional hazards models. All models were adjusted for age; race; Hispanic ethnicity; Southern birth state; parental education; paternal occupational class; years of education; baseline income; baseline wealth; obesity; overweight; alcohol use; and diagnosed hypertension, diabetes, or heart disease.
Having a spouse who currently smoked was associated with an increased risk of first stroke among never-smokers (hazard ratio=1.42, 95% CI=1.05, 1.93) and former smokers (hazard ratio=1.72, 95% CI=1.33, 2.22). Former smokers married to current smokers had a stroke risk similar to respondents who themselves smoked.
Spousal smoking poses important stroke risks for never-smokers and former smokers. The health benefits of quitting smoking likely extend to both the individual smoker and his or her spouse.
Little is known about the possible effects of social resources on stroke survivors' level and change in cognitive outcomes. Understanding this association may help us identify strategies to improve stroke recovery and help elucidate the etiology of dementia.
We examined the relationship of social ties and social support to cognitive function and cognitive change 6 months after stroke. Participants in the Families in Recovery from Stroke Trial (FIRST) (n = 272) were interviewed approximately 17 days (baseline) and 6 months (follow-up) after stroke. Cognition was assessed with the Mini Mental State Examination (MMSE) and a summary battery of 7 neuropsychological tests. Median-based regression was used to model cognitive outcomes by level of baseline intimate, personal, and organizational social ties and received emotional and instrumental support.
Baseline social ties and emotional support independently predicted 6-month Cognitive Summary Scores. Emotional support also predicted greater improvements in Cognitive Summary Scores from baseline to the 6-month follow-up. No other social exposures predicted improvements in the MMSE or the Cognitive Summary.
Our results suggest that emotional support may promote cognitive resilience, while social ties provide cognitive reserve that protects against impaired cognition after stroke. Social ties did not predict cognitive recovery, however, so reverse causation cannot be ruled out.
Social support; Brain infarction; Cognitive reserve; Cognitive disorders; Neuropsychological tests; Causality
Existing evidence on stress and asthma prevalence has disproportionately focused on stressors during pregnancy and early postnatal life, largely ignoring the role of childhood adversity as a risk factor. Childhood adversity (neglect, stressful living conditions, and maltreatment) may influence asthma prevalence through mechanisms acting on the hypothalamic-pituitary axis.
Data from the Centers for Disease Control and Prevention’s (CDC’s) Behavioral Risk Factor Surveillance System (BRFSS) surveys were used to examine cross-sectional associations of adverse childhood experiences (ACE) with lifetime and current asthma prevalence. Information on childhood adversity was available from 84,786 adult respondents in 10 US states. Poisson regression models (with robust SEs) were used to estimate prevalence ratios (PRs) relating overall ACE score and dimensions of ACE exposure to asthma prevalence, adjusting for socioeconomic status.
Greater ACE exposure was associated with a higher prevalence of asthma (adjusted PR for ACE category 4=1.78, 95% CI 1.69 to 1.87; adjusted PR for ACE category 1=1.21, 95% CI 1.16 to 1.27). Reported experiences of sexual abuse (adjusted PR=1.48; 95% CI 1.42 to 1.55) and physical abuse (adjusted PR=1.38; 95% CI 1.33 to 1.43) were associated with a higher asthma prevalence. No clear socioeconomic gradient was noted, but those reporting the lowest education and income levels reported high rates of asthma and adversity. Sensitivity analyses indicated that ACE exposures were interrelated.
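Prevalence ratios of this kind are conventionally reported with confidence intervals computed on the log scale. A sketch of the crude (unadjusted) calculation with hypothetical counts, not the BRFSS data:

```python
import math

# Hypothetical 2x2 counts: asthma cases among exposed (high ACE score)
# and unexposed respondents. These numbers are invented for illustration.
a, n1 = 180, 1000    # cases / total, exposed
c, n0 = 100, 1000    # cases / total, unexposed

pr = (a / n1) / (c / n0)                       # crude prevalence ratio
# Standard error of log(PR) for two binomial proportions
se = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)
lo = math.exp(math.log(pr) - 1.96 * se)
hi = math.exp(math.log(pr) + 1.96 * se)
print(round(pr, 2), round(lo, 2), round(hi, 2))  # 1.8 1.43 2.26
```

The Poisson models with robust standard errors used in the abstract generalize this calculation to allow covariate adjustment.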
Report of childhood adversity predicts asthma prevalence among US adults. Frameworks for asthma prevention need to recognise and integrate aspects related to childhood adversity. Further investigation into specific time periods of exposure would provide meaningful inferences for interventions.
Asthma; Asthma Epidemiology
Despite moderate heritability estimates for depression-related phenotypes, few robust genetic predictors have been identified. Potential explanations for this discrepancy include the use of phenotypic measures taken from a single time point, rather than integrating information over longer time periods via multiple assessments, and the possibility that genetic risk is shaped by multiple loci with small effects.
We developed a long-term average depression measure based on 14 years of follow-up in the Nurses' Health Study (NHS; N = 6989 women). We estimated polygenic scores (PS) with internal whole-genome scoring (NHS-GWAS-PS). We also constructed PS by applying two external PS weighting algorithms from independent samples, one previously shown to predict depression (GAIN-MDD-PS) and another from the largest genome-wide analysis currently available (PGC-MDD-PS). We assessed the association of all three PS with our long-term average depression phenotype using linear, logistic, and quantile regressions.
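A polygenic score, as used here, is a weighted sum of an individual's risk-allele counts, with weights drawn from an external GWAS. A minimal sketch with invented SNP names and weights (not the NHS-GWAS, GAIN-MDD, or PGC-MDD weights):

```python
# Invented effect-size weights, standing in for an external GWAS summary file.
gwas_weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.08}

def polygenic_score(genotype):
    """genotype maps SNP id -> risk-allele count (0, 1, or 2)."""
    return sum(w * genotype.get(snp, 0) for snp, w in gwas_weights.items())

person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(round(polygenic_score(person), 2))   # 0.12*2 - 0.05*1 + 0.08*0 = 0.19
```

The three scoring algorithms in the abstract differ only in which SNPs enter this sum and how their weights are derived.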
In this study, the three PS approaches explained at most 0.2% of variance in the long-term average phenotype. Quantile regressions indicated PS had larger impacts at higher quantiles of depressive symptoms. Quantile regression coefficients at the 75th percentile were at least 40% larger than at the 25th percentile in all three polygenic scoring algorithms. The interquartile range comparison suggested the effects of PS significantly differed at the 25th and 75th percentiles of the long-term depressive phenotype for the PGC-MDD-PS (P = 0.03), and this difference also reached borderline statistical significance for the GAIN-MDD-PS (P = 0.05).
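The pattern reported here, larger PS coefficients at upper quantiles, is what quantile regression recovers when the spread of the outcome grows with the predictor. A toy illustration (invented data, not the NHS phenotype) that fits lines by minimizing the pinball (check) loss over a coarse grid:

```python
# Heteroscedastic toy data: for each x, one low point (y = x) and one
# high point (y = 3x), so the upper-quantile slope exceeds the lower one.
data = [(x, y) for x in range(10) for y in (x, 3 * x)]

def pinball(tau, a, b):
    """Check loss of the line y = a + b*x at quantile tau."""
    total = 0.0
    for x, y in data:
        u = y - (a + b * x)
        total += u * (tau - (1 if u < 0 else 0))
    return total

def fit(tau):
    """Coarse grid search over intercept a and slope b."""
    grid = [i / 2 for i in range(-4, 9)]   # -2.0 .. 4.0 in steps of 0.5
    return min(((a, b) for a in grid for b in grid),
               key=lambda ab: pinball(tau, *ab))

slope25 = fit(0.25)[1]
slope75 = fit(0.75)[1]
print(slope25, slope75)    # slope at the 75th percentile is the larger one
```

Real quantile-regression software solves the same minimization exactly via linear programming; the grid search is only to make the loss function visible.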
Integrating multiple phenotype assessments spanning 14 years and applying different polygenic scoring approaches did not substantially improve genetic prediction of depression. Quantile regressions suggested the effects of PS may be largest at high quantiles of depressive symptom scores, presumably among people with additional, unobserved sources of vulnerability to depression.
Depression; GWAS; long-term cumulative phenotype; polygenic score; quantile regression