We evaluated Toll-like receptor (TLR) function in primary human dendritic cells from 104 young (aged 21–30 years) and older (aged ≥65 years) individuals. Using multicolor flow cytometry and intracellular cytokine staining of myeloid (mDC) and plasmacytoid (pDC) DCs, we found substantial decreases in older, compared with young, individuals in TNF-α, IL-6, and/or IL-12 (p40) production by mDCs in response to TLR1/2, TLR2/6, TLR3, TLR5, and TLR8 engagement, and in TNF-α and IFN-α production by pDCs in response to TLR7 and TLR9 engagement. These differences were highly significant after adjustment for heterogeneity between the young and older groups (e.g., gender, race, body mass index [BMI], number of comorbid medical conditions) using mixed-effect statistical modeling. Studies of surface and intracellular expression of TLR proteins and of TLR gene expression in purified mDCs and pDCs revealed potential contributions of both transcriptional and post-transcriptional mechanisms to these age-associated effects. Moreover, intracellular cytokine production in the absence of TLR ligand stimulation was elevated in cells from older, compared with young, individuals, suggesting a dysregulation of cytokine production that may limit further activation by TLR engagement. Our results provide evidence for immunosenescence in dendritic cells; notably, defects in cytokine production were strongly associated with poor antibody response to influenza immunization, a functional consequence of impaired TLR function in the aging innate immune response.
Many clinical trials are designed to test an intervention arm against a control arm wherein all subjects are equally eligible for all interventional components. Factorial designs have extended this to test multiple intervention components and their interactions. A newer design, referred to as a ‘standardly-tailored’ design, is a multicomponent interventional trial that applies individual interventional components to modify risk factors identified a priori and tests whether health outcomes differ between treatment arms. Standardly-tailored designs do not require that all subjects be eligible for every interventional component. Although standardly-tailored designs yield an estimate of the net effect of the multicomponent intervention, it has not yet been shown whether they permit separate, unbiased estimation of individual component effects. The ability to identify the most potent interventional components has direct bearing on conducting second stage translational research.
We present statistical issues related to the estimation of individual component effects in trials of geriatric conditions using factorial and standardly-tailored designs. The medical community is interested in second stage translational research involving the transfer of results from a randomized clinical trial to a community setting. Before such research is undertaken, the main effects, and any synergistic or antagonistic interactions between them, should be identified. Knowledge of the relative strength and direction of the effects of the individual components and their interactions facilitates the successful transfer of clinically significant findings and may potentially reduce the number of interventional components needed. Therefore, the current inability of the standardly-tailored design to provide unbiased estimates of the effects of individual interventional components is a serious limitation on its applicability to second stage translational research.
We discuss estimation of individual component effects from the family of factorial designs and this limitation for standardly-tailored designs. We use the phrase ‘factorial designs’ to describe full-factorial designs and their derivatives including the fractional factorial, partial factorial, incomplete factorial and modified reciprocal designs. We suggest two potential directions for designing multicomponent interventions to facilitate unbiased estimates of individual interventional components.
Full factorial designs and their variants are the most common multicomponent trial designs described in the literature and differ meaningfully from standardly-tailored designs. Factorial and standardly-tailored designs result in similar estimates of net effect with different levels of precision. Unbiased estimation of individual component effects from a standardly-tailored design will require new methodology.
Although clinically relevant in geriatrics, previous applications of standardly-tailored designs have not provided unbiased estimates of the effects of individual interventional components.
Future directions to estimate individual component effects from standardly-tailored designs include applying D-optimal designs and creating independent linear combinations of risk factors analogous to factor analysis.
Methods are needed to extract unbiased estimates of the effects of individual interventional components from standardly-tailored designs.
We conducted a 12-month, assessor-blinded, randomized controlled trial of a three-component tailored psychosocial intervention to reduce depression in people with dementia (PWD) and their carers.
A total of 230 home-dwelling dyads of PWD and their carers were randomized to usual care or intervention consisting of three components over 12 months. Primary outcomes were the difference between the baseline and 12-month score on the Cornell Scale of Depression in Dementia (CSDD) in the PWD and on the Geriatric Depression Scale (GDS) in the carers.
The intent-to-treat difference between the baseline and 12-month change score was not significant between the intervention and control groups for the CSDD (p = 0.95) or GDS (p = 0.82).
The trial did not show a significant difference between usual care and the intervention on depressive symptoms in PWD or their family caregivers.
© 2013 S. Karger AG, Basel
Dementia; Caregiver; Psychosocial intervention; Depression; Clinical trial
To determine empirically the diseases contributing most commonly and strongly to death in older adults, accounting for coexisting diseases.
Twenty-two thousand eight hundred ninety Medicare Current Beneficiary Survey participants, a nationally representative sample of Medicare beneficiaries, enrolled during 2002–2006.
Chronic and acute diseases were ascertained from Medicare claims data. Diseases contributing to death during follow-up were identified empirically via regression models among all diseases with a frequency of ≥1% and a hazard ratio for death of >1. The additive contributions of these diseases, adjusting for coexisting diseases, were calculated using a longitudinal extension of the average attributable fraction; 95% confidence intervals were estimated by bootstrapping.
Fifteen diseases and acute events contributed significantly to death, together accounting for nearly 70% of deaths. Heart failure (20.0%), dementia (13.6%), chronic lower respiratory disease (12.4%), and pneumonia (5.3%) made the largest contributions. Cancers, including lung, colorectal, lymphoma, and head and neck, together contributed to 5.6% of deaths. The other diseases and events included acute kidney injury, stroke, septicemia, liver disease, myocardial infarction, and unintentional injuries.
The contribution of some diseases, such as dementia and respiratory disease, to death in older adults may be underappreciated, and the contribution of other diseases overestimated, by methods that focus on determining a single underlying cause. The current conceptualization of a single underlying cause may not account adequately for the contribution to death of coexisting diseases experienced by older adults.
death; coexisting diseases; multiple chronic conditions
Researchers have often used rather simple approaches to analyze repeated time-to-event health conditions, either examining time to the first event or treating multiple events as independent. More sophisticated models have been developed, although previous applications have focused largely on outcomes with continuous risk intervals. A practical limitation of these models is that they are difficult to implement without careful attention to forming the data structures.
We first review time-to-event models for repeated events that are extensions of the Cox model and frailty models. Next, we develop a way to efficiently set up the data structures with discontinuous risk intervals for such models, which are more appropriate for many applications than the continuous alternatives. Finally, we apply these models to a real dataset to investigate the effect of gender on functional disability in a cohort of older persons. For comparison, we demonstrate modeling time to the first event.
The GEE Poisson, the Cox counting process, and the frailty models provided similar parameter estimates of gender effect on functional disability, that is, women had increased risk of bathing disability and other disability (disability in walking, dressing, or transferring) as compared to men. These results, especially for other disability, were quite different from those provided by an analysis of the first-event outcomes. However, the effect of gender was no longer significant in the counting process model fully adjusted for covariates.
Modeling time to the first event only may not be adequate. After properly setting up the data structures, repeated event models that account for the correlation between multiple events within subjects, can be easily implemented with common statistical software packages.
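The data setup described above can be illustrated with a minimal sketch. This is not the authors' code; the subject ID, follow-up length, and episode times are invented. The key point is that each disability episode opens a gap in the risk set, producing discontinuous (start, stop] intervals rather than one continuous timeline:

```python
# Minimal sketch of building counting-process rows with DISCONTINUOUS
# risk intervals: after each disability onset the subject leaves the
# risk set until recovery, so the disabled interval is excluded.

def build_risk_intervals(subject_id, follow_up_end, episodes):
    """episodes: sorted list of (onset_month, recovery_month) pairs.
    Returns (id, start, stop, event) rows; event=1 marks an onset."""
    rows, start = [], 0
    for onset, recovery in episodes:
        rows.append((subject_id, start, onset, 1))  # at risk until onset
        start = recovery                            # gap: not at risk while disabled
    if start < follow_up_end:
        rows.append((subject_id, start, follow_up_end, 0))  # censored tail
    return rows

# Hypothetical subject 7: disabled months 5-8 and 14-15, followed 24 months.
rows = build_risk_intervals(7, 24, [(5, 8), (14, 15)])
# rows == [(7, 0, 5, 1), (7, 8, 14, 1), (7, 15, 24, 0)]
```

Rows in this (id, start, stop, event) form can then be passed to standard counting-process Cox or frailty routines in common statistical packages.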
recurrent event; modeling; data structure; disability
Studies addressing immunosenescence in the immune system have expanded to focus on the innate as well as the adaptive responses. In particular, aging results in alterations in the function of Toll-like receptors (TLRs), the first described pattern recognition receptor family of the innate immune system. Recent studies have begun to elucidate the consequences of aging on TLR function in human cohorts and add to existing findings performed in animal models. In general, these studies show that human TLR function is impaired in the context of aging, and in addition there is evidence for inappropriate persistence of TLR activation in specific systems. These findings are consistent with an overarching theme of age-associated dysregulation of TLR signaling that likely contributes to the increased morbidity and mortality from infectious diseases found in geriatric patients.
We decomposed the total effect of coexisting diseases on a timed occurrence of an adverse outcome into additive effects from individual diseases.
In a cohort of older adults enrolled in the Precipitating Events Project in New Haven County, Connecticut, we assessed a longitudinal extension of the average attributable fraction method (LE-AAF) to estimate the additive and order-free contributions of multiple diseases to the timed occurrence of a health outcome, with right censoring, which may be useful when relationships among diseases are complex. We partitioned the contribution to death into additive LE-AAFs for multiple diseases.
The onset of heart failure and acute episodes of pneumonia during follow-up contributed the most to death, with the overall LE-AAFs equal to 13.0% and 12.1%, respectively. The contribution of preexisting diseases decreased over the years, with a trend of increasing contribution from new onset of diseases.
LE-AAF can be useful for determining the additive and order-free contribution of individual time-varying diseases to a time-to-event outcome.
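The "order-free" property of the average attributable fraction can be illustrated with a toy cross-sectional sketch; the longitudinal extension with right censoring is considerably more involved. The subset-level AF values below are invented, and the `af` estimator is a hypothetical stand-in for whatever attributable-fraction estimator the analyst uses:

```python
# Toy sketch of the order-free averaging behind average attributable
# fractions: each disease's share is its incremental AF averaged over
# every ordering of the diseases (exact enumeration; feasible only for
# a small number of diseases).
from itertools import permutations

def average_attributable_fractions(diseases, af):
    """af(subset_of_diseases) -> attributable fraction for that subset."""
    shares = {d: 0.0 for d in diseases}
    orders = list(permutations(diseases))
    for order in orders:
        seen = set()
        for d in order:
            shares[d] += af(seen | {d}) - af(seen)  # incremental AF
            seen.add(d)
    return {d: s / len(orders) for d, s in shares.items()}

# Invented AF values for subsets of two diseases.
af_table = {frozenset(): 0.0,
            frozenset({"HF"}): 0.20, frozenset({"pneumonia"}): 0.15,
            frozenset({"HF", "pneumonia"}): 0.30}
shares = average_attributable_fractions(
    ["HF", "pneumonia"], lambda s: af_table[frozenset(s)])
# shares == {'HF': 0.175, 'pneumonia': 0.125}; they sum to the joint AF, 0.30
```

By construction the shares are additive (they sum to the joint attributable fraction) and do not depend on which disease is considered first, which is the property the abstract emphasizes.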
To determine the priority that older adults with coexisting hypertension and fall risk give to optimizing cardiovascular outcomes versus fall- and medication symptom-related outcomes.
One hundred twenty-three cognitively intact persons aged 70 and older with hypertension and fall risk.
A discrete choice task was used to elicit the relative importance placed on reducing the risk of three outcomes: cardiovascular events, serious fall injuries, and medication symptoms. Risk estimates with and without antihypertensive medications were obtained from the literature. Participants chose between 11 pairs of options that displayed lower risks for one or two outcomes and a higher risk for the other outcome(s), versus the reverse. Results were used to calculate relative importance scores for the three outcomes. These scores, which sum to 100, reflect the relative priority participants placed on the difference between the risk estimates of each outcome.
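The abstract does not specify its exact scoring algorithm, but a common conjoint-style calculation scales each outcome's part-worth utility range so the three scores sum to 100. The sketch below uses that convention with invented utilities for a single hypothetical participant:

```python
# Hedged sketch of a conjoint-style relative-importance score: an
# outcome's importance is the range of its (already-estimated,
# hypothetical) part-worth utilities, normalized to sum to 100.

def relative_importance(part_worths):
    """part_worths: dict mapping outcome -> utilities across risk levels."""
    ranges = {k: max(v) - min(v) for k, v in part_worths.items()}
    total = sum(ranges.values())
    return {k: 100.0 * r / total for k, r in ranges.items()}

# Invented utilities for one participant.
scores = relative_importance({
    "cardiovascular events": [0.0, 1.2],
    "serious fall injuries": [0.0, 0.9],
    "medication symptoms":   [0.0, 0.3],
})
# scores == {'cardiovascular events': 50.0,
#            'serious fall injuries': 37.5,
#            'medication symptoms': 12.5}
```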
Sixty-two participants (50.4%) placed greater importance on reducing the risk of cardiovascular events than on reducing the risk of the combination of fall injuries and medication symptoms; 61 participants did the converse. A lower percentage of participants with chronic obstructive pulmonary disease (P = .02), unsteadiness (P = .02), functional dependency (P = .04), lower cognition (P = .02), and depressive symptoms (P = .03) prioritized cardiovascular outcomes over fall injuries and medication symptoms than did participants without these characteristics.
Interindividual variability in the face of competing outcomes supports tailoring decision-making to individual priorities. In the current example, this may mean forgoing antihypertensive medications or compromising on blood pressure reduction for some individuals.
competing outcomes; fall injuries; hypertension; patient priorities
Frailty among older persons is a dynamic process, characterized by frequent transitions between frailty states over time. We performed a prospective longitudinal study to evaluate the relationship between intervening hospitalizations and these transitions.
We studied 754 nondisabled community-living persons, aged 70 years or older. Frailty, assessed every 18 months for 108 months, was defined on the basis of muscle weakness, exhaustion, low physical activity, shrinking, and slow walking speed. Participants were classified as frail if they met three or more of these criteria, prefrail if they met one or two of the criteria, or nonfrail if they met none of the criteria. Hospitalizations were ascertained every month for a median of 108 months.
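The three-state classification rule described above amounts to counting criteria, and can be sketched directly (criterion names here are shorthand labels for the five criteria listed):

```python
# Minimal sketch of the frailty classification rule: count how many of
# the five criteria a participant meets and map the count to a state.
CRITERIA = ("weakness", "exhaustion", "low_activity", "shrinking", "slowness")

def classify_frailty(present):
    """present: set of criteria met; returns the frailty state."""
    n = sum(c in present for c in CRITERIA)
    if n >= 3:
        return "frail"       # three or more criteria
    if n >= 1:
        return "prefrail"    # one or two criteria
    return "nonfrail"        # no criteria

classify_frailty({"weakness", "slowness"})               # 'prefrail'
classify_frailty({"weakness", "slowness", "shrinking"})  # 'frail'
```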
The exposure rates (95% confidence intervals) of hospitalization per 1,000 months, based on frailty status at the start of each 18-month interval, were 19.7 (16.2–24.0) for nonfrail, 32.9 (29.8–36.2) for prefrail, and 57.2 (52.9–63.1) for frail participants. The likelihood of transitioning from states of greater frailty to lesser frailty (ie, recovering) was consistently lower among participants exposed to intervening hospitalizations, with adjusted hazard ratios per hospitalization ranging from 0.46 (95% confidence interval: 0.21–1.03) for the transition from frail to nonfrail states to 0.52 (95% confidence interval: 0.42–0.65) for the transition from prefrail to nonfrail states. Hospitalization had more modest and less consistent effects on transitions from states of lesser frailty to greater frailty. Nonetheless, transitions from nonfrail to frail states were uncommon in the absence of a hospitalization.
Recovery from prefrail and frail states is substantially diminished by intervening hospitalizations. These results provide additional evidence highlighting the adverse consequences of hospitalization in older persons.
Frailty; Hospitalization; Longitudinal study
A quantitative framework to assess harms and benefits of candidate medications in the context of drugs that a patient is already taking is proposed.
Probabilities of harms and benefits of a given medication are averaged to yield a utility value. The utility values of all medications under consideration are combined as a geometric mean to yield an overall measure of favorability. The grouping of medications yielding the highest favorability value is chosen.
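Under one plausible reading of this scoring rule (the abstract does not give the formula), a medication's utility averages its benefit probability with the complement of its harm probability, and a grouping's favorability is the geometric mean of its utilities. The probabilities below are invented for illustration:

```python
# Hedged sketch of the utility/favorability calculation, under the
# stated assumption about how harms and benefits are averaged.
from math import prod

def utility(p_benefit, p_harm):
    """Average of the benefit probability and 1 - harm probability."""
    return (p_benefit + (1.0 - p_harm)) / 2.0

def favorability(meds):
    """meds: list of (p_benefit, p_harm) per medication in a grouping.
    Returns the geometric mean of the medications' utilities."""
    utils = [utility(b, h) for b, h in meds]
    return prod(utils) ** (1.0 / len(utils))

# Two candidate groupings with made-up probabilities.
a = favorability([(0.6, 0.1), (0.5, 0.2)])
b = favorability([(0.6, 0.1), (0.3, 0.4)])
best = "A" if a > b else "B"   # choose the grouping with higher favorability
```

The geometric mean penalizes any single medication with a very low utility more sharply than an arithmetic mean would, which fits the goal of avoiding groupings that contain one poorly tolerated drug.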
Five examples of choosing between widely used candidate medications demonstrate the feasibility of the proposed framework.
The proposed framework provides a simple method for considering the trade-offs involved in prescribing multiple medications. It can be adapted to include additional parameters representing severity of condition, prioritization of outcomes, patient preferences, dosages, and medication interactions. Inconsistent reporting in the medical literature of data on the benefits and harms of medications, dosages, and interactions constitutes its primary limitation.
adverse effect; utility function; aging; trade-offs; multiple medications
Falling is a common and morbid condition among elderly persons. Effective strategies to prevent falls have been identified but are underutilized.
Using a nonrandomized design, we compared rates of injuries from falls in a region of Connecticut where clinicians had been exposed to interventions to change clinical practice (intervention region) and in a region where clinicians had not been exposed to such interventions (usual-care region). The interventions encouraged primary care clinicians and staff members involved in home care, outpatient rehabilitation, and senior centers to adopt effective risk assessments and strategies for the prevention of falls (e.g., medication reduction and balance and gait training). The outcomes were rates of serious fall-related injuries (hip and other fractures, head injuries, and joint dislocations) and fall-related use of medical services per 1000 person-years among persons who were 70 years of age or older. The interventions occurred from 2001 to 2004, and the evaluations took place from 2004 to 2006.
Before the interventions, the adjusted rates of serious fall-related injuries (per 1000 person-years) were 31.2 in the usual-care region and 31.9 in the intervention region. During the evaluation period, the adjusted rates were 31.4 and 28.6, respectively (adjusted rate ratio, 0.91; 95% Bayesian credibility interval, 0.88 to 0.94). Between the preintervention period and the evaluation period, the rate of fall-related use of medical services increased from 68.1 to 83.3 per 1000 person-years in the usual-care region and from 70.7 to 74.2 in the intervention region (adjusted rate ratio, 0.89; 95% credibility interval, 0.86 to 0.92). The percentages of clinicians who received intervention visits ranged from 62% (131 of 212 primary care offices) to 100% (26 of 26 home care agencies).
Dissemination of evidence about fall prevention, coupled with interventions to change clinical practice, may reduce fall-related injuries in elderly persons.
As the heart failure population continues to age, disability is becoming an increasingly important issue. Our objective was to identify risk factors for the onset of disability in activities of daily living among older persons with heart failure.
The study population included participants with newly diagnosed heart failure from the Cardiovascular Health Study, a longitudinal study of community-living, older persons. Data were collected through annual examinations. Cox regression modeling was used to examine associations between time-dependent predictors and onset of disability.
Of 461 participants newly diagnosed with heart failure (mean age 78.7 [sd 5.89]), 23% subsequently developed disability. The first year after heart failure diagnosis was the period of greatest risk for onset of disability (chi-square P value <0.001). Factors that were independently associated with disability included: impaired gait speed (HR 2.29, 95% CI 1.34–3.90); impaired cognition (HR 1.87, 95% CI 1.14–3.05); and depressive symptoms (HR 1.72, 95% CI 1.04–2.83).
Onset of disability is a common occurrence among older persons newly diagnosed with heart failure. Risk factors for onset of disability in this population are potentially modifiable, and should be routinely assessed in an effort to reduce disability in this growing population.
Epidemiology; Geriatric conditions; Activities of daily living; Functional status
Relatively little is known about why older persons develop long-term disability in community mobility.
To identify the risk factors and precipitants for long-term disability in walking ¼ mile and in driving a car.
Prospective cohort study from March 1998 to December 2009.
Greater New Haven, Connecticut.
641 persons, 70+ years, who were active drivers or nondisabled in walking ¼ mile. Persons who were physically frail were oversampled.
Candidate risk factors were assessed every 18 months. Disability in community mobility and exposure to potential precipitants, which included illnesses/injuries leading to hospitalization or restricted activity, respectively, were assessed every month. Disability lasting ≥6 consecutive months was considered long term.
318 (56.0%) and 269 (53.1%) participants developed long-term disability in walking and driving, respectively. Seven risk factors were independently associated with walking disability, while eight were associated with driving disability; the strongest associations for each outcome were found for older age and lower score on the Short Physical Performance Battery. The effects of the precipitants on long-term disability were large, with multivariable hazard ratios for each outcome greater than 6 for hospitalization and 2.4 for restricted activity. The largest differences in absolute risk were generally observed for participants who had a specific risk factor and were subsequently hospitalized.
The observed associations may not be causal. The severity of precipitants was not assessed. The effect of the precipitants may have been underestimated because their exposure after the initial onset of disability was not evaluated.
Long-term disability in community mobility is common among older persons. Multiple risk factors, together with subsequent precipitants, greatly increase the likelihood of developing long-term mobility disability.
Primary Funding Source
National Institute on Aging.
Our objective was to determine the relative importance of geriatric impairments (including those in muscle strength, physical capacity, cognition, vision, hearing and psychological status) and chronic diseases in predicting subsequent functional disability in longitudinal analyses.
We analyzed longitudinal data from the Cardiovascular Health Study. Multivariable Cox hazards regression modeling was used to analyze associations between time-dependent predictors and onset of disability in Activities of Daily Living (ADL) and mobility.
5888 community-dwelling elderly persons were followed for up to seven years.
Data were collected annually through in-person examinations.
ADL disability developed in 15% of participants and mobility disability in 30%. A single multivariable model was developed that included demographics, marital status, body mass index, and number of impairments and diseases. The hazard ratios of having 1, 2, and ≥ 3 geriatric impairments (compared with none) for the outcome of ADL disability were 2.12 (95% CI 1.63–2.75), 4.25 (3.30–5.48), and 7.87 (6.10–10.17), respectively, and for having 1, 2, and ≥ 3 chronic diseases were 1.75 (1.41–2.19), 2.45 (1.95–3.07), and 3.26 (2.53–4.19), respectively. Similarly, the hazard ratios of having 1, 2, and ≥ 3 impairments for the outcome of mobility disability were 1.48 (1.27–1.73), 2.08 (1.77–2.45), and 3.70 (3.09–4.42), and for having 1, 2, and ≥ 3 diseases were 2.06 (1.76–2.40), 2.80 (2.36–3.31), and 4.20 (3.44–5.14).
As compared with the number of chronic diseases, the number of geriatric impairments was more strongly associated with subsequent ADL disability and nearly as strongly associated with subsequent mobility disability.
geriatric impairments; disability; epidemiology
The medical and personal circumstances of older persons present challenges for designing and analyzing clinical research studies in which they participate. These challenges presented by elderly study samples are not unique, but they are sufficiently distinctive to warrant deliberate and systematic attention. Their distinctiveness originates in the multifactorial etiologies of geriatric health syndromes and the multiple morbidities accruing with aging at the end of life. The objective of this article is to identify a set of statistical challenges arising in research with older persons that should be considered conjointly in the practice of clinical research and that should be addressed systematically in the training of biostatisticians intending to work with gerontologists, geriatricians, and older study participants. The statistical challenges include design and analytical strategies for multicomponent interventions, multiple outcomes, state transition models, floor and ceiling effects, missing data, and mixed methods. The methodological and pedagogical themes of this article will be integrated by a description of a proposed subdiscipline of “gerontologic biostatistics” and supported by the introduction of a new set of statistical resources for researchers working in this area. These conceptual and methodological resources have been developed in the context of several collaborating Claude D. Pepper Older Americans Independence Centers.
clinical research; statistics; aging; study design
Disability among older persons is a complex and highly dynamic process, with high rates of recovery and frequent transitions between states of disability. The role of intervening illnesses and injuries (i.e. events) on these transitions is uncertain.
To evaluate the relationship between intervening events and transitions among states of no disability, mild disability, severe disability and death, and to determine the association of physical frailty with these transitions.
Design, Setting, and Participants
Prospective cohort study, conducted in greater New Haven, Connecticut, from March 1998 to December 2008, of 754 community-living persons, aged 70 years or older, who were nondisabled at baseline in four essential activities of daily living: bathing, dressing, walking, and transferring. Telephone interviews were completed monthly for more than 10 years to assess disability and ascertain exposure to intervening events, which included illnesses and injuries leading to either hospitalization or restricted activity. Physical frailty (defined as a time of more than 10 seconds on the rapid gait test) was assessed every 18 months through 108 months.
Main Outcome Measure
Transitions between no disability, mild disability, and severe disability, and transitions from each of these three states to death, were evaluated each month.
Hospitalization was strongly associated with 8 of the 9 possible transitions, with increased multivariable hazard ratios (HR) as high as 168 (95% confidence interval [CI], 118–239) for the transition from no disability to severe disability and decreased HRs as low as 0.41 (95% CI, 0.30–0.54) for the transition from mild disability to no disability. Restricted activity also increased the likelihood of transitioning from no disability to both mild and severe disability (HR [CI]: 2.59 [2.23–3.02] and 8.03 [5.28–12.21], respectively) and from mild disability to severe disability (1.45 [1.14–1.84]), but was not associated with recovery from mild or severe disability. For all nine of the transitions, the presence of physical frailty accentuated the associations of the intervening events. For example, the absolute risk of transitioning from no disability to mild disability within one month after hospitalization for frail individuals was 12.4% (95% CI, 12.1%–12.7%) vs 4.9% (4.7%–5.1%) for non-frail individuals. Among the possible reasons for hospitalization, fall-related injury conferred the highest likelihood of developing new or worsening disability.
Among older persons, particularly those who were physically frail, intervening illnesses and injuries greatly increased the likelihood of developing new or worsening disability. Only the most potent events, i.e. those leading to hospitalization, reduced the likelihood of recovery from disability.
We present a case study using a multilevel modeling approach to determine whether depressive symptoms are affected by genetic factors. Existing studies examining this question have focused on twins. The present study built on the literature by conducting a preliminary study of the heritability of depressive symptoms within extended families. At the same time, this study assessed the need for adjustment of a heritability measure in a family study using a multigenerational sample. The sample consisted of 230 community-dwelling extended families that included 431 adult offspring, comprising full siblings, half siblings, and cousins who participated in the University of Southern California Longitudinal Study of Generations. All participants completed the Center for Epidemiologic Studies Depression (CES-D) scale. The multilevel analysis allowed us to model the natural hierarchy of the extended family. Results indicate that the proportion of the phenotypic variance for CES-D that occurs due to genetic differences is not significantly larger than zero among these participants [h2 = 8.6%, 95% confidence interval (CI) = 0–57%, p = 0.71]. Our findings suggest that future studies examining depressive symptoms in this sample can focus on non-genetic explanatory factors without the necessity to control for genetic variation. However, our study may be limited by measurement of prevalent depressive symptoms, which may not generalize to lifetime depressive symptoms.
depressive symptoms; heritability; genetic variance; family study; multilevel model
Longitudinal epidemiologic studies with irregularly observed categorical outcomes present considerable analytical challenges. Generalized linear models (GLMs) tolerate without bias only values missing completely at random and assume that all observations contribute equally. A triggered sampling study design and an analysis using inverse intensity weights in a GLM offer promise of effectively addressing both shortcomings. A triggered sampling design generates irregularly spaced outcomes because, in addition to regularly scheduled follow-up interviews, it specifies that data be collected after a “trigger” (a decline in health status during follow-up) occurs. It is intended to mitigate bias introduced by study participant loss to follow-up. For each observation, an inverse intensity weight is calculated from an Andersen-Gill recurrent-event regression model whose events of interest are observed interviews; the weights help to equalize observation contributions. Investigators in the Longitudinal Examination of Attitudes and Preferences (LEAP) Study (1999–2002), a Connecticut study of seriously ill older adults at the end of life, used a triggered sampling design. In this paper, the authors analyze data from the LEAP Study to illustrate the methods and benefits of inverse intensity weighting in GLMs. An additional benefit of the analytical approach presented is that it allows for assessment of the utility of triggered sampling in longitudinal studies.
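The weighting step itself is simple once the observation intensities are in hand. The sketch below assumes the per-interview intensities have already been estimated from an Andersen-Gill fit (the numbers are hypothetical), and uses an intercept-only weighted GLM (a weighted mean) as the simplest possible downstream analysis:

```python
# Hedged sketch of inverse intensity weighting: observations from
# frequently observed participants get small weights, and vice versa,
# so each participant's contribution is roughly equalized.

def inverse_intensity_weights(intensities):
    """intensities: fitted observation intensities, one per interview."""
    return [1.0 / lam for lam in intensities]

def weighted_mean(outcomes, weights):
    """Intercept-only weighted GLM: the weighted average outcome."""
    return sum(w * y for w, y in zip(weights, outcomes)) / sum(weights)

# Two interviews from an often-observed participant (intensity 2.0)
# and one from a rarely observed participant (intensity 0.5).
intensities = [2.0, 2.0, 0.5]
outcomes    = [1.0, 1.0, 0.0]
w = inverse_intensity_weights(intensities)   # [0.5, 0.5, 2.0]
est = weighted_mean(outcomes, w)             # 1/3, vs 2/3 unweighted
```

The unweighted mean here would be 2/3, dominated by the over-observed participant; the inverse intensity weights pull the estimate back toward equal per-participant contributions.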
data collection; generalized linear model; longitudinal studies; sampling plan; weighting
The objective of this study was to identify the factors associated with recovery of prehospital function among older persons admitted to a nursing home with disability after an acute hospitalization.
The analytic sample included 292 participants of an ongoing cohort study who had one or more admissions to a nursing home with disability after an acute hospitalization during nearly 10 years of follow-up, yielding a total of 364 “index” nursing home admissions. Information on nursing home admissions, hospitalizations, and disability in essential activities of daily living was ascertained during monthly telephone interviews. Data on potential predictors of functional recovery were collected during comprehensive assessments, which were completed every 18 months for 90 months. Participants were considered to have recovered if they were discharged home within 6 months of their nursing home admission at (or above) their prehospital level of function.
Recovery of prehospital function was observed for 115 (31.6%) of the 364 index nursing home admissions. In the multivariable analysis, the strongest associations were observed for the best category of performance, relative to the poorest category, for gross motor coordination (hazard ratio [HR] 13.5, 95% confidence interval [CI] 4.02–45.0) and manual dexterity (HR 10.0, 95% CI 2.94–34.3). Only two other factors were independently associated with recovery of prehospital function: not cognitively impaired (HR 3.0, 95% CI 1.46–6.14) and no significant weight loss (HR 1.96, 95% CI 1.06–3.63).
In the setting of an acute hospitalization leading to a nursing home admission with disability, the likelihood of recovering prehospital function is low. The factors associated with recovery include faster performance on tests of gross motor coordination and manual dexterity and the absence of cognitive impairment and significant weight loss.
Disability evaluation; Nursing homes; Cohort studies
Although depressive symptoms in older persons are common, their association with disability burden is not well understood. The authors evaluated the association between level of depressive symptoms and severity of subsequent disability over time and determined whether this relationship differed by sex.
Participants included 754 community-living persons aged 70 years or older who underwent monthly assessments of disability in four essential activities of daily living for up to 117 months. Disability was categorized each month as none, mild, and severe. Depressive symptoms, assessed every 18 months, were categorized as low (referent group), moderate, and high. Multinomial logit models fit with generalized estimating equations (GEE) were used to calculate odds ratios and 95% confidence intervals.
Moderate (odds ratio = 1.30; 95% confidence interval: 1.18–1.43) and high (odds ratio = 1.68; 95% confidence interval: 1.50–1.88) depressive symptoms were associated with mild disability, whereas only high depressive symptoms were associated with severe disability (odds ratio = 2.05; 95% confidence interval: 1.76–2.39). Depressive symptoms were associated with disability burden in both men and women, with modest differences by sex; men had an increased likelihood of experiencing severe disability at both moderate and high levels of depressive symptoms, whereas only high depressive symptoms were associated with severe disability in women.
Levels of depressive symptoms below the threshold for subsyndromal depression are associated with increased disability burden in older persons. Identifying and treating varying levels of depressive symptoms in older persons may ultimately help to reduce the burden of disability in this population.
Aging; Depression; Disability; Prospective studies; Sex differences
The CREB1/ATF1 pathway is activated in canine and human rods and cones undergoing degeneration. The pathway is also activated by exposure to the neuroprotective agent CNTF. These data suggest that CREB1/ATF1 contributes to an innate protective response and is of potential therapeutic value in the treatment of retinitis pigmentosa (RP) and age-related macular degeneration (AMD).
The cAMP response element binding protein 1 (CREB1) and activating transcription factor 1 (ATF1) are closely related members of the bZIP superfamily of transcription factors. Both are activated in response to a wide array of stimuli, including cellular stress. This study was conducted to assess the CREB1/ATF1 pathway in photoreceptor disease and protection.
The expression levels of p-CREB1, CREB1, and ATF1 were examined by immunoblot and immunohistochemistry in normal canine retina and retinas of several canine models of retinal degeneration (rcd1, rcd2, erd, prcd, XLPRA1, XLPRA2, T4R RHO). Human retinas affected by age-related macular degeneration (AMD) were also examined. p-CREB1/ATF1 immunolabeling was assessed in normal and rcd1 dogs treated with ciliary neurotrophic factor (CNTF), to examine the effect of a neuroprotective stimulus on activation of CREB1/ATF1.
Native CREB1 and ATF1, as well as phosphorylated CREB1/ATF1, were examined in normal canine retina by immunoblot. The p-CREB1 antibody identified phosphorylated CREB1 and ATF1 and labeled only the inner retina in normal dogs. In degenerate canine and human retinas, strong immunolabeling appeared in rod and cone photoreceptors, indicating increased expression of native CREB1 and ATF1, as well as increased phosphorylation of these proteins. Retinal protection by CNTF in rcd1 dogs was accompanied by a significant increase in the number of p-CREB1/ATF1-labeled photoreceptor nuclei.
Positive association of CREB1/ATF1 phosphorylation with photoreceptor protection suggests that it may contribute to an innate protective response. These data identify a signaling mechanism in rods and cones of potential importance for therapies of RP and AMD.
To identify risk factors for five different subtypes of disability.
Design, Setting and Participants
Prospective cohort study of 754 community-living residents of greater New Haven, Connecticut, who were 70 years or older and initially nondisabled in four essential activities of daily living (bathing, dressing, walking, and transferring).
Candidate risk factors were measured every 18 months for 90 months during comprehensive home-based assessments. Disability was assessed during monthly telephone interviews for up to 108 months. Among participants who were nondisabled at the start of an 18-month interval, incident episodes of five different disability subtypes were determined during the subsequent 18 months: transient, short-term, long-term, recurrent, and unstable.
The cumulative incidence rates (95% confidence intervals) per 100 person-intervals were 9.8 (8.9–10.6) for transient disability, 3.8 (3.3–4.3) for short-term disability, 7.1 (6.4–7.8) for long-term disability, 4.7 (4.1–5.3) for recurrent disability, and 4.4 (3.9–5.0) for unstable disability. In a multivariate analysis, the Short Physical Performance Battery (SPPB) was associated with each of the five disability subtypes, with adjusted hazard ratios ranging from 1.10 for transient disability to 1.35 for long-term disability. The only other factors associated with short-term, long-term, and recurrent disability were stroke, visual impairment, and poor grip strength, respectively. Transient disability and unstable disability shared the same set of risk factors—depressive symptoms, stroke, and poor grip strength—in addition to the SPPB.
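A cumulative incidence rate per 100 person-intervals is the event count divided by the number of person-intervals at risk, scaled by 100. The sketch below reproduces a rate near the first reported value (9.8, 95% CI 8.9–10.6) using a normal-approximation Poisson CI; the event and denominator counts are hypothetical, since the abstract does not report them.

```python
import math

def rate_per_100(events, person_intervals, z=1.96):
    """Cumulative incidence per 100 person-intervals with a
    normal-approximation (Poisson) 95% CI."""
    rate = 100.0 * events / person_intervals
    half = z * 100.0 * math.sqrt(events) / person_intervals
    return rate, rate - half, rate + half

# Hypothetical counts chosen only to land near the first reported rate
rate, lo, hi = rate_per_100(511, 5214)
print(round(rate, 1), round(lo, 1), round(hi, 1))
```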
Our results provide mixed evidence to support the distinct nature of the five disability subtypes.
aged; cohort studies; risk factors; disability evaluation; activities of daily living
Despite the importance of functional status to older persons and their families, little is known about the course of disability at the end of life.
We evaluated data on 383 decedents from a longitudinal study involving 754 community-dwelling older persons. None of the subjects had disability in essential activities of daily living at the beginning of the study, and the level of disability was ascertained during monthly interviews for more than 10 years. Information on the conditions leading to death was obtained from death certificates and comprehensive assessments that were completed at 18-month intervals after the baseline assessment.
In the last year of life, five distinct trajectories were identified, from no disability to the most severe disability: 65 subjects had no disability (17.0%), 76 had catastrophic disability (19.8%), 67 had accelerated disability (17.5%), 91 had progressive disability (23.8%), and 84 had persistently severe disability (21.9%). The most common condition leading to death was frailty (in 107 subjects [27.9%]), followed by organ failure (in 82 subjects [21.4%]), cancer (in 74 subjects [19.3%]), other causes (in 57 subjects [14.9%]), advanced dementia (in 53 subjects [13.8%]), and sudden death (in 10 subjects [2.6%]). When the distribution of the disability trajectories was evaluated according to the conditions leading to death, a predominant trajectory was observed only for subjects who died from advanced dementia (67.9% of these subjects had a trajectory of persistently severe disability) and sudden death (50.0% of these subjects had no disability). For the four other conditions leading to death, no more than 34% of the subjects had any of the disability trajectories. The distribution of disability trajectories was particularly heterogeneous among the subjects with organ failure (from 12.2 to 32.9% of the subjects followed a specific trajectory) and frailty (from 14.0 to 27.1% of the subjects followed a specific trajectory).
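The five trajectory groups are mutually exclusive, so their counts should partition the 383 decedents exactly, and each reported percentage should match its count; a quick arithmetic check using the figures from the abstract confirms both.

```python
# Consistency check of the trajectory counts reported in the abstract.
counts = {
    "no disability": 65,
    "catastrophic": 76,
    "accelerated": 67,
    "progressive": 91,
    "persistently severe": 84,
}
total = sum(counts.values())
pcts = {k: round(100 * v / total, 1) for k, v in counts.items()}
print(total)                       # 383
print(pcts["no disability"])       # 17.0
print(pcts["persistently severe"]) # 21.9
```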
In most of the decedents, the course of disability in the last year of life did not follow a predictable pattern based on the condition leading to death.
An increasing number of patients have medical conditions that alter host immunity or require immunosuppressive medications. While immunosuppression is associated with increased risk of infection, the precise effect of immunosuppression on innate immunity is not well understood. We studied monocyte Toll-like receptor (TLR) expression and cytokine production in 137 patients with autoimmune diseases who were maintained on immunosuppressive medications and 419 non-immunosuppressed individuals.
Human peripheral blood monocytes were assessed for surface expression of TLRs 1, 2, and 4. After incubation with TLR agonists, in vitro production of the cytokines IL-8, TNF-α, and MIF was measured by ELISA as a measure of TLR signaling efficiency and downstream effector responsiveness. Immunosuppressed patients had significantly higher TLR4 surface expression when compared to non-immunosuppressed adults (TLR4 %-positive 70.12±2.28 vs. 61.72±2.05, p = 0.0008). IL-8 and TNF-α baseline levels did not differ, but were significantly higher in the autoimmune disease group following TLR stimulation. By contrast, baseline MIF levels were elevated in monocytes from immunosuppressed individuals. By multivariable analyses, IL-8 and TNF-α, but not MIF levels, were associated with the diagnosis of an underlying autoimmune disease. However, only MIF levels were significantly associated with the use of immunosuppressive medications.
Our results reveal that an enhanced innate immune response is a feature of patients with autoimmune diseases treated with immunosuppressive agents. The increased risk for infection evident in this patient group may reflect a dysregulation rather than a simple suppression of innate immunity.