To examine the effects of caregiver and patient characteristics on caregivers’ medical care use and cost.
147 caregiver/patient dyads were followed annually for 6 years in 3 academic Alzheimer’s disease (AD) centers in the US. Logistic, negative binomial, and generalized linear mixed models were used to examine overall effects of caregiver/patient characteristics on caregivers’ hospitalizations, doctor visits, outpatient tests and procedures, and prescription and over-the-counter medications.
Patients’ comorbid conditions and dependence were associated with increased healthcare use and costs among caregivers. Increases in caregiver depressive symptoms were associated with increases in multiple domains of caregivers’ healthcare use and costs.
Findings suggest that we should expand our focus on dementia patients to include family caregivers to obtain a fuller picture of effects of caregiving. Primary care providers should integrate caregivers’ needs in healthcare planning and delivery. Clinical interventions that treat patients and caregivers as a whole will likely achieve the greatest beneficial effects.
caregiving; medical care; cost; dementia; Alzheimer’s disease; longitudinal study
The extent to which social cognitive changes reflect a discrete constellation of symptoms dissociable from general cognitive changes in Alzheimer’s disease (AD) is unclear. Moreover, whether social cognitive symptoms contribute to disease severity and progression is unknown. The current multicenter study investigated cross-sectional and longitudinal associations between social cognition, general cognition, and dependence in 517 participants with probable AD. Participants were followed every 6 months for 5.5 years. Results from multivariate latent growth curve models adjusted for sex, age, education, depression, and recruitment site revealed that social cognition and general cognition were unrelated cross-sectionally and over time. However, baseline levels of each were independently related to dependence, and change values of each were independently related to change in dependence. These findings highlight the separability of social and general cognition in AD. Results underscore the relevance of considering social cognition when modeling disease and estimating clinical outcomes related to patient disability.
Alzheimer’s disease; social cognition; cognition; dependence
The clinical syndrome of Huntington’s disease is notable for a triad of motor, cognitive, and emotional features. All HD patients eventually become occupationally disabled; however, the factors that render HD patients unable to maintain employment have not been extensively studied. This review begins by discussing the clinical triad of HD, highlighting the distinction in the motor disorder between involuntary movements such as chorea and voluntary movement impairment, with the latter contributing more to functional disability. Cognitive disorder clearly contributes to disability, though its relative contribution compared to motor disorder is difficult to unravel, especially since many of the tests used to assess “cognition” have a strong motor component. The role of emotional changes in disability needs more study. The literature on contributions to functional disability, driving impairment, and nursing home placement is reviewed. Relevant experience is presented from the longstanding JHU HD observational study on motor vs. cognitive onset, and on cognitive and motor features at the time when individuals discontinued working. Finally, we briefly review government policies in several countries on disability determination. We interpret the data from our own studies and from the literature to indicate that there is usually a close relationship between cognitive and motor dysfunction, and that it is critical to take both into consideration in determining disability.
Although disturbed sleep is associated with cognitive deficits, the association between sleep disturbance and Alzheimer’s disease (AD) pathology is unclear. In this pilot study, we examined the extent to which sleep duration, sleep quality, and sleep-disordered breathing (SDB) are associated with β-amyloid (Aβ) deposition in the brains of living humans.
We studied 13 older adults (8 with normal cognition and 5 with mild cognitive impairment (MCI)). Participants completed neuropsychological testing, polysomnography, and Aβ imaging with [11C]-Pittsburgh compound B.
Among participants with MCI, higher apnea-hypopnea index and oxygen desaturation index were associated with greater Aβ deposition, globally and regionally in the precuneus. There were no significant associations between SDB and Aβ deposition among cognitively normal participants. There were no significant associations between sleep duration or sleep fragmentation and Aβ deposition.
These preliminary results suggest that, among older adults with MCI, greater SDB severity is associated with greater Aβ deposition.
sleep; sleep apnea; mild cognitive impairment; Alzheimer’s disease; amyloid; positron emission tomography
The method of loci (MoL) is a complex visuospatial mnemonic strategy. Previous research suggests that older adults could benefit from using the MoL but that it is too attentionally demanding for them to use in practice. We evaluated the hypotheses that training can increase use of the MoL and that MoL use is associated with better memory.
We analyzed skip patterns on response forms for the Auditory Verbal Learning Test (AVLT) in the Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE, n=1,401) trial using 5 years of longitudinal follow-up.
At baseline, 2% of participants skipped spaces. Fewer than 2% of control participants skipped spaces at any visit across 5 years, but 25% of memory-trained participants, taught the MoL, did so. Participants who skipped spaces used more serial clustering, a hallmark of the MoL (p<0.001). Trained participants who skipped spaces showed greater memory improvement after training than memory-trained participants who did not skip spaces (Cohen's d=0.84, p=0.007), and did not differ in the subsequent rate of long-term memory decline through up to 5 years of follow-up.
This study suggests that, despite its attentional demands, the MoL is used by up to 25% of older adults after training, and that its use is associated with immediate memory improvement that is sustained through the course of follow-up. Findings are consistent with the notion that older adults balance complexity with novelty in strategy selection, and that changes in the strategies used following memory training result in observable qualitative and quantitative differences in memory performance.
Method of loci; strategy use; memory training; gerontology; older adults
Many residents of assisted living (AL) have chronic diseases that are difficult to manage, including congestive heart failure (CHF), chronic obstructive pulmonary disease (COPD) and diabetes mellitus (DM). We estimated the amount and intensity of care delivered by the staff for residents with these conditions.
We performed a secondary data analysis from the Maryland Assisted Living (MDAL) Study (399 residents, 29 facilities). In-person assessments included measures of cognition, function, depression, and general medical health. Diagnoses of CHF, COPD, and DM, as well as current medications, were abstracted from AL medical charts. Measures of care utilization were operationalized at the resident level as: 1) minutes per day of direct care (caregiver activity scale [CAS]), 2) subjective staff ratings of care burden, and 3) assigned AL “level of care” (based on state regulatory criteria).
In best fit regression models, CHF and DM were not significant predictors of the evaluated care utilization measures; however, COPD was independently associated with increased minutes per day of direct care – 34% of the variance in the caregiver activity scale was explained by degree of functional dependency, cognitive impairment, age, and presence of COPD. Functional dependency, depressive symptoms, and age explained almost a quarter (23%) of the variance of staff care burden rating. For the AL level of care intensity rating, degree of functional dependency, level of cognition, and age were significant correlates, together explaining about 28% of the variance.
The presence of COPD was a significant predictor of time per day of direct care. However, CHF and DM were not correlates of care utilization measures. Functional and cognitive impairment were associated with measures of care utilization, reiterating the importance of these characteristics in the amount and intensity of care consumed by AL residents. Further study of this population could reveal other forms and amounts of care utilization.
Chronic diseases; Assisted living; care utilization
There is a lack of empirical evidence about the impact of regulations on dementia care quality in assisted living (AL). We examined cohort differences in dementia recognition and treatment indicators between two cohorts of AL residents with dementia, evaluated prior to and following a dementia-related policy modification to more adequately assess memory and behavioral problems.
Cross-sectional comparison of two AL resident cohorts was done (Cohort 1 [evaluated 2001–2003] and Cohort 2 [evaluated 2004–2006]) from the Maryland Assisted Living studies. Initial in-person evaluations of residents with dementia (n = 248) were performed from a random sample of 28 AL facilities in Maryland (physician examination, clinical characteristics, and staff and family recognition of dementia included). Adequacy of dementia workup and treatment was rated by an expert consensus panel.
Staff recognition of dementia was better in Cohort 1 than in Cohort 2 (77% vs. 63%, p = 0.011), with no significant differences in family recognition (86% vs. 85%, p = 0.680) or complete treatment ratings (52% vs. 64%, p = 0.060). In adjusted logistic regression, cognitive impairment and neuropsychiatric symptoms correlated with staff recognition, and cognitive impairment correlated with family recognition. Increased age and cognitive impairment reduced the odds of having a complete dementia workup. The odds of having complete dementia treatment were reduced by older age and more depressive symptoms. Cohort was not predictive of dementia recognition or treatment indicators in adjusted models.
We noted few cohort differences in dementia care indicators after accounting for covariates, and concluded that rates of dementia recognition and treatment did not appear to change much following the policy modifications.
aged care; epidemiology; clinical assessment; dementia; long-term care
Randomized controlled trials that examine the effects of cholinesterase inhibitors (ChEI) and memantine on patient outcomes over long periods of time are difficult to conduct. Observational studies of practice-based populations outside the context of controlled trials, and open-label extension studies, that evaluate the effects of these medications over time are limited.
To examine in an observational study (1) relationships between ChEI and memantine use and functional and cognitive endpoints and mortality in AD patients, (2) relationships between other patient characteristics and these clinical endpoints, and (3) whether effects of the predictors change across time.
Multicenter, natural history study.
Three university-based AD centers in the US.
201 patients diagnosed with probable AD, with modified Mini-Mental State Examination scores of 30 or higher at study entry, followed annually for 6 years.
Discrete-time hazard analyses were used to examine relationships between ChEI and memantine use during the previous 6 months reported at each assessment and time to cognitive (Mini-Mental State Examination, MMSE≤10) and functional (Blessed Dementia Rating Scale, BDRS≥10) endpoints and mortality. Analyses controlled for clinical characteristics including baseline cognition, function, and comorbid conditions, and presence of extrapyramidal signs and psychiatric symptoms at each assessment interval. Demographic characteristics included baseline age, sex, education, and living arrangement at each assessment interval.
ChEI use was associated with delayed time to the functional endpoint and to death. Memantine use was associated with delayed time to death. Different patient characteristics were associated with different clinical endpoints.
Results suggest long-term beneficial effects of ChEI and memantine use on patient outcomes. As with all observational cohort studies, observed relationships should not be interpreted as causal effects.
Alzheimer’s disease; cholinesterase inhibitors; memantine; outcomes; longitudinal studies
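Discrete-time hazard analyses of the kind used above are typically fit as logistic regressions on person-period data, in which each participant contributes one record per assessment interval until the endpoint occurs or follow-up ends. A minimal sketch of that expansion step (not the study's code; all names and counts are hypothetical):

```python
# Sketch of the person-period expansion underlying a discrete-time hazard
# analysis. Each subject yields one row per completed assessment interval;
# the binary 'event' column is what a logistic model is then fit to.

def to_person_period(subjects):
    """Expand person-level records into person-period records.

    subjects: list of dicts with 'id', 'intervals' (number of completed
    assessment intervals), and 'event' (True if the endpoint, e.g.
    MMSE <= 10, occurred at the final interval).
    """
    rows = []
    for s in subjects:
        for t in range(1, s["intervals"] + 1):
            rows.append({
                "id": s["id"],
                "interval": t,
                # 1 only in the interval where the endpoint occurred
                "event": int(s["event"] and t == s["intervals"]),
            })
    return rows

subjects = [
    {"id": 1, "intervals": 3, "event": True},   # reached endpoint at interval 3
    {"id": 2, "intervals": 2, "event": False},  # censored after interval 2
]
rows = to_person_period(subjects)
# subject 1 contributes 3 rows (event=1 only in the last);
# subject 2 contributes 2 rows (event=0 throughout)
```

Time-varying covariates such as medication use in the previous 6 months would be attached to each person-period row before fitting.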
A high body mass index (BMI) in middle age or a decrease in BMI in late age has been considered a predictor of the development of Alzheimer's disease (AD). However, little is known about BMI change close to or after AD onset.
BMI of participants from three cohorts, the Washington Heights and Inwood Columbia Aging Project (WHICAP; population-based), the Predictors Study (clinic-based), and the National Alzheimer's Coordinating Center (NACC; clinic-based), was analyzed longitudinally. We used generalized estimating equations to test whether there were significant changes of BMI over time, adjusting for age, sex, education, race, and research center. Stratification analyses were run to determine whether BMI changes depended on baseline BMI status.
BMI declined over time up to AD clinical onset, with an annual decrease of 0.21 kg/m2 (p=0.02) in WHICAP and 0.18 kg/m2 (p=0.04) in NACC. After clinical onset of AD, there was no significant decrease of BMI. BMI even increased (b=0.11, p=0.004) among prevalent AD participants in NACC. During the prodromal period, BMI decreased over time in overweight (BMI ≥25 and <30) WHICAP participants and obese (BMI ≥30) NACC participants. After AD onset, BMI tended to increase in underweight/normal-weight (BMI <25) patients and decrease in obese patients in all three cohorts, although the results were significant in the NACC study only.
Our study suggests that while BMI declines before the clinical AD onset, it levels off after clinical AD onset, and might even increase in prevalent AD. The pattern of BMI change may also depend on the initial BMI.
Body mass index; weight; Alzheimer's disease; prospective study
To investigate the influence of memory training on initial recall and learning.
The Advanced Cognitive Training for Independent and Vital Elderly study of community-dwelling adults older than age 65 (n = 1,401). We decomposed trial-level recall in the Auditory Verbal Learning Test (AVLT) and Hopkins Verbal Learning Test (HVLT) into initial recall and learning across trials using latent growth models.
Trial-level increases in words recalled in the AVLT and HVLT at each follow-up visit followed an approximately logarithmic shape. Over the 5-year study period, memory training was associated with slower decline in Trial 1 AVLT recall (Cohen’s d = 0.35, p = .03) and steep pre- and posttraining acceleration in learning (d = 1.56, p < .001). Findings were replicated using the HVLT (decline in initial recall, d = 0.60, p = .01; pre- and posttraining acceleration in learning, d = 3.10, p < .001). Because of the immediate training boost, the memory-trained group had a higher level of recall than the control group through the end of the 5-year study period despite faster decline in learning.
This study contributes to the understanding of the mechanisms by which training benefits memory and expands current knowledge by reporting long-term changes in initial recall and learning, as measured from growth models and by characterization of the impact of memory training on these components. Results reveal that memory training delays the worsening of memory span and boosts learning.
AVLT; Growth modeling; HVLT; Memory training; Older adults
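The approximately logarithmic shape of trial-level recall described above can be summarized with a simple two-parameter fit, where the intercept approximates initial recall and the slope indexes learning across trials. A minimal sketch (not the study's latent growth models; the recall counts are hypothetical):

```python
import math

# Fit y = a + b*ln(trial) to words recalled per learning trial using
# closed-form ordinary least squares. 'a' approximates Trial 1 recall
# (initial recall); 'b' indexes learning across trials.

def fit_log_curve(recall_by_trial):
    """Return (a, b) for y = a + b*ln(t), t = 1..len(recall_by_trial)."""
    xs = [math.log(t) for t in range(1, len(recall_by_trial) + 1)]
    ys = recall_by_trial
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# hypothetical words-recalled counts over five learning trials
a, b = fit_log_curve([6, 8, 9, 10, 10])
```

Comparing a and b between trained and control groups over follow-up visits mirrors, in simplified form, the decomposition into initial recall and learning.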
The contribution of executive cognition (EC) to the prediction of incident dementia remains unclear. This prospective study examined the predictive value of EC for subsequent cognitive decline in persons with mild cognitive impairment (MCI) over a 4-year period.
141 persons with MCI (amnestic and non-amnestic, single- and multiple-domain) received a baseline and two biennial follow-up assessments. Eighteen tests, assessing six different aspects of EC, were administered at baseline and at 2-year follow-up, together with screening cognitive and daily functioning measures. Longitudinal logistic regression models and generalized estimating equations (GEE) were used to examine whether EC could predict progression to a Clinical Dementia Rating Scale (CDR) score of 1 or more over the 4-year period, first at the univariate level and then in the context of demographic and clinical characteristics, daily functioning measures and other neurocognitive factors.
Over the 4-year period, 56% of MCI patients remained stable, 35% progressed to CDR≥1, and 8% reverted to normal (CDR=0). Amnestic MCI subtypes were not associated with higher rates of progression to dementia, whereas multiple-domain subtypes were. Eight of the 18 EC measures, including all three measures assessing inhibition of prepotent responses, predicted MCI outcome at the univariate level. However, the multivariate GEE model indicated that age, daily functioning, and overall cognitive functioning best predicted progression to dementia.
Measures of EC (i.e., inhibitory control) are associated with MCI outcome. However, age and global measures of cognitive and functional impairment are better predictors of incident dementia.
mild cognitive impairment; predictors; executive cognition; dementia
Compared with in-person assessment methods, telephone screening for dementia and other cognitive syndromes may improve efficiency of large population studies or prevention trials. We used data from the Alzheimer's Disease Anti-Inflammatory Prevention Trial (ADAPT) to compare performance of a four-test Telephone Assessment Battery (TAB) that included the Telephone Interview for Cognitive Status (TICS) to that of a traditional in-person Cognitive Assessment Battery (CAB). Among 1,548 elderly participants with valid telephone and in-person screening results obtained within 90 days of each other, 225 persons were referred for a full cognitive diagnostic evaluation (DE) that was completed within six months of screening. Drawing on results from this panel of 225 individuals, we used the capture-recapture method to estimate population numbers of cognitively impaired participants. The latter estimates enabled us to compare the performance characteristics of the two screening batteries at specified cut-offs for detection of dementia and milder forms of impairment. Although our results provide relatively imprecise estimates of the performance characteristics of the two batteries, a comparison of their relative performance suggests that, at selected cut-off points, the TAB produces results broadly comparable to in-person screening and may be slightly more sensitive in detecting mild impairment. TAB performance characteristics also appeared slightly better than those of the TICS alone. Given its benefits in time and cost when screening for cognitive disorders, telephone screening should be considered for large samples.
Telephone; neuropsychology; dementia; Alzheimer disease; mild cognitive impairment; prodromal period; memory disorders; geriatric assessment
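The capture-recapture idea used above can be illustrated in its simplest two-occasion (Lincoln-Petersen) form: two screens each "capture" a subset of the impaired population, and the overlap lets one estimate the total. A hedged sketch, with hypothetical counts (not ADAPT data):

```python
# Lincoln-Petersen capture-recapture estimator: given two capture
# occasions (here, two screening batteries), the total population is
# estimated from the two capture counts and their overlap.

def lincoln_petersen(n1, n2, m):
    """Estimate total population size from two capture occasions.

    n1: cases detected by the first screen (e.g., a telephone battery)
    n2: cases detected by the second screen (e.g., an in-person battery)
    m:  cases detected by both
    """
    if m == 0:
        raise ValueError("need overlap between the two screens")
    return n1 * n2 / m

# hypothetical counts: 120 flagged by telephone, 110 in person, 100 by both
estimated_total = lincoln_petersen(120, 110, 100)  # -> 132.0
```

With an estimated total in hand, the sensitivity of each battery at a given cut-off can be approximated as its detected cases divided by the estimated total.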
Objectives and Methods
Data from the memory training arm (n = 629) of the Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE) trial were examined to characterize change in memory performance through five years of follow-up as a function of memory training, booster training, adherence to training, and other characteristics.
Latent growth model analyses revealed that memory training was associated with improved memory performance through year five, but that neither booster training nor training adherence significantly influenced this effect. Baseline age was associated with change in memory performance attributable to the passage of time alone (i.e., to aging). Higher education and better self-rated health were associated with greater change in memory performance after training.
These findings confirm that memory training can aid in maintaining long-term improvements in memory performance. Booster training and adherence to training do not appear to attenuate rates of normal age-related memory decline.
ACTIVE trial; memory training; training adherence; cognitive training; aging; memory decline
The ability to predict the length of time to death and institutionalization has strong implications for Alzheimer’s disease patients and caregivers, health policy, economics, and the design of intervention studies.
To develop and validate a prediction algorithm that uses data from a single visit to estimate time to important disease endpoints for individual Alzheimer’s disease patients.
Two separate study cohorts (Predictors 1, N = 252; Predictors 2, N = 254), all initially with mild Alzheimer’s disease, were followed for 10 years at three research centers with semiannual assessments that included cognition, functional capacity, and medical, psychiatric, and neurologic information. The prediction algorithm was based on a longitudinal Grade of Membership model developed using the complete series of semiannually collected Predictors 1 data. The algorithm was validated on the Predictors 2 data, using only data from the initial assessment to predict separate survival curves for three outcomes.
For each of the three outcome measures, the predicted survival curves fell well within the 95% confidence intervals of the observed survival curves. Patients were also divided into quintiles for each endpoint to assess the calibration of the algorithm for extreme patient profiles. In all cases, the actual and predicted survival curves were statistically equivalent. Predictive accuracy was maintained even when key baseline variables were excluded, demonstrating the high resilience of the algorithm to missing data.
The new prediction algorithm accurately predicts time to death, institutionalization, and need for full-time care in individual Alzheimer’s disease patients; it can be readily adapted to predict other important disease endpoints. The algorithm will serve an unmet clinical, research, and public health need.
Alzheimer’s disease; prediction algorithm; time to death; nursing home; full-time care; grade of membership model
Huntington’s disease (HD) is a neurodegenerative disorder characterized by early cognitive decline, which progresses at later stages to dementia and severe movement disorder. HD is caused by a cytosine-adenine-guanine triplet-repeat expansion mutation in the Huntingtin gene, allowing early diagnosis by genetic testing. This study aimed to identify the relationship of N-acetylaspartate and other brain metabolites to cognitive function in HD mutation carriers using high-field-strength magnetic resonance spectroscopy at 7 Tesla.
Twelve individuals with the HD mutation, in the premanifest or early stage of disease, and twelve healthy controls underwent 1H magnetic resonance spectroscopy (7.2 ml voxel in the posterior cingulate cortex) at 7 Tesla, as well as T1-weighted structural magnetic resonance imaging. All participants received standardized tests of cognitive functioning, including the Montreal Cognitive Assessment, and a standardized quantified neurological examination within an hour before scanning.
Individuals with the HD mutation had significantly lower posterior cingulate cortex N-acetylaspartate (−9.6%, p=0.02) and glutamate levels (−10.1%, p=0.02) than controls. By contrast, in this small group, measures of brain morphology including striatal and ventricle volumes did not differ significantly. Linear regression with Montreal Cognitive Assessment scores revealed significant correlations with N-acetylaspartate (r2=0.50, p=0.01) and glutamate (r2=0.64, p=0.002) in HD subjects.
Our data suggest a relationship between reduced N-acetylaspartate and glutamate levels in the posterior cingulate cortex and cognitive decline in early stages of HD. N-acetylaspartate and glutamate magnetic resonance spectroscopy signals from the posterior cingulate cortex may serve as potential biomarkers of disease progression or treatment outcome in HD and other neurodegenerative disorders with early cognitive dysfunction, when structural brain changes are still minor.
MRS; NAA; glutamate; cognition; neurodegeneration; biomarker
To estimate the 12-month incidence, prevalence, and persistence of mental disorders among recently admitted assisted living (AL) residents and to describe the recognition and treatment of these disorders.
Two hundred recently admitted AL residents in 21 randomly selected AL facilities in Maryland received comprehensive physician-based cognitive and neuropsychiatric evaluations at baseline and 12 months later. An expert consensus panel adjudicated psychiatric diagnoses (using DSM-IV-TR criteria) and completeness of workup and treatment. Incidence, prevalence, and persistence were derived from the panel's assessment. Family and direct care staff recognition of mental disorders was also assessed.
At baseline, three-quarters of residents suffered from a cognitive disorder (56% dementia, 19% Cognitive Disorder Not Otherwise Specified) and 15% from an active non-cognitive mental disorder. Twelve-month incidence rates for dementia and non-cognitive psychiatric disorders were 17% and 3%, respectively, and persistence rates were 89% and 41%, respectively. Staff recognition rates for persistent dementias increased over the 12-month period, but 25% of cases were still unrecognized at 12 months. Treatment was complete at 12 months for 71% of persistent dementia cases and 43% of persistent non-cognitive psychiatric disorder cases.
Individuals recently admitted to AL are at high risk for having or developing mental disorders and a high proportion of cases, both persistent and incident, go unrecognized or untreated. Routine dementia and psychiatric screening and reassessment should be considered a standard care practice. Further study is needed to determine the longitudinal impact of psychiatric care on resident outcomes and use of facility resources.
incidence; dementia; psychiatric disorder; treatment; recognition
Analyses of individual differences in change may be unintentionally biased when versions of a neuropsychological test used at different follow-ups are not of equivalent difficulty. This study’s objective was to compare mean, linear, and equipercentile equating methods and demonstrate their utility in longitudinal research.
Study Design and Setting
The Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE, N=1,401) study is a longitudinal randomized trial of cognitive training. The Alzheimer’s Disease Neuroimaging Initiative (ADNI, n=819) is an observational cohort study. Nonequivalent alternate versions of the Auditory Verbal Learning Test (AVLT) were administered in both studies.
Visual displays of raw and mean-equated AVLT scores in both studies showed obvious nonlinear trajectories in reference groups that should show minimal change, as well as poor equivalence over time (ps≤0.001), and raw scores demonstrated poor fits in models of within-person change (RMSEAs>0.12). Linear and equipercentile equating produced more similar means in reference groups (ps≥0.09) and performed better in growth models (RMSEAs<0.05).
Equipercentile equating is the preferred equating method because it accommodates tests more difficult than a reference test at different percentiles of performance and performs well in models of within-person trajectory. The method has broad applications in both clinical and research settings to enhance the ability to use nonequivalent test forms.
equating; equipercentile; neuropsychology; longitudinal analysis; alternate forms; parallel forms
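Equipercentile equating maps a score on one test form to the score on the reference form that occupies the same percentile rank. A toy sketch of the core idea (hypothetical score data, and without the distribution smoothing real applications use):

```python
import bisect

# Toy equipercentile equating: find the fractional rank of a form B score
# in the form B distribution, then return the form A score at the same
# fractional rank of the form A distribution.

def equate_equipercentile(form_b_score, form_b_scores, form_a_scores):
    """Map a form B score to the form A score at the same fractional rank."""
    b = sorted(form_b_scores)
    a = sorted(form_a_scores)
    # number of form B scores at or below the given score
    k = bisect.bisect_right(b, form_b_score)
    # index into form A at the same fractional rank; integer arithmetic
    # avoids floating-point edge cases near rank boundaries
    idx = max(0, k * len(a) // len(b) - 1)
    return a[idx]

# form B is slightly easier than form A here, so a B score of 8 equates
# to a lower A score
equated = equate_equipercentile(8, [4, 6, 7, 8, 10], [3, 5, 6, 7, 9])  # -> 7
```

Because the mapping is defined percentile by percentile, it accommodates forms that differ in difficulty by different amounts at different points of the score range, which linear equating cannot.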
Better tools for assessing cognitive impairment in the early stages of Alzheimer’s disease (AD) are required to enable diagnosis of the disease before substantial neurodegeneration has taken place and to allow detection of subtle changes in the early stages of progression of the disease. The National Institute on Aging and the Alzheimer’s Association convened a meeting to discuss state-of-the-art methods for cognitive assessment, including computerized batteries, as well as new approaches in the pipeline. Speakers described research using novel tests of object recognition, spatial navigation, attentional control, semantic memory, semantic interference, prospective memory, false memory, and executive function as tools that could provide earlier identification of individuals with AD. In addition to early detection, there is a need for assessments that reflect real-world situations in order to better assess functional disability. It is especially important to develop assessment tools that are useful in ethnically, culturally, and linguistically diverse populations as well as in individuals with neurodegenerative diseases other than AD.
The Dementia Risk Assessment (DRA) is an online tool consisting of questions about known risk factors for dementia, a novel verbal memory test, and an informant report of cognitive decline. Its primary goal is to educate the public about dementia risk factors and encourage clinical evaluation where appropriate. In Study 1, more than 3,000 anonymous persons over age 50 completed the DRA about themselves; 1,000 people also completed proxy reports about another person. Advanced age, lower education, male sex, complaints of severe memory impairment, and histories of cerebrovascular disease, Parkinson's disease, and brain tumor all contributed significantly to poor memory performance. A high correlation was obtained between proxy-reported decline and actual memory test performance. In Study 2, 52 persons seeking first-time evaluation at dementia clinics completed the DRA prior to their visits. Their responses (and those of their proxy informants) were compared to the results of independent evaluation by geriatric neuropsychiatrists. The 30 patients found to meet criteria for probable Alzheimer's disease, vascular dementia, or frontotemporal dementia differed on the DRA from the 22 patients without dementia (most other neuropsychiatric conditions). Scoring below criterion on the DRA's memory test had moderately high predictive validity for clinically diagnosed dementia. Although additional studies of larger clinical samples are needed, the DRA holds promise for wide-scale screening for dementia risk.
To examine the measurement equivalence of items on disability across three international surveys of aging.
Data for persons aged 65 and older were drawn from the Health and Retirement Survey (HRS, n = 10,905), English Longitudinal Study of Aging (ELSA, n = 5,437), and Survey of Health, Ageing and Retirement in Europe (SHARE, n = 13,408). Differential item functioning (DIF) was assessed using item response theory (IRT) methods for activities of daily living (ADL) and instrumental activities of daily living (IADL) items.
HRS and SHARE exhibited measurement equivalence, but 6 of 11 items in ELSA demonstrated meaningful DIF. At the scale level, this item-level DIF affected scores reflecting greater disability. IRT methods also spread out score distributions and shifted scores higher (toward greater disability). Results for mean disability differences by demographic characteristics, using original and DIF-adjusted scores, were the same overall but differed for some subgroup comparisons involving ELSA.
Testing and adjusting for DIF is one means of minimizing measurement error in cross-national survey comparisons. IRT methods were used to evaluate potential measurement bias in disability comparisons across three international surveys of aging. The analysis also suggested that DIF was mitigated for scales that include both ADL and IADL items, and that summary indexes (counts of limitations) likely underestimate mean disability in these international populations.
Activities of daily living; Differential item functioning; Disability
The present study sought to predict changes in everyday functioning using cognitive tests.
Data from the Advanced Cognitive Training for Independent and Vital Elderly trial were used to examine the extent to which competence in different cognitive domains—memory, inductive reasoning, processing speed, and global mental status—predicts prospectively measured everyday functioning among older adults. Coefficients of determination for baseline levels and trajectories of everyday functioning were estimated using parallel process latent growth models.
Each cognitive domain independently predicted a significant proportion of the variance in baseline level and in trajectory of change of everyday functioning, with inductive reasoning explaining the most variance in baseline functioning (R² = .175) and memory explaining the most variance in change in everyday functioning (R² = .057).
Inductive reasoning is an important determinant of current everyday functioning in community-dwelling older adults, suggesting that successful performance of daily tasks depends critically on executive cognitive function. Baseline memory function, in contrast, is more important in determining change over time in everyday functioning, suggesting that participants with low baseline memory may represent a subgroup with incipient progressive neurologic disease.
Cognition; Cognitive training; Everyday functioning; Structural equation modeling
Epidemiologic evidence suggests that non-steroidal anti-inflammatory drugs (NSAIDs) delay the onset of Alzheimer’s dementia (AD), but randomized trials show no benefit from NSAIDs in symptomatic AD. ADAPT randomized 2,528 elderly persons to naproxen or celecoxib vs. placebo and treated them for a mean of two years (SD 11 months) before treatments were terminated. During the treatment interval, 32 incident cases of AD were observed, with apparently increased rates in both NSAID-assigned groups.
We continued the double-masked ADAPT protocol for two additional years to investigate incidence of AD (primary outcome). We then collected cerebrospinal fluid (CSF) from 117 volunteer participants to assess their ratio of CSF tau to Aβ1–42.
Including 40 new events observed during follow-up of 2,071 randomized individuals (92% of participants at treatment cessation), there were now 72 AD cases. Overall NSAID-related harm was no longer evident, but secondary analyses showed that increased risk remained notable in the first 2.5 years of observations, especially in 54 persons enrolled with Cognitive Impairment – No Dementia (CIND). These same analyses showed later reduction in AD incidence among asymptomatic enrollees given naproxen. CSF biomarker assays suggested that the latter result reflected reduced Alzheimer-type neurodegeneration.
These data suggest a revision of the original ADAPT hypothesis that NSAIDs reduce AD risk: NSAIDs have an adverse effect in later stages of AD pathogenesis, whereas asymptomatic individuals treated with a conventional NSAID such as naproxen experience reduced AD incidence, but only after 2–3 years. Thus, treatment effects differ at different stages of disease. This hypothesis is consistent with data from both trials and epidemiologic studies.
This study explores the longitudinal relationship between patient characteristics and use of four drug classes (antihypertensives, antidepressants, antipsychotics, and hormones) that showed significant changes in use rates over time in patients with Alzheimer’s disease (AD). Patient/caregiver-reported prescription medication usage was categorized by drug class for 201 patients from the Predictors Study. Patient characteristics included use of cholinesterase inhibitors and/or memantine, function, cognition, living situation, baseline age, and gender. Assessment interval, year of study entry, and site were controlled for. Before adjusting for covariates, use increased for antihypertensives (47.8% to 62.2%), antipsychotics (3.5% to 27.0%), and antidepressants (32.3% to 40.5%); use of hormones decreased (19.4% to 5.4%). After controlling for patient characteristics, effects of time on the use of antidepressants were no longer significant. Antihypertensive use was associated with poorer functioning, concurrent use of memantine, and older age. Antipsychotic use was associated with poorer functioning and poorer cognition. Antidepressant use was associated with younger age, poorer functioning, and concurrent use of cholinesterase inhibitors and memantine. Hormone use was associated with being female and younger age. Findings suggest accurate modeling of the AD treatment paradigm for certain subgroups of patients should include antihypertensives and antipsychotics in addition to cholinesterase inhibitors and memantine.
Alzheimer’s disease; antihypertensive; antidepressant; antipsychotic; hormone; longitudinal studies
Impairments in executive cognition (EC) may be predictive of incident dementia in patients with mild cognitive impairment (MCI). The present study examined whether specific EC tests could predict which MCI individuals would progress from a Clinical Dementia Rating (CDR) scale score of 0.5 to a score ≥ 1 over a 2-year period. Eighteen clinical and experimental EC measures were administered at baseline to 104 MCI patients (amnestic and non-amnestic, single- and multiple-domain) recruited from clinical and research settings. Demographic characteristics, screening cognitive measures, and measures of everyday functioning at baseline were also considered as potential predictors. Over the 2-year period, 18% of the MCI individuals progressed to CDR ≥ 1, 73.1% remained stable (CDR = 0.5), and 4.5% reverted to normal (CDR = 0). Multiple-domain MCI participants had higher rates of progression to dementia than single-domain participants, but amnestic and non-amnestic MCI participants had similar rates of conversion. Only three EC measures predicted subsequent cognitive and functional decline at the univariate level, and they failed to independently predict progression to dementia after adjusting for demographic characteristics, other cognitive measures, and everyday functioning. Decline over 2 years was best predicted by informant ratings of subtle functional impairments and by lower baseline scores on memory, category fluency, and constructional praxis.
Mild cognitive impairment; dementia; predictors of decline; executive cognition; Clinical Dementia Rating scale; MCI outcome