It was recently demonstrated that the Clinical Dementia Rating scale (CDR) Sum of Boxes (CDR-SB) score can be used to accurately stage severity of Alzheimer’s dementia (AD) and Mild Cognitive Impairment (MCI) [1]. However, to date, the utility of those interpretive guidelines has not been cross-validated or applied to a heterogeneous sample of dementia cases. This study sought to cross-validate the staging guidelines proposed by those authors utilizing the National Alzheimer’s Coordinating Center (NACC) database.
There were 12,462 participants (controls n = 5,115; MCI n = 2,551; dementia of all etiologies n = 4,796) in the NACC dataset utilized for the current analysis. The previously published cut-scores were applied to the NACC sample and diagnostic accuracy estimates were obtained. Next, analyses were restricted to NACC participants with a CDR Global Score (CDR-GS) of 0.5, and ROC curves were generated to determine optimal CDR-SB cut-scores for distinguishing MCI from very early dementia.
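The cut-score selection step can be sketched as follows. This is an illustrative reconstruction using Youden's J (one common ROC-based criterion for choosing an operating point), run on entirely synthetic CDR-SB scores rather than NACC data.

```python
# Illustrative sketch: pick the CDR-SB cut-score maximizing Youden's J.
# All scores below are synthetic; labels: 1 = dementia, 0 = MCI.

def youden_cutoff(scores, labels):
    """Return (cut, J) maximizing sensitivity + specificity - 1.
    A case is called 'dementia' when its score >= cut."""
    best_cut, best_j = None, -1.0
    for cut in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < cut and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < cut and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 0)
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        j = sens + spec - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Synthetic CDR-SB values: MCI cases cluster low, dementia cases higher.
mci = [0.5, 1.0, 1.0, 1.5, 2.0, 2.5]
dem = [3.0, 3.5, 4.0, 4.5, 5.0, 2.0]
scores = mci + dem
labels = [0] * len(mci) + [1] * len(dem)
cut, j = youden_cutoff(scores, labels)
```

In practice the operating point may instead be chosen to favor sensitivity or specificity, depending on the clinical cost of each error type.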
The previously proposed CDR-SB ranges successfully classified the vast majority of patients across all impairment ranges with a κ of 0.91 and 94% overall correct classification rate. Additionally, the CDR-SB score discriminated between patients diagnosed with MCI and dementia when CDR-GS were restricted to 0.5 (overall AUC = 0.83).
These findings cross-validate the previously published CDR-SB interpretive guidelines for staging dementia severity and extend them to a large, heterogeneous sample of patients with dementia. Additionally, CDR-SB scores distinguished MCI from dementia with reasonable accuracy when CDR-GS was restricted to 0.5.
Many studies have investigated factors associated with the rate of decline and progression from mild cognitive impairment to Alzheimer’s disease (AD) dementia in elderly patients. In this analysis we compared the rates of decline to dementia estimated from three common global measures of cognition: the Mini-Mental State Examination (MMSE) score, the Clinical Dementia Rating sum of boxes score (CDR-SB), and a neuropsychological test composite score (CS).
A total of 2,899 subjects in the National Alzheimer’s Coordinating Center Uniform Data Set, aged 65 years or older and diagnosed with amnestic mild cognitive impairment (aMCI), were included in this analysis. Population-averaged rates of decline to dementia were estimated and compared for standardized MMSE, CDR-SB, and composite scores using Generalized Estimating Equations (GEE). Associations between rate of decline and several potential correlates of decline were also calculated and compared across measures.
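As a rough illustration of population-averaged slope estimation: a GEE with an independence working correlation yields the same point estimates as pooled least squares, so the annual decline in standardized score units can be sketched on synthetic longitudinal data (the subject count, visit schedule, and noise levels below are invented).

```python
import random

# Synthetic longitudinal data: each subject has a random baseline offset and
# declines linearly at -0.49 SD/year, observed with measurement noise.
random.seed(0)
n_subj, n_visits = 50, 4
true_slope = -0.49
xs, ys = [], []
for _ in range(n_subj):
    intercept = random.gauss(0, 0.5)        # subject-level offset
    for year in range(n_visits):
        xs.append(year)
        ys.append(intercept + true_slope * year + random.gauss(0, 0.2))

# Pooled least-squares slope = GEE point estimate under an independence
# working correlation (standard errors would differ and need robust "sandwich"
# estimation, omitted here).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
    (x - mx) ** 2 for x in xs
)
```

The point of GEE over this naive fit is valid inference under within-subject correlation, not a different slope estimate.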
The CDR-SB had the steepest estimated slope, with a decline of 0.49 standard deviations (SD) per year, followed by the MMSE at 0.22 SD/year and the CS at 0.07 SD/year. The rates of decline of the three measures differed significantly in a global test for differences (p < .0001). Age at visit, BMI at visit, APOE ε4 allele status, and race (black vs. white) had significantly different relationships with rate of decline in a global test for differences among the three measures.
These results suggest that both the rate of decline and the effects of AD risk factors on decline to dementia can vary depending on the evaluative measure used.
neuropsychological testing; Alzheimer’s Disease; cognitive assessment; aging
To elucidate the relationship between the two hallmark proteins of Alzheimer's disease (AD), amyloid-β (Aβ) and tau, and clinical decline over time among cognitively normal older individuals.
A longitudinal cohort of clinically and cognitively normal older individuals assessed with baseline lumbar puncture and longitudinal clinical assessments.
Research centers across the United States and Canada.
We examined 107 participants with a Clinical Dementia Rating (CDR) of 0 at the baseline examination.
Main Outcome Measures
Using linear mixed effects models, we investigated the relationship between CSF p-tau181p, CSF Aβ1-42 and clinical decline as assessed using longitudinal change in global CDR, CDR-Sum of Boxes (CDR-SB), and the Alzheimer's Disease Assessment Scale-cognitive subscale (ADAS-cog).
We found a significant relationship between decreased CSF Aβ1-42 and longitudinal change in global CDR, CDR-SB, and ADAS-cog in individuals with elevated CSF p-tau181p. In the absence of CSF p-tau181p, the effect of CSF Aβ1-42 on longitudinal clinical decline was not significantly different from zero.
In cognitively normal older individuals, Aβ-associated clinical decline over a mean of three years may occur only in the presence of ongoing, “downstream” neurodegeneration.
To determine the proton magnetic resonance spectroscopy (1H MRS) changes in carriers of microtubule-associated protein (MAPT) mutations in a case-control study.
Patients with MAPT mutations (N279K, V337M, R406W, IVS9-10G>T, P301L) from 5 different families (n = 24) underwent MRI and single-voxel 1H MRS from the posterior cingulate gyrus and inferior precuneus at 3 T. Ten of the patients were symptomatic, with a median Clinical Dementia Rating sum of boxes score (CDR-SOB) of 6.5, and 14 patients were presymptomatic with a CDR-SOB of 0. Age- and sex-matched controls (n = 24) were recruited.
Symptomatic MAPT mutation carriers were characterized by a decreased N-acetylaspartate/creatine (NAA/Cr) ratio, an index of neuronal integrity; an increased myoinositol (mI)/Cr ratio, a possible marker of glial activity; decreased NAA/mI; and hippocampal atrophy (p < 0.001). Whereas presymptomatic MAPT mutation carriers had elevated mI/Cr and decreased NAA/mI (p < 0.001), their NAA/Cr levels and hippocampal volumes were not different from controls. Decreases in NAA/Cr (R2 = 0.22; p = 0.021) and hippocampal volumes (R2 = 0.46; p < 0.001) were associated with proximity to the expected or actual age at symptom onset in MAPT mutation carriers.
1H MRS metabolite abnormalities, characterized by elevated mI/Cr and decreased NAA/mI, are present several years before the onset of symptoms in MAPT mutation carriers. The data suggest an ordered sequencing of the 1H MRS and MRI biomarkers: the rise in mI/Cr, a possible index of glial proliferation, precedes the decrease in the neuronal integrity marker NAA/Cr and hippocampal atrophy. 1H MRS may be a useful inclusion biomarker for preventive trials in presymptomatic carriers of MAPT mutations and possibly other proteinopathies.
AAL = automated anatomic labeling;
AD = Alzheimer disease;
CDR-SOB = Clinical Dementia Rating sum of boxes score;
FTD = frontotemporal dementia;
FTLD = frontotemporal lobar degeneration;
GM = gray matter;
MNI = Montreal Neurological Institute;
MR = magnetic resonance;
MRS = magnetic resonance spectroscopy;
ROI = region of interest;
SPM = statistical parametric mapping;
SV = single voxel;
WM = white matter.
To determine whether clinical assessment methods that grade the severity of impairments within the spectrum of mild cognitive impairment (MCI) can predict clinical course, particularly among very mildly impaired individuals who do not meet formal MCI criteria as implemented in clinical trials.
From a longitudinal study of normal (Clinical Dementia Rating [CDR]=0; n=77) and mildly impaired (CDR=0.5; n=167) participants with 5 or more annual clinical assessments, baseline level of cognitive impairment in daily life was graded using CDR sum of boxes (CDR-SB) and level of cognitive performance impairment was graded using neuropsychological test scores.
Main Outcome Measures
Five-year outcome measures included (1) probable Alzheimer disease (AD) diagnosis and (2) clinical “decline” (CDR-SB increase ≥1.0). Logistic regression models were used to assess the ability of baseline measures to predict outcomes in the full sample and separately in the subjects who did not meet formal MCI criteria as implemented in a multicenter clinical trial (n = 125; “very mild cognitive impairment” [vMCI]).
The presence of both higher CDR-SB and lower verbal memory and executive function at baseline predicted greater likelihood of probable AD and decline. Five-year rates of probable AD and decline in vMCI (20%, AD; 49%, decline) were intermediate between normal participants (0%, AD; 28%, decline) and participants with MCI (41%, AD; 62%, decline). Within vMCI, likelihood of probable AD was predicted by higher CDR-SB and lower executive function.
Even in very mildly impaired individuals who do not meet strict MCI criteria as implemented in clinical trials, the degree of cognitive impairment in daily life and performance on neuropsychological testing predict likelihood of an AD diagnosis within 5 years. The clinical determination of relative severity of impairment along the spectrum of MCI may be valuable for trials of putative disease-modifying compounds, particularly as target populations are broadened to include less impaired individuals.
An item response theory (IRT)-based scoring approach to the Clinical Dementia Rating (CDR) scale can account for the pattern of scores across the CDR items (domains) and the items’ differential abilities to indicate dementia severity. In doing so, an IRT-based approach can provide greater precision than other CDR scoring algorithms. However, neither a good set of item parameters nor an easily digestible set of instructions needed to implement this approach is readily available.
Participants were 1,326 patients at the Baylor College of Medicine Alzheimer’s Disease and Memory Disorders Clinic.
The item parameters necessary for an IRT-based scoring approach were identified (a parameters ranged from 3.01 to 6.22; b parameters ranged from −2.46 to 2.07).
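A minimal sketch of how such item parameters can be used to score an individual under the graded response model, assuming a grid-search maximum-likelihood estimate of the latent severity θ. The discriminations, thresholds, and responses below are invented for illustration; they are not the parameters reported above, and the real CDR items have more response categories than the two-threshold items used here for brevity.

```python
import math

# Hypothetical sketch: graded response model (GRM) scoring for six items.
# a = discriminations, b = ordered category thresholds per item.

def grm_loglik(theta, responses, a, b):
    """Log-likelihood of observed graded responses at latent severity theta."""
    def pstar(t, ai, bik):              # P(response >= category k | theta)
        return 1.0 / (1.0 + math.exp(-ai * (t - bik)))
    ll = 0.0
    for resp, ai, bi in zip(responses, a, b):
        upper = pstar(theta, ai, bi[resp - 1]) if resp >= 1 else 1.0
        lower = pstar(theta, ai, bi[resp]) if resp < len(bi) else 0.0
        ll += math.log(max(upper - lower, 1e-12))
    return ll

a = [4.0, 3.5, 5.0, 3.0, 4.5, 3.2]          # invented discriminations
b = [[-1.0, 0.5]] * 6                       # invented ordered thresholds
responses = [1, 1, 2, 1, 2, 1]              # observed category (0-2) per item

# Grid-search maximum likelihood over theta in [-3, 3].
grid = [i / 100 for i in range(-300, 301)]
theta_hat = max(grid, key=lambda t: grm_loglik(t, responses, a, b))
```

Production IRT software would use EAP or Newton-based estimation with a prior, but the grid search shows the core idea: the pattern of responses, weighted by item discrimination, determines the severity estimate.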
This study provides IRT-based item parameters for the CDR and demonstrates how to apply them easily.
Alzheimer’s disease; Assessment; Clinical Dementia Rating Scale; Item Response Theory; Statistics
Diagnostic drift characterizes change in diagnosis and diagnostic classification over time. The Clinical Dementia Rating (CDR) is used commonly in dementia diagnosis and staging of dementia severity. Whether increasing efforts to diagnose dementia at earlier symptomatic stages has led to diagnostic drift in the CDR is unknown.
To examine dementia severity as determined by the CDR over time.
Secondary analysis of data from longitudinal studies of aging and dementia.
An Alzheimer’s Disease Research Center (ADRC), where a variety of clinicians contributed CDR ratings over the course of the study.
Adults aged 63 to 83 years with no (CDR 0), very mild (CDR 0.5) or mild (CDR 1) dementia enrolled in the ADRC at any time from August 1979 to May 2007.
Main Outcome Measures
Within each CDR group changes in scores on standardized psychometric tests with time were examined using multiple linear regression analyses. These tests included the Mini Mental State Examination, Short Blessed Test, Wechsler Memory Scale Logical Memory IA-Immediate, Blessed Dementia Scale, and a psychometric composite score.
A total of 1,768 participants met inclusion criteria. Participants enrolled later in the study period were older, more educated, more likely to be members of minority groups, and less likely to be male. Statistically significant change in psychometric test performance over time occurred only within the CDR 1 group, for Logical Memory and the psychometric composite, and the degree of change was minimal.
Despite changes in participant characteristics, the CDR demonstrates general stability for assessment of dementia over almost three decades.
Despite numerous studies on the role of medial temporal lobe structures in Alzheimer's disease (AD), the magnitude and clinical significance of amygdala atrophy have been relatively sparsely investigated. In this study we compared the level of amygdala atrophy to that of the hippocampus in very mild and mild AD subjects in two large samples (Sample 1 n=90; Sample 2 n=174). Using a series of linear regression analyses, we investigated whether amygdala atrophy is related to global cognitive functioning (Clinical Dementia Rating Sum of Boxes: CDR-SB; Mini Mental State Examination: MMSE) and neuropsychiatric status. Results indicated that amygdala atrophy was comparable to hippocampal atrophy in both samples. MMSE and CDR-SB were strongly related to amygdala atrophy, with amygdala atrophy predicting MMSE scores as strongly as hippocampal atrophy did, but predicting CDR-SB scores less robustly. Amygdala atrophy was related to aberrant motor behavior, with potential relationships to anxiety and irritability. These results suggest that the magnitude of amygdala atrophy is comparable to that of the hippocampus in the earliest clinical stages of AD, and is related to global illness severity. There also appear to be specific relationships between the level of amygdala atrophy and neuropsychiatric symptoms that deserve further investigation.
Hippocampus; Magnetic resonance imaging; Neuropsychiatric symptoms
The aim of this study was to develop a brief informant-based questionnaire, the Symptoms of Early Dementia-11 Questionnaire (SED-11Q), for the screening of early dementia. A total of 459 elderly individuals participated, including 39 with mild cognitive impairment (Clinical Dementia Rating scale [CDR] 0.5), 233 with mild dementia (CDR 1), 106 with moderate dementia (CDR 2), and 81 normal controls (CDR 0). Informants were asked to fill out a 13-item questionnaire; two items were excluded after analyzing sensitivities and specificities. The final version of the SED-11Q assesses memory, daily functioning, social communication, and personality changes. Receiver operating characteristic curves assessed the questionnaire's ability to discriminate between CDR 0 (no dementia) and CDR 1 (mild dementia). The statistically optimal cutoff value of 2/3, which yielded a sensitivity of 0.84 and a specificity of 0.90, can be applied in the clinical setting. In the community setting, a cutoff value of 3/4, which yielded a sensitivity of 0.76 and a specificity of 0.96, is recommended to avoid false positives. The SED-11Q reliably differentiated nondemented from demented individuals when completed by an informant, and is thus practical as a rapid screening tool in general practice, as well as in the community setting, to decide whether to seek further diagnostic confirmation.
Dementia screening; Dementia; Alzheimer's disease; Activities of daily living; Cognitive deficits; Early detection; Mild cognitive impairment; Non-Alzheimer's disease
We previously established reliability and cross-sectional validity of the SIST-M (Structured Interview and Scoring Tool–Massachusetts Alzheimer's Disease Research Center), a shortened version of an instrument shown to predict progression to Alzheimer disease (AD), even among persons with very mild cognitive impairment (vMCI).
To test predictive validity of the SIST-M.
Participants were 342 community-dwelling, non-demented older adults in a longitudinal study. Baseline Clinical Dementia Rating (CDR) scores were determined either by 1) clinician interviews or 2) a previously developed computer algorithm based on 60 questions (of a possible 131) extracted from clinician interviews. We developed age-, gender-, and education-adjusted Cox proportional hazards models using CDR sum of boxes (CDR-SB) as the predictor, where CDR-SB was determined by either clinician interview or algorithm; models were run for the full sample (n=342) and among those jointly classified as vMCI using clinician- and algorithm-based CDR ratings (n=156). We directly compared predictive accuracy using time-dependent Receiver Operating Characteristic (ROC) curves.
AD hazard ratios (HRs) were similar for clinician-based and algorithm-based CDR-SB: for a 1-point increment in CDR-SB, respective HRs (95% CI)=3.1 (2.5,3.9) and 2.8 (2.2,3.5); among those with vMCI, respective HRs (95% CI) were 2.2 (1.6,3.2) and 2.1 (1.5,3.0). Similarly high predictive accuracy was achieved: the concordance probability (weighted average of the area-under-the-ROC curves) over follow-up was 0.78 vs. 0.76 using clinician-based vs. algorithm-based CDR-SB.
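The per-point hazard ratios reported above extrapolate multiplicatively under the proportional-hazards assumption (the log-hazard is linear in CDR-SB), which a small arithmetic sketch makes explicit:

```python
import math

# Under proportional hazards, per-point hazard ratios multiply across
# CDR-SB increments: HR(k points) = HR(1 point) ** k.

def hr_for_increment(hr_per_point, points):
    """Implied hazard ratio for a given CDR-SB difference."""
    return math.exp(points * math.log(hr_per_point))

# Using the clinician-based (3.1) and algorithm-based (2.8) per-point
# estimates reported above:
hr_clinician_2pt = hr_for_increment(3.1, 2)   # = 3.1 ** 2
hr_algorithm_2pt = hr_for_increment(2.8, 2)   # = 2.8 ** 2
```

This is why even modestly different per-point HRs (3.1 vs. 2.8) imply more visibly different risks for patients several CDR-SB points apart.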
CDR scores based on items from this shortened interview had high predictive ability for AD – comparable to that using a lengthy clinical interview.
Alzheimer disease; mild cognitive impairment; dementia; CDR; instrument; questionnaire; validity; prediction; psychometric
To establish the CERAD neuropsychological battery as a valid measure of cognitive progression in Alzheimer's disease (AD) by deriving annualized CERAD Total Change Scores and corresponding confidence intervals in AD and controls from which to define clinically meaningful change.
Subjects included 383 normal controls (NC) and 655 patients with AD with serial data from the CERAD registry database. Annualized CERAD Total Change Scores were derived and Reliable Change Indexes (RCIs) calculated to establish statistically reliable change values. CERAD Change Scores were compared to annualized change scores from the MMSE, Clinical Dementia Rating Scale (CDR) Sum of Boxes, and Blessed Dementia Rating Scale (BDRS).
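A reliable change index of the standard Jacobson-Truax form can be sketched as below; the control-sample SD and test-retest reliability values are illustrative assumptions, not the CERAD registry estimates.

```python
import math

# Jacobson-Truax Reliable Change Index: observed change divided by the
# standard error of the difference. SD and reliability below are assumed
# values for illustration only.

def reliable_change(x1, x2, sd, rxx):
    """RCI = (retest - baseline) / SE of the difference score."""
    sem = sd * math.sqrt(1 - rxx)        # standard error of measurement
    se_diff = math.sqrt(2) * sem         # SE of a difference of two scores
    return (x2 - x1) / se_diff

rci = reliable_change(x1=85, x2=70, sd=10, rxx=0.9)   # 15-point decline
reliably_declined = rci < -1.96                       # outside the 95% band
```

A change is treated as statistically reliable when the RCI falls outside ±1.96, i.e., larger than measurement error alone would plausibly produce.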
For the CERAD Total Score, the AD sample showed significantly greater decline than the NC sample over the four-year interval, with AD subjects declining an average of 22.2 points while NCs improved an average of 2.8 points from baseline to last visit [Group × Time interaction, F(4,1031) = 246.08, p < .001]. By Visit 3, the majority of AD subjects (65.2%) showed a degree of cognitive decline that fell outside the RCI. CERAD Change Scores correlated significantly (p < 0.001) with MMSE (r = -.66), CDR (r = -.42), and BDRS (r = -.38) change scores.
Results support the utility of the CERAD Total Score as a measure of AD progression and provide comparative data for annualized change in CERAD Total Score and other summary measures.
Progression of Alzheimer’s dementia (AD) is highly variable. Most estimates derive from convenience samples from dementia clinics or research centers where there is substantial potential for survival bias and other distortions. In a population-based sample of incident AD cases, we examined progression of impairment in cognition, function, and neuropsychiatric symptoms, and the influence of selected variables on these domains.
Longitudinal, prospective cohort study
Cache County (Utah)
328 persons diagnosed with Possible/Probable AD
Mini-Mental State Exam (MMSE), Clinical Dementia Rating sum-of-boxes (CDR-sb), and Neuropsychiatric Inventory (NPI).
Over a mean follow-up of 3.80 (range 0.07–12.90) years, the mean (S.D.) annual rates of change were −1.53 (2.69) on the MMSE, 1.44 (1.82) on the CDR-sb, and 2.55 (5.37) scale points on the NPI. Among surviving participants, 30–58% progressed less than one point/year on these measures, even 5–7 years after dementia onset. Rates of change were correlated between MMSE and CDR-sb (r=−0.62, df=201, p<0.001) and between the CDR-sb and NPI (r=0.20, df=206, p<0.004). Females (LR χ2=8.7, df=2, p=0.013) and those with younger onset (LR χ2=5.7, df=2, p=0.058) declined faster on the MMSE. Although one or more APOE ε4 alleles and ever-use of FDA-approved anti-dementia medications were associated with initial MMSE scores, neither was related to the rate of progression in any domain.
A significant proportion of persons with AD progress slowly. The results underscore differences between population- and clinic-based samples and suggest an ongoing need to identify factors that may slow the progression of AD.
Alzheimer’s disease; dementia; cognition; neuropsychiatric symptoms; progression; decline
To compare the relationships of a task assessing memory for recent autobiographical events, and of two commonly used brief memory tasks, with the results of a clinical assessment for dementia.
Design, Setting, and Participants
We compared correlations between a task assessing recall of recent autobiographical events and two frequently-used brief clinical memory measures with dementia ratings by clinicians. Participants were enrolled in Washington University Alzheimer’s Disease Research Center studies, were aged 60 years or above, and took part in assessments between May 2002 and August 2005 (N=425).
Main Outcome Measures
Nonparametric, rank-based Spearman correlations, adjusted for age and education, between the Clinical Dementia Rating Sum of Boxes (CDR-SB) and scores on the autobiographical recall query and two clinical memory tasks taken from the Mini-Mental State Exam and the Short Blessed Test.
The autobiographical recall task and each of the other brief clinical measures correlated significantly with the CDR-SB (p<.0001). The autobiographical recall task had a significantly higher correlation (p<.0001) with the CDR-SB than the two commonly-used clinical memory measures.
Clinicians may find autobiographical memories an important indicator of clinical memory function and the autobiographical query a useful tool when assessing for dementia.
To evaluate the cognitive reserve hypothesis by examining whether individuals of greater educational attainment have better cognitive function than individuals with less education in the presence of elevated fibrillar brain amyloid.
Design, Setting, and Participants
Uptake of N-methyl-[11C]2-(4′-methylaminophenyl)-6-hydroxybenzothiazole, or [11C]PIB for “Pittsburgh Compound-B,” was measured for participants assessed between August 15, 2003 and January 8, 2008 at the Washington University Alzheimer’s Disease Research Center and diagnosed either as nondemented (N=161) or with dementia of the Alzheimer type (N=37). Multiple regression was used to determine whether [11C]PIB uptake interacted with level of educational attainment to predict cognitive function.
Main Outcome Measures
Scores on the Clinical Dementia Rating - Sum of Boxes (CDR-SB), Mini-Mental State Exam (MMSE), and Short Blessed Test (SBT), and individual measures from a psychometric battery.
[11C]PIB uptake interacted with years of education in predicting scores on the CDR-SB (p=.003), the MMSE (p<.001), the SBT (p=.03) and a measure of verbal abstract reasoning and conceptualization (p=.02), such that performance on these measures increased with increasing education for participants with elevated PIB uptake. Education was unrelated to global cognitive functioning scores among those with lower PIB uptake.
These results support the hypothesis that cognitive reserve influences the association between Alzheimer disease pathology and cognition.
To determine whether high blood pressure (BP) levels are associated with faster decline in specific cognitive domains.
Prospective longitudinal cohort.
Uniform Data Set of the National Institutes of Health, National Institute on Aging Alzheimer's Disease Centers.
One thousand three hundred eighty-five participants with a diagnosis of mild cognitive impairment (MCI) and measured BP values at baseline and two annual follow-up visits.
Neuropsychological test scores and Clinical Dementia Rating Sum of Boxes (CDR Sum) score.
Participants with MCI with two or three annual occasions of high BP values (systolic BP ≥ 140 mmHg or diastolic BP ≥ 90 mmHg) had significantly faster decline on neuropsychological measures of visuomotor sequencing, set shifting, and naming than those who were normotensive on all three occasions. High systolic BP values were associated as well with faster decline on the CDR Sum score.
Hypertension is associated with faster cognitive decline in persons at risk for dementia.
cerebrovascular disease; dementia; hypertension; mild cognitive impairment; neuropsychology
Decision tables can be used to represent practice guidelines effectively. In this study we adopt the powerful probabilistic framework of Bayesian Networks (BN) for the induction of decision tables. We discuss the simplest BN model, the Naive Bayes, and extend it to the Two-Stage Naive Bayes. We show that reversal of edges in the Naive Bayes and Two-Stage Naive Bayes results in a simple decision table and a hierarchical decision table, respectively. We induce these graphical models for dementia severity staging using the Clinical Dementia Rating Scale (CDRS) database from the University of California, Irvine, Alzheimer's Disease Research Center. The induced models capture the two-stage methodology clinicians use in computing the global CDR score: first computing the six category scores of memory, orientation, judgment and problem solving, community affairs, home and hobbies, and personal care, and then the global score. The induced Two-Stage models also attain clinically acceptable performance when compared to domain experts and could serve as useful guidelines for dementia severity staging.
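A minimal sketch of the first stage, a categorical Naive Bayes over the six box scores, might look like the following. The training rows are toy data (box scores coded 0 = none, 1 = 0.5, 2 = 1), not the UC Irvine CDRS database, and a real model would use far more cases.

```python
from collections import Counter, defaultdict

# Toy categorical Naive Bayes: predict global CDR from six box scores.

def fit_nb(rows, labels):
    n = len(labels)
    prior = {c: k / n for c, k in Counter(labels).items()}
    cond = defaultdict(lambda: defaultdict(Counter))   # class -> item -> value counts
    for row, c in zip(rows, labels):
        for i, v in enumerate(row):
            cond[c][i][v] += 1
    return prior, cond

def predict(row, prior, cond, classes, n_values=3, alpha=1.0):
    """MAP class with Laplace (add-alpha) smoothing over n_values categories."""
    def score(c):
        p = prior[c]
        for i, v in enumerate(row):
            ni = sum(cond[c][i].values())
            p *= (cond[c][i][v] + alpha) / (ni + alpha * n_values)
        return p
    return max(classes, key=score)

rows = [
    (0, 0, 0, 0, 0, 0), (0, 0, 1, 0, 0, 0), (1, 1, 1, 0, 1, 0),
    (1, 1, 1, 1, 1, 0), (2, 2, 1, 2, 2, 1), (2, 2, 2, 2, 2, 2),
]
labels = [0, 0, 0.5, 0.5, 1, 1]                  # global CDR per row
prior, cond = fit_nb(rows, labels)
pred = predict((1, 1, 0, 1, 1, 0), prior, cond, classes=[0, 0.5, 1])
```

Reversing the class-to-feature edges of this model is what yields the decision-table representation the study describes; the two-stage variant would interpose the six box scores between raw findings and the global score.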
Behavioral variant frontotemporal dementia (bvFTD) strikes hardest at the frontal lobes, but the sites of earliest injury remain unclear.
To determine atrophy patterns in distinct clinical stages of bvFTD, testing the hypothesis that the mildest stage is restricted to frontal paralimbic cortex.
A bvFTD cohort study.
University hospital dementia clinic.
Patients with bvFTD with Clinical Dementia Rating (CDR) scale scores of 0.5 (n=15), 1 (n=15), or 2 to 3 (n=15), age- and sex-matched to each other and to 45 healthy controls.
Main Outcome Measures
Magnetic resonance voxel-based morphometry estimated gray matter and white matter atrophy at each disease stage compared with controls.
Patients with a CDR score of 0.5 had gray matter loss in frontal paralimbic cortices, but atrophy also involved a network of anterior cortical and subcortical regions. A CDR score of 1 showed more extensive frontal gray matter atrophy and white matter losses in corpus callosum and brainstem. A CDR score of 2 to 3 showed additional posterior insula, hippocampus, and parietal involvement, with white matter atrophy in presumed frontal projection fibers.
Very mild bvFTD targets a specific subset of frontal and insular regions. More advanced disease affects white matter and posterior gray matter structures densely interconnected with the sites of earliest injury.
To evaluate the spatial pattern and regional rates of neocortical atrophy from normal aging to early Alzheimer disease (AD).
Longitudinal MRI data were analyzed using high-throughput image analysis procedures for 472 individuals diagnosed as normal, mild cognitive impairment (MCI), or AD. Participants were divided into 4 groups based on Clinical Dementia Rating Sum of Boxes score (CDR-SB). Annual atrophy rates were derived by calculating percent cortical volume loss between baseline and 12-month scans. Repeated-measures analyses of covariance were used to evaluate group differences in atrophy rates across regions as a function of impairment. Planned comparisons were used to evaluate the change in atrophy rates across levels of disease severity.
In patients with MCI (CDR-SB 0.5–1), annual atrophy rates were greatest in medial temporal, middle and inferior lateral temporal, inferior parietal, and posterior cingulate cortices. With increased impairment (CDR-SB 1.5–2.5), atrophy spread to parietal, frontal, and lateral occipital cortex, followed by anterior cingulate cortex. Analysis of regional trajectories revealed increasing rates of atrophy across all neocortical regions with clinical impairment. However, increases in atrophy rates were greater in early disease within medial temporal cortex, whereas increases in atrophy rates were greater at later stages in prefrontal, parietal, posterior temporal, and cingulate cortex.
Atrophy is not uniform across regions, nor does it follow a linear trajectory. Knowledge of the spatial pattern and rate of decline across the spectrum from normal aging to Alzheimer disease can provide valuable information for detecting early disease and monitoring treatment effects at different stages of disease progression.
AD = Alzheimer disease;
ADNI = Alzheimer’s Disease Neuroimaging Initiative;
ANCOVA = analysis of covariance;
CDR = Clinical Dementia Rating;
CDR-SB = Clinical Dementia Rating Sum of Boxes score;
MCI = mild cognitive impairment;
MMSE = Mini-Mental State Examination;
PI = Principal Investigator;
ROI = region of interest;
TIV = total intracranial volume;
UCSD = University of California, San Diego.
This paper describes a large multi-institutional analysis of the shape and structure of the human hippocampus in the aging brain as measured via MRI. The study was conducted on a population of 101 subjects: controls (n=57) with a Clinical Dementia Rating (CDR) score of 0, subjects clinically diagnosed with Alzheimer’s Disease (AD; 28 with CDR 0.5 and 10 with CDR 1), and subjects with semantic dementia (SD; 4 with CDR 0.5 and 2 with CDR 0). Imaging data were collected at Washington University in St. Louis, hippocampal structure was annotated at the Massachusetts General Hospital, and anatomical shapes were embedded into a metric shape space using large deformation diffeomorphic metric mapping (LDDMM) at the Johns Hopkins University. A global classifier was constructed for discriminating cohorts of nondemented (CDR 0) and demented (CDR 0.5 or 1) subjects based on linear discriminant analysis of dimensions derived from metric distances between anatomical shapes, demonstrating class-conditional structure measured via the LDDMM metric (p < .01). Localized analysis restricted to the control and AD subjects, on the coordinates of the population template, demonstrates shape changes in the subiculum and the CA1 subfield in AD (p < .05). Such large-scale collaborative analysis of anatomical shapes has the potential to enhance the understanding of neurodevelopmental and neuropsychiatric disorders.
Evidence suggests that cardiovascular medications, including statins and antihypertensive medications, may delay cognitive decline in patients with Alzheimer dementia (AD). We examined the association of cardiovascular medication use and rate of functional decline in a population-based cohort of individuals with incident AD.
In the Dementia Progression Study of the Cache County Study on Memory, Health, and Aging, 216 individuals with incident AD were identified and followed longitudinally with in-home visits for a mean of 3.0 years and 2.1 follow-up visits. The Clinical Dementia Rating (CDR) was completed at each follow-up. Medication use was inventoried during in-home visits. Generalized least-squares random-effects regression was performed with CDR Sum of Boxes (CDR-Sum) as the outcome and cardiovascular medication use as the major predictors.
CDR-Sum increased an average of 1.69 points annually, indicating a steady decline in functioning. After adjustment for demographic variables and the baseline presence of cardiovascular conditions, use of statins (p = 0.03) and beta-blockers (p = 0.04) was associated with a slower annual rate of increase in CDR-Sum (slower rate of functional decline) of 0.75 and 0.68 points respectively, while diuretic use was associated with a faster rate of increase in CDR-Sum (p = 0.01; 0.96 points annually). Use of calcium-channel blockers, angiotensin-converting enzyme inhibitors, digoxin, or nitrates was not associated with the rate of functional decline.
In this population-based study of individuals with incident AD, use of statins and beta-blockers was associated with delay of functional decline. Further studies are needed to confirm these results and to determine whether treatment with these medications may help delay AD progression.
Alzheimer disease; risk factors in epidemiology; medications; prognosis
The Clinical Dementia Rating (CDR) and CDR-Sum-of-Boxes (CDR-SB) can be utilized to grade mild but clinically important cognitive symptoms. However, sensitive clinical interview formats are lengthy.
To develop a brief instrument for obtaining CDR scores, and to assess its reliability and cross-sectional validity.
Using legacy data from expanded interviews conducted among 347 community-dwelling older adults in a longitudinal study, we identified 60 questions about cognitive functioning in daily life (out of a possible 131) using clinical judgment, inter-item correlations, and principal components analysis. Items were selected in one cohort (n=147), and a computer algorithm for generating CDR scores was developed in this same cohort and re-run in a replication cohort (n=200) to evaluate how well the 60 items retained information from the original 131. Then, short interviews based on the 60 items were administered to 50 consecutively recruited elders with no or mild cognitive symptoms at an Alzheimer Disease Research Center. CDR scores based on short interviews were compared with those from independent long interviews.
In the replication cohort, agreement between short and long CDR interviews ranged from κ = 0.65–0.79, with κ = 0.76 for Memory and κ = 0.77 for global CDR; the intra-class correlation coefficient (ICC) for CDR-SB was 0.89. In the cross-sectional validation, short interview scores were slightly lower than those from long interviews, but good agreement was observed: κ ≥ 0.70 for global CDR and Memory; ICC for CDR-SB = 0.73.
The resulting instrument, the SIST-M, is brief, reliable, and sensitive for obtaining CDR scores in persons with symptoms along the spectrum of mild cognitive change.
Alzheimer disease; mild cognitive impairment; Clinical Dementia Rating; instrument; questionnaire; clinical interview
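The agreement statistics reported above rest on Cohen's κ, which corrects observed agreement between two sets of categorical ratings for agreement expected by chance. A minimal pure-Python sketch; the ratings below are made-up toy values, not study data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two lists of categorical ratings."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of exact agreement
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected if both raters assigned categories independently
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Toy global CDR ratings (0 = no dementia, 0.5 = questionable dementia)
short_form = [0, 0, 0.5, 0.5]
long_form = [0, 0.5, 0.5, 0.5]
print(cohens_kappa(short_form, long_form))  # 0.5
```

Here observed agreement is 0.75 and chance agreement is 0.5, giving κ = 0.5; identical rating lists yield κ = 1.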
In the community at large, many older adults with minimal cognitive and functional impairment remain stable or improve over time, unlike patients in clinical research settings, who typically progress to dementia. Within a prospective population-based study, we identified neuropsychological tests predicting improvement or worsening over one year in cognitively driven everyday functioning as measured by the Clinical Dementia Rating (CDR). Participants were 1682 adults aged 65+ and dementia-free at baseline. CDR change was modeled as a function of baseline test scores, adjusting for demographics. Among those with baseline CDR=0.5, 29.8% improved to CDR=0; they had significantly better baseline scores on most tests. In a stepwise multiple logistic regression model, the tests that remained independently associated with subsequent CDR improvement were Category Fluency, a modified Token Test, and the sum of learning trials on the Object Memory Evaluation. In contrast, only 7.1% of those with baseline CDR=0 worsened to CDR=0.5; they had significantly lower baseline scores on most tests. In multiple regression analyses, only the Mini-Mental State Exam, delayed memory for visual reproduction, and recall susceptible to proactive interference were independently associated with CDR worsening. At the population level, changes in both directions are observable in functional status, with different neuropsychological measures predicting the direction of change.
Epidemiology; community; aging; Clinical Dementia Rating; cognition; prediction
This study examined the reliability, validity, and clinical utility of the Severe Impairment Battery (SIB) in a Korean population. Sixty-nine dementia patients at Clinical Dementia Rating (CDR) stage 2 or 3 participated. The SIB, the Korean version of the Mini-Mental State Examination (K-MMSE), the CDR, and the Seoul-Activities of Daily Living (S-ADL) were administered. The validity of the SIB was supported by significant correlations between the SIB and the K-MMSE, CDR, and S-ADL. Cronbach's alpha for the total SIB score and each subscale score was high, and the item-total correlation for each subscale was acceptable. Test-retest correlations for the total SIB score and subscale scores were significant, except for the praxis and orienting-to-name subscales. Total SIB and subscale scores were also examined by CDR stage; the results suggest that the SIB can differentiate the performance of severely impaired dementia patients. On the basis of receiver operating characteristic (ROC) analysis, the SIB accurately discriminated between CDR 2 and CDR 3 patients. These findings suggest that the SIB is a reliable and valid instrument for evaluating patients with severe dementia in a Korean population.
Dementia; Neuropsychological Tests; Severe Impairment Battery; Reproducibility of Results; Reliability and Validity; ROC Curve
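The ROC discrimination reported above can be summarized by the area under the curve, which equals the probability that a randomly chosen case from one group outscores a randomly chosen case from the other (the Mann-Whitney interpretation). A small sketch with invented scores, not study data:

```python
def auc_mann_whitney(scores_high, scores_low):
    """AUC as the probability that a case from the higher-scoring group
    outranks a case from the lower-scoring group; ties count one half."""
    wins = 0.0
    for h in scores_high:
        for l in scores_low:
            if h > l:
                wins += 1.0
            elif h == l:
                wins += 0.5
    return wins / (len(scores_high) * len(scores_low))

# Hypothetical total SIB scores: CDR 2 patients tend to outscore CDR 3 patients
cdr2_scores = [85, 78, 90, 72]
cdr3_scores = [60, 70, 55, 74]
print(auc_mann_whitney(cdr2_scores, cdr3_scores))  # 0.9375
```

An AUC of 0.5 corresponds to chance-level discrimination; 1.0 means the two groups' score distributions do not overlap at all.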
There is increasing interest in identifying novel cognitive paradigms to help detect preclinical dementia. Promising results have been found in clinical settings using the Semantic Interference Test (SIT), a modification of an existing episodic memory test (Fuld Object Memory Evaluation) that exploits vulnerability to semantic interference in Alzheimer’s disease. It is not yet known how broadly this work will generalize to the community at large.
Participants aged ≥65 years from the Monongahela-Youghiogheny Healthy Aging Team (MYHAT) were administered the SIT at study entry. Independent of neuropsychological assessment, participants were rated on the Clinical Dementia Rating (CDR) scale, based on reported loss of cognitively-driven everyday functioning. In individuals free of dementia (CDR <1), the concurrent validity of the SIT was assessed by determining its association with CDR using multiple logistic regression models, with CDR 0 (no dementia) vs. 0.5 (possible dementia) as the outcome and the SIT test variables as predictors.
Poorer performance on all SIT variables but one was associated with higher CDR reflecting possible dementia (Odds Ratios 2.24 to 4.79). Younger age and female gender also conferred a performance advantage. Years of education and reading ability (a proxy for quality of education) evidenced a very weak association with SIT performance.
The SIT shows promise as a valid, novel measure to identify early preclinical dementia in a community setting. It has potential utility for assessment of persons who may be illiterate or of low education. Finally, we provide normative SIT data stratified by age, which may be utilized by clinicians or researchers in future investigations.
neuropsychological tests; norms; cognitive aging; Semantic Interference Test; Alzheimer’s disease
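Odds ratios like those reported above (2.24 to 4.79) can be read off a 2×2 table of exposure (e.g., poor test performance) by outcome (CDR 0 vs. 0.5) as the cross-product ratio; the counts below are toy numbers, not MYHAT data:

```python
def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Cross-product odds ratio for a 2x2 exposure-by-outcome table."""
    return (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)

# Toy table: poor SIT performance (exposure) vs. CDR 0.5 (case) / CDR 0 (control)
print(odds_ratio(30, 20, 15, 40))  # (30*40)/(20*15) = 4.0
```

A logistic regression coefficient for a binary predictor exponentiates to exactly this quantity in the unadjusted case; the abstract's values are adjusted estimates from multivariable models.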
To evaluate whether ratings on Clinical Dementia Rating (CDR) items related to instrumental activities of daily living (IADL) are associated with cognitive or brain morphometric characteristics of participants with mild cognitive impairment (MCI) and global CDR scores of 0.5.
Baseline cognitive and morphometric data were analyzed for 283 individuals with MCI who were divided into 2 groups (impaired and intact) based on their scores on the 3 CDR categories assessing IADL. Rates of progression to Alzheimer disease (AD) over 2 years were also compared in the 2 groups.
The impaired IADL MCI group showed a more widespread pattern of gray matter loss involving frontal and parietal regions, worse episodic memory and executive functions, and a higher percentage of individuals progressing to AD than the relatively intact IADL MCI group.
The results demonstrate the importance of considering functional information captured by the CDR when evaluating individuals with MCI, even though it is not given equal weight in the assignment of the global CDR score. Worse impairment on IADL items was associated with greater involvement of brain regions beyond the mesial temporal lobe. The conventional practice of relying on the global CDR score as currently computed underutilizes valuable IADL information available in the scale, and may delay identification of an important subset of individuals with MCI who are at higher risk of clinical decline.