To investigate whether demographic (age and education) adjustments for the Mini-Mental State Examination (MMSE) attenuate mean score discrepancies between African American and Caucasian adults, and to determine whether demographically adjusted MMSE scores improve the diagnostic classification accuracy of dementia in African American adults when compared to unadjusted MMSE scores.
Community-dwelling adults participating in the Mayo Clinic Alzheimer’s Disease Patient Registry (ADPR) and Alzheimer’s Disease Research Center (ADRC).
Three thousand two hundred fifty-four adults (2819 Caucasian, 435 African American) aged 60 and older.
MMSE at study entry.
African American adults obtained significantly lower unadjusted MMSE scores (23.0 ± 7.4) than Caucasian adults (25.3 ± 5.4). This discrepancy persisted despite adjustment of MMSE scores for age and years of education using established regression weights or newly derived weights. However, controlling for dementia severity at baseline and adjusting MMSE scores for age and quality of education attenuated this discrepancy. Among African American adults, an age- and education-adjusted MMSE cut score of 23/24 provided optimal dementia classification accuracy, but this represented only a modest improvement over an unadjusted MMSE cut score of 22/23. The posterior probability of dementia in African American adults is presented for various unadjusted MMSE cut scores and prevalence rates of dementia.
Age, dementia severity at study entry, and quality of educational experience are important explanatory factors to understand the existing discrepancies in MMSE performance between Caucasian and African American adults. Our findings support the use of unadjusted MMSE scores when screening African American elders for dementia, with an unadjusted MMSE cut score of 22/23 yielding optimal classification accuracy.
MMSE; African American; ethnicity; dementia; cognition
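The posterior probabilities referenced in this abstract follow from Bayes' theorem applied to a cut score's sensitivity and specificity at an assumed prevalence of dementia. A minimal sketch of that calculation; the sensitivity, specificity, and prevalence values below are illustrative placeholders, not the paper's data:

```python
# Posterior probability of dementia given a positive screen (score at or
# below the cut score), via Bayes' theorem. All numeric inputs here are
# hypothetical examples, not values from the study.
def posterior_probability(sensitivity, specificity, prevalence):
    """P(dementia | positive screen) for a given base rate of dementia."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Example: a screen with SN = 0.80, SP = 0.90 at three dementia base rates.
for prev in (0.05, 0.15, 0.30):
    post = posterior_probability(0.80, 0.90, prev)
    print(f"prevalence {prev:.0%}: posterior = {post:.2f}")
```

The same positive result carries very different meaning at different base rates, which is why the paper tabulates posterior probabilities across a range of prevalence values rather than reporting a single figure.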
Background. Considerable research documents an association between pro- and anti-inflammatory markers and Alzheimer's disease (AD), yet the differential relation between these markers and neuropsychological functioning in AD and nondemented controls has received less attention. The current study sought to evaluate the relationship between peripheral markers of inflammation (both pro- and anti-inflammatory) and neuropsychological functioning through the Texas Alzheimer's Research and Care Consortium (TARCC) cohort. Methods. There were 320 participants (Probable AD n = 124, Controls n = 196) in the TARCC Longitudinal Research Cohort available for analysis. Regression analyses were utilized to examine the relation between proinflammatory and anti-inflammatory markers and neuropsychological functioning. Follow-up analyses were conducted separately by case versus control status. Results. Proinflammatory and anti-inflammatory markers were found to be associated with neuropsychological testing. Third tertile proinflammatory markers were negatively associated with measures of attention and language, and anti-inflammatory markers were positively associated with measures of immediate verbal memory and delayed verbal and visual memory. Conclusions. These findings support the link between peripheral inflammatory markers and neuropsychological functioning and suggest the utility of examining profiles of inflammatory markers in the future.
We previously created a serum-based algorithm that yielded excellent diagnostic accuracy in Alzheimer's disease. The current project was designed to refine that algorithm by reducing the number of serum proteins and by including clinical labs. The link between the biomarker risk score and neuropsychological performance was also examined.
Serum-protein multiplex biomarker data from 197 patients diagnosed with Alzheimer's disease and 203 cognitively normal controls from the Texas Alzheimer's Research Consortium were analyzed. The 30 markers identified as the most important from our initial analyses and clinical labs were utilized to create the algorithm.
The 30-protein risk score yielded a sensitivity, specificity, and AUC of 0.88, 0.82, and 0.91, respectively. When combined with demographic data and clinical labs, the algorithm yielded a sensitivity, specificity, and AUC of 0.89, 0.85, and 0.94, respectively. In linear regression models, the biomarker risk score was most strongly related to neuropsychological tests of language and memory.
Our previously published diagnostic algorithm can be restricted to only 30 serum proteins and still retain excellent diagnostic accuracy. Additionally, the revised biomarker risk score is significantly related to neuropsychological test performance.
Algorithm, blood-based; Alzheimer's disease; Diagnosis
The Clinical Dementia Rating Scale Sum of Boxes (CDR-SOB) score is commonly used, although the utility of this score for staging dementia severity is not well established.
To investigate the effectiveness of CDR-SOB scores in staging dementia severity compared with the global CDR score.
Texas Alzheimer's Research Consortium minimum data set cohort.
A total of 1577 participants (110 controls, 202 patients with mild cognitive impairment, and 1265 patients with probable Alzheimer disease) were available for analysis.
Main Outcome Measures
Receiver operating characteristic curves were generated from a derivation sample to determine optimal cutoff scores and ranges, which were then applied to the validation sample.
Optimal ranges of CDR-SOB scores corresponding to the global CDR scores were 0.5 to 4.0 for a global score of 0.5, 4.5 to 9.0 for a global score of 1.0, 9.5 to 15.5 for a global score of 2.0, and 16.0 to 18.0 for a global score of 3.0. When applied to the validation sample, κ scores ranged from 0.86 to 0.94 (P < .001 for all), with 93.0% of the participants falling within the new staging categories.
The CDR-SOB score compares well with the global CDR score for dementia staging. Owing to the increased range of values, the CDR-SOB score offers several advantages over the global score, including increased utility in tracking changes within and between stages of dementia severity. Interpretive guidelines for CDR-SOB scores are provided.
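The staging ranges reported in this abstract amount to a simple lookup from CDR-SOB score to global CDR stage. A sketch of that mapping; the ranges are taken from the abstract, while the function name and the handling of a CDR-SOB of 0 (no dementia) are our own assumptions:

```python
# Staging a CDR Sum of Boxes score with the optimal ranges reported in the
# abstract: 0.5-4.0 -> global CDR 0.5, 4.5-9.0 -> 1.0, 9.5-15.5 -> 2.0,
# 16.0-18.0 -> 3.0. CDR-SOB is scored in 0.5-point increments from 0 to 18.
CDR_SOB_RANGES = [
    (0.5, 4.0, 0.5),    # very mild dementia
    (4.5, 9.0, 1.0),    # mild dementia
    (9.5, 15.5, 2.0),   # moderate dementia
    (16.0, 18.0, 3.0),  # severe dementia
]

def stage_from_cdr_sob(sob):
    """Return the global CDR stage implied by a CDR-SOB score, or None."""
    if sob == 0.0:
        return 0.0  # assumption: a sum of 0 implies no dementia (global CDR 0)
    for low, high, stage in CDR_SOB_RANGES:
        if low <= sob <= high:
            return stage
    return None  # score falls outside the published ranges

print(stage_from_cdr_sob(6.5))  # a CDR-SOB of 6.5 falls in the mild (1.0) range
```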
Our purpose was to study the link between serum brain-derived neurotrophic factor (BDNF) levels and neuropsychological functioning through the Texas Alzheimer's Research Consortium cohort.
A total of 399 participants [probable Alzheimer's disease (AD) n = 198, controls n = 201] were available for analysis. The BDNF levels were assayed via multiplex immunoassay. Regression analyses were utilized to examine the relation between BDNF levels and neuropsychological functioning.
There were no significant mean differences in BDNF levels between cases and controls. In the AD group, the BDNF levels were significantly negatively associated with the scores on immediate [B = −0.07 (0.02), t = −3.55, p = 0.001] and delayed [B = −0.05 (0.02), t = −2.79, p = 0.01] verbal memory and immediate [B = −0.12 (0.05), t = −2.70, p = 0.01] visual memory. No other neuropsychological variables were significantly related to the BDNF levels. The BDNF levels were not significantly related to the neuropsychological test scores in the control group.
Increased serum BDNF levels were associated with poorer visual and verbal memory, but only among AD cases. The current findings point toward an upregulation of serum BDNF as one possible mechanism linked to memory disturbances in AD though it does not appear to be linked to disease severity.
Alzheimer's disease; Biomarkers; Brain-derived neurotrophic factor; Cognition; Neuropsychology; Aging
The Effort Index (EI) of the RBANS was developed to assist clinicians in discriminating patients who demonstrate good effort from those with poor effort. However, there are concerns that older adults might be unfairly penalized by this index, which uses uncorrected raw scores. Using five independent samples of geriatric patients with a broad range of cognitive functioning (e.g., cognitively intact, nursing home residents, probable Alzheimer’s disease), base rates of failure on the EI were calculated. In cognitively intact and mildly impaired samples, few older individuals were classified as demonstrating poor effort (e.g., 3% in cognitively intact). However, in the more severely impaired geriatric patients, over one third had EI scores that fell above suggested cut-off scores (e.g., 37% in nursing home residents, 33% in probable Alzheimer’s disease). In the cognitively intact sample, older and less educated patients were more likely to have scores suggestive of poor effort. Education effects were observed in 3 of the 4 clinical samples. Overall cognitive functioning was significantly correlated with EI scores, with poorer cognition being associated with greater suspicion of low effort. The current results suggest that age, education, and level of cognitive functioning should be taken into consideration when interpreting EI results and that significant caution is warranted when examining EI scores in elders suspected of having dementia.
symptom validity testing; RBANS; geriatric assessment
There is no rapid, cost-effective tool that can be implemented as a front-line screen for Alzheimer's disease (AD) at the population level.
To generate and cross-validate a blood-based screener for AD that yields acceptable accuracy across both serum and plasma.
Design, Setting, Participants
Analyses of serum biomarker proteins were conducted on 197 Alzheimer's disease (AD) participants and 199 control participants from the Texas Alzheimer's Research Consortium (TARC), with further analysis conducted on plasma proteins from 112 AD and 52 control participants from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The full algorithm was derived from a biomarker risk score, clinical lab (glucose, triglycerides, total cholesterol, homocysteine), and demographic (age, gender, education, APOE*E4 status) data.
Major Outcome Measures
Eleven proteins met our criteria and were utilized for the biomarker risk score. The random forest (RF) biomarker risk score derived from the TARC serum samples (training set) yielded adequate accuracy in the ADNI plasma sample (test set) (AUC = 0.70, sensitivity (SN) = 0.54, and specificity (SP) = 0.78), which was below that obtained from ADNI cerebrospinal fluid (CSF) analyses (t-tau/Aβ ratio AUC = 0.92). However, the full algorithm yielded excellent accuracy (AUC = 0.88, SN = 0.75, and SP = 0.91). In the ADNI cohort, the likelihood ratio of having AD based on a positive test finding was LR+ = 7.03 (SE = 1.17; 95% CI = 4.49–14.47), the likelihood ratio of not having AD based on the algorithm was LR− = 3.55 (SE = 1.15; 95% CI = 2.22–5.71), and the odds ratio of AD was OR = 28.70 (SE = 1.55; 95% CI = 11.86–69.47).
It is possible to create a blood-based screening algorithm that works across both serum and plasma that provides a comparable screening accuracy to that obtained from CSF analyses.
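The likelihood ratios and odds ratio reported in this abstract were estimated directly in the ADNI cohort with confidence intervals. For orientation, the textbook point calculation of these quantities from a test's sensitivity and specificity is sketched below; the numbers are illustrative and will not reproduce the cohort-estimated values above:

```python
# Textbook likelihood ratios and diagnostic odds ratio computed from
# sensitivity and specificity alone (no cohort resampling, no CIs).
def likelihood_ratios(sensitivity, specificity):
    lr_pos = sensitivity / (1.0 - specificity)   # LR for a positive test
    lr_neg = (1.0 - sensitivity) / specificity   # LR for a negative test
    diagnostic_or = lr_pos / lr_neg              # diagnostic odds ratio
    return lr_pos, lr_neg, diagnostic_or

# Example using the full algorithm's reported SN = 0.75 and SP = 0.91.
lr_pos, lr_neg, dor = likelihood_ratios(0.75, 0.91)
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}, DOR = {dor:.2f}")
```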
C-reactive protein (CRP) is an acute-phase reactant that has been found to be associated with Alzheimer disease (AD) in histopathological and longitudinal studies; however, little data exist regarding serum CRP levels in patients with established AD. The current study evaluated CRP levels in 192 patients diagnosed with probable AD (mean age = 75.8 ± 8.2 years; 50% female) as compared to 174 nondemented controls (mean age = 70.6 ± 8.2 years; 63% female). Mean CRP levels were found to be significantly decreased in AD (2.9 µg/mL) versus controls (4.9 µg/mL; P = .003). In adjusted models, elevated CRP significantly predicted poorer (elevated) Clinical Dementia Rating Scale sum of boxes (CDR SB) scores in patients with AD. In controls, CRP was negatively associated with Mini-Mental State Examination (MMSE) scores and positively associated with CDR SB scores. These findings, together with previously published results, are consistent with the hypothesis that midlife elevations in CRP are associated with increased risk of AD development, though elevated CRP levels are not useful for prediction in the immediate prodromal years before AD becomes clinically manifest. However, for a subgroup of patients with AD, elevated CRP continues to predict increased dementia severity, suggestive of a possible proinflammatory endophenotype in AD.
Alzheimer disease; C-reactive protein; inflammation; treatment; primary prevention
Alzheimer's disease (AD) is the most common form of age-related dementia and one of the most serious health problems in the industrialized world. Biomarker approaches to diagnostics would be more time- and cost-effective and may also be useful for identifying endophenotypes within AD patient populations.
We analyzed serum protein-based multiplex biomarker data from 197 patients diagnosed with AD and 203 controls from a longitudinal study of Alzheimer's disease being conducted by the Texas Alzheimer's Research Consortium to develop an algorithm that separates AD from controls. The total sample was randomized equally into training and test sets and random forest methods were applied to the training set to create a biomarker risk score.
The biomarker risk score had a sensitivity and specificity of 0.80 and 0.91, respectively and an AUC of 0.91 in detecting AD. When age, gender, education, and APOE status were added to the algorithm, the sensitivity, specificity, and AUC were 0.94, 0.84, and 0.95, respectively.
These initial data suggest that serum protein-based biomarkers can be combined with clinical information to accurately classify AD. Of note, a disproportionate number of inflammatory and vascular markers were weighted most heavily in analyses. Additionally, these markers consistently distinguished cases from controls in SAM, logistic regression and Wilcoxon analyses, suggesting the existence of an inflammatory-related endophenotype of AD that may provide targeted therapeutic opportunities for this subset of patients.
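The accuracy statistics reported for this algorithm (sensitivity, specificity, AUC) are standard binary-classification metrics over a continuous risk score. A minimal pure-Python sketch on made-up data (not the TARCC cohort), using the Mann-Whitney formulation of AUC, i.e. the probability that a randomly chosen case scores higher than a randomly chosen control:

```python
# Sensitivity and specificity at a fixed threshold on a risk score,
# plus AUC via pairwise case-vs-control comparisons (Mann-Whitney).
def sensitivity_specificity(scores, labels, threshold):
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores, labels):
    cases = [s for s, y in zip(scores, labels) if y == 1]
    controls = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if c > n else 0.5 if c == n else 0.0
               for c in cases for n in controls)
    return wins / (len(cases) * len(controls))

# Illustrative risk scores and case (1) / control (0) labels.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.2]
labels = [1,   1,   1,   0,   1,   0,    0,   0]
sn, sp = sensitivity_specificity(scores, labels, 0.5)
print(f"SN = {sn:.2f}, SP = {sp:.2f}, AUC = {auc(scores, labels):.2f}")
```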
Objectives. Determine the relationship between depressive symptom clusters and neuropsychological test performance in an elderly cohort of cognitively normal controls and patients with mild Alzheimer's disease (AD). Design. Cross-sectional analysis. Setting. Four health science centers in Texas. Participants. 628 elderly individuals (272 diagnosed with mild AD and 356 controls) from an ongoing longitudinal study of Alzheimer's disease. Measurements. Standard battery of neuropsychological tests and the 30-item Geriatric Depression Scale (GDS-30), with regression models generated using GDS-30 subscale scores (dysphoria, apathy, meaninglessness, and cognitive impairment) as predictors and neuropsychological tests as outcome variables. Follow-up analyses by gender were conducted. Results. For AD, all symptom clusters were related to specific neurocognitive domains; among controls, apathy and cognitive impairment were significantly related to neuropsychological functioning. The relationship between performance and symptom clusters was significantly different for males and females in each group. Conclusion. Findings suggest the need to examine disease status and gender when considering the impact of depressive symptoms on cognition.
The Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) has demonstrated adequate sensitivity in detecting cognitive impairment in a number of neuropsychiatric conditions, including Alzheimer's disease. However, its ability to detect milder cognitive deficits in the elderly has not been examined. The current study examined the clinical utility of the RBANS by comparing two groups: patients with Mild Cognitive Impairment (MCI; n = 72) and cognitively intact peers (n = 71). Significant differences were observed on the RBANS Total score, 3 of the 5 Indexes, and 6 of the 12 subtests, with individuals with MCI performing worse than the comparison participants. Specificity was very good, but sensitivity ranged from poor to moderate. Areas under the receiver operating characteristic curves for the RBANS Immediate and Delayed Memory Indexes and the Total Scale score were adequate. Although significant differences were observed between groups and the areas under the curves were adequate, the lower sensitivity values of the RBANS suggest that caution should be used when diagnosing conditions such as MCI.
Mild Cognitive Impairment; Diagnostic accuracy; Repeatable Battery for the Assessment of Neuropsychological Status
Exposure to elements in groundwater (toxic or beneficial) is commonplace, yet outside of lead and mercury little research has examined the impact of many commonly occurring environmental exposures on mental abilities during the aging process. Inorganic arsenic is a known neurotoxin that has both neurodevelopmental and neurocognitive consequences. The aim of this study was to examine the potential association between current and long-term arsenic exposure and detailed neuropsychological functioning in a sample of rural-dwelling adults and elders. Data were analyzed from 434 participants (133 men and 301 women) of Project FRONTIER, a community-based participatory research study of the epidemiology of health issues of rural-dwelling adults and elders. The results of the study showed that GIS-based groundwater arsenic exposure (current and long-term) was significantly related to poorer scores in language, visuospatial skills, and executive functioning. Additionally, long-term low-level exposure to arsenic was significantly correlated with poorer scores in global cognition, processing speed, and immediate memory. The finding of a correlation between arsenic and the domains of executive functioning and memory is of critical importance, as these are cognitive domains that reflect the earliest manifestations of Alzheimer's disease. Additional work is warranted given the population health implications associated with long-term low-level arsenic exposure.
arsenic; chronic exposure; environmental exposure; cognition; rural health
The current search for biomarkers that are diagnostic and/or prognostic of Alzheimer's disease (AD) is of vital importance given the rapidly aging population. It was recently reported that brain-derived neurotrophic factor (BDNF) fluctuated according to AD severity, suggesting that BDNF might have utility for diagnostics and monitoring of therapeutic efficacy. The current study sought to examine whether BDNF levels varied according to AD severity, as previously reported.
There were 196 participants (Probable AD n = 98, Controls n = 98) in the Texas Alzheimer's Research Consortium (TARC) Longitudinal Research Cohort available for analysis. BDNF levels were assayed via multiplex immunoassay. Regression analyses were utilized to examine the relation between BDNF levels, MMSE, and CDR scores adjusting for age and gender.
In adjusted models, BDNF levels did not distinguish between AD patients and normal controls and did not significantly predict AD severity or global cognitive functioning.
These findings do not support the notion that BDNF serves as a diagnostic marker for AD or disease severity. It is likely that the most accurate approach to identifying biomarkers of AD will be through an algorithmic approach that combines multiple markers reflective of various pathways.
Alzheimer's disease; Biomarkers; BDNF; Dementia Severity; Clinical Dementia Rating
Although the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) was initially developed as a brief dementia battery, its sensitivity, specificity, and positive and negative predictive powers in detecting cognitive impairment in patients with Alzheimer's disease (AD) have not yet been established. Therefore, the current study examined the clinical utility of the RBANS by comparing two age-, education-, and gender-matched groups: patients with AD (n = 69) and comparators (n = 69). Significant differences (p < 0.001) were observed on the RBANS Total score, all five Indexes, and all twelve subtests, with patients performing worse than the comparison participants. An optimal balance between sensitivity and specificity on RBANS scores was obtained when cutoffs of one and one and a half standard deviations below the mean of the comparison sample were implemented. Areas under the Receiver Operating Characteristic curves for all RBANS Indexes were impressive, and those for the Immediate and Delayed Memory Indexes were excellent (0.96 and 0.98, respectively). Results suggest that RBANS scores yield excellent estimates of diagnostic accuracy and that the RBANS is a useful screening tool in the detection of cognitive deficits associated with AD.
Alzheimer’s disease; dementia; diagnostic accuracy; Repeatable Battery for the Assessment of Neuropsychological Status
To evaluate the utility of Mini-Mental State Examination (MMSE) scores in detecting cognitive dysfunction in a sample of highly educated individuals.
Archival data were reviewed on 4248 participants enrolled in the Mayo Clinic Alzheimer's Disease Research Center (ADRC) and Alzheimer's Disease Patient Registry (ADPR).
1141 primarily Caucasian (93%) individuals with 16 or more years of self-reported education were identified. These included 307 (164 males and 143 females) dementia cases (any type), 176 patients with Mild Cognitive Impairment (106 males and 70 females), and 658 nondemented controls (242 males and 416 females).
Mayo Clinic ADRC and ADPR cohort.
Main Outcome Measures
Diagnostic accuracy estimates (sensitivity, specificity, positive and negative predictive power) of MMSE cut-scores in detecting cognitive dysfunction.
In this sample of highly educated, largely Caucasian older adults, the standard MMSE cut score of 24 (23 or below) yielded a sensitivity of .66, specificity of .99, and an overall correct classification rate of 89% in detecting dementia. A cut score of 27 (26 or below) resulted in an optimal balance of sensitivity and specificity (.89 and .91, respectively) with an overall correct classification rate of 90%. In a cognitively impaired group (dementia and MCI), a cut score of 27 (sensitivity = .69, specificity = .91) or 28 (sensitivity and specificity = .78) might be more appropriate.
Elderly patients with college education who present with complaints of cognitive decline (self- or other-report) and score below 27 on the MMSE are at greater risk of being diagnosed with dementia and should be referred for a comprehensive dementia evaluation, including formal neuropsychological testing.
Alzheimer's disease; dementia; Mini-Mental State Examination; diagnosis
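Cut-score selection of the kind described in this abstract is commonly done by maximizing Youden's J (sensitivity + specificity - 1) across candidate cutoffs. A sketch with a hypothetical per-cutoff table; the sensitivity/specificity pairs for cut scores 24 and 27 echo the values reported above, while the remaining rows are invented for illustration:

```python
# Hypothetical table of diagnostic accuracy at candidate MMSE cut scores.
# Only the 24 and 27 rows mirror the abstract; the rest are placeholders.
candidate_cuts = {
    # cut score: (sensitivity, specificity)
    24: (0.66, 0.99),
    25: (0.74, 0.97),
    26: (0.82, 0.94),
    27: (0.89, 0.91),
    28: (0.94, 0.80),
}

def best_cut(table):
    """Cut score maximizing Youden's J = sensitivity + specificity - 1."""
    return max(table, key=lambda c: table[c][0] + table[c][1] - 1.0)

print(best_cut(candidate_cuts))
```

Note that Youden's J weighs sensitivity and specificity equally; when the costs of false negatives and false positives differ (as they often do in dementia screening), a different criterion may be preferred.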
Prior animal model and human-based studies have linked selenium concentrations to decreased risk for depression; however, this work has not focused on household groundwater levels or specific depressive symptoms. The current study evaluated the link between groundwater selenium levels and depression. We also sought to determine if a functional polymorphism in the glutathione peroxidase 1 (GPX1) gene impacted this link.
We used a cross-sectional design to analyze data from 585 participants (183 men and 402 women) from Project FRONTIER, a study of rural health in West Texas. Residential selenium concentrations were estimated using Geospatial Information System (GIS) analyses. Linear regression models were created using Geriatric Depression Scale (GDS-30) total and subfactor scores as outcome variables and selenium concentrations as predictor variables. Analyses were re-run after stratification of the sample on GPX1 Pro198Leu genotype (rs1050454).
Selenium levels were significantly and negatively related to all GDS and subfactor scores, accounting for up to 17% of the variance beyond covariates. Selenium was most strongly protective against depression among homozygous carriers of the C allele at the Pro198Leu polymorphism of the GPX1 gene. Analyses also point towards a gene-environment interaction between selenium exposure and the GPX1 polymorphism.
Our results support the link between groundwater selenium levels and decreased depression symptoms. These findings also highlight the need to consider the genetics of the glutathione peroxidase system when examining this relationship, as variation in the GPX1 gene is related to depression risk and significantly influences the protective impact of selenium, which is indicative of a gene-environment interaction.
Aging; Depression; Environmental factors; Selenium; GPX1
Granulocyte colony-stimulating factor (G-CSF) promotes the survival and function of neutrophils. G-CSF is also a neurotrophic factor, increasing neuroplasticity and suppressing apoptosis.
We analyzed G-CSF levels in 197 patients with probable Alzheimer's disease (AD) and 203 cognitively normal controls (NCs) from a longitudinal study by the Texas Alzheimer's Research and Care Consortium (TARCC). Data were analyzed by regression with adjustment for age, education, gender and APOE4 status.
Serum G-CSF was significantly lower in AD patients than in NCs (β = −0.073; p = 0.008). However, among AD patients, higher serum G-CSF was significantly associated with increased disease severity, as indicated by lower Mini-Mental State Examination scores (β = −0.178; p = 0.014) and higher scores on the global Clinical Dementia Rating (CDR) scale (β = 0.170; p = 0.018) and CDR Sum of Boxes (β = 0.157; p = 0.035).
G-CSF appears to have a complex relationship with AD pathogenesis and may reflect different pathophysiologic processes at different illness stages.
Granulocyte colony-stimulating factor; Alzheimer's disease; Inflammation; Serum proteins; Mini-Mental State Examination; Clinical Dementia Rating-Sum of Boxes