Although delirium is a common medical comorbidity with altered cognition as its defining feature, few publications have addressed the neuropsychological prodrome, profile, and recovery of patients tested during delirium. We characterize neuropsychological performance in 54 hematopoietic stem cell/bone marrow transplantation (BMT) patients shortly before, during, and after delirium and in BMT patients without delirium and 10 healthy adults. Patients were assessed prospectively before and after transplantation using a brief battery. BMT patients with delirium performed more poorly than comparisons and those without delirium on cross-sectional and trend analyses. Deficits were in expected areas of attention and memory, but also in psychomotor speed and learning. The patients with delirium did not return to normative “average” on any test during observation. Most tests showed a mild decline in the visit before delirium, a sharp decline with delirium onset, and variable performance in the following days. This study adds to the few investigations of neuropsychological performance surrounding delirium and provides targets for monitoring and early detection; Trails A and B, RBANS Coding, and List Recall may be useful for delirium assessment.
Bone marrow transplantation; Cognition; Cancer; Attention; Delirium
The estimation of premorbid abilities is an essential part of a neuropsychological evaluation, especially in neurodegenerative conditions. Although word pronunciation tests are one standard method for estimating the premorbid level, research suggests that these tests may not be valid in neurodegenerative diseases. Therefore, the current study sought to examine two estimates of premorbid intellect, the Wide Range Achievement Test (WRAT) Reading subtest and the Barona formula, in 93 patients with mild to moderate Huntington's disease (HD) to determine their utility and to investigate how these measures relate to signs and symptoms of disease progression. In 89% of participants, WRAT estimates were below the Barona estimates. WRAT estimates were related to worsening memory and motor functioning, whereas the Barona estimates had weaker relationships. Neither estimate was related to depression or functional capacity. Irregular word reading tests appear to decline with HD progression, whereas estimation methods based on demographic factors may be more robust but overestimate premorbid functioning.
Huntington's disease; movement disorders; basal ganglia; assessment; dementia
Progression of HIV/AIDS is frequently associated with frontal/subcortical dysfunction and mean reaction time (RT) slowing. Beyond group means, within-subject variability of RT has been found to be particularly sensitive to frontal/subcortical dysfunction in other populations. However, the possible relevance of RT variability to HIV/AIDS patients remains unknown. This study evaluated the relationships between RT variability and indicators such as neurocognitive, behavioral, and immunological status. A total of 46 HIV-positive adults on antiretroviral medication regimens were included in this study. Overall performance of this sample was poorer than normative means on measures of RT latency, RT variability, and traditional neurocognitive domains. Results demonstrated that the measures of RT variability were associated with global cognition, medication adherence rates, and peak immunological dysfunction, above and beyond the effects of RT latency. These preliminary findings suggest that measures of RT variability may provide enhanced sensitivity to neurocognitive disease burden in HIV/AIDS relative to more traditional measures of mean RT or cognitive function.
AIDS; Cognitive ability; Medication adherence; Immunological status; Continuous Performance Test; CPT-II
Recent studies suggest that older human immunodeficiency virus (HIV)-infected adults are at particular risk for HIV-associated neurocognitive disorders (HAND), including dementia. Deficits in attention/working memory are posited to play a central role in the development of HAND among older adults. The aim of the present study was to examine the possible protective benefits of spontaneous strategy use during a visual working memory task in 46 older and 42 younger adults infected with HIV. Results revealed a significant interaction between age and strategy use, with older adults who used a meta-cognitive strategy demonstrating superior working memory performance versus non-strategy users. This effect was not observed in the younger HIV-infected sample and was not better explained by possible confounding factors, such as education, comorbid medical conditions, or HIV disease severity. Within the older group, strategy use was associated with better executive functions and higher estimated verbal intelligence. Findings from this study suggest that working memory declines in older HIV-infected adults are moderated by the use of higher-level mnemonic strategies and may inform cognitive neurorehabilitation efforts to improve cognitive and everyday functioning outcomes in older persons living with HIV infection.
Human immunodeficiency virus; Working memory; Aging; Strategies; Neuropsychology
The CLOX is a clock drawing test used to screen for cognitive impairment in older adults, but normative data for this measure are limited. This study presents normative data for the CLOX derived from a diverse sample of 585 community-dwelling older adults with complete cognitive data at baseline and 4-year follow-up. Participants with evidence of baseline impairment or substantial 4-year decline on the Mini-Mental State Examination were excluded from the normative sample. Spontaneous clock drawing (CLOX1) and copy (CLOX2) performances were stratified by age group and reading ability from the Wide Range Achievement Test, 3rd edition (WRAT-3). Lowest mean CLOX scores were observed for the oldest age group (75+ years old) with the lowest WRAT-3 reading scores. For all groups, average scores were higher for CLOX2 than CLOX1. These normative data may be helpful to clinicians and researchers for interpreting CLOX performance in older adults with diverse levels of reading ability.
Normative data; Clock drawing test; Reading ability; Older adults; Aging
Research in Attention-Deficit/Hyperactivity Disorder (ADHD) generally utilizes clinical samples or children with comorbid psychiatric diagnoses. Findings indicate that children with ADHD experience academic underachievement and poor performance on measures of response inhibition (RI). Less is known about the neuropsychological profile of typically developing children with ADHD. The aim of the current study was twofold: (1) determine if academic skills and RI were impaired in typically developing children with ADHD-combined subtype (ADHD-C) and (2) determine to what extent RI may predict academic abilities. Children with ADHD-C did not differ on any academic domain from controls. Children with ADHD-C performed more poorly than controls on RI measures. Regression analyses suggest that Written Expression ability was significantly influenced by RI. No other academic domain was related to RI. Results suggest that children with ADHD-C may experience impairments in RI despite adequate academic functioning. Impaired RI is not solely responsible for difficulties found in academic skills in ADHD-C.
ADHD; Achievement; Executive functions; Response inhibition
The majority of research on neurobehavioral functioning among children with Attention-Deficit/Hyperactivity Disorder (ADHD) is based on samples comprised primarily (or exclusively) of boys. Although functional impairment is well established, available research has yet to specify a neuropsychological profile distinct to girls with ADHD. The purpose of this study was to examine performance within four components of executive function (EF) in contemporaneously recruited samples of girls and boys with ADHD. Fifty-six children with ADHD (26 girls) and 90 controls (42 girls), ages 8–13, were administered neuropsychological tests emphasizing response inhibition, response preparation, working memory, and planning/shifting. There were no significant differences in age or SES between girls or boys with ADHD and their sex-matched controls; ADHD subtype distribution did not differ by sex. Compared with controls, children with ADHD showed significant deficits on all four EF components. Girls and boys with ADHD showed similar patterns of deficit on tasks involving response preparation and working memory; however, they manifested different patterns of executive dysfunction on tasks related to response inhibition and planning. Girls with ADHD showed elevated motor overflow, while boys with ADHD showed greater impairment during conscious, effortful response inhibition. Girls, but not boys with ADHD, showed impairment in planning. There were no differences between ADHD subtypes on any EF component. These findings highlight the importance of studying boys and girls separately (as well as together) when considering manifestations of executive dysfunction in ADHD.
Attention; Response control; Working memory; Inhibition; Planning; Childhood; Development
The Dementia Rating Scale (DRS) is a widely used measure of global cognition, with age- and education-corrected norms derived from a cross-sectional sample of adults participating in Mayo's Older Americans Normative Studies (MOANS). In recent years, however, studies have indicated that cross-sectional normative samples of older adults represent an admixture of individuals who are indeed cognitively normal (i.e., disease-free) and individuals with incipient neurodegenerative disease. Theoretically, the “contamination” of cross-sectional normative samples with cases of preclinical dementia can lead to underestimation of the test mean and overestimation of the variance, thus reducing the clinical utility of the norms. Robust norming, in which dementia cases are removed from the normative cohort through longitudinal follow-up, is an alternative approach to norm development. The current study presents a reappraisal of the original MOANS DRS norms, provides robust and expanded norms based on a sample of 894 adults age 55 and over, and critically evaluates the benefits of robust norming.
Dementia Rating Scale; DRS; Alzheimer's disease; Robust; Norms
Differences in the memory characteristics of patients with Alzheimer's disease (AD), Huntington's disease (HD), and Parkinson's disease (PD) were investigated with tests that assess learning and retention of words, line-drawn objects, and locations. Large groups of AD, HD, and PD patients were administered the Hopkins Verbal Learning Test-Revised (HVLT-R) and the Hopkins Board (HB). Eight learning and memory measures were subjected to discriminant function analysis. A 91% classification accuracy was achieved for the separation of cortical and subcortical dementias and 79% accuracy for the discrimination of the three groups. The delayed recall of items was the best discriminator. Receiver operating characteristic curve analysis indicated up to 90% sensitivity and 90% specificity in differentiating the three diseases using the discriminant scores. Individual learning and memory measures of the HVLT-R and the HB provided very high sensitivity and specificity in distinguishing cortical versus subcortical dementias and modest accuracy in separating the two subcortical diseases.
Episodic memory; Alzheimer's disease; Huntington's disease; Parkinson's disease
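The sensitivity and specificity figures in the preceding abstract come from thresholding discriminant scores. As a minimal sketch of that arithmetic (the data, threshold rule, and function names below are hypothetical illustrations, not the study's actual analysis):

```python
# Sketch: classify "cortical" (1) vs. "subcortical" (0) profiles from a single
# discriminant score, then read sensitivity and specificity off a threshold
# sweep, as in an ROC analysis. All data here are synthetic.

def sens_spec(scores, labels, threshold):
    """Sensitivity/specificity when score >= threshold predicts class 1."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical discriminant scores: higher values = more "cortical" pattern.
scores = [2.1, 1.8, 1.5, 0.9, 0.4, -0.2, -0.8, -1.1, -1.6, -2.0]
labels = [1,   1,   1,   1,   0,   1,    0,    0,    0,    0]

# Youden-style choice: the observed score that maximizes sens + spec.
best = max(
    ((t, *sens_spec(scores, labels, t)) for t in scores),
    key=lambda x: x[1] + x[2],
)
print(best)
```

Sweeping the threshold over all observed scores traces out the ROC curve; the criterion here simply picks the cut point with the best combined sensitivity and specificity.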
The Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) has demonstrated adequate sensitivity in detecting cognitive impairment in a number of neuropsychiatric conditions, including Alzheimer's disease. However, its ability to detect milder cognitive deficits in the elderly has not been examined. The current study examined the clinical utility of the RBANS by comparing two groups: Patients with Mild Cognitive Impairment (MCI; n = 72) and cognitively intact peers (n = 71). Significant differences were observed on the RBANS Total score, 3 of the 5 Indexes, and 6 of the 12 subtests, with individuals with MCI performing worse than the comparison participants. Specificity was very good, but sensitivity ranged from poor to moderate. Areas under the receiver operating characteristic curves for the RBANS Immediate and Delayed Memory Indexes and the Total Scale score were adequate. Although significant differences were observed between groups and the areas under the curves were adequate, the lower sensitivity values of the RBANS suggest that caution should be used when diagnosing conditions such as MCI.
Mild Cognitive Impairment; Diagnostic accuracy; Repeatable Battery for the Assessment of Neuropsychological Status
In the 19th century, Hughlings Jackson relied on clinical history, seizure semiology, and the neurologic examination as methods for seizure localization to inform the first epilepsy surgeries. In the 20th century, psychological and neuropsychological tests were first employed as both diagnostic and prognostic measures. The contemporary practice of epilepsy evaluation and management includes neuropsychology as a critical component of epilepsy care and research, and epilepsy and neuropsychology have enjoyed a very special and synergistic relationship. This paper reviews how epilepsy has shaped the practice of neuropsychology as a clinical service by asking critical questions that only neuropsychologists were in a position to answer, and how clinical care of epilepsy patients has been significantly improved based on neuropsychology's unique contributions.
The oral version of the Trail Making Test (OTMT) is a neuropsychological measure that provides an assessment of sequential set-shifting without the motor and visual demands of the written TMT (WTMT). Originally designed to serve as an oral analog of the WTMT, the OTMT provides a means to evaluate patients with physical restrictions. However, formal validity studies and available normative data remain sparse. In a sample of healthy adults (n = 81), a strong correlation was observed between OTMT-B and its written counterpart (r = .62), but the correlations were weak between OTMT-A and either written version of the TMT. OTMT-B was significantly correlated with age but not with education or gender, whereas OTMT-A was not significantly correlated with demographic factors. The WTMT to OTMT ratios observed in the current study were generally lower than previously reported and varied across age groups, suggesting that the recommended use of a uniform conversion factor to predict one performance based on the other should be cautiously undertaken. Normative data that have been stratified by age are provided as well as suggestions for using both versions of the TMT in tandem to better elucidate the nature of cognitive deficits and to aid in the localization of cerebral dysfunction.
Neuropsychology; Normative data; Cognitive tests; Trail Making Tests; Neuropsychological assessment
Attention deficit hyperactivity disorder (ADHD) is associated with deficits in executive functioning (EF). ADHD in adults is also associated with impairments in major life activities, particularly occupational functioning. We investigated the extent to which EF deficits assessed by both tests and self-ratings contributed to the degree of impairment in 11 measures involving self-reported occupational problems, employer reported workplace adjustment, and clinician rated occupational adjustment. Three groups of adults were recruited as a function of their severity of ADHD: ADHD diagnosis (n = 146), clinical controls self-referring for ADHD but not diagnosed with it (n = 97), and community controls (n = 109). Groups were combined and regression analyses revealed that self-ratings of EF were significantly predictive of impairments in all 11 measures of occupational adjustment. Although several tests of EF also did so, they contributed substantially less than did the EF ratings, particularly when analyzed jointly with the ratings. We conclude that EF deficits contribute to the impairments in occupational functioning that occur in conjunction with adult ADHD. Ratings of EF in daily life contribute more to such impairments than do EF tests, perhaps because, as we hypothesize, each assesses a different level in the hierarchical organization of EF as a meta-construct.
Adult ADHD; Executive functioning (EF); EF ratings; EF tests; Occupational impairment
Assessing cognitive change in older adults is a common use of neuropsychological services, and neuropsychologists have utilized several strategies to determine if a change is “real,” “reliable,” and “meaningful.” Although standardized regression-based (SRB) prediction formulas may be useful in determining change, SRBs have not been widely applied to older adults. The current study sought to develop SRB formulas on a group of 127 community-dwelling older adults for several widely used neuropsychological measures. In addition to baseline test scores and demographic information, the current study also examined the role of short-term practice effects in predicting test scores after 1 year. Consistent with prior research on younger adults, baseline test performances were the strongest predictors of future test performances, accounting for 25%–58% of the variance. Short-term practice effects significantly added to the predictability of all nine of the cognitive tests examined (3%–22%). Future studies should continue extending SRB methodology for older adults, and the inclusion of practice effects appears to add to the prediction of future cognition.
Predicting cognition; Practice effects
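The standardized regression-based (SRB) logic the preceding abstract refers to can be sketched in a few lines: regress follow-up scores on baseline scores in a normative sample, then express an individual's observed retest score as a standardized residual from the prediction. The data and function names below are hypothetical illustrations; the study's actual formulas also included demographics and short-term practice effects as predictors.

```python
from statistics import mean

def fit_srb(baseline, followup):
    """Ordinary least-squares fit of follow-up scores on baseline scores."""
    mx, my = mean(baseline), mean(followup)
    b = sum((x - mx) * (y - my) for x, y in zip(baseline, followup)) / \
        sum((x - mx) ** 2 for x in baseline)
    a = my - b * mx
    resid = [y - (a + b * x) for x, y in zip(baseline, followup)]
    # Standard error of estimate: residual SD with n - 2 degrees of freedom.
    see = (sum(r * r for r in resid) / (len(resid) - 2)) ** 0.5
    return a, b, see

def srb_z(a, b, see, base_score, retest_score):
    """Standardized discrepancy between observed and predicted retest score."""
    return (retest_score - (a + b * base_score)) / see

# Hypothetical normative retest data (same test at baseline and 1 year).
baseline = [40, 45, 50, 55, 60, 42, 48, 53, 58, 62]
followup = [42, 46, 52, 56, 61, 44, 49, 55, 59, 64]
a, b, see = fit_srb(baseline, followup)

# A patient scoring 50 at baseline and 50 at retest, in a sample where most
# people improve slightly, shows a negative z (decline relative to prediction).
print(round(srb_z(a, b, see, 50, 50), 2))
```

A z-score beyond a chosen cutoff (e.g., ±1.645) would flag the observed change as larger than expected given baseline performance, which is the "reliable change" question the abstract describes.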