Practice effects are improvements in cognitive test performance associated with repeated administrations of the same or similar measures and are traditionally seen as error variance. However, there is growing evidence that practice effects provide clinically useful information.
Within-session practice effects (WISPE) across 2 hours were collected from sixty-one non-consecutive patients referred for suspected dementia and compared with the Mini-Mental State Examination (MMSE), a screening measure of dementia severity.
In all patients, WISPE on two cognitive measures were significantly correlated with MMSE, even after controlling for baseline cognitive scores (partial r=0.47, p<0.001; partial r=0.26, p=0.046). In patients diagnosed with probable Alzheimer’s disease, the relationships were even stronger (partial r=0.72, p<0.01; partial r=0.58, p=0.046). In both groups, lower WISPE were associated with lower MMSE scores (i.e., greater dementia severity), even after controlling for initial cognitive scores.
If future research validates these findings with longitudinal studies, then WISPE may have important clinical applications in dementia evaluations.
Practice effects are improvements in cognitive test performance associated with multiple factors, including repeated administrations of the same or similar measures. Traditionally, these improvements following re-administration of a test are considered systematic error variance that needs to be controlled. However, there is growing evidence that practice effects provide clinically useful information, especially in older adults with memory impairments. For example, practice effects seem to provide diagnostic information that separates intact elders from those with milder cognitive impairments. In these cases, intact individuals display the expected practice effects on retesting, whereas those with cognitive impairments show diminished practice effects [2–7]. Prognostically, practice effects across shorter retest intervals have been shown to predict cognitive outcomes across longer intervals in three different neuropsychiatric samples. In a study focusing on amnestic Mild Cognitive Impairment, practice effects across one week also predicted cognitive performance after one year, above and beyond baseline cognitive performance. Practice effects may serve as a proxy of neural integrity, and when individuals stop benefitting from prior experiences (i.e., decreased practice effects), their functioning declines.
Practice effects have also been linked to treatment response. In one study, individuals with higher practice effects showed a better response to a memory training course than those with lower practice effects. Learning potential is a construct similar to practice effects, in which within-session training on a cognitive test provides evidence of cognitive plasticity. Learning potential has been shown to predict training outcomes in older adults and patients with schizophrenia [12–14]. These studies seem to suggest that practice effects (or learning potential) can be used to identify those who might benefit from cognitive intervention, which could more appropriately utilize limited resources.
Although a number of studies have examined the prognostic value of practice effects in healthy elders and those with mild cognitive impairments, few have examined the clinical benefits of repeated testing among patients with dementia. The current study examined the relationship between within-session practice effects (WISPE) and current global functioning, a reasonable prognostic indicator in patients referred to a dementia clinic. We expected WISPE scores to be positively associated with current global functioning, with smaller practice effects correlating with lower global cognition, especially in amnestic conditions (e.g., amnestic Mild Cognitive Impairment, Alzheimer’s disease).
Sixty-one non-consecutive patients referred to a Cognitive Disorders Clinic for an evaluation for suspected dementia provided data for the current study. Their mean age was 73.3 (7.8) years and their mean education was 14.7 (2.8) years. The sample included slightly more females (56%). Their premorbid intellect tended to be average (Test of Premorbid Functioning/Wechsler Test of Adult Reading = 42nd percentile, range = <1st – 91st percentiles), but their current global cognition was borderline impaired (Mini-Mental State Examination [MMSE] = 25.5 [3.5], range = 13 – 30). On average, they reported only minimal depressive symptoms (30-item Geriatric Depression Scale = 7.9 [5.9]). Following a thorough evaluation (described below), their diagnoses included: Mild Cognitive Impairment (42%), probable Alzheimer’s disease (21%), Vascular Cognitive Impairment/vascular dementia (10%), frontotemporal dementia (8%), depression (5%), and other (13%). The only inclusion criterion was that patients were able to complete the initial and repeated testing. No additional exclusion criteria were employed.
All procedures, including the use of de-identified patient data, were approved by the local Institutional Review Board prior to the study’s commencement. All patients referred to the dementia clinic underwent a thorough evaluation by a board-certified neurologist with specialty training in dementia diagnosis and care. Each evaluation included: initial clinical interview with patient and collateral (if possible), mental status examination, review of systems, physical and neurological examination, lab work, magnetic resonance imaging, neuropsychological testing, positron emission tomography (if necessary to aid in differential diagnosis), and a second clinical visit to provide diagnostic impressions and treatment recommendations.
As part of a larger neuropsychological battery, all patients were administered the MMSE, a widely-used, 30-point measure of global cognition and dementia severity. The total raw score from the MMSE was used. Two other cognitive measures used as part of the neuropsychological evaluation were the Hopkins Verbal Learning Test – Revised (HVLT-R) and the Coding subtest of the Wechsler Adult Intelligence Scale – III/IV [17,18]. In the HVLT-R, participants are presented with 12 words to recall across three successive learning trials. The total number of words recalled across all three learning trials was used. In the Coding subtest, participants have 120 seconds to use a reference key to pair as many numeric digits as possible with their corresponding geometric figures. The number of correctly paired items was used. The HVLT-R and Coding subtest were given twice during the neuropsychological evaluation, once at the beginning of the battery and once at the end (typically separated by approximately 2 hours). All neuropsychological testing was completed in a single session. Alternate test forms were not used for the HVLT-R or Coding.
To calculate WISPE, the scores from the initial administrations of the HVLT-R and Coding were subtracted from the scores of their respective repeated administrations (e.g., repeated HVLT-R − initial HVLT-R). The WISPE on the HVLT-R was correlated with the MMSE, after controlling for initial scores of the HVLT-R via partial correlations. Similarly, the WISPE on Coding was correlated with the MMSE, after controlling for initial scores of Coding via partial correlations. Given the limited number of analyses, alpha was set at 0.05.
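The difference-score and partial-correlation procedure above can be sketched as follows. This is an illustrative reconstruction using synthetic data, not the study's actual analysis code; the function `partial_corr` and all variable names are assumptions for demonstration only.

```python
import numpy as np

def partial_corr(x, y, covar):
    """Partial correlation of x and y controlling for covar,
    computed by correlating the residuals of x and y after
    regressing each on the covariate (plus an intercept)."""
    design = np.column_stack([np.ones_like(covar), covar])
    # Residualize x and y on the covariate via least squares
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Synthetic example: initial and repeated HVLT-R scores plus MMSE
rng = np.random.default_rng(0)
hvlt_initial = rng.normal(20, 5, 61)
hvlt_repeat = hvlt_initial + rng.normal(2, 3, 61)  # within-session retest
mmse = rng.normal(25.5, 3.5, 61)

# WISPE = repeated administration minus initial administration
wispe = hvlt_repeat - hvlt_initial
# Correlate WISPE with MMSE, controlling for the initial score
r_partial = partial_corr(wispe, mmse, hvlt_initial)
```

Residualizing both variables on the covariate and correlating the residuals is the standard definition of a first-order partial correlation; dedicated routines (e.g., in the `pingouin` package) give the same result.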
Relevant cognitive test scores are presented in the Table. For the total sample, WISPE on the HVLT-R significantly correlated with the MMSE, after controlling for scores on the initial administration of the HVLT-R (partial r=0.47, p<0.001, d=1.06). The Coding WISPE also significantly correlated with the MMSE, after controlling for initial Coding scores (partial r=0.26, p=0.046, d=0.54). When only those participants diagnosed with probable Alzheimer’s disease were considered (n=13), the HVLT-R WISPE continued to be significantly correlated with the MMSE (partial r=0.72, p=0.009, d=2.06), as did the Coding WISPE (partial r=0.58, p=0.046, d=1.44), even after their respective initial administration scores were partialled out. When only those diagnosed with Mild Cognitive Impairment were considered (n=26), there was a significant partial correlation between Coding WISPE and the MMSE (partial r=0.45, p=0.029, d=0.99), but HVLT-R WISPE only trended in the expected direction (partial r=0.37, p=0.073, d=0.80). In all instances, lower WISPE were associated with lower MMSE scores, even after controlling for initial cognitive scores. The relationships between WISPE on the HVLT-R and MMSE for each group are presented in the Figure.
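The effect sizes reported alongside the partial correlations are broadly consistent with the common conversion from a correlation coefficient to Cohen's d. The snippet below is illustrative only; the paper does not state which formula was used, and some reported values may reflect additional rounding or degrees-of-freedom adjustments.

```python
import math

def r_to_d(r):
    """Convert a (partial) correlation r to Cohen's d
    via the standard formula d = 2r / sqrt(1 - r^2)."""
    return 2 * r / math.sqrt(1 - r ** 2)

print(round(r_to_d(0.47), 2))  # HVLT-R WISPE, total sample -> 1.06
print(round(r_to_d(0.26), 2))  # Coding WISPE, total sample -> 0.54
```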
Although practice effects have been previously examined in patients with Mild Cognitive Impairment and dementia [19–22], few of these studies have considered the valuable clinical information that practice effects could provide. Recent research, however, suggests that practice effects may provide clinically useful information about diagnosis, prognosis, and treatment response in older adults with memory impairments [3–5,7,9–11]. Results of the current study extend these prior findings by demonstrating that practice effects provide information about the severity of cognitive impairment and dementia in these older subjects. In our sample of patients referred for a dementia evaluation, WISPE on two cognitive measures showed positive partial correlations with a measure of global cognition and dementia severity. Lower WISPE were associated with worse cognition/greater dementia severity. As we continue to follow these patients, it will be interesting to see if these baseline practice effects are also associated with future cognition (i.e., cognitive trajectories). It should be noted, however, that the correlations between WISPE and MMSE were modest, with much additional variance unaccounted for.
Not only were the WISPE related to overall cognition in the entire sample, but subgroups of patients demonstrated relatively consistent results (see Figure). For example, among those patients diagnosed with probable Alzheimer’s disease, the relationship between WISPE and global cognition was stronger than for the entire sample. Less striking trends were observed in the subgroup of cases with Mild Cognitive Impairment. Even in the remainder of the participants, which included a range of diagnoses (e.g., vascular dementia, frontotemporal dementia, depression), WISPE on the HVLT-R tended to be related to MMSE scores (partial r=0.33, p=0.14, d=0.69). These subgroup analyses suggest that practice effects provide valuable clinical information for a range of dementia-related conditions [8,23].
Our subgroup analyses are also largely consistent with existing literature on practice effects across the range of dementia. In two studies by Cooper and colleagues [19,20], 17–31% of patients with Mild Cognitive Impairment or Alzheimer’s disease displayed significant improvements on repeat testing (i.e., practice effects). Duff et al. reported that nearly 50% of their participants with amnestic Mild Cognitive Impairment displayed practice effects. As seen in the Figure, approximately 30% of patients diagnosed with probable Alzheimer’s disease showed positive practice effects (i.e., improvement on repeated testing) and 46% of patients with Mild Cognitive Impairment showed these practice effects. This accumulating evidence suggests that learning through practice is possible even in patients with severe cognitive disturbances, which could have implications for their diagnosis and treatment.
As reported in the Table, there is a surprising amount of variability in the WISPE scores in this dementia referral sample. For example, the standard deviations of the WISPE variables are quite large (even larger than the mean improvements on repeat testing). These large standard deviations, along with the partial correlations mentioned above, suggest that practice effects are not uniform across all patients, and they could serve as individual difference variables to enrich samples for clinical trials. For example, patients who do not demonstrate practice effects might be preferentially selected for clinical trials, as they may be progressing more quickly. Additionally, approximately one quarter of all cases showed worse scores on repeated testing within the same session (HVLT-R = 20%, Coding = 31%). Conversely, in a study of community-dwelling and cognitively intact seniors, no participants showed within-session declines on a similar list learning test and only 8% showed declines on a similar coding task. These data continue to support the idea that an absence of practice effects is an indicator of poorer current and future cognition.
Despite the growing body of literature to support the clinical value of practice effects, there are some limitations of the current study. First, the MMSE is a gross measure of cognition and dementia, and more sensitive methods could have been used. However, even with this bedside screening instrument, significant findings were observed. Second, data on non-consecutive patients were collected. Only those clinical cases that were thought to be able to complete the initial and repeated testing were funneled into this study. So it is possible that more impaired patients (e.g., MMSE<13) might have a different (or non-existent) relationship between practice effects and global cognition. Third, the current study examined the relationship between practice effects and current global cognition, when future cognition is probably more relevant. While we see this study as a first step in extracting useful clinical information from practice effects (e.g., dementia severity), we are continuing to follow these patients across time to more accurately determine the prognostic value of practice effects in dementia. Finally, although our interpretation of the findings supports our hypothesis, other explanations may also be possible. For example, it is possible that practice effects reflect factors other than learning, such as cognitive fluctuations, examiner-examinee relationships, or patient fatigue.
The project described was supported by research grants from the National Institute on Aging: K23 AG028417-01. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute on Aging or the National Institutes of Health.
Conflict of Interest: None.
Description of authors’ roles: K. Duff was involved in formulating the research question, designing the study, carrying it out, analyzing the data, and writing the article. G. Chelune was involved in formulating the research question, designing the study, analyzing the data, and writing the article. K. Dennett was involved in carrying out the study, analyzing the data, and writing the article.