To identify, characterize and compare the frequency of Mild Cognitive Impairment (MCI) subtypes at baseline in a large, late-life cohort (N=3,063) recruited into a dementia prevention trial.
A retrospective, data-algorithmic approach was used to classify participants as cognitively normal or MCI with corresponding subtype (e.g., amnestic vs. non-amnestic, single domain vs. multiple domain) based on a comprehensive battery of neuropsychological test scores, with and without Clinical Dementia Rating (CDR) global score included in the algorithm.
Overall, 15.7% of cases (n=480) were classified as MCI. Amnestic MCI was characterized as unilateral memory impairment (i.e., only verbal or only visual memory impaired) or bilateral memory impairment (i.e., both verbal and visual memory impaired). All forms of amnestic MCI were almost twice as frequent as non-amnestic MCI (10.0% vs. 5.7%). Removing the CDR = 0.5 (“questionable dementia”) criterion resulted in a near doubling of the overall MCI frequency to 28.1%.
Combining CDR and cognitive test data to classify participants as MCI resulted in overall MCI and amnestic MCI frequencies consistent with other large community-based studies, most of which relied on the “gold standard” of individual case review and diagnostic consensus. The present data-driven approach may prove to be an effective alternative for use in future large-scale dementia prevention trials.
Mild Cognitive Impairment (MCI) is an intermediate state between normal cognitive functioning and dementia (Petersen et al., 1999). The past decade has witnessed an exponential increase in research activity focused on MCI (Petersen & O’Brien, 2006), a critical goal being further refinement and identification of a pre-dementia state of Alzheimer’s disease (AD) or other dementing disorders which could serve as an early clinical target for intervention and therapy development.
However, the MCI construct is not without criticism and debate regarding its clinical utility. At issue is that much of the research validation of MCI has stemmed from specific clinical settings, typically memory disorders specialty clinics. Longitudinal, population-based studies of MCI have resulted in generally lower and much more variable prevalence estimates and rates of progression to dementia (Fisk & Rockwood, 2005; Ganguli et al., 2004; Ritchie et al., 2001). Of particular concern is the issue of stability of MCI in population-based studies, as the rates of MCI subjects who “revert to normal” over follow-up are surprisingly high (e.g., 25-40%; Ganguli et al., 2004; Larrieu et al., 2002; Palmer et al., 2002; Ritchie et al., 2001). Differences among studies in proportions/prevalence, conversion rates, and “stability” of MCI are clearly influenced by important methodological differences such as study setting, subject selection, age, MCI definition and assessment approach.
Earlier emphasis on isolated episodic memory impairment (Petersen et al., 1999) has evolved into recognition of multiple presentations and etiologies beyond just AD. The International Working Group on MCI (Winblad et al., 2004) proposed an MCI classification scheme which is in wide use today, including adaptation by the National Alzheimer’s Coordinating Center for its Uniform Data Set among the 30 National Institute on Aging (NIA)-funded Alzheimer’s Disease Centers across the country. The classification scheme includes decision branches for amnestic MCI (a-MCI) vs. non-amnestic MCI (na-MCI; e.g., language, attention, executive function or visuospatial impairment), as well as for single cognitive domain MCI (MCI-s) vs. multiple domain MCI (MCI-md). The classification scheme theoretically reflects multiple etiologies underlying mild cognitive impairments of aging, such that a-MCI-s may represent a prodromal form of AD, a-MCI-md may reflect incipient AD and/or more vascular etiologies, na-MCI-s may represent prodromal frontotemporal dementia, static vascular damage, etc., and na-MCI-md may evolve into vascular dementia, dementia of Lewy body type (DLB), etc. However, evidence to date for predictive validity of MCI subtypes with longitudinal outcome data is equivocal (Busse et al., 2006; Fischer et al., 2007; Lopez et al., 2007; Ravaglia et al., 2006; Tabert et al., 2006; Zanetti et al., 2006).
Moreover, there is a relative paucity of descriptive data from large community-based cohorts to characterize the frequency with which specific MCI subtypes occur. Lopez et al. (2003a) reported prevalence of 6% for MCI Amnestic Type and 16% for MCI Multiple Cognitive Deficits Type (including cases with memory impairment) from the Cardiovascular Health Study (CHS), but without further sub-typing. Busse et al. (2006) reported baseline prevalence rates of the four basic MCI subtypes from the Leipzig Longitudinal Study of the Aged and found the highest for na-MCI-s (5 to 17%), with no statistically significant differences between a-MCI and na-MCI in estimated prevalence rates. Manly et al. (2005) employed a comprehensive neuropsychological battery and reported a higher frequency of na-MCI (17%) compared to a-MCI (11%) among ethnically diverse northern Manhattan elders; the estimated prevalence for isolated memory impairment was 5%. Thus the frequency of MCI subtypes reported in population cohorts is quite variable, although evidence to date indicates that single-domain amnestic MCI, long the focus of clinic-based studies (Petersen et al., 1999), is consistently among the least prevalent cognitive profiles of MCI in the community.
It is therefore of great interest to further characterize specific cognitive profiles in large community-based cohorts, using comprehensive cognitive testing across multiple cognitive domains. For example, the frequency of verbal-only vs. visual-only vs. bilateral memory impairment in a-MCI is not usually reported and may be useful in further refining prediction of AD progression. Specific patterns of non-memory test impairment within a-MCI-md may be important to delineate, as there is increasing evidence that memory deficits plus secondary deficits, particularly in executive functions, are much more predictive of progression than episodic memory deficits alone (Bozoki et al., 2001; Tabert et al., 2006). Finally, it is important to identify and characterize non-amnestic forms of MCI, most prevalent in population cohorts, so that their clinical course can be followed and better understood, and appropriate interventions initiated as indicated.
A further relevant issue in identifying MCI in community cohorts is the respective roles of the clinical interview vs. formal psychometric testing. The Clinical Dementia Rating (CDR) scale is the most widely used interview instrument in research settings to rate individuals along a continuum from normal to severe dementia, and is based on reports of functional change from the subject and a knowledgeable informant (Morris, 1993). Proponents of the CDR propose that, when used correctly, informant-based report of intra-individual change in daily functioning is a more sensitive predictor of dementia than cognitive testing (Morris et al., 2001). How adequately the meaning of CDR scores generalizes from clinical settings to community settings, however, remains a critical question (Meguro et al., 2004; Petersen, 2004).
The aims of this paper are to identify, characterize and compare the frequency of MCI subtypes in a large, late-life cohort recruited into a dementia prevention trial (the Ginkgo Evaluation of Memory (GEM) Study; DeKosky et al., 2006). The GEM study assessed detailed cognitive and functional status at study entry, using a broad neuropsychological test battery and administration of the CDR with both participant and a collateral source (e.g., family member, close friend, etc.). An important long-term goal is to establish predictive validity of these assessment and classification methods for future dementia prevention trials.
The design of the GEM study has been reported (DeKosky et al., 2006). Briefly, participants were recruited from four clinical sites: the University of Pittsburgh, the University of California-Davis, The Johns Hopkins University, and Wake Forest University. Other centers involved in the administration and conduct of the study are a data coordinating center, a clinical coordinating center, a blood laboratory/repository and a diagnostic center for evaluation of dementia. The Data Coordinating Center at the University of Washington, Department of Biostatistics, handles computer system design, data management and statistical analyses. The Clinical Coordinating Center, centered at Wake Forest University, oversees clinical operations, adherence and retention. Blood samples and DNA are sent to the Central Blood Laboratory at the University of Vermont for analysis and storage. The Cognitive Diagnostic Center is located at the University of Pittsburgh and has oversight of cognitive screening and entry of cases into the study, establishing the neuropsychological test battery, reviewing all evaluation data on suspected cognitively declined cases, and final adjudication of diagnosis.
Detailed information regarding recruitment of GEM participants has been published (Fitzpatrick et al., 2006). Recruitment began in September of 2000 and was completed in May of 2002. The goal of GEM was to include individuals with normal cognition or MCI but to exclude those with dementia. Inclusion criteria included English as the primary language and age 75 years and above. Participants with neurological or neurodegenerative diseases that by themselves could significantly affect cognition, or who carried a higher risk of dementia (e.g., Parkinson’s disease), were excluded, as were those hospitalized for depression in the past year. Similarly, anyone on cognition-enhancing medications or treatments for Alzheimer’s disease (AD) (cholinesterase inhibitors or glutamate receptor modulators) was excluded, as were individuals who would not agree to restrict their vitamin E intake to 400 mg/day. Use of over-the-counter Ginkgo biloba was also an exclusion, but the use of other herbal products was permitted. Because of questions concerning increased bleeding risks associated with Ginkgo biloba (Ang-Lee et al., 2001; Rosenblatt & Mindel, 1997), individuals with bleeding disorders, thrombocytopenia, those taking anticoagulants, or those with other similar risks were excluded. A critical inclusion criterion was the requirement that each participant identify an individual willing to serve as a proxy, someone who had direct or phone contact with the participant on average at least ten hours per week. This person served as the proxy for the CDR.
A total of 3,072 participants were recruited into the GEM study. A variety of recruitment approaches were used (Fitzpatrick et al., 2006). The vast majority of participants were recruited via mass mailings based on voter registration lists, commercially purchased mailing lists, and university lists. Fewer than eight percent of participants responded to media advertisements. Three participants were subsequently excluded because of protocol violations not detected at baseline, resulting in a final baseline N of 3,069.
This study was approved by the University of Pittsburgh Institutional Review Board and was conducted in accordance with the Helsinki Declaration (http://www.wma.net/e/policy/17-c_e/html). Recruitment and screening were done in three phases: phone, clinic screening, and baseline assessment, with cognitive, medical and other exclusion criteria applied at each stage. The initial telephone interview included a self-reported medical history and the Telephone Interview for Cognitive Status (TICS) (Brandt et al., 1988) using an exclusionary cut-off score of 28 or below. Individuals meeting criteria at this stage were scheduled for a clinic screening visit, during which all participants and their proxies completed the informed consent process. The screening visit included administration of the Modified Mini-Mental State examination (3MSE) (Teng & Chui, 1987), the Center for Epidemiological Studies Depression (CES-D) scale (Radloff, 1977), and detailed health history and activities of daily living questionnaires. Cut-off scores predicting dementia were established for the 3MSE based on data from the Cardiovascular Health Study (CHS) cohort (Lopez et al., 2003b). Individuals scoring less than 80 on the 3MSE were excluded. The remaining participants completed the comprehensive neuropsychological test battery, which included tests within five cognitive domains typically evaluated as part of a work-up for dementia: attention/psychomotor speed; memory for verbal and visual material; language functions; visuospatial/constructional ability; and executive functions including working memory ability (Table 1). The battery also included two tests to estimate premorbid intellectual functioning, the American version of the National Adult Reading Test (Grober & Sliwinski, 1991) and Raven’s Coloured Progressive Matrices (Raven, 1956). Cognitive testing took place in a quiet area without distraction.
All technicians administering neuropsychological tests were trained and certified by experienced study personnel.
We established cut-off scores on neuropsychological tests for the purposes of screening out dementia. These were based on CHS test data from cognitively normal participants, and were defined as a score equal to or greater than 1.5 standard deviations below the CHS age- and education-stratified means. The neuropsychological tests were grouped into cognitive domains, with a domain defined as two or more tests assessing a cognitive construct. Participants with scores below cut-off on two or more cognitive domains, with one impaired domain being memory, were excluded on the basis that this level of test performance indicated the presence of dementia. Sixty-four participants who had passed the 3MSE screening (i.e., 3MSE ≥ 80) were excluded in this manner (mean age 79.8 years, mean education 13.8 years).
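The dementia screening rule described above can be sketched in Python. The data layout and the within-domain rule (any test below cutoff flags the domain as impaired) are assumptions for illustration only; this is not the study's actual code.

```python
def below_cutoff(score, mean, sd):
    """Impaired = at or below 1.5 SD under the CHS age/education-stratified mean."""
    return score <= mean - 1.5 * sd

def excluded_for_dementia(domain_tests):
    """domain_tests maps a domain name to a list of (score, stratum_mean, stratum_sd).

    The paper excludes participants with scores below cutoff in two or more
    domains, one of which must be memory. How many tests must flag a domain
    is not spelled out, so this sketch assumes any single test below cutoff
    marks its domain as impaired.
    """
    impaired = {domain for domain, tests in domain_tests.items()
                if any(below_cutoff(s, m, sd) for s, m, sd in tests)}
    return len(impaired) >= 2 and "memory" in impaired
```

For example, a participant impaired on one memory test and one language test would be excluded, while a participant impaired only on language would not.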
The screening visit included blood drawn for laboratory screening as well as DNA storage for subsequent apolipoprotein E (ApoE) genotyping. DNA was extracted with a Puregene Kit (QIAGEN; Valencia, CA) and ApoE genotyping was performed utilizing the TaqMan genotyping assays (Applied Biosystems; Foster City, CA) by Dr. M. Ilyas Kamboh, University of Pittsburgh. Of note, 16 participants refused phlebotomy consent, 425 samples had insufficient volume for DNA extraction and 174 did not have sufficient blood drawn for other reasons, resulting in a subset of 2,454 of total baseline participants with ApoE data available.
During the screening visit participants and their proxies were also administered the CDR (Hughes et al., 1982; Morris, 1993), an interviewer-based rating of functional and cognitive abilities. Ratings are given for six domains - memory, orientation, home and hobbies, judgment and problem solving, community affairs and personal care - based on information obtained from both the subject and informant. A scoring algorithm takes into account each category score with an additional weighting for the memory section, yielding a global score from 0 (no dementia) to 3 (severe dementia), with 0.5 representing “questionable dementia.” All GEM study interviewers completed a rigorous CDR training program, including certification via videotaped interviews. Of the participants’ informants who provided information for the CDR ratings, 59.6% reported living with the participants. Of those informants who did not live with a participant, 78% reported visiting with the participant at least once a week and 72.2% reported phone or e-mail contact with participants at least once a week.
Participants who passed the neuropsychological test battery returned to the clinic for their baseline GEM study visit. At that visit they completed the cognitive portion of the Alzheimer’s Disease Assessment Scale (ADAS-Cog) (Rosen et al., 1984), a symptom and health habits questionnaire, and were randomized to one of the two treatment arms (Ginkgo biloba or placebo). This paper will focus on the results of the cognitive assessments, the CDR interview and self-reported health history and daily functioning completed during the screening and baseline visits.
Following guidelines set out by the International Working Group on MCI (Winblad et al., 2004), we developed data-driven criteria to identify baseline MCI cases in this non-demented cohort: 1) “self and/or informant report of cognitive decline” was operationalized as CDR global score = 0.5; 2) “impairment on objective cognitive tasks” was implemented as a minimum of 2 out of 10 selected neuropsychological test scores (Table 1) impaired at or below the 10th percentile of CHS normative data, stratified by age and education (see below); 3) “preserved basic activities of daily living / minimal impairment in complex instrumental functions” was defined by a CDR global rating < 1. Thus the range of functional change identified as consistent with MCI in this non-demented cohort was defined by CDR global score = 0.5, and within this range of mild cognitive decline in daily life we required evidence of objective impairment on formal testing. However, we also report results of cognitive test impairment regardless of CDR score, to examine the impact of CDR rating on MCI subtype frequencies, and eventually, to examine specific predictors of outcome individually and in combination (i.e., clinical interview data and psychometric test scores).
CHS normative neuropsychological test data were derived from 432 participants clinically adjudicated as having normal cognition based on the same comprehensive neuropsychological battery as the one used in the GEM study (Lopez et al., 2003a; Lopez et al., 2007). CHS adjudication had included review of serial neuropsychological data, proxy report of functional status, psychiatric symptoms, and neurological exam findings. Cut-off scores on neuropsychological tests were originally derived from an independent sample of 250 unimpaired subjects and were set at > 1.5 SDs below the mean of individuals of comparable age and education for CHS adjudication (Lopez et al., 2003a).
These 432 participants with normal cognition were then stratified into 2 age groups (< 80 vs. ≥ 80 years) × 3 education groups (≤ 12, 13–16, and > 16 years), with individual cell n’s ranging from n = 34 (≥ 80 years and > 16 years of education) to n = 107 (< 80 years and ≤ 12 years of education). Only one cell had an n < 50. These age- by education-defined strata were well matched in mean age and education to the GEM baseline cohort stratified in the same manner (p’s > .05). Tenth percentile score cut-offs were located within each of the six CHS age-by-education strata for each of the 10 cognitive test variables. This level of mild impairment was selected because it is close to the commonly used cut-off of 1.5 S.D. below the normative mean (corresponding to approximately the 7th percentile on a theoretically normal curve). However, we required a somewhat higher, more sensitive cut-off since this cohort had already been screened for dementia. As well, many of the cognitive test scores were not normally distributed and percentile rank cut-offs are therefore more appropriate (Mitrushina et al., 2005). Tenth percentile cut-offs have been used in other MCI studies (Alladi et al., 2006; Schoonenboom et al., 2005; Solfrizzi et al., 2004).
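The stratification and cutoff derivation above can be sketched as follows. The paper does not state which percentile definition was used, so the nearest-rank convention here is one common choice, and all names are illustrative.

```python
import math

def stratum(age, education):
    """Assign one of the six age-by-education normative strata described above."""
    age_band = "<80" if age < 80 else ">=80"
    edu_band = "<=12" if education <= 12 else ("13-16" if education <= 16 else ">16")
    return (age_band, edu_band)

def tenth_percentile(scores):
    """Nearest-rank 10th percentile of a list of normative scores."""
    ordered = sorted(scores)
    k = max(math.ceil(0.10 * len(ordered)) - 1, 0)
    return ordered[k]

def build_cutoffs(normative_scores):
    """normative_scores: {stratum: {test_name: [scores from CHS normals]}}.

    Returns per-stratum, per-test 10th-percentile cutoffs; a GEM participant's
    test score at or below the cutoff for his or her stratum counts as impaired.
    """
    return {st: {test: tenth_percentile(vals) for test, vals in tests.items()}
            for st, tests in normative_scores.items()}
```

A 79-year-old with 12 years of education would be normed against the (< 80, ≤ 12) cell, for instance.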
Impairment on a minimum of two out of ten test variables was selected as the basic cognitive criterion, in an attempt to minimize the number of MCI cases that “revert to normal” on follow-up primarily because of spurious psychometric issues (e.g., measurement error, regression to the mean). Examining the distribution of the number of tests impaired (Figure 1) aided in the decision to choose two tests as a threshold of clinical significance in MCI. For instance, the rate of impairment on one test was relatively common and was not different between CDR 0 and CDR 0.5 participants (28% vs. 27%), suggesting this likely represents the base-rate of test impairment (i.e., diagnostic false positives) observed in healthy older adults (Brooks et al., 2007; Palmer et al., 1998).
Once the basic MCI criteria were fulfilled, mutually exclusive MCI subtypes were defined as follows: a-MCI included all MCI cases with either one (unilateral) or both (bilateral) memory tests impaired, with the remaining cases classified as na-MCI. Single domain MCI, including bilateral amnestic MCI-s, was strictly defined as a full domain impaired (both tests) with no impaired tests outside of that domain (i.e., the number of tests impaired in MCI-s = 2 for all cases). MCI-md was broadly defined as any MCI case that did not qualify for MCI-s. Again, inspection of the distribution of number of domains impaired supported classification decisions: The majority of MCI cases did not have full domains impaired but rather had scattered, cross-domain tests affected (at least two by definition); these cases would be classified as MCI-md by the current algorithm (see Table 2).
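The full decision rule (CDR filter, two-test threshold, amnestic vs. non-amnestic split, single vs. multiple domain) can be sketched as follows. Test and domain names are illustrative placeholders, not the study's actual identifiers.

```python
def classify_mci(cdr_global, impaired_tests, domains):
    """cdr_global: global CDR score; impaired_tests: set of test names at or
    below the 10th-percentile cutoff; domains: {domain_name: set of its two
    test names}, with 'memory' holding the verbal and visual recall tests.
    """
    # Basic criteria: CDR 0.5 plus at least 2 of the 10 tests impaired
    if cdr_global != 0.5 or len(impaired_tests) < 2:
        return "cognitively normal"
    # Amnestic if either memory test is impaired (unilateral or bilateral)
    amnestic = "a-MCI" if impaired_tests & domains["memory"] else "na-MCI"
    # Single-domain: one full domain impaired with no impaired tests outside
    # it, which with two-test domains means exactly those two tests
    single = any(impaired_tests == tests for tests in domains.values())
    return amnestic + ("-s" if single else "-md")
```

Under this rule a participant with CDR 0.5 and both memory tests impaired (and nothing else) is a-MCI-s, while one memory test plus any non-memory test yields a-MCI-md, matching the strict single-domain definition in the text.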
Of the total 3,069 participants at baseline, 1,832 (59.7%) had global CDR ratings of 0 (sum of boxes range 0–1.0), 1,231 participants (40.1%) had global CDR ratings of 0.5 (sum of boxes range 0.5–4.5), 2 participants had a CDR rating of 1.0 (but did not meet neuropsychological criteria for dementia at study screening), and 4 were missing CDR data. The 6 participants with either missing CDR ratings or CDR = 1.0 were excluded from the remaining analyses.
The mean age of the baseline GEMS cohort was 78.5 years (SD 3.3) and the mean number of years of education was 14.3 (SD 3.0). The proportion of women was 46.2% and the proportion of participants who identified themselves as white was 95.5%. See DeKosky et al. (2006) for a full description of baseline characteristics of the GEMS cohort.
Figure 2 shows the classification tree for MCI and sub-types following application of the algorithm. Overall, 15.7% of baseline cases were classified as MCI. Amnestic MCI, including both unilateral and bilateral memory impairment cases, was almost twice as frequent as na-MCI (10.0% vs. 5.7%). Full, single-domain MCI (both amnestic and non-amnestic) was rare (1.2%), largely because of the restrictive definition used. Among the unilateral amnestic MCI cases, verbal memory impairment (4.6%) was more frequent than visual memory impairment (2.8%). Visual amnestic MCI cases had a significantly higher mean number of impaired tests (3.4 out of 10) compared to verbal amnestic MCI cases (2.9 out of 10) (p < .01), indicating that they tended to be more impaired overall.
Among the a-MCI cases (n = 305, including bilateral and unilateral a-MCI), the most frequent second full domain impaired was Construction (n = 34, 11.1%), followed by Language (n = 27, 8.8%) and Executive Functions (n = 25, 8.2%). The most frequent non-memory test impaired among a-MCI cases was Rey figure copy (n = 122, 39.9%), followed by animal fluency (n=95, 31.0%), and Stroop interference (n=77, 25.2%).
Because MCI definitions in the literature vary widely, we replicated the classification tree without the initial “filter” of CDR = 0.5 in order to consider a more strictly psychometric definition of MCI. Appendix 1 presents this classification scheme. Inclusion of all cases with CDR 0 and CDR 0.5 results in an overall MCI frequency (28.1 %) almost double the frequency observed when CDR 0.5 was required in the MCI criteria (15.7%). The frequency of a-MCI (including bilateral and unilateral cases) is about 50% more frequent (16.6% vs. 10.0%), and na-MCI is approximately twice as frequent (11.5% vs. 5.7%) when the CDR 0.5 requirement is lifted. Within overall MCI, the distributions of subtypes relative to each other (e.g., single vs. multidomain, amnestic vs. non-amnestic, verbal vs. visual memory impairment) are very similar whether or not CDR 0.5 is applied as a criterion.
Demographic, clinical, cognitive and health-related baseline variables for three MCI subtypes and the cognitively normal group are summarized in Table 3. Compared to the normal group, all MCI groups had scores indicating worse performance on the 3MSE and ADAS, as well as lower scores on the two estimated premorbid intellectual ability tests. In general there were differences across all groups in gender (highest proportion of females among non-amnestic MCI), self-perceived general health, difficulty with any instrumental activity of daily living (IADL), and history of stroke (highest proportion in the unilateral amnestic MCI group). MCI groups tended to be slightly older, tended to have more depressive symptoms (although within a restricted range due to study entry screening), and tended to drink fewer alcoholic drinks per week compared to cognitively normal participants. There were no significant differences among groups in presence of the ApoE ε4 allele. We also tested for a difference in ApoE ε4 allele frequency between all MCI subtypes combined (n = 89, 24.4%) and the normal cognition group (n = 434, 20.9%), which approached, but did not reach, statistical significance (Fisher’s Exact Test, 1-sided, p = .075).
We examined mean memory scores (Table 3) to determine if non-amnestic MCI participants showed memory performance lower than expected, even though these participants did not meet the dichotomous test score cut-off for either memory test. For visual memory but not verbal memory, the non-amnestic MCI group had a significantly lower mean test score than the cognitively normal group.
Finally, we examined whether there were differences across the four GEM sites in the overall frequency of MCI. The proportion of MCI cases was higher at Wake Forest University (WFU) (20.5%) than at the other clinical centers (ranging from 13.3% to 14.9%). WFU also had the highest cognitive ineligibility rate during the screening visit (14.8% compared to 8.7% - 12.4%). Possible reasons for these differences include more limited quality of educational opportunities among this elderly cohort from the South (Manly et al., 2002), and a greater proportion of preclinical cerebrovascular disease in Southern states (El-Saed et al., 2006). However, these factors remain speculative at this time.
The goal of this paper was to characterize the GEM study cohort at baseline with regard to MCI and its sub-types using a data-driven algorithm. Following the guidelines of the International MCI Working Group (Winblad et al., 2004), we required evidence of both cognitive test impairment and caregiver/participant report of decline in implementing the general definition of MCI. Sub-type classification was based upon scores from a comprehensive neuropsychological evaluation covering five domains of cognition. Application of the algorithm resulted in an overall MCI frequency of 15.7%. Although comparison to prevalence rates reported in other studies is difficult due to variable definitions, impairment thresholds, and assessment and classification methods, our 15.7% frequency of overall MCI (including non-amnestic subtypes) in GEM is consistent with other reports from the large community-based studies that examined all subtypes of MCI, such as CHS (19%; Lopez et al., 2003a), the Leipzig Longitudinal Study of the Aged (15.0%, all MCI, 1.0 SD level; Busse et al., 2003) and other community-based studies (16.2%; Zanetti et al., 2006). Our overall MCI frequency reported here is much lower than the estimated prevalence of 28.3% from an ethnically diverse cohort in northern Manhattan (Manly et al., 2005), although it is important to note that those northern Manhattan participants had a mean educational level of 8.2 years, compared to 14.3 years in the GEM study.
The 10.0% frequency of all amnestic MCI observed in the GEM baseline cohort (unilateral plus bilateral amnestic MCI cases) is also difficult to compare to other studies, most of which have focused on the Petersen et al. (1999) definition of isolated memory impairment. The frequency of isolated memory impairment presently reported is very low (0.6%) because of the strict definition: two memory tests impaired and no other test impaired, within the context of reported cognitive decline (CDR of 0.5). If cases with isolated, unilateral memory impairment were included (n = 107), however, the total frequency of isolated memory impairment (n = 124, 4.0%) is in line with other studies which did not require both visual and verbal delayed recall deficits (5%, CHS, Lopez et al., 2003a; 2.8%, PAQUID, Larrieu et al., 2002; 2.9–4.0%, MoVIES, Ganguli et al., 2004; 5%, CSHA, Fisk et al., 2003; 6.4%, Northern Manhattan Elders Study, Manly et al., 2005). An advantage of the present approach is the ability to examine and track the evolution of verbal vs. visual memory impairment separately. Many population studies of MCI use only verbal memory measures (as do many clinical evaluations by practitioners). Our data indicate that almost half of all MCI cases have one modality of memory tests impaired but not the other, while within this sub-group, eighty-five individuals (37%) were impaired on the visual memory test only. The visual amnestic MCI cases also tended to be more impaired than the verbal amnestic MCI cases, with a higher mean number of tests impaired. Certainly, the bilateral amnestic MCI cases (16% of all MCI) are predicted to be at highest risk for progression to AD. To what extent assessing both modalities of memory increases predictive validity for future cognitive decline can be determined on follow-up analyses.
Within the amnestic MCI group, impairments on non-memory tests were relatively common and distributed fairly evenly among tests of visual-spatial construction, language and executive functions. This finding is consistent with the meta-analysis by Bäckman et al. (2005) of cognitive deficits in preclinical AD, which reported sizeable deficits in multiple cognitive domains in addition to episodic memory. Again, future studies from this cohort will examine which cognitive measures in addition to memory will be most predictive of progression to dementia.
The relative contributions of the informant-based clinical interview (CDR) and formal psychometric testing, and the combination of the two approaches as illustrated here in the basic definition of MCI, will also be determined on follow up. Forty percent of GEM study participants at baseline were rated with a global CDR score of 0.5, indicating “questionable dementia.” While this number appears surprisingly high, Meguro et al. (2004) reported an overall prevalence of CDR 0.5 of 30.2% in a large Japanese community-dwelling cohort. These rates raise the question of the ‘transferability’ of the CDR scale, originally developed for use in specialty memory disorder clinics, to community studies with clearly different selection and population characteristics, even when staff are trained with the standardized methods utilized in the specialty clinics. The meaning, interpretation and predictive validity of CDR 0.5 in community/population studies will be compared to clinic-based studies in follow-up. Notably, the CDR and neuropsychological test results are concordant to a large degree (e.g., Table 3; Figure 1); however, there is also considerable independence, e.g., a full 49% of CDR 0 participants had one or more cognitive tests impaired, and a sizeable proportion (9%) had at least three tests impaired. Furthermore, 34% of CDR 0.5 participants had no tests impaired. Clearly these two assessment approaches overlap only partially and provide complementary kinds of information about cognitive decline.
An important limitation of the present study which may confine the generalizability of the findings is the demographic nature of the cohort, that being mostly Caucasian, healthy, and relatively highly educated. Willingness to participate and be randomized in a long-term clinical trial is a selection factor that makes this cohort less representative. However, the differences will be instructive, as effective implementation of large-scale dementia prevention trials depends upon comprehensive understanding of typical participant characteristics that will have an impact on outcome, e.g., cognitive test performance, informant-based reports of cognitive/daily functional decline, etc. It is also noteworthy that less than eight percent of participants had initially responded to study recruitment media advertisements, motivated perhaps by memory or other cognitive concerns. This subset of participants may be significant enough in number to upwardly bias the reported frequency of MCI, yet it is not nearly as high as in most clinical trials in which the majority of participants are self-selected. Nevertheless, it is the case that recruitment and selection characteristics of the GEM study cohort are likely a unique hybrid of representative community sampling and clinical trial selection bias.
Another issue to consider is the nature of the MCI case definition, which is retrospective and algorithm-based rather than prospective with case-by-case clinical adjudication, the latter two features strongly advocated by Petersen (Petersen & Knopman, 2006) and others. One aim of the algorithm we developed was to take a conservative approach and maximize specificity, perhaps at the expense of sensitivity, in order to minimize the rate of “reversion to normal” observed with high frequency in numerous population studies (Ganguli et al., 2004; Larrieu et al., 2002; Palmer et al., 2002; Ritchie et al., 2001; Visser et al., 2002). To achieve this aim, we required evidence of at least two impaired tests, as well as informant report of cognitive decline, for the MCI case definition, a definition more restrictive than those of many population studies that have shown instability of the MCI phenotype. Follow-up analyses will determine the utility of this approach.
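The conservative decision rule described above can be sketched in code. This is an illustrative simplification only, not the study's actual algorithm: the function name, inputs, and the use of the CDR global score as a stand-in for informant-reported decline are assumptions for the sketch.

```python
def classify_participant(impaired_test_count: int, cdr_global: float) -> str:
    """Hypothetical sketch of the conservative classification rule:
    require BOTH psychometric evidence (at least two impaired tests)
    AND informant-based evidence (CDR global score of 0.5) before
    classifying a participant as MCI. Requiring both sources trades
    sensitivity for specificity, aiming to reduce later "reversion
    to normal."
    """
    if impaired_test_count >= 2 and cdr_global == 0.5:
        return "MCI"
    return "cognitively normal"

# Two impaired tests alone are not sufficient without CDR 0.5:
print(classify_participant(3, 0.0))  # cognitively normal
print(classify_participant(2, 0.5))  # MCI
```

Dropping the CDR condition from the conjunction corresponds to the alternate, test-only scheme, under which the overall MCI frequency nearly doubled (15.7% to 28.1%).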
Alternate classification scheme of the GEM study baseline cohort for MCI and composite MCI cognitive subtypes, without regard to CDR score. Percentages in diamonds represent frequencies of MCI subtypes referenced to the entire cohort with valid CDR 0 or CDR 0.5, N = 3,063.
Richard L. Nahin, PhD, MPH, Barbara C. Sorkin, PhD, National Center for Complementary and Alternative Medicine
Linda Fried, MD, MPH, Michelle Carlson, PhD, Pat Crowley, MS, Claudia Kawas, MD, Paulo Chaves, MD, Joyce Chabot, Johns Hopkins University; John Robbins, MD, MHS, Katherine Gundling, MD, Sharene Theroux, CCRP, Lisa Pastore, CCRP, University of California-Davis; Lewis Kuller, MD, DrPH, Roberta Moyer, CMA, Cheryl Albig, MA, University of Pittsburgh; Gregory Burke, MD, Steve Rapp, PhD, Dee Posey, Margie Lamb, RN, Wake Forest University School of Medicine
Robert Hörr, MD, Joachim Herrmann, PhD.
Richard A. Kronmal, PhD, Annette L. Fitzpatrick, PhD, Fumei Lin, PhD, Cam Solomon, PhD, Alice Arnold, PhD, University of Washington
Steven DeKosky, MD, Judith Saxton, PhD, Oscar Lopez, MD, Beth Snitz PhD, M. Ilyas Kamboh PhD, Diane Ives, MPH, Leslie Dunn, MPH, University of Pittsburgh
Curt Furberg, MD, PhD, Jeff Williamson, MD, MHS; Nancy Woolard, Kathryn Bender, Pharm.D., Susan Margitić, MS, Wake Forest University School of Medicine
Russell Tracy, PhD, Elaine Cornell, UVM, University of Vermont
William Rothfus MD, Charles Lee MD, Rose Jarosz, University of Pittsburgh
Richard Grimm, MD, PhD (Chair), University of Minnesota; Jonathan Berman, MD, PhD (Executive Secretary), National Center for Complementary and Alternative Medicine; Hannah Bradford, M.Ac., L.Ac., MBA; Carlo Calabrese, ND, MPH, Bastyr University Research Institute; Rick Chappell, PhD, University of Wisconsin Medical School; Kathryn Connor, MD, Duke University Medical Center; Gail Geller, ScD, Johns Hopkins Medical Institute; Boris Iglewicz, PhD, Temple University; Richard S. Panush, MD, Department of Medicine, Saint Barnabas Medical Center; Richard Shader, PhD, Tufts University.