J Neurol Neurosurg Psychiatry. 2007 August; 78(8): 790–799.
Published online 2006 December 18. doi: 10.1136/jnnp.2006.095414
PMCID: PMC2117747

A review of screening tests for cognitive impairment

Abstract

The merit of screening for dementia and cognitive impairment has been the subject of recent debate. One of the main limitations in this regard is the lack of robust evidence to support the many screening tests available. Although such instruments are plentiful, few have been well validated in the populations for which they are intended to be used. In addition, it is likely that “one size does not fit all” in cognitive screening, leading to the development of many specialised tests for particular types of impairment. In this review, we sought to ascertain the number of screening tools currently available, and to examine the evidence for their validity in detecting different diagnoses in a variety of populations. A further consideration was whether each screen elicited indices of a range of cognitive, affective and functional domains or abilities, as such information is a valuable adjunct to simple cut‐off scores. Thirty‐nine screens were identified and discussed with reference to three purposes: brief assessment in the doctor's office; large scale community screening programmes; and identifying profiles of impairment across different cognitive, psychiatric and functional domains/abilities, to guide differential diagnosis and further assessment. A small number of screens rated highly for both validity and content. This review is intended to serve as an evaluative resource, to guide clinicians and researchers in choosing among the wide range of screens which are currently available.

As our population grows older, the issue of screening for dementia and cognitive impairment will become increasingly important. It is now well accepted that the incidence of dementia is on the rise (eg, it has been forecast that the annual number of new cases of Alzheimer's disease (AD) in the US will double by the year 2050).1 Improvements in survival rates following stroke mean that we will see an increase in vascular and post‐stroke dementias: approximately 30% of stroke patients go on to develop a progressive dementia.2,3 At present, a typical UK primary care general practitioner (GP) with a list of 2000 patients sees one or two new cases of dementia each year, and has 10 or 12 existing cases.4 Significant increases in these numbers in the coming decades will present a challenge to GPs, many of whom have difficulty diagnosing dementia at current levels.5 A review by the US Preventive Services Task Force6 reported that between 50% and 66% of patients found to have dementia in primary care samples had no such diagnosis in their medical notes. The authors suggested that “new screening in primary care practice could therefore potentially double the number of patients who receive a diagnosis of dementia”. However, the authors did not advocate global screening for dementia, because of potential adverse factors such as distress caused by false positive results, a paucity of efficacious treatment options and a lack of evidence that early detection significantly improves outcomes for patients. On the other hand, any benefit that can be gained from AD treatment medications is usually most apparent in the earlier stages, and early detection also allows for more careful planning of financial and support systems when the patient is in a position to make their wishes known.7 A consensus is emerging that (in primary care settings at least) screening should be applied to patients aged over 75 years, and to younger patients when there is reason to suspect cognitive impairment.8

A key point in the screening debate is the suitability of currently available screening instruments: few screens have been validated in the populations for which they are intended to be used, many have low accuracy for mild levels of impairment, and there are often demographic biases in score distributions. Although “no single instrument for cognitive screening is suitable for global use”,8 clinician surveys indicate that the Mini‐Mental State Examination (MMSE)9 remains by far the most widely used in practice.10 Boustani et al6 recommend further research into alternative brief screening tools before routine screening can be advocated unreservedly.

It could be said that the basic purpose of cognitive screening tests is to indicate likelihood of genuine cognitive impairment, inferred from the relationship of the patient's score to reference norms. A very impaired score (along with supporting history) may lead a physician to make a diagnosis without further investigation; a borderline score may prompt referral for specialist assessment (eg, at a memory clinic), where available. The success of a particular screening tool for this purpose will lie in its statistical robustness—ideally, high sensitivity and specificity along with a high positive predictive value in a population with a relevant base rate of impairment. Sensitivity refers to the proportion of people who have an impairment who are classified by the screen as impaired; specificity refers to the proportion of people who do not have an impairment who are classified by the screen as unimpaired; positive predictive value refers to the proportion of people who are classified by the screen as impaired who really are impaired (this statistic is not always reported in validation papers). Time pressure in the clinical consultation means that this robustness must be achievable in the minimum time possible, using an instrument which is easy to administer. This imperative has led to the development of extremely brief one or two task screens, with an emphasis on predictive performance, often in narrowly delineated patient groups.
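These definitions reduce to simple ratios over a screening confusion matrix, and the dependence of positive predictive value on the base rate of impairment is worth making explicit. The following Python sketch, using purely illustrative (hypothetical) figures, shows why a screen with the same 90% sensitivity and specificity yields very different predictive values in a high base rate memory clinic than in unselected primary care:

```python
def screen_stats(tp, fp, tn, fn):
    """Sensitivity, specificity and positive predictive value from the
    four cells of a screening confusion matrix."""
    sensitivity = tp / (tp + fn)   # impaired people correctly flagged
    specificity = tn / (tn + fp)   # unimpaired people correctly cleared
    ppv = tp / (tp + fp)           # flagged people who are truly impaired
    return sensitivity, specificity, ppv

def ppv_at_base_rate(sensitivity, specificity, prevalence):
    """PPV for given test characteristics at a given base rate (Bayes)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical screen: 90% sensitivity, 90% specificity.
print(round(ppv_at_base_rate(0.90, 0.90, 0.50), 2))  # memory clinic: 0.9
print(round(ppv_at_base_rate(0.90, 0.90, 0.05), 2))  # primary care: 0.32
```

The sharp drop in PPV at low prevalence is why validation papers that omit this statistic, or that report it only from enriched specialist samples, can overstate a screen's usefulness in community settings.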

While the benefits of the statistical probability–calculation approach are clear (maximising detection while minimising unnecessary investigation of healthy people), several drawbacks are also apparent. Many screening tests overemphasise memory dysfunction, the hallmark of AD, to the neglect of other domains such as language, praxis or executive function, which may be the earliest features of vascular or other non‐Alzheimer dementias.11 Indeed, a review by the American Academy of Neurology12 recommended that memory dysfunction should not be a required part of the diagnosis of dementia. A recent debate in the literature has focused on what has been termed the “alzheimerisation” of dementia,13 and the influence of this on screening tests may mean that important signs of other types of cognitive impairment are missed. A second problem is the emphasis on cut‐off scores rather than profiles of impairment: with some exceptions, most screens produce a single score which is then compared with a standard cut‐point. This runs counter to the preferred practice of most clinicians, who tend to arrive at a diagnosis by an iterative process of creating, rejecting and refining hypotheses over a period of time.14 This would be better served by a symptom oriented approach to assessment,15 where the qualitative information elicited by a screen was at least as important as a numeric score. Screens which elicit information about a wide range of domains (cognitive, functional and affective) would also find use in many settings apart from the doctor's surgery; neuropsychologists, in particular, also use screens, but more to guide further assessment than to examine statistical risk of a diagnosis. The ideal screen would be both statistically robust and qualitatively rich, allowing referring clinicians to better describe a patient's symptom profile, and lending itself to use in a wider range of settings.

While the cognitive screen is not intended to be a substitute for a full neuropsychological assessment (and each has a complementary but distinct role), it should still be possible to obtain indices of key cognitive domains in a brief consultation. Neuropsychological testing has consistently shown that subtypes of dementia are characterised by different patterns of impairment. AD is characterised by impairment of episodic memory (verbal and non‐verbal) at the initial stage, followed by dysfunction in judgement and abstract reasoning, visual construction, verbal fluency and naming.16 Patients with vascular dementia tend to be significantly more impaired than patients with AD on tests of executive function such as verbal fluency, and their level of memory impairment is usually less severe.17 In frontotemporal dementia, letter fluency and executive function are usually worse than in AD, while memory performance is usually better.18 Lewy body dementia is characterised by dysfunction in attention, visuospatial tasks, letter fluency, mental tracking and abstract reasoning.19 Sensitivity to all types of dementia would undoubtedly increase if a screening instrument covered the cognitive domains known to be impaired in the various types of dementia, rather than having a restricted focus on memory impairment. Based on established neuropsychological profiles in different dementias, there are six core domains or abilities that should be covered by a comprehensive screening instrument: attention/working memory, new verbal learning and recall, expressive language, visual construction, executive function and abstract reasoning.

The aims of this review were to identify currently available cognitive screening tests and consider their suitability for three main purposes:

  1. brief assessment in the doctor's office—tests which have a short administration time, are statistically robust in a wide range of unselected samples and for different diagnoses, and which cover the six key neuropsychological abilities;
  2. large scale screening programmes in the community—tests which are statistically robust in a wide range of unselected samples and for different types of diagnoses, and can be administered indirectly (by telephone, by post or via an informant);
  3. domain specific screening to guide further assessment—tests which elicit direct evidence of the patient's ability on a wide range of cognitive, functional and psychiatric domains/abilities.

This paper is intended to serve both as an information resource about available screens and as an evaluative critique of those screens for the purposes described above.

Methods

Literature search

Screens were identified by searching electronic databases (Entrez‐PubMed, CINAHL, PsycINFO and IngentaConnect), using combinations of the following terms: “dementia”, “Alzheimer”, “cognitive impairment”, “post stroke”, “screen”, “primary care” and “community”. Individual test names were also used as search terms. In addition, the reference lists of papers yielded were manually searched.

Selection criteria

Screening tests were included if they were designed to screen for cognitive impairment or had been used for that purpose, had an administration time of less than 20 min and were available in English. Screens could be administered directly to patients, or be partly or fully informant rated. Individual papers relating to each screen were included if they: were the original paper presenting the content of the screen; presented data relating to the screening aspects of the test (as opposed to non‐relevant aspects such as factor structure); presented data relating to the performance of the test as it stands alone (ie, validity statistics based on scores from combined sources (screen test plus functional status, for example) were not considered). Validity studies must also have employed acceptable “gold standard” diagnostic criteria (ie, based on international diagnostic guidelines or clinical judgement following a full assessment battery); the use of another screening test as the gold standard was not acceptable. Further criteria were applied when selecting screens for inclusion in tables 2 and 3 below, and are detailed in the relevant parts of the results section.

Table 2 Selected tests for use as brief assessment tools in the doctor's office
Table 3 Selected tests for use in large scale community screening

Data extraction

Data were extracted for each screen by one author (BC), according to a semi‐structured pro‐forma encompassing reliability statistics, sample types, validity statistics by type of diagnosis and pertinent comments or criticisms contained in individual papers. A list of cognitive, psychiatric and functional domains/abilities covered by each test was made independently by two of the authors (BC and BO'N), and a final list was agreed upon by consensus, with a third author (JJE) consulted if necessary.

Results

Screening tests identified

Thirty‐nine tests were identified which met selection criteria. A further three tests did not meet the criteria: the Community Screening Interview for Dementia (CSI ‘D')20 and the Cambridge Cognitive Examination (CAMCOG),21 as each takes more than 20 min to administer, and the Mental Alternation Test,22 as it has not been standardised against an acceptable gold standard. Table 1 presents the names, source references, administration times and reliability coefficients (where published) of the 39 tests included. Of individual published papers pertaining to the 39 screens examined, a total of eight were excluded because of use of an inappropriate gold standard, or because other information was added to the screen score when calculating validity statistics, or because they did not directly examine screen validity. The remainder are referenced in tables 2 and 3 below.

Table 1 List of screening tests reviewed (alphabetically by abbreviation)

Which test for which purpose?

Brief assessment in the doctor's office

Table 2 shows how a selected subset of the tests performed on the key characteristics deemed important for this purpose (ie, good sensitivity/specificity balance, validation in samples from varied sources (especially primary care or community) with varied illness aetiologies and brevity). Coverage of the six key cognitive abilities is also detailed. Two of the six identified key cognitive abilities were attention/working memory and executive function; these were represented by digit span/other mental tracking and verbal fluency, respectively. Tests which are wholly informant rated were excluded from table 2, as proxy rating is often not feasible or optimal in the medical consultation setting, because of the absence of an informant, concerns about confidentiality on the part of the patient or inability of informants to give a reliable history. Telephone administered screens were also excluded. Of the remainder, tests were selected for inclusion in table 2 if their reported sensitivity and specificity were high (above 85%) for all dementia types together or for more than one particular subtype alone, and/or they covered at least three key domains. Cross comparison of the validity and sample source columns gives an indication of the varied performance of screens in different samples.

As table 2 shows, the 3MS and the CASI both perform well in this context, eliciting information about key cognitive abilities, with robust validity in non‐selected samples. The MMSE is probably the most widely utilised screen, although it does not cover all key abilities. The ACE‐R and DemTect are included in table 2 as potentially useful in community/primary care samples, although they have not yet been validated in this way.

Criticisms have been noted in the literature regarding the characteristics of some of these tests: for example, age and/or education biases (MMSE,6 GPCOG75), low specificity in some unselected samples (CASI76), wide variation in scores without any accompanying clinical change (3MS25) and low positive predictive value (AMT77).

Large scale screening in the community

Table 3 shows tests suitable for this purpose (ie, those which can be administered indirectly (eg, by telephone or by informant proxy) and which have a good sensitivity/specificity balance (>85%) for all dementia types together or for more than one particular subtype alone). Some of the shorter tests described in table 2 could also be considered for this purpose, if the screening campaign is to be conducted directly (eg, the CAST is a self‐report paper and pencil test and could be administered by post).

Shortcomings are evident for some of these tests, or the ways in which they have been validated: for example, variable test–retest reliability across individual items (IQCODE‐SF48); gold standard diagnosis based on case note review (MCAS49); and non‐comparable assessment procedures for patients (informant rated) and controls (self‐rated) (SMQ60).

Domain specific screening to guide further assessment

Tables 4 and 5 describe the content of all tests according to a comprehensive checklist of abilities/domains assessed. Wholly informant rated tests and those administered by telephone are not included in these tables, as they are unlikely to be used alone to guide subsequent direct testing of a patient. Two of the tests reviewed (CDT and VFC) are listed as cognitive abilities in and of themselves, due to the range of impairments that might contribute to poor performance, and a lack of clear consensus on the sub‐abilities tapped. The “number transcoding” task has also been treated as an ability in itself, for the same reasons. The ability named “verbal fluency” refers specifically to timed assessment of word fluency for letters or categories, rather than the broader definition sometimes used in the literature, which may refer to conversational fluency, etc.

Table 4 Cognitive, psychiatric and functional abilities/domains covered in each test—the key domains
Table 5 Cognitive, psychiatric and functional abilities/domains covered in each test, excluding the key domains

A test was considered to cover a named ability if a subscore was assigned to an item which assessed that ability (eg, while “receptive language” is certainly an element of any verbally administered test, it was only marked as present on the table if it independently received a score which contributed to the total). Direct assessment was also required (ie, testing of verbal recall by presentation of new information in the assessment session rather than by indirect self‐report). The six key abilities discussed above are shown in table 4; these are encompassed in the nine columns in table 4 (digit span or other mental tracking; verbal fluency; reasoning/judgment; expressive language; visual construction; immediate free verbal recall or delayed free verbal recall or cued verbal recall). Recognition memory is not included in the key abilities as it may be intact in dementia, despite impairment of free recall. It should also be noted that the length of the delay in delayed recall tasks varies across screens. However, all delayed recall responses are elicited after an intervening period of at least a couple of minutes, during which an unrelated task is performed, precluding rehearsal. It seems likely that longer delays would be more sensitive to impairment, although it is unclear at what point in time information passes from the episodic buffer to long term memory.109

The 3MS and CASI are the only tests which cover all six key abilities. Where tests cover four or five of the six abilities (ACE‐R, SASSI, MMSE, NCSE, STMS), reasoning/judgment and verbal fluency are most frequently absent. Memory is not directly assessed by several tests, even those covering a number of other abilities (CAST, SPMSQ). The CAST does, however, elicit indirect information about memory problems in its self‐report questionnaire section. Conversely, other screens test memory and no other ability (3WR, HVLT, MIS); these are included in the present review because they have been used in the literature as screening tools, yet they would clearly have less utility in guiding other specific areas for further, broad ranging cognitive testing.

Other specific uses

Other screens were included in this review but fit less easily with the three main uses described above. Some tests were designed for very specific purposes (eg, the R‐CAMCOG for screening post‐CVA cognitive impairment, and the ABCS for mild cognitive impairment). These tests have been developed to overcome the limitations of existing screens in particular patient groups, and reflect the point that “one size fits all” screens can be of limited clinical utility in specialist settings.

Discussion

The aims of this paper were to identify and evaluate available screening instruments for cognitive impairment. Screens have been presented in tables according to different purposes, forming a quick reference resource to assist clinicians and researchers in making choices. We have also evaluated screens according to the three main purposes outlined, and have drawn attention to criticisms in the literature.

The first purpose for which we considered the screens was as brief assessment tools in the clinical setting, particularly in primary care. This is probably the most common way in which screens are used, and is the focus of policy consensus statements,6,7,8 which have highlighted the dearth of evidence in favour of routine screening. It is clear from the present review that few of the 39 tests identified have been validated in the types of unselected primary care or community based samples which would be representative of target populations for screening efforts. It is interesting that the screens which rate highest with regard to validation methods and statistics, as well as coverage of key cognitive abilities, are those which expand on the content of the MMSE and from which an MMSE score can be derived (3MS, CASI, SASSI). The ACE‐R also expands on the MMSE but has yet to be validated in non‐specialist settings. It is likely that these screens will prove easily acceptable to clinicians already familiar with the MMSE. There does not appear to be a direct relationship between number of key cognitive abilities covered and the validity statistics; however, the usefulness of having broader coverage lies more in the qualitative information it adds to the basic score. Despite an understandable drive towards ultra‐brief tests which can be used in a typically time constrained GP consultation, an administration time of more than 10 min appears to be an unavoidable cost of achieving sufficiently robust statistical performance while covering key domains.

The second purpose considered was large scale community screening programmes. Informant rated scales, or assessments of patients which can be carried out by telephone or post, formed the main focus of this section. However, some community screening initiatives (eg, memory awareness days in clinics or community centres) could be conducted face to face using the shorter of the instruments detailed in table 2. Of the informant scales, the IQCODE (in its original and abbreviated versions) is the most widely used, although it has variable performance across reported studies. The SMQ shows promise as a brief and accurate screen, meriting further study.

Coverage of various cognitive, psychiatric and functional abilities/domains was examined for all 39 screens. Tests varied in coverage from single domain tasks to wide ranging mini‐batteries. Clearly, if the clinician's aim is to elicit useful qualitative and quantitative information about the profile of a patient's presenting symptoms, then wider ranging screens will be of greater value. Secondary or tertiary care clinicians (working in psychiatric, neuropsychological or neurological settings, for example) are likely to be more concerned with differential diagnosis or with further investigation of mild or unusual presentations, situations in which clinical judgement will take precedence over composite scores and cut‐points; scales with broader coverage will therefore be sought in preference to brief assessment tools. It is possible to achieve a balance between these different uses, however, as with scales such as the 3MS, CASI, SASSI and ACE‐R.

An important point to note is that although a number of screens have been validated in particular subtypes of dementia, this does not mean that they are necessarily useful for differential diagnosis. Most sensitivity and specificity statistics for the various subtypes of dementia were calculated against normal controls, rather than other types of dementia. This means that a screen which is particularly good at picking up AD, for example, will not in fact be useful clinically unless it is also good at picking up non‐AD impairments. An effective screen is one which can firstly identify impairment of any aetiology, and secondly provide an indication as to the most likely aetiology in a particular case. For the former aim, it matters most that a screen has demonstrated good validity in samples of mixed aetiology to detect any type of impairment (ie, the “all dementia” column in tables 2 and 3); for the latter, it matters not that a screen can distinguish AD from normal controls (for example), but that it can distinguish AD from non‐AD aetiologies. The ACE‐R is notable for having been specifically validated with differential diagnosis in mind: the patient's individual profile across cognitive domains can be used to estimate the likelihood that their impairment is due to AD versus frontotemporal dementia, providing a valuable adjunct to their simple overall score. This further underscores the importance of covering a wide range of cognitive abilities when designing a screen (and, as mentioned in the introduction, fits better with the preferred working methods of most clinicians). Until other screens are also examined for effectiveness in distinguishing between different aetiologies, anything other than the “all dementia” calculations is clinically redundant.

If one considers the commonest application of screening (ie, brief direct assessment of patients, with the aim of firstly identifying any impairment and secondly providing an indication of the cause of that impairment), then the screens which are likely to be most useful are those which have good sensitivity and specificity for all dementia types in unselected populations, and which elicit information about key cognitive abilities, which can then be compared with neuropsychological profiles in different types of dementia. Table 2 shows that the most promising candidates are the 3MS, CASI, MMSE, SASSI, STMS and ACE‐R. The STMS is notably shorter than the others and so may appeal to the most time pressed clinicians. The 3MS and CASI are the only screens which have been validated in community samples and which cover all the key cognitive abilities, and so are good candidates for those with more time available (although note the shortcomings mentioned in the text accompanying table 2 above). The ACE‐R has not yet been validated in community samples, but its focus on differential diagnosis profiles may be particularly useful for clinicians in secondary/tertiary practice, to guide further investigations.

The specific criticisms described in the results section regarding some of the screens are indicative of common shortcomings in test validation research. Few screens have been validated in unselected samples, and those that have are frequently subject to differential gold standard procedures for patient and control groups. It is rare for all participants who screen negative in large community samples to undergo the same type of confirmatory assessment as those with positive screens. This leads to verification bias, whereby sensitivity calculations are overestimated and specificity underestimated.110 Applicability to real life situations is further compromised by restrictive sample recruitment criteria which often exclude those with a history of substance use, neurological and psychiatric disorder, head injury and other common comorbidity. In addition, as table 1 shows, many authors have not published reliability statistics for their screens. Adequate reliability (internal, test–retest and inter‐rater) is a prerequisite for robust validity, and should be evaluated and reported routinely. These factors should be borne in mind when evaluating all of the screens described here.
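The verification bias described here can be made concrete with hypothetical numbers: when only screen positives (plus a small fraction of screen negatives) receive the gold standard assessment, most false negatives are never identified, so apparent sensitivity is inflated while apparent specificity falls. A sketch, with all figures illustrative rather than drawn from any reviewed study:

```python
# Hypothetical cohort: 1000 people, 10% truly impaired; a screen with
# true sensitivity 0.80 and specificity 0.90.
tp, fn = 80, 20      # impaired:   screen positive / screen negative
fp, tn = 90, 810     # unimpaired: screen positive / screen negative

# Full verification: everyone receives the gold standard assessment.
sens_true = tp / (tp + fn)   # 0.80
spec_true = tn / (tn + fp)   # 0.90

# Partial verification: all screen positives, but only 10% of screen
# negatives, are referred for the gold standard assessment.
fn_seen = 0.10 * fn          # only 2 of 20 false negatives detected
tn_seen = 0.10 * tn          # only 81 of 810 true negatives verified

sens_biased = tp / (tp + fn_seen)        # 80/82 ≈ 0.98 (overestimated)
spec_biased = tn_seen / (tn_seen + fp)   # 81/171 ≈ 0.47 (underestimated)
```

The direction of the bias follows directly from the arithmetic: unverified screen negatives simply vanish from both denominators.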

In our endeavour to present a comprehensive overview of as many screens as possible, it was not feasible to conduct a fully rigorous quality rating of each study from which we extracted the data presented here. We have, however, applied inclusion criteria as described in the methods section, and have noted critical points regarding certain screens and studies. This review is intended to serve as a resource and starting point from which interested readers can further investigate particular screens for their own requirements.

In consideration of the various purposes for which cognitive impairment screens can be used, it is almost certainly futile to attempt to develop screens that fit all needs. Out of 39 screens identified, we have emphasised a small subset that, in our opinion, have particular strengths, but ultimately there is no such thing as the perfect screen for all purposes. Clinicians should move away from the tendency to become over reliant on one screen (usually the MMSE), and take advantage of the continually evolving (and dauntingly extensive) range of more specialised tools for different situations. This task would be made easier if researchers were to focus on refining and adapting existing screens, with closer consideration of the theoretical basis of symptom profiles in different diagnoses, and specific examination of differential diagnosis within impaired samples. Regardless of policy positions on the merits or otherwise of routine cognitive screening, there is a wealth of potential benefit in the thoughtful application of existing screens in clinical practice.

Abbreviations

AD - Alzheimer's disease

CAMCOG - Cambridge Cognitive Examination

GP - general practitioner

MMSE - Mini‐Mental State Examination

Footnotes

Competing interests: None.

References

1. Hebert L E, Beckett L A, Scherr P A, et al. Annual incidence of Alzheimer disease in the United States projected to the years 2000 through 2050. Alzheimer Dis Assoc Disord 2001;15:169–173.
2. Henon H, Durieu I, Guerouaou D, et al. Poststroke dementia: incidence and relationship to prestroke cognitive decline. Neurology 2001;57:1216–1222.
3. Pohjasvaara T, Erkinjuntti T, Ylikoski R, et al. Clinical determinants of poststroke dementia. Stroke 1998;29:75–81.
4. Iliffe S, Wilcock J, Austin T, et al. Dementia diagnosis and management in primary care: developing and testing educational models. Dementia 2002;1:10–23.
5. Barrett J J, Haley W E, Powers R E. Alzheimer's disease patients and their caregivers: medical care issues for the primary care physician. South Med J 1996;89:1–9.
6. Boustani M, Peterson B, Hanson L, et al. Screening for dementia in primary care: a summary of the evidence for the U.S. Preventive Services Task Force. Ann Intern Med 2003;138:927–937.
7. Solomon P R, Murphy C A. Should we screen for Alzheimer's disease? A review of the evidence for and against screening Alzheimer's disease in primary care practice. Geriatrics 2005;60:26–31.
8. Brodaty H, Clarke J, Ganguli M, et al. Screening for cognitive impairment in general practice: toward a consensus. Alzheimer Dis Assoc Disord 1998;12:1–13.
9. Folstein M F, Folstein S E, McHugh P R. “Mini‐mental state”. A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res 1975;12:189–198.
10. Shulman K I, Herrmann N, Brodaty H, et al. IPA survey of brief cognitive screening instruments. Int Psychogeriatr 2006;18:281–294.
11. De Koning I, Van Kooten F, Koudstaal P J. Value of screening instruments in the diagnosis of post‐stroke dementia. Haemostasis 1998;28:158–166.
12. Knopman D S, DeKosky S T, Cummings J L, et al. Practice parameter: diagnosis of dementia (an evidence‐based review). Report of the Quality Standards Subcommittee of the American Academy of Neurology. Neurology 2001;56:1143–1153.
13. Royall D. The “Alzheimerization” of dementia research. J Am Geriatr Soc 2003;51:277–278.
14. De Lepeleire J, Heyrman J. Diagnosis and management of dementia in primary care at an early stage: the need for a new concept and an adapted procedure. Theor Med Bioeth 1999;20:215–228.
15. Kipps C M, Hodges J R. Cognitive assessment for clinicians. J Neurol Neurosurg Psychiatry 2005;76(Suppl 1):i22–i30.
16. Zec R F. Neuropsychological functioning in Alzheimer's disease. In: Parks R W, Zec R F, Wilson R S, eds. Neuropsychology of Alzheimer's disease and other dementias. Oxford: Oxford University Press, 1993:3–80.
17. Padovani A, Di Piero V, Bragoni M, et al. Patterns of neuropsychological impairment in mild dementia: a comparison between Alzheimer's disease and multi‐infarct dementia. Acta Neurol Scand 1995;92:433–442.
18. Pachana N A, Boone K B, Miller B L, et al. Comparison of neuropsychological functioning in Alzheimer's disease and frontotemporal dementia. J Int Neuropsychol Soc 1996;2:505–510.
19. Hansen L, Salmon D, Galasko D, et al. The Lewy body variant of Alzheimer's disease: a clinical and pathologic entity. Neurology 1990;40:1–8.
20. Hall K S, Gao S, Emsley C, et al. Community Screening Interview for Dementia (CSI ‘D'): performance in five disparate study sites. Int J Geriatr Psychiatry 2000;15:521–531.
21. Roth M, Huppert F A, Tym E, et al. CAMDEX. Cambridge: Cambridge University Press, 1988.
22. Jones B N, Teng E L, Folstein M F, et al. A new bedside test of cognition for patients with HIV infection. Ann Intern Med 1993;119:1001–1004.
23. Teng E L, Chui H C. The Modified Mini‐Mental State (3MS) examination. J Clin Psychiatry 1987;48:314–318.
24. McDowell I, Kristjansson B, Hill G B, et al. Community screening for dementia: the Mini Mental State Exam (MMSE) and Modified Mini‐Mental State Exam (3MS) compared. J Clin Epidemiol 1997;50:377–383.
25. Correa J A, Perrault A, Wolfson C. Reliable individual change scores on the 3MS in older persons with dementia: results from the Canadian Study of Health and Aging. Int Psychogeriatr 2001;13(Suppl 1):71–78.
26. Kuslansky G, Buschke H, Katz M, et al. Screening for Alzheimer's disease: the memory impairment screen versus the conventional three‐word memory test. J Am Geriatr Soc 2002;50:1086–1091.
27. Solomon P R, Hirschoff A, Kelly B, et al. A 7 minute neurocognitive screening battery highly sensitive to Alzheimer's disease. Arch Neurol 1998;55:349–355.
28. Molloy D W, Standish T I, Lewis D L. Screening for mild cognitive impairment: comparing the SMMSE and the ABCS. Can J Psychiatry 2005;50:52–58.
29. Mioshi E, Dawson K, Mitchell J, et al. The Addenbrooke's Cognitive Examination Revised: a brief cognitive test battery for dementia screening. Int J Geriatr Psychiatry 2006;21:1078–1085.
30. Hodkinson H M. Evaluation of a mental test score for assessment of mental impairment in the elderly. Age Ageing 1972;1:233–238.
31. Mendiondo M S, Ashford J W, Kryscio R J, et al. Designing a Brief Alzheimer Screen (BAS). J Alzheimer Dis 2003;5:391–398.
32. Krishnan L R, Levy R M, Wagner H R, et al. Informant‐rated cognitive symptoms in normal aging, mild cognitive impairment and dementia. Initial development of an informant‐rated screen (Brief Cognitive Scale) for mild cognitive impairment and dementia. Psychopharmacol Bull 2001;35:79–88.
33. Teng E L, Hasegawa K, Homma A, et al. The Cognitive Abilities Screening Instrument (CASI): a practical test for cross‐cultural epidemiological studies of dementia. Int Psychogeriatr 1994;6:45–58.
34. Drachman D A, Swearer J M, Kane K, et al. The cognitive assessment screening test (CAST) for dementia. J Geriatr Psychiatry Neurol 1996;9:200–208.
35. Swearer J M, Drachman D A, Li L, et al. Screening for dementia in “real world” settings: the cognitive assessment screening test: CAST. Clin Neuropsychol 2002;16:128–135.
36. Kaufman D M, Weinberger M, Strain J J, et al. Detection of cognitive deficits by a brief mental status examination: the Cognitive Capacity Screening Examination, a reappraisal and a review. Gen Hosp Psychiatry 1979;1:247–255.
37. Sunderland T, Hill J L, Mellow A M, et al. Clock drawing in Alzheimer's disease. A novel measure of dementia severity. J Am Geriatr Soc 1989;37:725–729.
38. Lin K N, Wang P N, Chen C, et al. The three‐item clock‐drawing test: a simplified screening test for Alzheimer's disease. Eur Neurol 2003;49:53–58.
39. Ritchie K, Fuhrer R. The validation of an informant screening test for irreversible cognitive decline in the elderly: performance characteristics within a general population sample. Int J Geriatr Psychiatry 1996;11:149–156.
40. Kalbe E, Kessler J, Calabrese P, et al. DemTect: a new, sensitive cognitive screening test to support the diagnosis of mild cognitive impairment and early dementia. Int J Geriatr Psychiatry 2004;19:136–143.
41. Silverman J M, Breitner J C, Mohs R C, et al. Reliability of the family history method in genetic studies of Alzheimer's disease and related dementias. Am J Psychiatry 1986;143:1279–1282.
42. Ellis R J, Jan K, Kawas C, et al. Diagnostic validity of the dementia questionnaire for Alzheimer disease. Arch Neurol 1998;55:360–365.
43. Kawas C, Segal J, Stewart W F, et al. A validation study of the Dementia Questionnaire. Arch Neurol 1994;51:901–906.
44. Brodaty H, Pond D, Kemp N M, et al. The GPCOG: a new screening test for dementia designed for general practice. J Am Geriatr Soc 2002;50:530–534.
45. Brandt J. The Hopkins Verbal Learning Test: development of a new memory test with six equivalent forms. Clin Neuropsychol 1991;5:125–142.
46. Jorm A F, Jacomb P A. The Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE): socio‐demographic correlates, reliability, validity and some norms. Psychol Med 1989;19:1015–1022.
47. Jorm A F, Scott R, Cullen J S, et al. Performance of the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) as a screening test for dementia. Psychol Med 1991;21:785–790.
48. Jorm A F. A short form of the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE): development and cross‐validation. Psychol Med 1994;24:145–153.
49. Knopman D S, Knudson D, Yoes M E, et al. Development and standardization of a new telephonic cognitive screening test: the Minnesota Cognitive Acuity Screen (MCAS). Neuropsychiatry Neuropsychol Behav Neurol 2000;13:286–296.
50. Borson S, Scanlan J, Brush M, et al. The mini‐cog: a cognitive ‘vital signs' measure for dementia screening in multi‐lingual elderly. Int J Geriatr Psychiatry 2000;15:1021–1027.
51. Buschke H, Kuslansky G, Katz M, et al. Screening for dementia with the memory impairment screen. Neurology 1999;52:231–238.
52. Artero S, Ritchie K. The detection of mild cognitive impairment in the general practice setting. Aging Ment Health 2003;7:251–258.
53. Kiernan R J, Mueller J, Langston J W, et al. The Neurobehavioral Cognitive Status Examination: a brief but quantitative approach to cognitive assessment. Ann Intern Med 1987;107:481–485.
54. De Koning I, Dippel D W, van Kooten F, et al. A short screening instrument for poststroke dementia: the R‐CAMCOG. Stroke 2000;31:1502–1508.
55. Kalbe E, Kessler J, Calabrese P. Validity of the DemTect and the RDST for the earlier detection of dementia. Abstract presented at the 2nd Annual Meeting of the International College of Geriatric Psychoneuropharmacology, Barcelona, Spain, 10–12 October 2002.
56. Kalbe E, Calabrese P, Schwalen S, et al. The Rapid Dementia Screening Test (RDST): a new economical tool for detecting possible patients with dementia. Dement Geriatr Cogn Disord 2003;16:193–199.
57. Belle S H, Mendelsohn A B, Seaberg E C, et al. A brief cognitive screening battery for dementia in the community. Neuroepidemiology 2000;19:43–50.
58. Mundt J C, Freed D M, Greist J H. Lay person‐based screening for early detection of Alzheimer's disease: development and validation of an instrument. J Gerontol B Psychol Sci Soc Sci 2000;55:163–170.
59. Callahan C M, Unverzagt F W, Hui S L, et al. Six‐item screener to identify cognitive impairment among potential subjects for clinical research. Med Care 2002;40:771–781.
60. Koss E, Patterson M B, Ownby R, et al. Memory evaluation in Alzheimer's disease. Caregivers' appraisals and objective testing. Arch Neurol 1993;50:92–97.
61. Maki N, Ikeda M, Hokoishi K, et al. Interrater reliability of the short‐memory questionnaire in a variety of health professional representatives. Int J Geriatr Psychiatry 2000;15:373–375.
62. Katzman R, Brown T, Fuld P, et al. Validation of a short Orientation‐Memory‐Concentration Test of cognitive impairment. Am J Psychiatry 1983;140:734–739.
63. Fillenbaum G G, Heyman A, Wilkinson W E, et al. Comparison of two screening tests in Alzheimer's disease. The correlation and reliability of the Mini‐Mental State Examination and the modified Blessed test. Arch Neurol 1987;44:924–927.
64. Stuss D T, Meiran N, Guzman D A, et al. Do long tests yield a more accurate diagnosis of dementia than short tests? A comparison of 5 neuropsychological tests. Arch Neurol 1996;53:1033–1039.
65. Davous P, Lamour Y, Debrand E, et al. A comparative evaluation of the short orientation memory concentration test of cognitive impairment. J Neurol Neurosurg Psychiatry 1987;50:1312–1317.
66. Pfeiffer E. A short portable mental status questionnaire for the assessment of organic brain deficit in elderly patients. J Am Geriatr Soc 1975;23:433–441.
67. Kokmen E, Naessens J M, Offord K P. A short test of mental status: description and preliminary results. Mayo Clin Proc 1987;62:281–288.
68. Froehlich T E, Robison J T, Inouye S K. Screening for dementia in the outpatient setting: the time and change test. J Am Geriatr Soc 1998;46:1506–1511.
69. Breitner J C, Welsh K A, Brandt J, et al. Telephone screening for dementia—a practical method for population‐based twin studies of Alzheimer's disease and related disorders. Gerontologist 1991;31:333.
70. Brandt J, Welsh K A, Breitner J C, et al. Hereditary influences on cognitive functioning in older men. A study of 4000 twin pairs. Arch Neurol 1993;50:599–603.
71. Reitan R M. Validity of the trail‐making test as an indicator of organic brain damage. Percept Mot Skills 1958;8:271–276.
72. Isaacs B, Kennie A T. The Set test as an aid to the detection of dementia in old people. Br J Psychiatry 1973;123:467–470.
73. Oeksengaard A R, Braekhus A, Laake K, et al. The set test as a diagnostic tool in elderly outpatients with suspected dementia. Aging (Milano) 1995;7:398–401.
74. Leopold N A, Borson A J. An alphabetical ‘WORLD'. A new version of an old test. Neurology 1997;49:1521–1524.
75. Brodaty H, Kemp N M, Low L‐F. Characteristics of the GPCOG, a screening tool for cognitive impairment. Int J Geriatr Psychiatry 2004;19:870–874.
76. Fujii D, Hishinuma E, Masaki K, et al. Dementia screening: can a second administration reduce the number of false positives? Am J Geriatr Psychiatry 2003;11:462–465.
77. Antonelli Incalzi R, Cesari M, Pedone C, et al. Construct validity of the abbreviated mental test in older medical inpatients. Dement Geriatr Cogn Disord 2003;15:199–206.
78. Bland R C, Newman S C. Mild dementia or cognitive impairment: the Modified Mini‐Mental State examination (3MS) as a screen for dementia. Can J Psychiatry 2001;46:506–510.
79. Hayden K M, Khachaturian A S, Tschanz J T, et al. Characteristics of a two‐stage screen for incident dementia. J Clin Epidemiol 2003;56:1038–1045.
80. O'Connell M E, Tuokko H, Graves R E, et al. Correcting the 3MS for bias does not improve accuracy when screening for cognitive impairment or dementia. J Clin Exp Neuropsychol 2004;26:970–980.
81. Borson S, Brush M, Gil E, et al. The Clock Drawing Test: utility for dementia detection in multiethnic elders. J Gerontol A Biol Sci Med Sci 1999;54:M534–M540.
82. Cullen B, Fahy S, Cunningham C J, et al. Screening for dementia in an Irish community sample using MMSE: a comparison of norm‐adjusted versus fixed cut‐points. Int J Geriatr Psychiatry 2005;20:371–376.
83. Huppert F A, Cabelli S T, Matthews F E. Brief cognitive assessment in a UK population sample—distributional properties and the relationship between the MMSE and an extended mental state examination. BMC Geriatr 2005;5:7.
84. Tangalos E G, Smith G E, Ivnik R J, et al. The Mini‐Mental State Examination in general medical practice: clinical utility and acceptance. Mayo Clin Proc 1996;71:829–837.
85. Agrell B, Dehlin O. Mini Mental State Examination in geriatric stroke patients. Validity, differences between subgroups of patients, and relationships to somatic and mental variables. Aging (Milano) 2000;12:439–444.
86. Nys G M, Van Zandvoort M J, De Kort P L, et al. Restrictions of the Mini‐Mental State Examination in acute stroke. Arch Clin Neuropsychol 2005;20:623–629.
87. Tang W K, Mok V, Chan S S, et al. Screening of dementia in stroke patients with lacunar infarcts: comparison of the Mattis Dementia Rating Scale and the Mini‐Mental State Examination. J Geriatr Psychiatry Neurol 2005;18:3–7.
88. Kokmen E, Smith G E, Petersen R C, et al. The Short Test of Mental Status. Correlations with standardized psychometric testing. Arch Neurol 1991;48:725–728.
89. Tang‐Wai D F, Knopman D S, Geda Y E, et al. Comparison of the Short Test of Mental Status and the Mini‐Mental State Examination in mild cognitive impairment. Arch Neurol 2003;60:1777–1781.
90. Solomon P R, Brush M, Calvo V, et al. Identifying dementia in the primary care practice. Int Psychogeriatr 2000;12:483–493.
91. Meulen E F, Schmand B, van Campen J P, et al. The seven minute screen: a neurocognitive screening test highly sensitive to various types of dementia. J Neurol Neurosurg Psychiatry 2004;75:700–705.
92. Sarasqueta C, Bergareche A, Arce A, et al. The validity of Hodkinson's Abbreviated Mental Test for dementia screening in Guipuzcoa, Spain. Eur J Neurol 2001;8:435–440.
93. Flicker L, Logiudice D, Carlin J B, et al. The predictive value of dementia screening instruments in clinical populations. Int J Geriatr Psychiatry 1997;12:203–209.
94. Sahadevan S, Lim P P, Tan N J, et al. Diagnostic performance of two mental status tests in the older Chinese: influence of education and age on cut‐off values. Int J Geriatr Psychiatry 2000;15:234–241.
95. Harwood D M, Hope T, Jacoby R. Cognitive impairment in medical inpatients. I: Screening for dementia—is history better than mental state? Age Ageing 1997;26:31–35.
96. Borson S, Scanlan J M, Chen P, et al. The Mini‐Cog as a screen for dementia: validation in a population‐based sample. J Am Geriatr Soc 2003;51:1451–1454.
97. Borson S, Scanlan J M, Watanabe J, et al. Simplifying detection of cognitive impairment: comparison of the Mini‐Cog and Mini‐Mental State Examination in a multiethnic sample. J Am Geriatr Soc 2005;53:871–874.
98. Rhee J A, Chung E K, Shin M H. Validating the Time and Change test to screen for dementia in elderly Koreans. BMC Public Health 2004;4:52.
99. Mussi C, Foroni M, Valli A, et al. The “time and change” test: an appropriate method to detect cognitive decline in the elderly. J Geriatr Psychiatry Neurol 2002;15:12–15.
100. Inouye S K, Robison J T, Froehlich T E, et al. The Time and Change Test: a simple screening test for dementia. J Gerontol A Biol Sci Med Sci 1998;53:M281–M286.
101. Morales J M, Bermejo F, Romero M, et al. Screening of dementia in community‐dwelling elderly through informant report. Int J Geriatr Psychiatry 1997;12:808–816.
102. Law S, Wolfson C. Validation of a French version of an informant‐based questionnaire as a screening test for Alzheimer's disease. Br J Psychiatry 1995;167:541–544.
103. Tang W K, Chan S S, Chiu H F, et al. Can IQCODE detect poststroke dementia? Int J Geriatr Psychiatry 2003;18:706–710.
104. Mackinnon A, Khalilian A, Jorm A F, et al. Improving screening accuracy for dementia in a community sample by augmenting cognitive testing with informant report. J Clin Epidemiol 2003;56:358–366.
105. Louis B, Harwood D, Hope T, et al. Can an informant questionnaire be used to predict the development of dementia in medical inpatients? Int J Geriatr Psychiatry 1999;14:941–945.
106. Lenger V, De Viliers C, Louw S J. Informant questionnaires as screening measures to detect dementia. A pilot study in the South African context. S Afr Med J 1996;86:737–741.
107. Maki N, Ikeda M, Hokoishi K, et al. The validity of the MMSE and SMQ as screening tests for dementia in the elderly general population—a study of one rural community in Japan. Dement Geriatr Cogn Disord 2000;11:193–196.
108. Maki N, Ikeda M, Hokoishi K, et al. Validity of the Short‐Memory Questionnaire in vascular dementia. Int J Geriatr Psychiatry 2000;15:1143–1146.
109. Baddeley A, Wilson B A. Prose recall and amnesia: implications for the structure of working memory. Neuropsychologia 2002;40:1737–1743.
110. Donald A, Van Til L. Evaluating screening tests for dementia and cognitive impairment in a heterogeneous population in the presence of verification bias. Int Psychogeriatr 2001;13:203–214.