The NeuroTrax Mindstreams computerized cognitive assessment system was designed for widespread clinical and research use in detecting mild cognitive impairment (MCI). However, the capability of Mindstreams tests to discriminate elderly with MCI from those who are cognitively healthy has yet to be evaluated. Moreover, the comparability between these tests and traditional neuropsychological tests in detecting MCI has not been examined.
A 2-center study was designed to assess discriminant validity of tests in the Mindstreams Mild Impairment Battery. Participants were 30 individuals diagnosed with MCI, 29 with mild Alzheimer's disease (AD), and 39 healthy elderly. Testing was with the Mindstreams battery and traditional neuropsychological tests. Receiver operating characteristic (ROC) analysis was used to examine the ability of Mindstreams and traditional measures to discriminate those with MCI from cognitively healthy elderly. Between-group comparisons were made (Mann-Whitney U test) between MCI and healthy elderly and between MCI and mild AD groups.
Mindstreams outcome parameters across multiple cognitive domains significantly discriminated between MCI and healthy elderly participants with considerable effect sizes (p < 0.05). Measures of memory, executive function, visual spatial skills, and verbal fluency discriminated best, and discriminability was at least comparable to that of traditional neuropsychological tests in these domains.
Mindstreams tests are effective in detecting MCI, providing a comprehensive profile of cognitive function. Further, the enhanced precision and ease of use of these computerized tests make the NeuroTrax system a valuable clinical tool in the identification of elderly at high risk for dementia.
Mild cognitive impairment (MCI) is the term applied to a condition in which elderly individuals with a subjective cognitive complaint show objective memory impairment in the absence of functional disability [1-3]. Its importance arises from the observation that it often constitutes the clinical state between normal cognition and dementia in the elderly. Approximately 12–15% of MCI subjects per year convert to clinical dementia with functional disability [4,5]. For this reason, much interest has centered on the development of standardized techniques for quantification of cognitive deficits in MCI and potential therapeutic interventions for treatment of these high-risk individuals.
While memory impairment is the hallmark of MCI, multiple cognitive domains are compromised in the majority of MCI individuals [7-9], as in those with mild Alzheimer's disease (AD). Therefore, sufficiently broad and sensitive instruments are key to the effective diagnosis of these two groups. While traditional neuropsychological tests have been shown to discriminate between individuals with MCI and cognitively healthy elderly [10-12], there is no standard, comprehensive neuropsychological battery that is suitable for the screening and follow-up of mild impairments in routine clinical care. Moreover, while paper-based neuropsychological batteries have been applied in research settings, they are generally impractical for widespread clinical use due to high cost and extended administration time.
Computerized cognitive testing has the potential to effectively address the limitations posed by traditional paper-based measures. Technical innovations for accurate measurement of reaction time as well as frequency of errors enhance overall sensitivity, and on-line adjustment of level of difficulty may minimize ceiling or floor effects. Standardized batteries with alternate forms allow for accurate follow-up of patients over time. A computerized testing session can be of shorter duration and is less expensive than a paper-based session. Further, computerized testing can be made widely available via the Internet and easily administered so that high quality testing is on hand to supplement clinical evaluation in routine patient care settings. Indeed, computerized tests have been developed to assist researchers and psychologists in screening for cognitive dysfunction [13,14]. We chose to evaluate Mindstreams™ (NeuroTrax Corp., NY), a new commercially available computerized testing system for comprehensive clinical assessment of cognitive impairment, designed primarily for use in the elderly. Specifically, we examine the ability of Mindstreams tests to discriminate individuals with MCI from cognitively healthy elderly. The present study is the first to assess discriminant validity of the computerized tests as compared with that of traditional neuropsychological tests.
Participants were 98 elderly individuals assessed at two tertiary care memory clinics (Bloomfield Centre for Research in Aging, McGill-Jewish General Hospital, Montreal, Canada; Memory Disorders Clinic, Shaare Zedek Medical Center, Jerusalem, Israel). Participants were diagnosed by consensus of evaluation teams led by dementia experts at each of the sites and were diagnosed with mild cognitive impairment (MCI), mild Alzheimer's disease (AD), or as cognitively healthy. Diagnosis of MCI followed Petersen et al. and included the following features: (1) a complaint of defective memory; (2) normal activities of daily living; (3) a memory deficit documented on mental status evaluation and supported by abnormalities on neuropsychological testing; and (4) absence of dementia. These criteria define the subtype of MCI known as 'MCI-amnestic'. Diagnosis of mild AD was according to the Diagnostic and Statistical Manual, 4th ed. (DSM-IV). Healthy elderly had no cognitive complaints and were volunteers for research testing. Each diagnostic group was taken to be representative of a distinct population defined by the criteria outlined above. Ethics Committee approval in compliance with the Declaration of Helsinki was obtained at both testing sites, and informed consent was obtained from all participants.
Participants were excluded if there was a prior history of major psychiatric disorder, major depression, or any neurological disorder. All participants were native English speakers. Demographic and clinical characteristics for each diagnostic group are presented in Table 1. The MCI group was older (t67 = 2.091, p = 0.040) and had fewer years of education (t67 = 2.391, p = 0.020) relative to the healthy elderly group. Conversely, the MCI group was younger (t57 = 2.276, p = 0.027) and more highly educated (t57 = 2.359, p = 0.022) than the mild AD group. The MCI and healthy elderly groups were comparable in terms of gender and handedness (gender: χ2[1, N = 69] = 3.757, p = 0.053; handedness: χ2[1, N = 69] = 1.319, p = 0.251), as were the MCI and mild AD groups (gender: χ2[1, N = 59] = 0.827, p = 0.363; handedness: χ2[1, N = 59] = 0.388, p = 0.533). Fewer MCI participants had prior computer experience relative to the healthy elderly group (χ2[1, N = 69] = 20.046, p < 0.001), but more participants with MCI had computer experience relative to those with mild AD (χ2[1, N = 59] = 3.863, p = 0.049). Between-site differences across diagnostic groups were found for years of education (t96 = 4.315, p < 0.001), gender (χ2[1, N = 98] = 7.936, p = 0.005), and computer experience (χ2[1, N = 98] = 12.001, p = 0.001), but not for age (t96 = 0.250, p = 0.802) or handedness (χ2[1, N = 98] = 0.140, p = 0.709). Consistent with the expert consensus diagnoses, MCI participants performed more poorly than healthy elderly on the Mini-Mental State Examination (MMSE; U = 279, p < 0.001) and on the Alzheimer's Disease Assessment Scale, cognitive subscale (ADAS-cog; U = 190.500, p < 0.001), but the MCI group performed better than the mild AD group on these measures (MMSE: U = 570.500, p < 0.001; ADAS-cog: U = 588.000, p < 0.001).
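Between-group comparisons of this kind (independent-samples t-tests for continuous variables, chi-square tests of independence for categorical ones) can be sketched in Python with SciPy; all values below are hypothetical illustrations, not the study's data:

```python
from scipy import stats

# Independent-samples t-test on a continuous variable (e.g., age)
mci_age = [76, 78, 74, 80, 77, 79, 75, 81]       # hypothetical values
healthy_age = [72, 74, 71, 75, 73, 70, 76, 72]   # hypothetical values
t_stat, p_t = stats.ttest_ind(mci_age, healthy_age)

# Chi-square test of independence on a 2x2 contingency table
# (e.g., diagnostic group vs. prior computer experience)
table = [[8, 22],    # hypothetical MCI group: 8 with experience, 22 without
         [30, 9]]    # hypothetical healthy elderly: 30 with, 9 without
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

print(f"t = {t_stat:.3f} (p = {p_t:.3f}); chi2(df={dof}) = {chi2:.3f} (p = {p_chi:.3f})")
```

The degrees of freedom reported in the text follow directly from the group sizes (n1 + n2 - 2 for the t-tests, (rows - 1)(columns - 1) for chi-square).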
All participants were administered the MMSE and the ADAS-cog and completed the Mindstreams™ (NeuroTrax Corp., NY) battery designed to detect mild cognitive impairment (the "Mild Impairment Battery"). Participants at the McGill-Jewish General Hospital site (20 with MCI, 15 healthy elderly, and 19 with mild AD) additionally underwent a neuropsychological battery comprising standardized tests of memory, executive function, visual spatial skills, and verbal fluency. Memory tests included the Logical Memory subtest of the Wechsler Memory Scale, 3rd Edition (WMS-III) and the Rey Auditory Verbal Learning Test (RAVLT), Version 1. Tests of executive function included the Clock Drawing Test, the Trail Making Test (Part A), the Digit Symbol subtest of the Wechsler Adult Intelligence Scale, 3rd edition (WAIS-III), and the Mental Control subtest of the WMS-III. Visual spatial skills were assessed with the Block Design subtest of the WAIS-III. Tests of verbal fluency included the Boston Naming Test, the Controlled Oral Word Association (COWA) test, and the Similarities subscale of the WAIS-III. Paper-based tests at both sites and Mindstreams tests at the Shaare Zedek site were administered by research assistants blind to diagnosis. Mindstreams tests at the McGill-Jewish General Hospital site were administered by a research assistant aware of the diagnoses, but uninvolved in their determination. Notably, research assistants at both sites were trained not to assist participants during actual testing, but rather to ensure that they sufficiently understood the instructions prior to each Mindstreams test. Paper-based tests were administered first, followed by the Mindstreams Mild Impairment Battery. Both Mindstreams and paper-based tests were administered in English.
A detailed treatment of the NeuroTrax system, including the computerized tests, data processing, and usability considerations, appears in a supplementary document (Additional File 1). In brief, Mindstreams consists of custom software that resides on the local testing computer and serves as a platform for interactive cognitive tests that produce precise accuracy and reaction time (millisecond timescale) data. Tests are adaptive, in that the level of difficulty is adjusted depending upon performance. This feature increases sensitivity and minimizes the prevalence of ceiling effects. Feedback is provided in the practice sessions that precede each test, but not during the actual tests. Web-based administrative features allow for secure entry and storage of patient demographic data. Once tests are run on the local computer, data are automatically uploaded to a central server, where calculation of outcome parameters from raw single-trial data and report generation occur.
The Mindstreams Mild Impairment Battery (administration time: 45 minutes) samples a wide range of cognitive domains, including memory (verbal and non-verbal), executive function, visual spatial skills, verbal fluency, attention, information processing, and motor skills (see Table 2). The tests that comprise this battery were designed for use with the elderly. All responses were made with the mouse or with the number pad on the keyboard (intuitively similar to the telephone keypad). Participants were familiarized with these input devices at the beginning of the battery, and practice sessions prior to the individual tests prepared them for the specific types of responses required for each test. Outcome parameters varied with each test, as shown in Table 2. Given the speed-accuracy tradeoff, a performance index (computed as [accuracy/reaction time] × 100) was computed for timed Mindstreams tests in an attempt to capture performance both in terms of accuracy and reaction time (RT). Tests were run in the same fixed order for all participants.
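The performance index can be sketched as follows; the unit conventions (accuracy as a proportion correct, RT in milliseconds) are assumptions, as the paper does not state them:

```python
def performance_index(accuracy: float, mean_rt_ms: float) -> float:
    """Composite speed-accuracy score: (accuracy / reaction time) * 100.

    accuracy: proportion correct (assumed 0.0-1.0).
    mean_rt_ms: mean reaction time (assumed milliseconds).
    Higher accuracy and faster responding both increase the index.
    """
    if mean_rt_ms <= 0:
        raise ValueError("reaction time must be positive")
    return (accuracy / mean_rt_ms) * 100

# Example: 90% accuracy at a 600 ms mean reaction time
print(performance_index(0.90, 600.0))  # 0.15
```

Because RT appears in the denominator, a participant who trades accuracy for speed (or vice versa) cannot inflate the index, which is the stated motivation for the composite.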
Following are brief descriptions of the Mindstreams tests included in the Mild Impairment Battery:
Ten pairs of words are presented, followed by a recognition test in which one member (the target) of a previously presented pair appears together with a list of four candidates for the other member of the pair. Participants must indicate which word of the four alternatives was paired with the target when presented previously. Four consecutive repetitions of the recognition test are administered during the 'learning' phase. An additional recognition test is administered following a delay of approximately 10 minutes.
Eight pictures of simple geometric objects are presented, followed by a recognition test in which four versions of each object are presented, each oriented in a different direction. Participants are required to remember the orientations of the originally presented objects. Four consecutive repetitions of the recognition test are administered during the 'learning' phase of the test. An additional recognition test is administered following a delay of approximately 10 minutes.
A series of large colored stimuli are presented at pseudo-random intervals. Participants are instructed to respond as quickly as possible by pressing a mouse button if the color of the stimulus is any color except red, for which no response is to be made.
Pictorial puzzles of gradually increasing difficulty are presented. Each puzzle consists of a 2 × 2 array containing three black-and-white geometric forms with a certain spatial relationship among them and a missing form. Participants must choose the best fit for the fourth (missing) form from among six possible alternatives.
The Stroop is a well-established test of response inhibition . The Mindstreams Stroop test consists of three phases. Participants are presented with a pair of large colored squares, one on the left and the other on the right side of the screen. In each phase, participants are instructed to choose as quickly as possible which of the two squares is a particular color by pressing either the left or right mouse button, depending upon which of the two squares is the correct color. First, participants are presented with a general word in colored letters. In the next phase (termed the Choice Reaction Time test), participants are presented with a word that names a color in white letters. In the final phase (the Stroop phase), participants are presented with a word that names a color, but the letters of the word are in a color other than that named by the word. The instructions for the final phase are to choose the color of the letters, and not the color named by the word.
Pictures of common objects of low and high familiarity are presented. Participants are instructed to select the name of the picture from four choices. In a related test, participants are instructed to select the word that best rhymes with the name of the picture.
Computer-generated scenes containing a red pillar are presented. Participants are instructed to imagine viewing the scene from the vantage point of the red pillar. Four alternative views of the scene are presented as choices.
This test comprises three levels of information processing load: single digits, two-digit arithmetic problems (e.g., 5-1), and three-digit arithmetic problems (e.g., 3+2-1). For each of the three levels, stimuli are presented at three different fixed rates, incrementally increasing as testing continues. Participants are instructed to respond as quickly as possible by pressing the left mouse button if the digit or result is less than or equal to 4 and the right mouse button if it is greater than 4.
Participants are instructed to tap on the mouse button for 12 seconds with their dominant hand. This task is repeated twice.
The Catch game is a novel motor screen that assesses cognitive domains distinct from those in other Mindstreams tests. Participants must "catch" a rectangular white object falling vertically from the top of the screen before it reaches the bottom of the screen. Mouse button presses move a rectangular green "paddle" horizontally so that it can be positioned directly in the path of the falling object. The test requires hand-eye coordination, scanning and rapid responses.
Mindstreams data were uploaded to the NeuroTrax central server, where automatic data processing occurred, during which aggregate outcome parameters were computed from the raw single-trial data (Additional File 1). Outcome parameters were calculated using custom software that was blind to diagnosis or testing site, and results were relayed to each of the sites for review and analysis. Outcome parameters were computed for each test only when performance on the preceding practice session exceeded a predetermined minimum accuracy. The actual test was not given when practice session performance was below this cutpoint. Given this source of 'missing' data, a minimum of 13 data points was deemed acceptable for inclusion of a group in statistical analyses.
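The practice-gating rule described above might look like the following sketch; the 0.70 practice cutpoint is invented for illustration (the actual cutpoints are internal to NeuroTrax), while the minimum of 13 data points comes from the text:

```python
PRACTICE_CUTPOINT = 0.70  # hypothetical minimum practice accuracy
MIN_GROUP_N = 13          # minimum data points per group (from the text)

def gated_score(practice_accuracy, test_score):
    """Return the test score only if practice performance met the cutpoint;
    otherwise the data point is treated as missing (None)."""
    return test_score if practice_accuracy >= PRACTICE_CUTPOINT else None

def group_includable(scores):
    """A group enters statistical analysis only with enough non-missing data."""
    return sum(s is not None for s in scores) >= MIN_GROUP_N

# Example: one participant passed practice, one did not
print(gated_score(0.85, 0.92))  # 0.92
print(gated_score(0.50, 0.88))  # None
```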
All statistics were computed with SPSS statistical software (SPSS, Chicago, IL). Two-tailed statistics were used throughout, and p < 0.05 was considered significant. Receiver operating characteristic (ROC) analysis was used to evaluate the ability of Mindstreams outcome parameters and traditional neuropsychological tests to discriminate participants with MCI from cognitively healthy elderly. Area under the curve (AUC), an index of effect size, was the primary result of the ROC analysis. For each measure, the AUC indicated the probability that a randomly selected individual with MCI would perform more poorly than a randomly selected cognitively healthy individual. An AUC of 0.50 indicated no better than chance discriminability, and an AUC of 1.00 indicated perfect discriminability. If the 95% confidence interval around an AUC included 0.50, the measure was unable to discriminate between MCI and healthy elderly at a significance level of p < 0.05. Separate between-group comparisons were made on Mindstreams outcome parameters between the MCI and cognitively healthy groups and between the MCI and mild AD groups. Given heterogeneous variances across these pairs of groups for numerous outcome parameters (Brown-Forsythe test, p < 0.05), the non-parametric Mann-Whitney U test was used to make the comparisons.
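The analysis pipeline described above can be illustrated with a small Python sketch (the authors used SPSS; the group scores below are simulated, not study data). It exploits the fact that the AUC equals the Mann-Whitney U statistic normalized by the number of between-group pairs:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
healthy = rng.normal(0.90, 0.05, 39)  # simulated accuracy, healthy elderly (n = 39)
mci = rng.normal(0.80, 0.08, 30)      # simulated accuracy, MCI (n = 30)

# Two-tailed Mann-Whitney U test between the two groups
u_stat, p_value = stats.mannwhitneyu(mci, healthy, alternative="two-sided")

# AUC via its equivalence to the normalized U statistic:
# AUC = P(a random healthy participant outscores a random MCI participant)
u_healthy = stats.mannwhitneyu(healthy, mci, alternative="two-sided").statistic
auc = u_healthy / (len(healthy) * len(mci))

print(f"U = {u_stat:.1f}, p = {p_value:.4f}, AUC = {auc:.3f}")
```

This equivalence is why the paper can use the same non-parametric machinery both for significance testing (U) and for effect size (AUC): an AUC of 0.50 corresponds to complete overlap of the group distributions, 1.00 to complete separation.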
Results of an ROC analysis measuring the ability of Mindstreams outcome parameters to discriminate MCI from cognitively healthy elderly are presented in Table 2, subdivided by cognitive domain. All Mindstreams memory outcome parameters discriminated significantly, with a maximum AUC of 0.859 for mean accuracy across all recognition test repetitions in the 'learning' phase of the Verbal Memory test. Similarly, all executive function outcome parameters discriminated significantly, with a maximum AUC of 0.810 for the Go-NoGo performance index. All outcome parameters measuring visual spatial skills and verbal fluency discriminated significantly, with a maximum AUC of 0.824 for accuracy on the rhyming portion of the Verbal Function test. Significant discriminability was found for attention outcome parameters from the Go-NoGo test, with an AUC of 0.771 for Go-NoGo RT and 0.706 for Go-NoGo standard deviation of RT. The Choice Reaction Time performance index did not discriminate significantly between MCI and cognitively healthy elderly. Medium- (AUC = 0.783) and high-load (AUC = 0.688) information processing outcome parameters discriminated significantly, but the low-load parameter did not. None of the motor skills outcome parameters discriminated significantly.
Table 3 presents ROC analysis results for the subset of MCI and healthy elderly participants who received a battery of standardized neuropsychological tests in addition to Mindstreams testing. For each cognitive domain, the ability of Mindstreams outcome parameters to discriminate MCI from cognitively healthy elderly was compared with paper-based neuropsychological tests designed to tap the same domain. Outcome parameters measuring attention, information processing, and motor skills were excluded from this analysis for lack of corresponding traditional tests in these cognitive domains.
As above (Table 2), the Mindstreams memory outcome parameter that best discriminated MCI from cognitively healthy elderly was accuracy across the 'learning' phase of the Verbal Memory test (AUC = 0.894; Table 3); the best traditional memory test was the WMS-III Logical Memory II subtest (AUC = 0.885). Also as above, the Mindstreams executive function outcome parameter that discriminated best was the Go-NoGo performance index (AUC = 0.840); the best traditional executive function test was the WAIS-III Digit Symbol subtest (AUC = 0.729; Figure 1A). The Mindstreams visual spatial outcome parameter discriminated significantly (AUC = 0.778), but the corresponding WAIS-III Block Design subtest did not (Figure 1B). The Mindstreams verbal outcome parameter that discriminated best was accuracy on the naming portion of the Verbal Function test (AUC = 0.837); the best traditional verbal test was the COWA FS test (AUC = 0.768).
Mindstreams outcome parameters discriminated MCI from cognitively healthy and from mild AD participants (e.g., Figure 2). Descriptive statistics for outcome parameters in each cognitive domain are presented in Table 4, subdivided by diagnostic group. The results of two Mann-Whitney U tests are shown for each parameter, one comparing MCI and healthy elderly participants and the other comparing MCI and mild AD participants. Results for the MCI/healthy elderly comparison are similar to those in Table 2. For cognitive domains with sufficient data for conclusive results, significant differences between MCI and mild AD participants were found for memory, visual spatial, and verbal outcome parameters. Results were mixed for attention outcome parameters, such that timed Go-NoGo parameters did not significantly discriminate between MCI and mild AD, but the performance index from the Choice Reaction Time test did.
Cognitive assessment is essential to the effective care and treatment of the elderly. Given that the number of elderly is predicted to increase steeply as the baby boomer generation ages, there is an urgent need for standardized cognitive assessment tools that deliver high quality information and are practical for routine clinical use. Traditional paper-based neuropsychological testing is seriously limited by the formidable cost in time and money. We therefore evaluated a novel computerized cognitive testing system, the Mindstreams system (NeuroTrax Corp., NY), which was designed for widespread clinical application in the detection of MCI and mild dementia.
The present study evaluated the discriminant validity of Mindstreams tests in distinguishing individuals with MCI from healthy elderly. Outcome parameters across multiple cognitive domains significantly discriminated between MCI and healthy elderly with considerable effect sizes (Table 2). Particularly strong results were obtained for outcome parameters assessing memory, executive function, visual spatial skills, and verbal function. Further, effect sizes of the computerized tests in these domains were at least comparable to neuropsychological tests designed to assess the same domains (Table 3; Figure 1). Results were mixed for Mindstreams attention and information processing outcome parameters, and those assessing motor skills did not discriminate between MCI and healthy elderly (Table 2).
The current findings are consistent with those of studies designed to identify traditional neuropsychological tests that predict conversion to dementia. Many such studies have found standard tests of verbal and non-verbal memory and executive function to be excellent predictors [10,11,21-23]. Others have found verbal fluency to be a good predictor [24,25], and a recent report by Mapstone et al. suggests that visual spatial impairment may also predict conversion to dementia. Hence, every cognitive domain with strong discriminant validity for Mindstreams outcome parameters in MCI has been associated with prediction of conversion to dementia in studies of traditional tests.
Computerized tests other than Mindstreams have been employed to discriminate MCI from cognitively healthy elderly. Indeed, the paired associates learning (PAL) test of the Cambridge Neuropsychological Test Automated Battery (CANTAB) has been shown to be sensitive to cognitive decline. While demonstrating the general utility of computerized cognitive testing, the CANTAB-PAL is limited in scope, difficult to use, and requires specialized equipment. A brief set of three tests developed by CogState Ltd. and administered serially four times in 3 hours has recently been shown to discriminate between MCI and cognitively healthy elderly on the basis of learning performance. However, the CogState tests fail to provide a comprehensive cognitive profile, consisting exclusively of reaction time tests. Finally, MicroCog, a multi-domain computerized battery, showed good discriminability between participants with mild dementia and cognitively healthy elderly in an initial validity study. However, MicroCog has not been widely used clinically, likely because it tests only selected cognitive domains and must be administered by a trained psychologist.
It is important to note that the results reported in the present study are preliminary. Population-based studies with longitudinal follow-up, pathological confirmation of diagnosis, and comparison with a wider array of traditional tests are required to fully establish the validity of the Mindstreams tests in MCI detection. Further, given the between-group differences in age and years of education in the present study, future studies must collect normative data on Mindstreams tests so that performance can be standardized according to age and years of education. Given the between-group difference in computer experience in the present study, subsequent studies will collect more detailed information on participants' facility with the computer in general and with each of the Mindstreams tests in particular. However, the absence of between-group differences on Mindstreams motor skills tests in the current study, those most dependent upon facility with the computer, suggests that differential computer experience did not confound the results. Finally, future work might incorporate test data in the event of a failed practice session. As such data were labeled 'missing' in the present study, the reported results likely underestimate the true discriminant validity of the Mindstreams tests.
An important limitation imposed upon the present study and all studies of MCI arises from lack of consensus regarding the clinical definition of MCI [30,31]. Our MCI participants were selected according to the standard definition in the field, but these criteria for 'MCI-amnestic' require only memory impairment. Consistent with the present results, individuals classified as 'MCI-amnestic' are often impaired in other cognitive domains [7-9]. A more clinically valid classification of this pre-dementia state may be Aging-Associated Cognitive Decline (AACD), which has clearly defined diagnostic criteria and requires impairment in multiple cognitive domains [30,31]. Indeed, AACD has recently been validated as a predictor of conversion to dementia [33,34].
Computerized testing has been criticized relative to paper-based testing in terms of technical limitations and appropriateness for clinical use. Perhaps the most pervasive technical limitation is measurement error that varies depending upon the refresh rate of the monitor, the sampling rate of the input device, operating system activities, and the data acquisition software. Mindstreams, which runs under Microsoft Windows, utilizes the DirectX library to minimize imprecision due to operating system activities and data acquisition software to sub-millisecond levels. The remaining sources of error are hardware-dependent and typically result in imprecision on the order of less than 20 milliseconds, still far better than human measurement error. Computerized assessment has also been criticized on the grounds that testing is not customizable for the individual participant. While paper-based tests are indeed more flexible, the inherent lack of uniformity confounds the valid comparison of test results across participants. Further, Mindstreams testing batteries can be customized to suit specific clinical needs. Batteries can be constructed to include only relevant tests, and stimulus presentation parameters can be altered as appropriate for a particular clinical population.
We found Mindstreams tests straightforward to administer and easy for even the mild AD participants to learn. Administration time for the comprehensive testing battery used in this study (45 minutes) was appropriate, and participants were pleased with the positive feedback that the system provided throughout the session. The automatic uploading and scoring of the data streamlined the entire data collection process, and, in our view, these features may lead to widespread adoption of computerized cognitive testing.
The present study is evaluative in that it serves to guide future studies in determining the optimal set of Mindstreams tests and outcome parameters for differentiating among various patient groups. For example, not all information processing outcome parameters discriminated equally between MCI and cognitively healthy elderly (Table 2). It appears that the level of difficulty associated with the 2-digit arithmetic (i.e., medium load) portion of the Information Processing test discriminated best, while that associated with the single digit (i.e., low load) portion of the test was ineffectual in discriminating. This suggests that level of difficulty is an important consideration in selecting the Mindstreams parameters that best discriminate among groups. Similarly, the mixed pattern of results for attention outcome parameters (i.e., Choice Reaction Time did not discriminate, but Go-NoGo timed outcome parameters did discriminate; Table 2) can be accounted for by inter-task differences in level of difficulty. These observations may guide both clinical research on existing Mindstreams tests and future test development.
The present preliminary study demonstrates the ability of Mindstreams computerized cognitive tests to discriminate individuals with MCI from cognitively healthy elderly. Mindstreams measures of memory, executive function, visual spatial skills, and verbal fluency discriminated best, and discriminability was at least comparable to that of traditional neuropsychological tests in these domains. Our findings and experience with the NeuroTrax system underscore the utility of this novel clinical tool in the diagnosis of MCI and mild dementia in circumstances where full neuropsychological evaluation is unavailable or impractical. Guided by the present results, further work is necessary to examine the suitability of Mindstreams tests for additional clinical and non-clinical validation cohorts and for longitudinal use.
T Dwolatzky: None Declared
V Whitehead: None Declared
GM Doniger: Employee of NeuroTrax Corp.
ES Simon: Employee of NeuroTrax Corp.
A Schweiger: Consultant for NeuroTrax Corp.
D Jaffe: Consultant for NeuroTrax Corp.
H Chertkow: None Declared
This work was partially supported by a grant from NeuroTrax Corporation to the participating clinical institutions. There were no restrictions placed upon the authors regarding the use or publication of the results reported herein. NeuroTrax served to manage collaboration among the institutions where the work was carried out.
TD and HC contributed to study design and oversaw data collection at each of the two sites. Data analysis was performed by GMD and VW, guided by contributions from DJ, ES, AS and HC. Manuscript preparation was handled primarily by GMD and ES, with contributions from HC, AS, and TD. All authors read and approved the final manuscript.
The pre-publication history for this paper can be accessed here:
Mindstreams Cognitive Health Assessment, Version 2.1 (May 2003). A detailed treatment of the NeuroTrax system, including the computerized tests, data processing, and usability considerations.
The authors wish to thank Dr. Ted Miller for contributing to a pilot study leading to the present work. We owe gratitude to Dr. Natalie Phillips for input relating to data analysis and for manuscript review. Shimon Amit and Judy Simon provided excellent technical support throughout the study.