The SIST-M is an efficient structured interview that can be used to generate CDR scores that are reliable and discriminate along the spectrum of mild cognitive deficits (i.e., CDR-SB=0.0–4.0)10. The SIST-M also provides a scoring grid for each component item, such that a validated algorithm can be applied for generating CDR ratings, a useful application for training purposes. Finally, the 60 items of the SIST-M were adapted to create a convenient informant-report form, the SIST-M-IR. Our results show that the SIST-M produces ratings consistent with those from an expanded CDR interview9. We observed strong concordance of CDR scores, whether we applied an algorithm based on the SIST-M to legacy data or compared SIST-M scores with those from long interviews among subjects in a cross-sectional validation.
Although there are briefer (5–10 minute) measures of cognition (e.g., the MMSE26), most are based solely on objective performance and cannot be used to address subtle changes and symptoms. A brief informant interview based on the CDR has been developed (the AD829); it takes ~3 minutes to complete and correlates strongly with the CDR30. However, the AD8 was designed to achieve rapid yet reliable classification of normal cognition (CDR=0) vs. dementia, including mild dementia (CDR ≥ 0.5); it cannot be used by itself to obtain the 6 CDR ratings and the CDR-SB. By contrast, the SIST-M is an interview method for determining ratings in all CDR categories as well as the graded outcome of the CDR-SB. Thus, the SIST-M “system” makes a unique contribution to the existing repertoire of measures: it is a relatively short interview at ~25 minutes, is easy to administer, and yields both the quantitative and qualitative information of the CDR with sensitivity to very mild symptoms.
Another valuable aspect of this study was the development of the SIST-M-IR. Although other informant-based assessments of cognitive symptoms31 are available, these were not designed to map directly onto CDR domains. By contrast, the SIST-M-IR yields the information necessary to rate each CDR domain. However, we identified important caveats for its use. Informants tended to endorse fewer symptoms on their own than were identified during clinician-guided interviews covering identical items; furthermore, in the early stages of cognitive change, informants may be unaware of subtle symptoms or of compensatory measures that a subject has adopted in response to challenges. When the SIST-M scoring algorithm was applied to unguided informant reports, agreement with clinicians’ CDR scores from both short and long interviews was only fair to poor. By contrast, when the algorithm was applied to item ratings from the clinician interviews, the ICCs comparing algorithm-based and clinician-rated CDR-SB remained >0.9. This suggests that the poor agreement stemmed not from the algorithm itself but from the loss of information that occurs when only informant reports are considered. Nevertheless, for a variety of practical reasons, history is often obtained only from informants in many clinical research settings. Our results show that such an approach is likely to systematically underestimate levels of impairment. Obtaining joint information from subject and informant, during a clinician-guided interview, provides the optimal method for detecting early cognitive change.
Limitations of this study must also be recognized. First, our results were likely influenced by differences across CDR interviewers. Although all CDR raters had completed training and certification19, the clinicians who conducted the long interviews had generally been evaluators in the MAS for longer than those who completed the SIST-M; there may have been some downward “drift” in CDR ratings by the newer interviewers. This possibility was supported by significant results on tests of mean score differences and concordance asymmetry. Consequently, the overall strong agreement (e.g., κ ≥ 0.70 for global CDR and memory) between the SIST-M and the long interview was likely an underestimate of true agreement. Notably, κ statistics were lowest for Orientation (0.51) and HH (0.46); this is not surprising, however, as prior work33 indicated that these two domains are the most difficult to rate and have the lowest agreement with a “gold standard” rater, even among experienced evaluators. A second limitation is that responses on the SIST-M-IR may have been affected by response biases (e.g., global denial or “naysaying”34); thus, future enhancements, such as intermittent reverse-coding of items, will be considered35,36. Finally, the SIST-M and SIST-M-IR were developed in a cohort of well-educated elders; thus, generalizability to less-educated populations has not been established. However, the educational attainment of our cohort is consistent with that observed nationally in other ADCs/ADRCs, and it is likely that an instrument calibrated to grading subtle changes in our cohort would perform equally well at other sites.
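For readers less familiar with the agreement statistics discussed above, the chance-corrected nature of κ can be illustrated with a minimal sketch. The example below implements unweighted Cohen's κ for two raters' categorical CDR ratings; the rating lists and function name are hypothetical illustrations, not data from this study (the study's ICC analyses for CDR-SB are a separate, continuous-scale analogue not shown here).

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa: chance-corrected agreement
    between two raters' categorical scores."""
    n = len(rater_a)
    # Observed proportion of exact agreement
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_exp = sum(ca[k] * cb.get(k, 0) for k in ca) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical global CDR ratings (0 = normal, 0.5 = questionable,
# 1 = mild dementia) for ten subjects under two interview methods
short_form = [0.0, 0.5, 0.5, 1.0, 0.0, 0.5, 0.5, 0.0, 1.0, 0.5]
long_form  = [0.0, 0.5, 0.5, 1.0, 0.5, 0.5, 0.0, 0.0, 1.0, 0.5]
print(round(cohen_kappa(short_form, long_form), 2))  # → 0.68
```

Note that κ can be substantially lower than raw percent agreement (here 80%) when ratings cluster in a few categories, which is one reason chance-corrected statistics are preferred for rater-agreement studies.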
In summary, the SIST-M is an efficient, easily administered, and reliable tool for obtaining CDR scores, and provides particular value in clinical and research settings focused on persons with milder cognitive symptoms. Furthermore, we developed a SIST-M algorithm, a tool that could supplement CDR interview training and/or assist with inter-rater score calibration. Finally, we created the SIST-M-IR for rapidly obtaining informant input on symptoms. While not sufficient for independent scoring of the CDR, the SIST-M-IR may prove useful for memory and general cognitive screening in large-scale research or primary-care clinical settings. Thus, further work in this regard is warranted as well.