The MATRICS initiative produced a battery of tests, the MCCB, to assess cognitive treatment effects in people with schizophrenia. A set of norms has been published,3 a manual is available,26 and all component measures are available in a bundled kit. In validation studies, the MCCB demonstrated excellent reliability, minimal practice effects, and significant correlations with measures of functional capacity.2
A major empirical question is whether the MCCB would demonstrate these same favorable characteristics when administered in the context of the large multisite industry trials for which it was designed.
Two recently completed industry-sponsored, multisite cognition trials have followed the FDA-NIMH-MATRICS guidelines and used the MCCB as a primary outcome measure. These studies provide excellent data on the MCCB as a repeated measure and will be used to examine the following psychometric characteristics of the MCCB: test-retest reliability, practice effects, placebo effects, and correlations with functional capacity.
Both studies used the MCCB composite score as the primary outcome measure with 5 assessment points, including 2 assessments prior to the initiation of treatment, which allows a real-world examination of test-retest reliability characteristics. The first study was the above-referenced proof-of-concept trial of R3487, cosponsored by Memory Pharmaceuticals Inc and F. Hoffmann-La Roche, Ltd (http://clinicaltrials.gov/; trial number NCT00604760). Two hundred fifteen clinically stable outpatients were enrolled from 22 US sites.27
The MCCB was used to assess cognitive functioning at screening, baseline, and weeks 4, 8, and 10. The MCCB generates 7 cognitive domain scores and a composite score that sums and standardizes data from all 7 domains based upon published normative data.3 T scores (mean = 50, SD = 10) are generated for each domain and for the overall composite score. In order for an assessment to be considered sufficiently complete to generate a composite score, data from 5 of the 7 domains must be available.26
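As a concrete illustration, the composite-score completeness rule just described (data from at least 5 of the 7 domains) can be sketched as a simple check. The domain names and helper function below are illustrative assumptions, not the scoring program distributed with the MCCB kit.

```python
# The 7 MCCB cognitive domains (names for illustration only).
MCCB_DOMAINS = [
    "speed_of_processing", "attention_vigilance", "working_memory",
    "verbal_learning", "visual_learning", "reasoning_problem_solving",
    "social_cognition",
]

def can_compute_composite(domain_t_scores: dict) -> bool:
    """Return True when at least 5 of the 7 domain T scores are non-missing."""
    available = sum(
        1 for d in MCCB_DOMAINS if domain_t_scores.get(d) is not None
    )
    return available >= 5
```

For example, an assessment with only 4 domain T scores recorded would be excluded from composite scoring under this rule, while one with 5, 6, or 7 would be included.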
In this study, functional capacity was assessed at baseline with the University of California at San Diego Performance-based Skills Assessment, 2nd edition (UPSA-2; unpublished revision of the UPSA28).
The MCCB had a very low rate of missing data: only 14 of 4300 measures were missing, and none of the 215 study participants had a missing composite score. The screening and baseline test data were used to evaluate the MCCB composite and domain score test-retest reliability and short-term practice effects. The interval between screening and baseline was 1–2 weeks. The MCCB composite score had excellent test-retest reliability (intraclass correlation coefficient [ICC] = 0.88) and sensitivity to impairment at screening (T score = 25.1 ± 11.6) and baseline (T score = 26.7 ± 12.1). The test-retest reliabilities of the individual domain scores were not as high, though the correlations ranged between good (ICC = 0.70) and excellent (ICC = 0.80) for most domains. The only exception was verbal learning (ICC = 0.65), for which alternate forms of the Hopkins Verbal Learning Test were used at each assessment.

Practice effects for the MCCB composite score and for each domain score were assessed with z scores, calculated by subtracting the mean T score of the measure at screening from the mean T score at baseline and then dividing the difference by the SD of the measure at the screening visit. The practice effects for the MCCB composite score (z = 0.20) and for each of the 7 domains (z = 0.01–0.27) were small.

The Pearson correlation between the baseline MCCB composite and UPSA-2 total scores was 0.56 (P < .001), supporting the construct validity of the MCCB and the UPSA-2. Practice effects over an extended period were evaluated using the placebo group's baseline, week 4, and week 8 MCCB data. The effect size of the MCCB composite score improvement over this period was small (z = 0.2). The magnitude of this effect is consistent with the practice effect observed in the total-sample comparison of screening and baseline performance data and suggests that repeated administration of the MCCB is associated with minimal practice effects.
The reliability between assessments in the placebo group was high (all ICCs >0.90).
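The practice-effect z score described above (mean baseline T score minus mean screening T score, divided by the SD of the screening scores) can be sketched as follows. This is a minimal illustration; the function name and the use of the sample SD are assumptions, not the study's analysis code.

```python
import statistics

def practice_effect_z(screening_scores, baseline_scores):
    """Standardized screening-to-baseline change (practice effect):
    (mean baseline - mean screening) / SD of screening scores."""
    mean_diff = (statistics.mean(baseline_scores)
                 - statistics.mean(screening_scores))
    return mean_diff / statistics.stdev(screening_scores)
```

For instance, if T scores rise on average by 2 points between screening and baseline against a screening SD of 5, the practice effect is z = 0.4; the values of roughly 0.2 reported here correspond to a rise of about one-fifth of the screening SD.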
The second trial confirmed these findings (http://clinicaltrials.gov/; trial number NCT00641745).29
This trial used the MCCB to assess cognition at screening, baseline, week 6, month 3, and month 6. There were 323 participants from 29 US sites. The interval between screening and baseline assessments was more varied, with a median interval of 15 days. Functional capacity was assessed with the UPSA-Brief Version (UPSA-B).
The severity of cognitive impairment at screening (T score = 24.7 ± 12.1) and baseline (T score = 26.9 ± 12.4) was similar to the first trial. The screening to baseline test-retest reliability of the MCCB composite score was identical to the first trial (ICC = 0.88), and the test-retest reliabilities of the domain scores were also very similar. Only 14 test scores were missing out of a total of 6460 test assessments (10 MCCB measures administered to 323 subjects on 2 occasions; 99.8% complete). At baseline, all 323 participants met MCCB criteria for sufficient data to compute a composite score. Construct validity was again robust: the baseline MCCB composite score was significantly correlated with the baseline UPSA-B composite score (r = 0.61, df = 304, P < .001). The MCCB composite score practice effect was again small.
These results suggest that the MCCB has proven reliability for multisite clinical trials and clinical relevance for real-world functioning and should continue to serve as the standard for cognitive enhancement studies in schizophrenia. However, these data do not address whether the MCCB is sensitive to drug treatment-related change. The lack of compounds with proven efficacy for cognitive impairment in schizophrenia precludes a direct assessment of the sensitivity of the MCCB to pharmacological treatment. To date, 2 published studies have assessed the efficacy of experimental pharmacological agents on the MCCB composite score. Neither demonstrated a significant beneficial effect, but both were nondefinitive owing to small sample sizes14,31 and/or crossover designs that confounded practice effects with treatment effects.14
Several large-scale studies using the MCCB are currently underway (http://clinicaltrials.gov/, accessed February 2010), and a wealth of data on the sensitivity of the battery to treatment changes will be available in the next year or 2. However, recent data suggest that the MCCB composite score is sensitive to a behavioral intervention, with an effect size of d = 0.88 for the PositScience cognitive remediation program compared with a control program of standard video games.32
While the effects of behavioral intervention may differ from the effects of pharmacological intervention, these behavioral data support the notion that the MCCB has the potential to be sensitive to cognitive improvement with pharmacological treatment.