Although factor analysis is the most commonly used method for examining the structure of cognitive variable interrelations, multidimensional scaling (MDS) can provide visual representations highlighting the continuous nature of interrelations among variables. Using data (N = 8,813; ages 17–97 years) aggregated across 38 separate studies, MDS was applied to 16 cognitive variables representative of five well-established cognitive abilities. Parallel to confirmatory factor analytic solutions, and consistent with past MDS applications, the results for young (18–39 years), middle (40–65 years), and old (66–97 years) adult age groups consistently revealed a two-dimensional radex disk, with variables from fluid reasoning tests located at the center. Using a new method, target measures hypothesized to reflect three aspects of cognitive control (updating, storage-plus-processing, and executive functioning) were projected onto the radex disk. Parallel to factor analytic results, these variables were also found to be centrally located in the cognitive ability space. The advantages and limitations of the radex representation are discussed.
Construct validation is a primary goal for many areas of developmental research. In aging research, for example, levels of performance on many diverse sorts of cognitive tasks are often found to decline with adult age. While it is tempting to hypothesize a distinct mechanism as the source of the age-related deficits observed in each of these tasks, it is more likely that age-related deficits do not occur on specific variables, but rather, on more general constructs that they represent. It is therefore important, before constructing a novel explanation for age-related differences observed in a newer variable, to characterize how that variable relates to previously well-established individual differences variables. In this paper we present and compare two such methods of construct validation. One is based on confirmatory factor analysis and the other is based on multidimensional scaling. We apply both methods to examine how a number of diverse “target” variables hypothesized to reflect various aspects of a construct termed cognitive control relate to five well-established “reference” cognitive ability domains.
Cognitive control can be defined as deliberate, on-line processing involved in simultaneously storing and manipulating information, and monitoring, updating, and modifying the contents of conscious thought. Cognitive control processes have often been invoked in theoretical explanations of ability test performance (see, e.g., Conway, Kane, & Engle, 2003; Oberauer, Schulze, Wilhelm, & Süß, 2005). Some (e.g. Hambrick, Kane, & Engle, 2005) have conceptualized cognitive control as comprised of general processes that can be applied during many diverse sorts of tasks. Others (e.g. Baddeley, 1986; Park, Lautenschlager, Hedden, Davidson, Smith, & Smith, 2002; Shah & Miyake, 1996), however, have conceptualized cognitive control as a family of specialized processes specific to different domains of functioning. Here, we examine a number of measures of cognitive control that involve either figural/spatial or verbal/numeric materials. Using the two construct validation methods described next, we examine how each of these measures relates to five well-established cognitive ability domains: Fluid Reasoning, Episodic Memory, Spatial Visualization, Processing Speed, and Verbal Knowledge. We are interested in whether these cognitive control measures demonstrate a uniform pattern of relations to the ability domains (consistent with the domain-general view) or a more heterogeneous pattern of relations to different ability domains (consistent with the domain-specific view).
Factor analysis is likely the most frequently employed method for examining the structure of cognitive ability interrelations. The typical solution is a hierarchical organization (Carroll, 1993), with a general factor accounting for about half of the between-person variation in test performance, and broad abilities accounting for approximately 25% additional variation in such performance. However, multidimensional scaling (MDS) methods can provide alternative representations of ability interrelations that are visual in nature and intuitively appealing.
Applying MDS procedures and close variants to represent intercorrelation matrices in Euclidean space was first proposed by Guttman (1954) and popularized by Snow and colleagues (Marshalek, Lohman, & Snow, 1983; Snow, Kyllonen, & Marshalek, 1984). MDS represents variables as points in space, with the distances between point-representations corresponding to the magnitudes of the variables’ interrelations, such that more highly related variables are spatially closer. The most common solution for cognitive abilities is that of a radex circular disk described by two continua on which variables are located: (1) the circumplex refers to the angle along the circle at which the variables are located (e.g. 0° = “North,” 90° = “East,” 180° = “South,” 270° = “West,” etc.), and (2) the simplex refers to how far away from the circle’s center the variables are located. By mathematical necessity, the radex center contains those variables with the strongest average relations with all other variables (Marshalek et al., 1983; Snow et al., 1984).
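The core MDS computation can be sketched in a few lines. The following is a minimal illustration of classical (metric) MDS applied to a small made-up correlation matrix; the variable values, and the conversion of correlations to dissimilarities as 1 − r, are illustrative assumptions rather than the exact procedure used in the studies reviewed here.

```python
import numpy as np

# Made-up correlation matrix for four hypothetical variables (illustrative only)
R = np.array([
    [1.0, 0.7, 0.4, 0.3],
    [0.7, 1.0, 0.5, 0.2],
    [0.4, 0.5, 1.0, 0.6],
    [0.3, 0.2, 0.6, 1.0],
])

# Convert correlations to dissimilarities so that highly related variables
# end up spatially closer (one simple choice among several)
D = 1.0 - R

# Classical (Torgerson) metric MDS: double-center the squared dissimilarities
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J

# Coordinates from the top two eigenpairs, yielding a two-dimensional solution
vals, vecs = np.linalg.eigh(B)            # eigenvalues in ascending order
top = np.argsort(vals)[::-1][:2]
coords = vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```

In the resulting configuration, more highly correlated variables lie closer together, which is the property the radex interpretation rests on.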
Whereas factor analytic approaches lend themselves to conceptualizing abilities as sharply distinct from (albeit possibly correlated with) one another, MDS approaches emphasize the continuity of the ability spectrum. In fact, Snow et al. (1984, p. 89) have argued that “the fact that cognitive tasks or objects in general can be shown to differ along many dimensions simultaneously, and that this ordering can be captured in a two… dimensional scaling representation, is one of the most powerful features of [MDS].” Nevertheless, the radex has been shown to be remarkably parallel to the hierarchical factor solution (Marshalek et al., 1983). These key commonalities are highlighted in Figure 1, which displays idealized versions of both the hierarchical factor model and the radex model (the variable “T” refers to the “target” variable to be validated, and is discussed later). In the factor model displayed in the left portion of Figure 1, the common “G” factor and the broad abilities (Memory, Spatial Visualization, Processing Speed, Verbal Knowledge) are orthogonalized. The loadings on the broad abilities correspond to where along the circumplex a variable falls, whereas the magnitude of a variable’s loading on the G factor corresponds with where on the simplex that variable falls (higher G loadings correspond with more central locations). A common finding (e.g. Gustafsson, 1988; Salthouse, 2004) is that Fluid Reasoning (Gf) is indistinguishable from G. This is indicated in the left portion of Figure 1, in which no Gf factor is instantiated. It is also indicated in the right portion of Figure 1, in which the Gf measures are very close to the exact center of the radex disk.
Implementations of MDS in cognitive ability research have been surprisingly few. One possible reason for this underutilization is that MDS methods have remained largely exploratory in nature (although see Borg and Groenen, 2005, for a discussion of confirmatory variants of MDS), in contrast to factor analytic methods, for which confirmatory variants have been well developed. The exploratory nature of MDS can be seen as an advantage because, relative to factor analysis, fewer prior expectations need to be imposed when performing an MDS procedure, thereby allowing variables to take on any location within the space, rather than merely evaluating the extent to which a pre-specified hypothesis accords with the data. Using such an unrestrictive approach could potentially reveal “insights that classical factor analytic techniques seem to have hidden” (Sternberg, 1984, p. xii).
Nonetheless, in order to ensure cumulative progress in developmental theory and research, to integrate emerging research with the extant state of psychological science, and to fully understand a new construct’s meaning, it is important for emerging research to build upon well-established information. This perspective has been articulated by Cronbach and Meehl (1955) who maintained that a construct (or the observable indicators of it) should be validated by establishing a “nomological network” of associations that it has with existing variables and constructs. Two ways by which this can be achieved are by incorporating variables representative of a newly hypothesized construct into the well-established factor analytic and radex solutions. The left hand portion of Figure 1 displays such a factor analytic approach (cf. Salthouse, Pink, & Tucker-Drob, 2008) in which a target variable representative of a newly hypothesized construct is regressed onto the five well-established abilities. By examining the magnitude of the standardized coefficients labeled βability, one can infer the extent to which the target variable “T” is uniquely related to each of the well-established “reference” abilities. Similarly, by examining the location of “T” after projecting it onto the radex space, as displayed in the right hand portion of Figure 1, one can infer the pattern of relations that it has with the reference abilities.
For the current project the analytic approaches depicted in Figure 1 are applied to two sets of target variables. The first set of target variables consists of alternative indicators of the reference abilities. Only if the analytical procedures produce results consistent with the established characteristics of these variables can the procedures be considered plausible. The second set of target variables includes a number of tests of cognitive control. These tests have been conceptualized from neuropsychological perspectives as measures of executive functioning, and from the cognitive tradition as measures of simultaneous storage-plus-processing, and updating of continuously changing information. We use confirmatory factor analysis to regress the variables onto the reference abilities, and similarly use multidimensional scaling to map these variables onto the reference radex space. Our primary interest is whether different cognitive control variables display different patterns of relations to the reference abilities according to whether they make use of figural/spatial or verbal/numeric information. We are also interested in whether the patterns differ according to whether the tasks are those primarily used by neuropsychologists (as executive functioning measures), or by cognitive psychologists (as storage-plus-processing, or updating measures).
In order to verify that these examinations are meaningful, it is important to examine whether the reference solutions are consistent across age groups. There is a robust literature on the importance of structural stability in developmental research (e.g. Horn & McArdle, 1992). In short, both across-age comparisons and across-age aggregation may be questionable if the multivariate structure of the variables of interest differs with age. Here, we examine the stability of both the reference factor solution and the reference radex solution, before proceeding to employ these solutions in validating the target variables.
Analyses are based on data from a total of 8,813 different individuals from 38 different studies conducted in Timothy A. Salthouse’s laboratory. Each study included two or more measures from a set of 16 cognitive measures representative of five theoretical cognitive abilities. Many studies also included alternative measures of the five cognitive abilities, and various measures of updating, storage-plus-processing, and executive functioning. Because not all measures were administered for each study (and some variables were included in more studies than were others), this aggregate dataset has a great deal of missing data. However, because the pattern of missingness is due to the study to which participants were assigned, rather than observed or unobserved participant characteristics, the data can be assumed to be missing at random. This allows for powerful methods for handling the missing data, based on full information maximum likelihood (FIML) estimation (e.g. Salthouse, 2004).
Participants were community-dwelling adults spanning the continuous range of 17 to 97 years (mean 49.5, standard deviation 17.6). Sixty percent were female, and they had 15.4 years of education on average (standard deviation 2.6).
Table 1 contains descriptions of the sixteen reference cognitive measures used to establish the reference cognitive ability space. These measures are representative of the following well-established cognitive abilities (Carroll, 1993; Salthouse, 2004): Fluid Reasoning, Spatial Visualization, Processing Speed, Episodic Memory, and Verbal Knowledge.
Table 2 contains descriptions of the target variables. These consist of (1) alternative indicators of the cognitive abilities represented by the reference space, which are used to demonstrate the utility of the analytical procedure; and (2) three classes of cognitive control tasks: a) cognitive tasks requiring continuous updating of information, (b) cognitive tasks requiring simultaneous storage-plus-processing and (c) neuropsychological tasks hypothesized to measure executive functioning. In reading the descriptions of each of the cognitive control tasks, it is apparent that either verbal/numeric or figural/spatial materials are employed.
The analytical procedures depicted in the left portion of Figure 1 were employed with Mplus software using FIML estimation. For each target variable, a new model was estimated in which that variable was regressed onto the five reference abilities. That is, each model had one of the variables listed in Table 2 regressed onto the five reference abilities indicated by the sixteen variables listed in Table 1.
A reference radex solution was produced from the intercorrelation matrix of the 16 cognitive variables described in Table 1, using the SAS PROC MDS (multidimensional scaling) procedure, with two dimensions specified in order to remain consistent with past research. In order to use all available information, the correlation matrix was derived with FIML estimation using Mplus software. To capitalize on the large sample sizes and correspondingly precise parameter estimates, metric MDS was chosen (although nonmetric MDS produced a nearly identical solution). Next, an algorithm was applied to individually map each target variable onto this reference space. Because simply including each target variable in the original MDS analysis could potentially distort the reference space, this method instead holds the reference space fixed while the optimal location for the target variable is determined.
The fixed space can be represented as a 16 × 2 matrix of Cartesian coordinates, [Xv, Yv], with each row corresponding to the location of a reference variable. The to-be-determined location of the target variable can similarly be represented by the coordinates (x, y). The Pythagorean theorem can then be used to determine the 16-element vector, Dv, of distances between (x, y) and [Xv, Yv]:

Dv = √[(x − Xv)² + (y − Yv)²], for v = 1, …, 16.
The coordinates (x, y) are then determined by maximizing the correspondence between the reference variable-target variable distances, Dv, and the reference variable-target variable correlations, Rv, also a 16-element vector. This correspondence is termed the Distance Correlation, and is given by

Distance Correlation = −1 × r(Dv, Rv),

where r denotes the Pearson correlation coefficient.
Note that the multiplier −1 is included so that higher correlations correspond to shorter distances. For all of the target variables examined for the current project, the absolute magnitude of the maximized Distance Correlation was greater than .85, indicating good fit.
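This projection can be sketched as a small optimization problem: search for the (x, y) that maximizes the Distance Correlation over the fixed reference coordinates. In the sketch below, the reference coordinates, the true target location, and the rule generating the target's correlations are all made-up stand-ins; the actual analyses used the 16-variable radex solution and FIML-derived correlations.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in reference space and target correlations (illustrative values only)
rng = np.random.default_rng(0)
ref_coords = rng.uniform(-1.0, 1.0, size=(16, 2))   # 16 x 2 matrix [Xv, Yv]
true_xy = np.array([0.2, -0.3])                     # pretend true target location
d_true = np.sqrt(((ref_coords - true_xy) ** 2).sum(axis=1))
target_r = 0.9 - 0.3 * d_true                       # correlations fall off with distance

def distance_correlation(xy):
    """-1 times the Pearson correlation between the target's distances to the
    reference variables (Dv, via the Pythagorean theorem) and its correlations
    with them (Rv); high values mean high correlations pair with short distances."""
    d = np.sqrt(((ref_coords - xy) ** 2).sum(axis=1))
    return -np.corrcoef(d, target_r)[0, 1]

# Maximize the Distance Correlation by minimizing its negative
res = minimize(lambda xy: -distance_correlation(xy),
               x0=np.zeros(2), method="Nelder-Mead")
x_opt, y_opt = res.x
```

Because the reference configuration never enters the optimization as a free quantity, the reference space is untouched no matter how many target variables are projected onto it.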
The orthogonal factor solution for the 16 reference cognitive variables is presented in Table 3. It can be seen that all reference variables load significantly on both the G factor and the ability that they were designed to measure. The parallel radex solution for the 16 reference cognitive variables is depicted in the top portion of Figure 2, with lines drawn to connect variables representing the same ability. Approximately concentric circles are drawn to indicate the progression of G-loading from the center to the periphery of the radex. Axes are drawn in the regions corresponding to Spatial Visualization, Verbal Knowledge, Episodic Memory, and Processing Speed abilities.
Before proceeding, we examined whether solutions differed for adults of different ages. Age-partialled factor analytic and radex solutions were therefore produced for the following three age groups: 18–39 years (N=2,812), 40–65 years (N=4,055), and 66–97 years (N=1,946). The factor solutions for the three age groups are given parenthetically in Table 3, and the radex solutions for the three age groups are depicted in the bottom panel of Figure 2. It can be seen that both the factor and the radex solutions were quite consistent across these age groups. This consistency was numerically indexed using two statistics. The congruence coefficient (see Jensen, 1998) is an index of the correspondence between two factor solutions. The bidimensional correlation is an index of the correspondence between two two-dimensional (in this case, radex) maps. Both statistics are on the same scale as the Pearson correlation, with absolute magnitudes ranging from 0 (no correspondence) to 1 (perfect correspondence). The magnitudes of all coefficients were above .96, indicating high correspondence among solutions (cf. Tucker-Drob & Salthouse, 2008). The consistency of these solutions across age groups justifies the employment of a single solution based on data aggregated across all ages.
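The congruence coefficient is simple to compute from two vectors of factor loadings. A minimal sketch follows, with made-up loading vectors for two age groups (the study's actual loadings are those reported in Table 3):

```python
import numpy as np

def congruence(a, b):
    """Tucker's congruence coefficient: the cosine of the angle between
    two factor-loading vectors (1.0 indicates identical patterns)."""
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

# Made-up loadings for the same four variables in two age groups
young = np.array([0.71, 0.65, 0.58, 0.62])
old = np.array([0.69, 0.66, 0.55, 0.64])
phi = congruence(young, old)
```

Note that, unlike the Pearson correlation, the congruence coefficient is computed on the raw loadings without centering, so it rewards agreement in the overall level of the loadings as well as in their pattern.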
Table 4 reports the standardized relations between the reference abilities and the target variables, and Figure 3 displays the target variables plotted within the reference space. The factor analytic approach shows that, apart from their associations with G, the alternative indicators are most strongly related to the abilities that they were designed to measure. Similarly, the left portion of Figure 3 illustrates that the radex approach places the alternative indicators in the topographical regions corresponding to those abilities. Because these results are consistent with the patterns expected based on the known properties of the target variables, we can conclude that the analytical approaches perform well.
Now we examine the results for target variables representative of cognitive control. It can be seen from Table 4 that the cognitive control variables (Storage-plus-Processing, Updating, and Executive Functioning variables) are all characterized by strong relationships to G, and very weak relations to all other abilities. Similarly, it can be seen from the right portion of Figure 3 that all of the cognitive control variables occupy the central region of the radex space. Neither approach produces clear evidence that the cognitive control variables are differentially related to the reference abilities according to whether they employ verbal/numeric or figural/spatial material, or whether they correspond to storage-plus-processing, updating, or executive functioning.
We agree with Snow et al.’s (1984, p. 48) argument that theories of ability organization are “at least partly determined by the techniques used to analyze the interrelationships.” That is, just as it is important to test hypotheses in a variety of populations, using various operationalizations (e.g. multiple indicators), it is also important to test hypotheses using alternative analytical methods. Turkheimer, Ford, and Oltmanns (2008) have similarly argued that any structural taxonomy is “meaningful but arbitrary” at best, and is only useful insofar as it communicates and emphasizes the important and salient features of the multivariate data. By applying two parallel methodologies, we were able to identify such features.
Using factor analysis and multidimensional scaling, we examined the relations between newer “target” variables and well-established cognitive abilities. Our target variables included both cognitive and neuropsychological variables thought to be representative of various aspects of cognitive control. Both methods indicated that the cognitive control variables were characterized by pronounced relations to individual differences that are general to many cognitive ability domains. These findings call into question views that cognitive control encompasses a family of independent domain-specific processes (e.g. Park et al., 2002; Shah & Miyake, 1996).
The multidimensional scaling method presented here is promising for many sorts of construct validation applications, but it is not without limitations. One notable issue is that the imposition of a two-dimensional structure prohibits target variables exhibiting certain patterns of relations from being represented as single points. A verbal fluency task, for example, might represent a mixture of processing speed and verbal knowledge. Without allowing for a third dimension, no single point could adequately account for such a relational pattern. This would be indicated by a poor fit of that variable’s point estimate.
Examination of the fit function for potentially problematic variables may be particularly valuable. Figure 4 depicts the fit function (i.e. the distance correlation plotted according to possible x and y coordinates) for the Reading Span variable. It can be seen that the fit for this variable is optimized at the central region of the radex, where a well-defined peak is present. Flatness or bimodality of this function would have undermined the interpretability of a single-point representation of the target variable.
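A fit surface of this kind can be approximated by evaluating the distance correlation over a grid of candidate coordinates. The sketch below uses made-up reference coordinates and correlations constructed so that a single sharp peak sits at the center, mimicking a centrally located variable such as Reading Span; none of the values are from the study itself.

```python
import numpy as np

# Made-up fixed reference space and a target generated to sit at the center
rng = np.random.default_rng(1)
ref_coords = rng.uniform(-1.0, 1.0, size=(16, 2))
center = np.array([0.0, 0.0])
d_center = np.sqrt(((ref_coords - center) ** 2).sum(axis=1))
target_r = 0.9 - 0.3 * d_center     # correlations fall off with distance, by construction

# Evaluate the distance correlation at each candidate (x, y) location
xs = np.linspace(-1.0, 1.0, 41)
ys = np.linspace(-1.0, 1.0, 41)
fit = np.empty((xs.size, ys.size))
for i, x in enumerate(xs):
    for j, y in enumerate(ys):
        d = np.sqrt(((ref_coords - np.array([x, y])) ** 2).sum(axis=1))
        fit[i, j] = -np.corrcoef(d, target_r)[0, 1]

# A single well-defined maximum supports a point representation;
# a flat or bimodal surface would argue against one
peak = np.unravel_index(np.argmax(fit), fit.shape)
```

Plotting `fit` as a surface (or contour map) over the grid reproduces the kind of diagnostic display shown in Figure 4.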
A second limitation of the radex approach is that the circumplex and the simplex are not independent. That is, the radex presumes that higher G loadings are associated with less domain specificity. While this is the pattern that has emerged based on multidimensional scaling of existing psychometric ability data, it is possible that new variables could be both highly domain specific and highly G loaded. Fixing the radex space, as was done here, could serve to obscure such a finding.
Finally, while the visual nature of MDS can be considered one of its strengths, the approach is primarily qualitative. The capabilities for specific hypothesis tests and nested model comparisons that are commonplace in factor analytic research are not as well developed for MDS.
In summary, parallel factor analytic and MDS solutions were produced for 16 cognitive variables representative of five well-established ability domains. Both structures were found to be consistent across young, middle, and old adult age groups. This consistency allowed us to aggregate the data across ages, and examine the meaning of cognitive control variables in the context of these well-established, age-invariant, solutions. Both methods demonstrated that diverse measures of cognitive control are primarily related to general cognitive ability, regardless of the material used, or the psychological tradition from which they were derived.
This research was supported by National Institute on Aging Grants R37AG02427042 and R01AG19627 to Timothy A. Salthouse. Elliot M. Tucker-Drob was supported as a trainee by National Institute on Aging Grant T32AG020500. We appreciate valuable suggestions made by Eric Turkheimer and John R. Nesselroade.