Psychol Aging. Author manuscript; available in PMC 2010 July 16.
PMCID: PMC2904910; NIHMSID: NIHMS219472

Everyday Cognition: Age and Intellectual Ability Correlates

Abstract

The primary aim of this study was to examine the relationship between a new battery of everyday cognition measures, which assessed 4 cognitive abilities within 3 familiar real-world domains, and traditional psychometric tests of the same basic cognitive abilities. Several theoreticians have argued that everyday cognition measures are somewhat distinct from traditional cognitive assessment approaches, and the authors investigated this assertion correlationally in the present study. The sample consisted of 174 community-dwelling older adults from the Detroit metropolitan area, who had an average age of 73 years. Major results of the study showed that (a) each everyday cognitive test was strongly correlated with the basic cognitive abilities; (b) several basic abilities, as well as measures of domain-specific knowledge, predicted everyday cognitive performance; and (c) everyday and basic measures were similarly related to age. The results suggest that everyday cognition is not unrelated to traditional measures, nor is it less sensitive to age-related differences.

The present study was conducted to address two questions regarding older adults' cognitive performance with problems drawn from everyday life. First, the validity of several newly developed cognitive assessments using “everyday” stimuli was assessed. Specifically, in this study we examined the relationships between traditional psychometric measures and a new battery of everyday intellectual tasks. Second, within an ethnically heterogeneous sample of older adults ranging from 60 to over 90 years of age, we investigated whether age differences found with everyday cognition measures were similar to those found with psychometric ability tests or whether the greater familiarity and relevance of tasks using real-world stimuli might attenuate cross-sectional age differences as some theorists have proposed.

For the purposes of this study, everyday cognition was conceptualized as the performance of individuals on problems using natural stimuli (e.g., real food package labels or official documents), and the problems were constructed to be similar to tasks older individuals might actually be called on to perform in their daily lives (e.g., identifying nutrition information or comparing the value of different financial products). This study is embedded in a larger body of research concerned with understanding the cognitive performance of individuals in the context of their daily lives. Specifically, proponents of the ecological approach have maintained that in the real world individuals can draw on domain-relevant experiences and naturalistic motivations to enhance their cognitive performance (e.g., Ceci & Bronfenbrenner, 1991; Neisser, 1978, 1991; Weisz, 1978) and that relatively “acontextual” laboratory-based assessments of cognition may produce an underestimation of true performance competencies (Bronfenbrenner, 1979; Conway, 1991; Wagner, 1986).

Historically, it has also been argued that traditional or academic measures of cognition and intelligence are biased toward optimizing the performance of younger adults who are immersed in contexts, such as school, which enhance “context-free” cognition (e.g., Demming & Pressey, 1957; McClelland, 1973). Taken together, these questions can be summarized as concerns about the external or ecological validity of traditional cognitive assessments (Schaie, 1978).

With regard to cognitive assessments with older adults, many researchers have proposed that real-world measures may become particularly important. Specifically, if traditional psychometric tests are seen as emphasizing academic skills, they may show little about the functioning and competence of individuals who have been removed from school environments for many decades (e.g., Demming & Pressey, 1957; Denney, 1989; Schaie, 1978; Sinnott, 1989; Sternberg, 1997; Willis & Schaie, 1986). From this perspective, which views cognition and intelligence as serving adaptive functions to promote the survival of individuals (e.g., Berg & Klaczynski, 1996; Blanchard-Fields & Chen, 1996; Dixon & Baltes, 1986; Sternberg, 1997), it becomes important to more clearly identify what cognitive challenges are confronted by individuals in the final decades of life and to assess competence in these tasks (Berg, 1989; Cornelius, 1990).

Operationally, investigators examining everyday cognition have tended to vary task definitions along two continua, ability-specificity and domain-specificity, but have seldom articulated where measures have fallen on these continua or how such measures have related to those of other investigators. The Appendix provides our mapping of how selected studies might fall along these continua. In this table, four major classes of measures are considered. Measures assessing many cognitive abilities represent those instruments that do not identify specific mental abilities needed for their successful performance. These instruments are aimed at creating a general problem-solving score. In contrast, measures assessing specific abilities represent those instruments in which researchers have identified specific abilities as necessary for their effective performance. Here, the term ability is used loosely (see Sternberg, 1986) to include those measures that are hypothesized to tap specific domains of knowledge, including wisdom and tacit (or professional) knowledge; thus, the table refers to everyday applications of basic or primary mental abilities. In the second part of the table, measures are subdivided by those representing many domains of everyday functioning (i.e., measures in which specific domains were not identified or in which multiple domains were presented) versus those representing more delimited categories of everyday tasks.

The Appendix clearly shows that substantial heterogeneity has characterized the measurement approaches used in this literature. Consequently, it has been difficult to derive a general set of propositions regarding the relationship between traditional and everyday cognitive measures or regarding the adult developmental trajectory of everyday cognition. Empirically, it seems that different measures of everyday problem solving do not relate well to one another (Marsiske & Willis, 1995).

Turning to the first major question of this study, the relationship between basic and everyday cognitive functioning in later life, we note that the literature has offered many different findings. As a consequence of the multiple definitions and approaches to everyday cognitive measurement, the relationship between basic cognitive abilities, as measured by traditional psychometric tests, and everyday cognitive abilities varies substantially from study to study. Theoretically, those authors who have emphasized the experiential, knowledge-based nature of everyday cognitive tasks have drawn from the expertise literature (Charness, 1985; Ericsson & Charness, 1994; Ericsson, Krampe, & Tesch-Römer, 1993; Salthouse, 1991). Using this literature as a basis, experiential theorists have predicted that as individuals age, they develop a rich network of declarative and procedural domain-specific knowledge in areas in which they frequently participate. This specialized knowledge, in turn, decreases the reliance on other basic mental abilities (e.g., inductive reasoning, working memory, speed) for everyday task performance within those domains (P. B. Baltes, 1997; Berg & Sternberg, 1985; Denney, 1989; Rybash & Hoyer, 1994; Salthouse, 1991; Wagner & Sternberg, 1985, 1988).

In contrast, theorists who have viewed everyday cognition as a kind of “compiled” cognition (Salthouse, 1990), which is superordinate to and composed of a set of underlying basic abilities (Berry & Irvine, 1986; Marsiske & Willis, 1998; Willis & Marsiske, 1991; Willis & Schaie, 1986, 1993), have proposed that an amalgam of basic abilities may be responsible for cognitive performance within everyday contexts. These theorists view everyday cognitive tasks as encompassing a broad array of novel challenges (e.g., unexpected court summons) and familiar challenges (e.g., home maintenance) and argue that all abilities may be drawn upon in the solution of a task, particularly when it is novel (Marsiske & Willis, 1998). Weaknesses in basic abilities would predict weaknesses in resulting everyday task performance. As an example, individuals with losses in memory functioning would be expected to have losses in real-world tasks that rely on memory, if other abilities could not compensate.

In general, in recent studies researchers have found a strong link between a variety of basic abilities and everyday cognition. For instance, West, Crook, and Barron (1992) reported that verbal ability, a measure of crystallized ability (Cattell, 1971), was the best predictor of performance on certain tests of everyday memory (i.e., memory for telephone number and news). With regard to the basic abilities associated with the broad fluid domain (Cattell, 1971), Kirasic, Allen, Dobson, and Binder (1996) reported that working memory was the strongest predictor of individual differences on a set of measures assessing declarative learning for information from everyday stimuli (i.e., a bus schedule, a map, and a menu). Similarly, Hartley (1989) reported that a composite representing many facets of memory, including working memory, emerged as the strongest predictor of performance on an insurance policy decision-making problem and a personal advice task. Many studies have also found that performance on measures of everyday cognition is significantly related to tests representing abilities from both the crystallized and fluid domains, with fluid tests explaining larger proportions of variance in most studies (Camp, Doherty, Moody-Thomas, & Denney, 1989; Cornelius & Caspi, 1987; Diehl, Willis, & Schaie, 1995; Margrett, 1999; Staudinger, Lopez, & Baltes, 1997; Willis, Jay, Diehl, & Marsiske, 1992; Willis & Marsiske, 1991; Willis & Schaie, 1986). To date, no researchers in the aging literature have tried to derive a mapping of everyday tasks to specific abilities. Consequently, one important but unaddressed question is whether some subtasks of daily life are particularly dependent on specific subtypes of cognition.

Regarding the second major question in this study, which focuses on the cross-sectional trajectory of everyday cognitive performance in later life, we have also found various answers in the research literature. Given the well-documented preservation of knowledge-based functioning into late adulthood (Cattell, 1971; Horn & Hofer, 1992; Lindenberger & Baltes, 1997; Schaie, 1996), those authors who have emphasized the experiential, knowledge-based nature of everyday cognitive tasks have predicted that performance on these tasks should be less vulnerable to the negative effects of aging. That is, domain-specific knowledge may circumvent the age-related changes in information-processing abilities (Cattell, 1971; Horn & Hofer, 1992; Salthouse, 1991; Schaie, 1996), producing a maintenance of everyday cognitive performance (Salthouse, 1990, 1991). In contrast, other researchers have predicted that age differences in everyday cognition would resemble those found more generally for traditional measures; in other words, the decline found for general intellectual functioning in the later decades of life (Schaie, 1996) would also characterize real-world problem solving. The data, again, yield contradictory patterns. On the one hand, some studies have found no cross-sectional age differences, and even positive developmental trajectories, on certain measures of everyday cognitive performance (P. B. Baltes & Smith, 1990; Colonia-Willner, 1998; Cornelius & Caspi, 1987; Demming & Pressey, 1957; Gardner & Monge, 1977; Marsiske & Willis, 1995). On the other hand, many researchers have documented substantial cross-sectional decline in everyday cognitive ability. In a summary of her research, Denney (1989) stated that across measures, a negative pattern of age-differences emerged in everyday cognitive performance after middle age. Similarly, Willis and colleagues have reported significant late-life cross-sectional age differences (Willis & Schaie, 1986; Diehl et al., 1995; Marsiske & Willis, 1995) and longitudinal mean decline (Willis et al., 1992; Willis & Marsiske, 1991) in everyday cognition. Hartley (1989) showed that the age differences found for everyday cognition varied from measure to measure within a single sample, with performance on two tasks being negatively related to age and a third task being unrelated to age.

The present study attempted to address some of the inconsistencies in this literature and to extend previous investigations by including a multiple-measurement framework (see also Marsiske & Willis, 1995). More specifically, the current research examined the predictive salience of multiple basic ability factors (i.e., Inductive Reasoning, Knowledge, Declarative Memory, and Working Memory) in the performance of important instrumental everyday tasks (i.e., medication use, financial planning, and food preparation/nutrition). The assessment of everyday cognition within such highly familiar, contextually relevant domains also allows for examination of the relationships between everyday task performance and domain-specific knowledge. Because these instrumental tasks have been described as universal, basic, and mandatory (see M. M. Baltes, Mayr, Borchelt, Maas, & Wilms, 1993), an underlying assumption is that most older adults have substantial experience in, and acquired knowledge about, these domains. Our everyday cognitive tasks were drawn from the larger set of instrumental activities of daily living (IADLs; Lawton & Brody, 1969), a set of tasks in which older adults frequently engage in their daily lives (M. M. Baltes, Wahl, & Schmid-Furstoss, 1990; Horgas, Wilms, & Baltes, 1998; Rogers, Meyer, Walker, & Fisk, 1998). These are task domains in which older adults are expected to perform well in order to maintain independent functioning (Baird, Brines, & Stoor, 1992; Diehl, Willis, & Schaie, 1990; Wolinsky, Callahan, Fitzgerald, & Johnson, 1992) and which are highly predictive of institutionalization (e.g., Branch & Jette, 1982; Wolinsky, Callahan, Fitzgerald, & Johnson, 1992) and mortality (e.g., Bernard et al., 1997; Fillenbaum, 1985). Furthermore, through the assessment of cognitive abilities, both with traditional psychometric ability tests and everyday measures, the late-life cross-sectional gradients of both traditional and everyday tasks can be simultaneously examined.

Method

Sample

Participants for this study included 174 (men = 38, women = 136) community-dwelling older adults from the Detroit metropolitan area who were recruited from local senior organizations, including churches and activity centers. Participants ranged from 60 to 92 years of age (SD = 7.38 years, M = 73 years), averaged 13 years of education (SD = 3.06 years, range = 6–23 years), and had a self-reported average household income of $18,246 (range = $2,000–$50,000+). Participants rated their health and sensory functioning, compared with other same-aged individuals, on a 6-point Likert-type scale ranging from 1 (very good) to 6 (very poor). Average ratings on these items indicated that the participants believed their functioning in these domains to be between good and moderately good. Specifically, the mean general physical health rating was 2.30 (SD = 1.01). The mean vision and hearing ratings were 2.60 (SD = .91) and 2.58 (SD = 1.11), respectively.

To obtain a heterogeneous sample for this study, we inclusively sampled ethnic group members in rough accordance with their distributions in the Detroit metropolitan area. In particular, a high proportion of African Americans was included. Consequently, the full sample was 31% (n = 54) African American, 67% (n = 116) White, 1% (n = 1) American Indian/Alaskan Native, 1% (n = 1) Asian/Pacific Islander, and 1% (n = 2) “other.” According to the 1990 Census for the Detroit and Ann Arbor metropolitan area, for persons aged 60 years and older, 81% of residents were White, 18% were African American, and the remaining 1% included members of all other racial–ethnic groups (U.S. Census Bureau, 1999). Thus, relative to the metropolitan area from which the sample was drawn, the current study oversampled older African Americans, who have been underrepresented in most prior cognitive aging research. Participants received a small honorarium ($15) for their participation.

Measures

The test battery used in this study included traditional psychometric measures of Inductive Reasoning, Knowledge, Declarative Memory, and Working Memory, as well as everyday analogues of the same abilities. These psychometric abilities were chosen for several reasons. First, each of the abilities has been studied in several major investigations of psychometric intellectual aging, including the Seattle Longitudinal Study (e.g., Schaie, 1996), the Adult Development and Enrichment Project (e.g., P. B. Baltes & Willis, 1982), and the Berlin Aging Study (e.g., P. B. Baltes & Mayer, 1999). Second, previous investigations of everyday cognition have particularly emphasized measures of Inductive Reasoning, Knowledge, and Memory as predictors of everyday cognition (Hartley, 1989; Kirasic et al., 1996; Staudinger et al., 1997; Willis et al., 1992; Willis & Marsiske, 1991; Willis & Schaie, 1986). Third, drawing on the broader set of traditional primary mental abilities, the selected abilities lent themselves particularly well to adaptation with real-world stimuli.

The everyday analogues of these ability tests were measured by the Everyday Cognition Battery (ECB), which was designed for this study. The ECB comprises four tests, each designed to assess a single cognitive ability. The stimuli within each test were drawn from three domains of daily functioning (i.e., medication use, financial planning, and food preparation and nutrition) that represent the cognitively advanced activities of daily living (ADLs) or advanced IADLs (Wolinsky & Johnson, 1991), a subset of the IADLs (Lawton & Brody, 1969). Real-world printed material from each domain (e.g., medication labels, credit card or bank statement, and food labels) was used in the construction of test items. The remainder of this section describes the psychometric ability measures and the everyday cognition measures used in this study.

Traditional Psychometric Measures

Letter Sets Test

This test was selected to assess Inductive Reasoning, which is the ability to educe novel relationships in overlearned material (numbers, letters). The Letter Sets Test (Ekstrom, French, Harman, & Derman, 1976) consists of 15 items in which five sets of letters are presented. The participant had to identify which of the five sets differed from the other four. The number of correct responses served as the score.

Number Series Test

This test was also selected as a measure of Inductive Reasoning. A typical item from the Number Series Test (Thurstone, 1962) presents a series of numbers (e.g., 2, 4, 6, 8, _____?), and the participant must choose from five options the next number in the series (e.g., 10). The number of correct responses was summed into a total score.

Verbal Meaning Test

This test was selected as an index of participants' domain-general knowledge and level of acculturation. The Verbal Meaning Test (Thurstone, 1962) requires participants to select the synonym for a given word from a list of five choices. Each item was scored as either correct (1) or incorrect (0). For subsequent latent variable analyses, two Verbal Meaning indicators were created by summing the odd items (Verbal Meaning 1) and summing the even items (Verbal Meaning 2).

Hopkins Verbal Learning Test (HVLT)

This measure was selected to assess Declarative Memory, which is broadly defined as the ability to remember specific episodic facts or propositions. The HVLT (Brandt, 1991) comprises three trials of auditorily presented lists (2-s presentation rate) of 12 words falling in three semantic categories. The same list of 12 words was repeated in each of the three trials (i.e., for a total potential recall of 36 words). After each trial the participants wrote down as many of the words as they could remember in 1 min. The number of words correctly remembered across the three trials was summed to provide a total HVLT score, and the number correct for each trial provided three trial scores. These three trial scores were used as indicators of our basic Declarative Memory factor in our structural models.

Computation span task

Salthouse and Meinz's (1995) computation span task was selected to assess basic Working Memory, which is defined as the ability to process information while simultaneously retaining the same information. The computation span task involved four blocks of two trials, with different numbers of items in each block. Trials involved the auditory presentation of two to five addition or subtraction problems (e.g., 7 + 6), each followed by three alternative answers printed in the test booklet. The participant was instructed to place a check mark next to the correct answer for each arithmetic problem while also remembering the second number from the addition problem (e.g., 6). After selecting the answer to all of the arithmetic problems, participants were instructed to turn the page and recall all the target (second) digits by writing them in the spaces provided. The number of arithmetic problems increased from two to five as participants progressed from Block 1 to Block 4. Trials were scored as correct (1 = all target digits recalled in correct order) or incorrect (0). Two composite-span variables were created. Span 1 was the summed span score for the first trial of all four blocks; Span 2 was the summed span score for the second trial from each block.
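
To make the scoring rule concrete, the following minimal sketch (in Python, with a hypothetical data layout not taken from the study materials) rolls trial-level pass/fail scores up into the two composite span variables: a trial counts only if every target digit is recalled in its correct position.

```python
# Minimal sketch of the span scoring rule described above (hypothetical data
# layout): a trial scores 1 only if all target digits are recalled in order.

def score_trial(recalled, targets):
    """Return 1 if the recalled digits exactly match the targets, in order."""
    return int(recalled == targets)

def composite_spans(blocks):
    """blocks: four blocks, each a list of two (recalled, targets) trials.
    Returns (Span 1, Span 2): sums over the first and second trials."""
    span1 = sum(score_trial(*block[0]) for block in blocks)
    span2 = sum(score_trial(*block[1]) for block in blocks)
    return span1, span2

# Example with four blocks of increasing length (2 to 5 problems per trial).
blocks = [
    [([6, 3], [6, 3]), ([2, 9], [2, 8])],                          # Block 1: trial 2 fails
    [([4, 1, 7], [4, 1, 7]), ([5, 5, 2], [5, 5, 2])],              # Block 2
    [([3, 8, 1, 6], [3, 8, 6, 1]), ([7, 2, 4, 9], [7, 2, 4, 9])],  # Block 3: trial 1 fails
    [([1, 2, 3, 4, 5], [1, 2, 3, 4, 5]), ([9, 8, 7, 6, 5], [9, 8, 7, 6, 5])],
]
print(composite_spans(blocks))  # -> (3, 3)
```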

Everyday Cognitive Measures

ECB Inductive Reasoning Test (everyday matrices)

The 42-item ECB Inductive Reasoning Test was created to assess participants' ability to identify information patterns in everyday printed materials and to use that information to answer questions within three functional domains. Since traditional measures of Inductive Reasoning require participants to scan a series or set of information (typically words, letters, or numbers) to identify a pattern, the analogous ECB Test consisted of information presented in a naturally occurring matrix format (e.g., Medicare benefits, medication interaction table, and checking account comparison chart).1 Participants had to identify the pattern or structure of this information to answer a question (see Figure 1A for an example). This approach was similar to that used by Marsiske and Willis (1995) and Lindenberger, Mayr, and Kliegl (1993). For each of the three functional domains, two stimuli were presented, and each stimulus was then used to answer seven questions (3 domains × 2 stimuli × 7 questions). Each of these questions was devised so that only one response was considered completely correct. Items received a score of 2 if correctly answered or a score of 0 if incorrectly answered. On items that required a two-part response, a partial credit score of 1 was awarded when participants provided only one part.2 Scores were then summed across the test to provide an overall ECB Inductive Reasoning score. In addition, for our latent variable modeling, we also divided the items into three domain subscales (medication use, financial management, and food preparation and nutrition).
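
As an illustration of the 2/1/0 scoring and the domain subscale sums described above, here is a brief Python sketch; the item representation and domain labels are hypothetical, not the study's actual scoring code.

```python
# Illustrative scoring for the ECB Inductive Reasoning Test: 2 points for a
# fully correct answer, 1 point of partial credit on two-part items when only
# one part is supplied, 0 otherwise; points are summed overall and by domain.
from collections import defaultdict

def score_item(parts_correct, parts_required):
    if parts_correct == parts_required:
        return 2
    if parts_required == 2 and parts_correct == 1:
        return 1  # partial credit on two-part items
    return 0

def score_test(responses):
    """responses: iterable of (domain, parts_correct, parts_required)."""
    by_domain = defaultdict(int)
    for domain, correct, required in responses:
        by_domain[domain] += score_item(correct, required)
    return sum(by_domain.values()), dict(by_domain)

total, subscales = score_test([
    ("medication", 1, 1), ("medication", 1, 2),
    ("finance", 2, 2), ("nutrition", 0, 1),
])
print(total, subscales)  # -> 5 {'medication': 3, 'finance': 2, 'nutrition': 0}
```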

Figure 1
A: Sample item from the Everyday Cognition Battery (ECB) Inductive Reasoning Test. B: Sample item from the ECB Knowledge Test. C: Sample item from the ECB Declarative Memory Test. D: Sample item from the ECB Working Memory Test.

ECB Knowledge Test (domain-specific knowledge)

To capture domain-relevant knowledge (in medication use, financial management, and food preparation and nutrition), we constructed the 30-item ECB Knowledge Test as 30 multiple-choice questions (10 items per domain), asking participants for factual knowledge within each domain (e.g., “The expiration date or `use by' date on a product means … ?”). A multiple-choice format was selected for several reasons. First, it is the same format as the traditional psychometric test of Knowledge included in this study, as well as the format used by most psychometric measures. Second, the multiple-choice tests are easily and quickly administered. Third, this format allows for the assessment of Declarative Knowledge in cases when there is only one correct answer, also enhancing the reliability of scoring. The items assessing knowledge within the food preparation and nutrition domain were taken from a nutrition test for people with diabetes (Miller, 1997). Only the items with content not specific to diabetes were retained. The medication use and financial planning items were selected from a large pool of sample items generated in consultation with individuals holding doctorates in nursing and economics, respectively. As with the ECB Reasoning Test, a pilot sample was used to select the final set of items. Each item received either a score of 1 if correctly answered or a score of 0 if incorrectly answered. Scores were then summed across the test to provide an overall ECB Knowledge Test score and within each functional domain (medication use, financial management, and food preparation and nutrition), providing three indicators for subsequent modeling. A sample knowledge item is presented in Figure 1B.

ECB Declarative Memory Test (everyday text recognition)

The 30-item ECB Declarative Memory Test included two real-world stimuli (e.g., medication labels, checking account statement) for each of our three functional domains (medication use, financial management, and food preparation and nutrition). Each of the six stimuli was associated with five items (3 domains × 2 stimuli × 5 questions). Participants were instructed to study a stimulus for 60 s and were then given an additional 60 s to turn the page and answer the five multiple-choice (i.e., recognition) questions about the information in the stimulus. Two sample items are shown in Figure 1C. Each item received a score of 1 if correctly answered or a score of 0 if incorrectly answered. Scores were then summed across the test to provide an overall ECB Declarative Memory score and within each functional domain, providing three indicators.3

ECB Working Memory Test (inventory tracking span)

The structure and administration of the ECB Working Memory Test was similar to that of the computation span task described above. However, this measure differed from its traditional version because it included a dimension of familiarity or everyday importance in two ways. First, the arithmetic tasks were presented within word problems pertaining to the three functional domains (medication use, financial management, and food preparation and nutrition) in the form, “You have 5 eggs and you buy 3 more. How many eggs are left?” or “You have 5 dollars and you spend 2. How much money do you have now?” Second, during each span's recall portion, the real-world cues were again presented. For the examples above, the cues “eggs left” and “dollars now” would be provided. (Figure 1D presents a two-item span example.) Participants were required to keep track of inventory changes in the domains under study. Consequently, participants had to perform arithmetic and recall problems grounded in real-world examples. The ECB Working Memory Test was administered and scored in the same fashion as the computation span task. The two composite ECB span indicators were created by summing the first trial for each of four blocks (ECB Span1) and the second trial for each of four blocks (ECB Span2).

Demographic questionnaire

Each participant completed a personal data questionnaire. The variables of particular interest included in this questionnaire were age, gender, ethnicity, income, educational level, and self-evaluations of health, hearing, and vision.

Procedure

The measures in this study were administered as part of a larger study. Two testing sessions took place in the facility (i.e., senior center, church) from which the participants were recruited. During the first 1-hr session, participants completed a consent form and also received the demographic questionnaire. Participants returned to the facility within 7 days to complete the second session, which lasted approximately 3.5 hr. During this second session, testing was done in groups ranging in size from 3 to 12 participants. To control for systematic effects associated with practice or fatigue, approximately half the groups received the basic ability measures first, followed by the ECB measures, and the remaining groups received tests in the opposite order.

Each test was given under timed conditions, either with a tester-controlled timer or with an auditory tape with prerecorded time limits. To reduce the effects of individual differences in speed of responding, 24 and 10 min were given to complete the ECB Inductive Reasoning Test and ECB Knowledge Test, respectively. However, participants who did not complete these measures within the time allowed were given the option of taking as much time as they needed to finish during the scheduled breaks or individually after the group testing session was over. For participants requesting it, additional time to complete the ECB Inductive Reasoning Test ranged from 5 to 40 min. For the ECB Knowledge Test, additional time to complete the test ranged from 5 to 10 min. Of the 174 participants, 69% and 91% completed all items on the ECB Inductive Reasoning and ECB Knowledge Tests, respectively; items that were skipped and not attempted were scored as incorrect, as was also the case for the traditional psychometric ability measures in this study.

Results

The results of the current study are presented in three sections. First, the psychometric characteristics of the ECB are summarized, with emphasis on the reliability and range of difficulty of each test, as well as the factor structure of the battery. The remaining two sections focus on answering the two major questions that we have posed in the current study: (a) What is the relationship between traditional measures of psychometric ability and our everyday cognition tests? and (b) Do the cross-sectional age trajectories for each set of measures differ?

Psychometric Characteristics of Everyday Cognition Measures

Examination of the psychometric properties of the ECB was conducted on a subsample of participants for whom no test items were missing (n = 114).4 Estimates of internal consistency, using Cronbach's (1951) alpha coefficient, indicated that each ECB test was composed of a relatively homogeneous set of items (ECB Inductive Reasoning Test, α = .88; ECB Knowledge Test, α = .69; ECB Declarative Memory Test, α = .81; ECB Working Memory Test, α = .72). The average percentage correct for three of the four tests indicated that the tests were neither too difficult nor too easy for most participants (ECB Inductive Reasoning Test, 66%; ECB Knowledge Test, 65%; ECB Declarative Memory Test, 71%), whereas average performance on the ECB Working Memory Test was low (an average of 24% of the spans were correctly recalled in order). Note that this is slightly lower than performance on the traditional computation span task, in which 27% of the spans, on average, were recalled in correct order. Table 1 displays the means, standard deviations, and the ranges for all four ECB tests.
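
For reference, Cronbach's alpha can be computed directly from a persons-by-items score matrix; the brief Python sketch below uses simulated binary items and is illustrative only, not the analysis code used in the study.

```python
# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of totals).
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: 2-D array with rows = persons and columns = items."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)        # variance of each item
    total_var = x.sum(axis=1).var(ddof=1)    # variance of the test totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated example: 200 examinees, 30 binary items sharing a common factor.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
items = (latent + rng.normal(size=(200, 30)) > 0).astype(int)
print(round(cronbach_alpha(items), 2))
```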

Table 1
Means, Standard Deviations, and Range for the ECB Tests

The factor structure of the ECB was evaluated using confirmatory factor analysis (Jöreskog & Sörbom, 1993).5 A hypothesized model was specified that included four intercorrelated test-specific dimensions that the ECB tests were designed to assess. Because a confirmatory modeling approach was used, we focused specifically on the four-factor solution. Three of the four factors (ECB Inductive Reasoning, ECB Knowledge, and ECB Declarative Memory) were composed of three domain subscales (medication use, financial management, and food preparation and nutrition). The ECB Working Memory factor was composed of the two composite variables (ECB Span 1 and ECB Span 2). The fit of this model was adequate with a nonsignificant chi-square estimate: χ2(38, N = 114) = 53.38, p > .05, goodness of fit index (GFI) = .92, root mean square error of approximation (RMSEA) = .06, root mean square residual (RMR) = .06, normed fit index (NFI) = .89, nonnormed fit index (NNFI) = .95, relative fit index (RFI) = .85, and incremental fit index (IFI) = .97. Table 2 and Table 3 provide a summary of the standardized parameter estimates for this four-factor solution.
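
The original analyses cite Jöreskog and Sörbom's LISREL; as a rough illustration of how a four-factor confirmatory model of this form is specified, the sketch below uses lavaan-style syntax with the Python semopy package (an assumed stand-in tool, with placeholder variable names for the domain subscales and span composites).

```python
# Illustrative four-factor CFA specification (semopy assumed as tooling;
# column names are placeholders, not the study's data file).
import pandas as pd
import semopy

model_desc = """
ECB_Reasoning  =~ reas_med + reas_fin + reas_food
ECB_Knowledge  =~ know_med + know_fin + know_food
ECB_Memory     =~ mem_med + mem_fin + mem_food
ECB_WorkingMem =~ ecb_span1 + ecb_span2
"""

data = pd.read_csv("ecb_scores.csv")   # hypothetical subscale-level scores
model = semopy.Model(model_desc)       # latent factors are allowed to intercorrelate
model.fit(data)

print(model.inspect())                 # parameter estimates (loadings, covariances)
print(semopy.calc_stats(model))        # chi-square, RMSEA, GFI, CFI, and related indices
```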

Table 2
Standardized Factor Solution for the ECB: Factor Loading Matrix
Table 3
Standardized Factor Solution for the ECB: Factor Correlations

Basic and ECB Ability Relationships

In this section, we report analyses that examined the relationship between cognitive abilities as assessed by the psychometric and everyday tests of cognitive functioning at both the observed and latent levels. In this and remaining sections, we describe analyses that were performed on the full sample of participants (N = 174). Table 4 displays the correlations between the ECB tests, basic ability tests, a unit-weighted composite of the basic ability tests (g), and a unit-weighted composite of the ECB tests (ECBg). The unit-weighted composites were created as estimates representing global basic cognitive and global everyday cognitive functioning. Table 4 shows that evidence of positive manifold was found among the basic ability measures (rs = .21 to .56) and among the ECB tests (rs = .39 to .75). The correlations between the basic and ECB tests were also high (rs = .26 to .74), indicating strong positive relationships among all the cognitive measures.
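
One common way to form such unit-weighted composites is to standardize each test and average the z-scores; the sketch below (Python, with hypothetical column names) shows this construction alongside the correlation matrix it feeds. It is an assumption-laden illustration, not the study's analysis script.

```python
# Unit-weighted composites (one common construction): standardize each test,
# then average the z-scores within each battery. Column names are hypothetical.
import pandas as pd

scores = pd.read_csv("cognitive_tests.csv")   # one row per participant
basic_tests = ["letter_sets", "number_series", "verbal_meaning", "hvlt", "comp_span"]
ecb_tests = ["ecb_reasoning", "ecb_knowledge", "ecb_memory", "ecb_working_memory"]
tests = basic_tests + ecb_tests

z = (scores[tests] - scores[tests].mean()) / scores[tests].std()
scores["g"] = z[basic_tests].mean(axis=1)     # global basic cognitive composite
scores["ECBg"] = z[ecb_tests].mean(axis=1)    # global everyday cognitive composite

print(scores[tests + ["g", "ECBg"]].corr().round(2))   # zero-order correlation matrix
```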

Table 4
Zero-Order Correlations Between Traditional Basic Ability Tests and ECB Tests

In addition to the correlations between basic and ECB measures, the relationships between the three domain subscales (i.e., medication use, financial management, and food preparation/nutrition) for the ECB Inductive Reasoning, Knowledge, and Declarative Memory tests were estimated. The ECB Working Memory Test does not include domain-specific subscales and was not included in this analysis. As can be seen in Table 5, strong positive manifold was evident within and across domains. In fact, the average root mean square correlation between the subscales of the same domain was identical to the average root mean square correlation between domains (r = .55).
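
The summary statistic used here, the average root mean square correlation, is the square root of the mean squared correlation over a set of subscale pairs. The short sketch below illustrates the computation; the correlation values are invented solely to mirror the reported pattern of near-identical within- and between-domain averages.

```python
# Average root-mean-square correlation over a set of correlation coefficients.
# The correlation values below are hypothetical, for illustration only.
import numpy as np

def rms_correlation(rs):
    rs = np.asarray(rs, dtype=float)
    return float(np.sqrt(np.mean(rs ** 2)))

within_domain_rs = [0.58, 0.52, 0.55]                     # same-domain subscale pairs
between_domain_rs = [0.57, 0.53, 0.54, 0.56, 0.55, 0.54]  # cross-domain subscale pairs
print(round(rms_correlation(within_domain_rs), 2),
      round(rms_correlation(between_domain_rs), 2))       # -> 0.55 0.55
```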

Table 5
Zero-Order Correlations Between ECB Domain Subscales

Given the positive correlations between all abilities, structural equation modeling was conducted to examine the relative importance of hypothesized specific ability paths while controlling for other ability relationships. That is, in terms of convergent and divergent validity, it was expected that each basic ability factor would emerge as the best unique predictor of the everyday factor of the same ability. Analysis began by randomly splitting the full sample into calibration (n = 87) and cross-validation (n = 87) subsamples. A measurement model was first specified using the calibration subsample. This model included free covariation between the four basic ability factors and the four everyday ability factors. The four basic ability factors were composed of indicators derived from the psychometric measures intended to assess that particular ability (i.e., Letter Sets and Number Series were allowed to load on a common basic Inductive Reasoning factor), and each everyday ability factor was composed as described above. The fit of this measurement model was adequate: χ2(142, N = 87) = 189.45, p < .05 (GFI = .83, RMSEA = .06, RMR = .05, CFI = .96, NFI = .88, NNFI = .95, RFI = .83, IFI = .97).

Next, the factor structure found in the calibration subsample was confirmed using the cross-validation subsample. To do so, the parameters of the calibration and the cross-validation subsamples were estimated simultaneously in a two-group analysis in which factor loadings, factor variances, and factor covariances were all constrained to equality. This complete metric invariance model fit adequately in the two groups: χ2(332, N = 174) = 449.65, p < .05 (GFI = .80, RMSEA = .05, RMR = .11, CFI = .95, NFI = .84, NNFI = .95, RFI = .82, IFI = .95). Subsequent attempts to relax invariance constraints by allowing factor variances and covariances, and then factor loadings, to vary in both groups failed to produce an improvement in overall model fit, thus providing evidence for complete metric invariance (Horn & McArdle, 1992). Given the identity of factor patterns in the two subsamples, the calibration and cross-validation groups were combined, and the measurement model identified above was specified in the full sample.6 Similar to the fit found for the calibration subsample, the fit in the full sample was adequate: χ2(142, N = 174) = 225.17, p < .05 (GFI = .89, RMSEA = .06, RMR = .04, CFI = .97, NFI = .92, NNFI = .95, RFI = .89, IFI = .97). Table 6 shows the standardized loadings for each basic and everyday factor; Table 7 shows the correlations between factors.7

Table 6
Standardized Factor Loadings for Basic and Everyday Factors
Table 7
Basic and Everyday Factor Correlations

A series of structural equation models was then estimated to determine the pattern of relationships between basic and everyday ability factors that would most parsimoniously reproduce the pattern of covariation in the measurement model. The measurement model and all subsequent structural models are summarized in Table 8, which shows a sequence of 14 modeling steps. The transition from step to step occurred when a single parameter estimate was added—on the basis of a modification fit index suggesting its potential significance—or dropped because of a lack of significance.

Table 8
Nested Comparison of Model Fit for Ability–ECB Factor Prediction Model

Model 1 tested the assumption that each everyday ability factor was related to only its constituent basic ability factor (see Figure 2). This ability-specific model provided an adequate fit but failed to reproduce the pattern of relationships in the measurement model. Consequently, additional models (Models 2–11) were specified by adding or deleting additional paths based on the modification fit indexes. As shown in Table 8, Model 11 contained all paths between the basic and everyday factors identified as significant by the modification fit indexes but was still statistically different from the measurement model. Next, a path from everyday Knowledge to everyday Inductive Reasoning was estimated (Model 12), because of a significant modification index. Two additional models were then estimated in which the nonsignificant paths from basic Knowledge to everyday Inductive Reasoning (Model 13) and from basic Declarative Memory to everyday Inductive Reasoning (Model 14) were dropped. The fit of this final model (Model 14) was adequate, χ2(155, N = 174) = 241.34, p > .05 (GFI = .88, RMSEA = .06, RMR = .05, CFI = .96, NFI = .91, NNFI = .96, RFI = .89, IFI = .91), and did not significantly differ from the measurement model. This is important because it indicates that this more parsimonious model adequately reproduced the pattern of covariation between the observed variables as they were represented in the measurement model.

Figure 2
Hypothesized ability-specific model (basic ability factors intercorrelated).

As can be seen in Figure 3, this final model contained a complex pattern of relationships between the basic and everyday ability factors. Specifically, significant paths from basic Inductive Reasoning to the everyday factors of Inductive Reasoning, Declarative Memory, and Working Memory were included. Paths from basic Knowledge to the everyday factors of Knowledge and Declarative Memory were also estimated, as were paths from basic Declarative Memory to everyday Declarative Memory and everyday Knowledge. In addition, a nonsignificant path from basic Working Memory to everyday Working Memory was estimated to ensure that each basic ability factor had the opportunity to serve as a direct predictor of its everyday analogue. Deletion of this nonsignificant path would not have adversely affected model fit (p > .05). Furthermore, there was a significant path from everyday Knowledge to everyday Inductive Reasoning, indicating that domain-specific knowledge was related to performance on the ECB Inductive Reasoning Test.

Figure 3
Final basic ability prediction model, Model 14 (basic ability factors intercorrelated).

Relationships With Age

Figure 4 contains scatter plots of the relationship between chronological age and each of the basic and everyday ability tests and their composites (g and ECBg); these relationships were plotted separately for the White and African American subsamples. This separate examination of racial–ethnic subsamples was necessary because initial inspection of the scatter plots revealed that age relationships were suppressed in the full sample, a result of underrepresentation of African Americans at later points in the age distribution (i.e., there was an artificial suppression of age relationships due to the nonequivalent sampling from the age distribution in the White and African American subsamples). Consequently, fitted regression lines are presented separately for the White and African American subsamples; the age gradients were similar to those reported in previous studies (e.g., Schaie, 1996). As the scatter plots clearly indicate, cross-sectional age differences were significant and negative for the psychometric and ECB tests assessing fluid ability (i.e., Inductive Reasoning) and memory (i.e., Declarative Memory and Working Memory) in both ethnic subsamples, with the exception of basic Declarative Memory in the African American subsample. Meanwhile, the measures tapping more crystallized abilities (i.e., Verbal Meaning and everyday Knowledge) remained stable and were not significantly related to age (p > .05) in either the White or African American subsamples. Furthermore, both the composite variables, g and ECBg, were significantly and negatively related to age.

Figure 4
Regression lines for African American (●) and White (○) participants for all cognitive variables and composite scores. Correlations equal to or greater than .26 and .18 are significant (p < .05) for the African American and White subsamples, respectively.

To determine whether the magnitude of the age relationships was similar in both groups, we conducted a series of multiple regressions. In this analysis, each test, as well as the ECBg and g composites, was regressed on age, ethnicity, and the Age × Ethnicity product term. Results from this analysis indicated that only ECB Knowledge was differentially related to age in the two groups at a level that reached statistical significance (p < .05). One question that arose in this context was whether the Age × Ethnicity interaction was related to ethnic group differences in income and education, variables that some have argued may be important predictors of individual differences in cognition among older African Americans (e.g., Whitfield et al., 1997). As a consequence, the multiple regression for ECB Knowledge as the dependent variable was rerun, adding income and education as covariates. In this model, the Age × Ethnicity interaction was only marginally significant (p = .053).
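
A moderated regression of this form can be written compactly with the statsmodels formula interface; the sketch below is a hypothetical reconstruction (the data file and column names are assumptions), not the analysis script used in the study.

```python
# Test whether the age gradient differs by ethnicity via an Age x Ethnicity
# product term, then add income and education as covariates (hypothetical data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("participants.csv")   # assumed columns: ecb_knowledge, age, ethnicity, ...

m1 = smf.ols("ecb_knowledge ~ age * C(ethnicity)", data=df).fit()
m2 = smf.ols("ecb_knowledge ~ age * C(ethnicity) + income + education", data=df).fit()

print(m1.summary())
# p-values for the interaction term(s) after adding the covariates:
print(m2.pvalues[[name for name in m2.pvalues.index if ":" in name]])
```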

To determine whether performance on traditional and everyday tests was differentially correlated with age, we performed structural equation modeling. Analyses were conducted at the latent level to permit the simultaneous examination of the relationship between age and each cognitive factor, disattenuated for measurement error. For these analyses, the full sample (N = 174) of participants was used, with ethnicity as a covariate to correct for the age-suppression effect. Analyses were conducted by specifying two competing models. The first, a constrained model, forced the age correlations of each basic and everyday factor of the same cognitive ability to equality. The second, a relaxed model, allowed each pair of age correlations to differ (e.g., basic and everyday Inductive Reasoning were not forced to have identical correlations with age). In both models, the basic and everyday factors were allowed to correlate. The overall fit of the constrained model, χ2(170, N = 170) = 249.57, p > .05, and the relaxed model, χ2(166, N = 170) = 245.87, p > .05, did not statistically differ from one another, χ2diff(4, N = 170) = 3.70, p > .05. Consequently, the age correlations between basic and ECB factors of the same ability were statistically equivalent.
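
The comparison of the constrained and relaxed models is an ordinary chi-square difference test; using the values reported above, the arithmetic can be checked as follows (scipy is used here only to look up the reference distribution).

```python
# Chi-square difference test for the constrained vs. relaxed age-correlation
# models, using the fit statistics reported in the text.
from scipy.stats import chi2

chi2_constrained, df_constrained = 249.57, 170
chi2_relaxed, df_relaxed = 245.87, 166

delta_chi2 = chi2_constrained - chi2_relaxed   # 3.70
delta_df = df_constrained - df_relaxed         # 4
p_value = chi2.sf(delta_chi2, delta_df)        # about .45, well above .05

print(round(delta_chi2, 2), delta_df, round(p_value, 2))
```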

Discussion

The present study examined older adults' cognitive performance by using a new battery of everyday cognitive measures. Two research questions were addressed, using both traditional psychometric ability measures and newly created everyday measures of Inductive Reasoning, domain-specific Knowledge, Declarative Memory, and Working Memory: (a) What is the relationship between traditional measures of psychometric ability and the everyday cognition measures? and (b) Would performance on the everyday measures evince the same age differences as traditional ability measures?

Examination of this new battery's measurement properties revealed that the component measures were reliable and valid assessments of older adults' everyday cognitive ability. In particular, internal consistency reliability estimates for all four tests were adequate to high (i.e., Cronbach's alpha coefficients greater than .70), and the difficulty estimates of three of the four tests were in the acceptable range (i.e., between 30% and 60% of the sample responded correctly to individual items). Furthermore, the structure of the battery was well described by a four-factor solution, corresponding to the hypothesized cognitive abilities under investigation.

However, with regard to our first question, results indicated that whereas the traditional and everyday ability measures were significantly correlated with one another, with evidence of strong positive interrelationships between the two sets of tests, a rather complex pattern of predictive relationships emerged. Specifically, each traditionally measured ability factor, with the exception of Working Memory, was significantly related to multiple everyday ability factors, and domain-specific Knowledge accounted for additional unique variance in everyday Inductive Reasoning. Furthermore, these predictors accounted for at least 75% of the variance in each everyday factor. One question that arises is why Working Memory had weaker effects in our present research than in some previous research (Hartley, 1989; Kirasic et al., 1996). One possible explanation stems from the fact that the current study included stronger and clearer markers of fluid ability (i.e., Inductive Reasoning) than did previous studies. Given the multicollinearity of Inductive Reasoning and Working Memory in this study, the fact that we controlled for Inductive Reasoning in our analyses suggests that the unique predictive effects of Working Memory may actually be quite small.

The hypothesized ability-specificity of relationships between traditional and everyday cognitive measures was not obtained. Although this finding might have resulted from difficulties with our measurement of everyday cognition (i.e., the ECB tests were not process pure), it is also a logical outcome of the positive manifold or g phenomenon (Spearman, 1927) or the consistent finding of positive intercorrelations between any set of cognitive measures. This finding also supports the assumptions of some theorists (e.g., Berry & Irvine, 1986; Marsiske & Willis, 1997; Salthouse, 1990; Willis & Marsiske, 1991; Willis & Schaie, 1986, 1993) that everyday cognition is the manifestation of many basic cognitive abilities, representing a kind of “compiled” cognition. These findings can also be viewed within the dedifferentiation (Lienert & Crott, 1964) or neo-integration (P. B. Baltes, Cornelius, Spiro, Nesselroade, & Willis, 1980) perspectives, which posit that the structure of psychometric abilities becomes increasingly unified in old age, as evidenced by an increase in the interrelatedness of psychometric abilities in older adult samples (P. B. Baltes et al., 1980; Marsiske, Lindenberger, & Baltes, 1994). This finding is also consistent with previous psychometric research, which suggests that cognitive abilities, such as Inductive Reasoning and Working Memory, are strongly related to each other across the life span (Kyllonen & Christal, 1990). In this context, it is interesting to speculate whether multiple abilities would have so strongly predicted these everyday cognitive measures in younger samples.

An issue of particular interest, given current theorizing in this research area, is whether domain-specific knowledge can be an important predictor of cognition with everyday stimuli. The present findings do not support the contention that knowledge is the strongest predictor of everyday cognition (P. B. Baltes, 1997; Berg & Sternberg, 1985; Rybash & Hoyer, 1994; Wagner & Sternberg, 1985), even when analyses were conducted at the latent level and were disattenuated for measurement error. Specifically, only our everyday analogue of Inductive Reasoning was uniquely predicted by domain-specific knowledge after controlling for our traditional psychometric abilities (Inductive Reasoning, Knowledge, Declarative Memory, and Working Memory). Interestingly, our basic Inductive Reasoning construct (composed of Letter Sets and Number Series Tests) emerged as the most general basic ability predictor of each everyday cognition construct. This raises the intriguing question of whether reasoning captures an important domain-general set of cognitive competencies that are drawn on in a variety of everyday contexts. In support of this notion, a number of studies have reported that reasoning is a consistent and strong predictor of everyday cognition (Camp et al., 1989; Cornelius & Caspi, 1987; Diehl et al., 1995; Lindenberger, Mayr, & Kliegl, 1993; Willis et al., 1992; Willis & Marsiske, 1991).

In this context, it is important to note that despite the effort to assess domain-specific cognitive competencies in this study, the within-domain correlations were not stronger than the between-domain relationships for the subscales of the ECB Inductive Reasoning, ECB Knowledge, or ECB Declarative Memory Tests. Although this may seem perplexing, several reasons seem likely for the domain-generality observed. First, the domains selected for this study do not represent expert knowledge domains for which a specialized and encapsulated knowledge base would be expected. Instead, the domains were selected precisely because all adults were expected to have a high degree of general familiarity with each domain. Hence, high positive correlations between domains would be expected if level of everyday activity engagement predicts level of domain-relevant knowledge across these highly familiar, practiced domains. Second, from a cognitive resource perspective, it seems plausible that basic cognitive processes might operate with similar efficiency across domains and may serve to yield similar levels of performance across domains; this is consistent with the stability of interindividual differences that is often observed across multiple measures of the same psychometric ability (e.g., Lindenberger et al., 1993; Schaie, 1996). We also note that previous studies of older adults' everyday cognition have not found evidence for domain-based differentiation of performance. Marsiske and Willis (1995), for example, found that strong domain-general factors emerged from several multidimensional everyday cognition measures.

Turning to our second question, we found that the traditional and everyday ability tests were similarly related to age. Specifically, both sets of ability factors could be constrained to have equivalent age correlations without any loss in model fit. Given the strong relationships between traditional psychometric and everyday tests, it is not surprising that the two sets of measures had similar age trajectories. Both traditional and everyday tests of Inductive Reasoning, Declarative Memory, and Working Memory were negatively related to age, whereas domain-specific knowledge and the Verbal Meaning Test were unrelated to age, at least in our White subsample. In other words, even though the ECB tests were designed to assess cognitive abilities within important and familiar functional domains, our obtained cross-sectional trends approximated those of traditional acontextual psychometric ability measures. We could not find evidence with these measures for the general preservation of cognition (i.e., stable cross-sectional trends) with everyday stimuli. The results from this study suggest that perhaps the preservation–vulnerability dichotomy that runs through much of the current everyday problem-solving literature (e.g., P. B. Baltes, 1987; Berg & Klaczynski, 1996; Blanchard-Fields, Jahnke, & Camp, 1995) may be a false one. Instead, perhaps the same complexity that characterizes the aging of basic abilities also characterizes the aging of everyday cognition, at least of the type assessed here. Given the strong predictive relationships between traditional cognitive tests and everyday cognitive tests, it seems tenable that the age-related differences in basic mental abilities contribute to the subsequent age differences in everyday cognition.

Of course, our observed age differences must be considered within the context of the selected measurement approach. That is, the ECB was designed to assess cognitive function in well-structured, adaptively important domains. We did not include the complex social or interpersonal, emotionally salient, or ill-structured problems that have characterized many other studies. Indeed, several theorists have argued that the movement away from formal, purely cognitive tasks of daily living is a hallmark of adaptive adult cognitive development, and it is in such ill-structured tasks that adults may be most likely to show stable or enhanced performance with age (e.g., P. B. Baltes & Staudinger, 1993; Blanchard-Fields, Jahnke, & Camp, 1995; Labouvie-Vief, 1985; Sternberg, 1997). We do not believe, however, that the tasks assessed in this study can be dismissed as irrelevant or unfamiliar.

It is important to underscore the fact that the everyday measures designed for this study do represent the assessment of everyday cognition in domains in which older adults are frequently engaged (M. M. Baltes et al., 1990; Horgas et al., 1998; Rogers et al., 1998) and which have been reported to be important predictors of older adults' institutionalization and mortality trends (Baird et al., 1992; Bernard et al., 1997; Branch & Jette, 1982; Fillenbaum, 1985; Wolinsky et al., 1992). The domains and tasks used in our everyday cognitive measures are viewed as common and important, both by seniors and service providers (Diehl et al., 1990).

On a theoretical level, one way in which our everyday cognitive tasks might be interpreted is as measures of older adults' competence, rather than of their daily performance. Specifically, although our problems were based in familiar functional domains (i.e., medication use, financial management, and food preparation and nutrition), the specific tasks presented to older adults represented tasks that they should be able to do, rather than tasks that they necessarily and actually do in daily activities. As an example, being able to compare checking account options (an ECB Inductive Reasoning item) is not a task in which older adults engage daily, but the ability to perform the task when the need arises could have important financial implications (see also Willis, 1991, 1996). By including cognitively challenging tasks within familiar everyday domains, our goal was to design everyday tests consistent with the general conceptualization of everyday cognition as an adaptive tool (e.g., Berg & Klaczynski, 1996; Blanchard-Fields & Chen, 1996; Dixon & Baltes, 1986; Sternberg, 1997).

This approach to assessing everyday cognition is informed by the body of developmental literature on the performance–competence distinction (e.g., Flavell & Wohlwill, 1969; Overton & Newman, 1982; Siegler, 1997; Sophian, 1997) and the related concept of reserve capacity in the adult developmental literature (Kliegl & Baltes, 1987). The key notion is that there may be a difference between the actual performance individuals demonstrate and what they are capable of demonstrating under alternative or optimal assessment conditions. This distinction has potential value when considering the functional status or functional competence of older adults. Although adult developmental theorists have argued that older adults can draw on accumulated knowledge to buffer against performance losses in familiar and overlearned domains (P. B. Baltes, 1997; Berg & Sternberg, 1985; Denney, 1989; Rybash & Hoyer, 1994; Salthouse, 1991; Wagner & Sternberg, 1985, 1986), theorists have had less to say about older adults' ability to adapt cognitively to challenges in familiar domains in which they do not have so much experience (e.g., understanding how to take a new medication, adjusting one's tax form completion strategies following retirement).

One intriguing possibility is that everyday cognitive performance remains relatively stable throughout adulthood but that everyday cognitive competence does not. Both P. B. Baltes (1993) and Denney (1989) have addressed this idea in their writings about everyday cognition, suggesting that it is the potential to evince optimal performance, or to activate latent reserve capacity to enhance performance, that is likely to be most susceptible to age-related losses. From this perspective, if the kinds of everyday cognitive challenges presented in this study were viewed as assessments of older adults' everyday competence rather than everyday performance, the cross-sectional trends we obtained would support these theoretical predictions.

Of course, this interpretation of our findings can only be viewed as speculative until two additional lines of inquiry are pursued. First, to adequately contrast everyday cognitive performance with competence, individuals' performance on an ipsatively constructed set of tasks that faithfully reflect personally encountered cognitive challenges must be contrasted with their performance on a more normative, less familiar set of problems of similar complexity and difficulty. Although some investigators have reported such contrasts (Berg, Strough, Calderone, Sansone, & Weir, 1998), the key obstacle for this approach has been that ipsatively defined tasks are hard to score objectively for correctness or to equate for difficulty within and across individuals. Second, cross-sectional and longitudinal research is required to validate the utility of everyday cognitive competence. Useful measures of older adults' everyday cognitive competence should predict important adaptive outcomes, such as independence, institutionalization, and mortality, above and beyond what is found with traditional psychometric ability measures.

Before concluding, we would like to note several caveats that limit the conclusions that can be drawn from this study. First, our sample of older adults was relatively small and encompassed a narrow age range, which limited the statistical power to detect differential age-related effects in our two sets of cognitive measures. The sample size may also have reduced not only our statistical power to discriminate between nested structural equation models but also our ability to accurately specify competing models. Second, the above conclusions are based on one organization of the observed relationships between variables, which represents only one of many mathematically possible specifications of the relations among these variables (MacCallum, Wegener, Uchino, & Fabrigar, 1993). Third, the structural models examined in this study were specified using a single sample, so the current findings run the risk of sample specificity, with results influenced by unique characteristics of our participants; this underscores the need for replication in independent samples. Fourth, because of the cross-sectional and correlational nature of the study, developmental conclusions regarding the effects of age on cognitive performance are limited. That is, the relationship between age and performance on both sets of cognitive tests may have been influenced by cohort differences and thus can only be said to reflect age differences, not age-related decline (Schaie, 1993). Longitudinal findings would be needed to determine whether intraindividual change patterns differ between our traditional and everyday measures. Finally, the age trends obtained in this study were influenced by our attempt to obtain an ethnically heterogeneous sample. Because our African American participants represented an overlapping but, on average, younger population than our White participants, overall age trends for the entire sample (without controlling for ethnicity) were partially confounded with ethnic composition. However, little can be said in this study about differential aging in our two major ethnic subgroups, because our African American participants were represented in relatively small numbers (about a third of our sample), with inadequate inclusion of participants at the oldest ages. Thus, we have not drawn any conclusions about ethnic-group differences.

The present study is part of a growing body of studies suggesting that, at least for everyday problems reflecting well-structured challenges drawn from the IADLs, strong relationships with traditional psychometric abilities and with age can be found. Future research must demonstrate whether such everyday cognitive tasks add any unique predictive variance, beyond traditional psychometric measures, to understanding the functional trajectories of older adults.

Acknowledgments

This work is largely drawn from the master's thesis of Jason C. Allaire. Partial financial support for this study was provided by the Retirement Research Foundation and Division 20 of the American Psychological Association through a student award and by funds provided by Wayne State University's Vice President for Research. We would like to acknowledge our colleagues Joseph Fitzgerald, Darin Ellis, and Jennifer Margrett, whose comments on earlier versions of this article were invaluable, and Sherry Willis of The Pennsylvania State University, whose earlier collaborations with Michael Marsiske have been influential to the current work. Jennifer Margrett also provided extensive assistance with data collection.

Appendix: The Heterogeneity of Everyday Problem Solving Assessment in Adulthood

Measures assessing many cognitive abilities or a general ability

 Everyday problem solving
  Cornelius & Caspi, 1987
  Marsiske & Willis, 1995
 Practical problems
  Denney & Pearce, 1989

Measures assessing specific abilities

 Everyday memory
  Kirasic, Allen, Dobson, & Binder, 1996
  Morrell, Park, & Poon, 1989
  West, Crook, & Barron, 1992
 Everyday information
  Demming & Pressey, 1957
  Gardner & Monge, 1977
 Tacit knowledge
  Sternberg & Wagner, 1992
  Colonia-Willner, 1998
 Wisdom-related knowledge
  P. B. Baltes & Smith, 1990
 Everyday reasoning
  Capon & Kuhn, 1979

Measures assessing many domains

 Everyday problem solving
  Berg, Strough, Calderone, Sansone, & Weir, 1998
  Blanchard-Fields, Chen, & Norris, 1997
  Cornelius & Caspi, 1987
 Practical problem solving
  Denney, 1989
  Denney & Pearce, 1989
 Ecological problem solving
  Hartley, 1989

Measures assessing specific domains

 Instrumental activities of daily living
  Marsiske & Willis, 1995
  Willis & Marsiske, 1991, 1993
 Interpersonal or social dilemmas
  Blanchard-Fields, Jahnke, & Camp, 1995
  Strough, Berg, & Sansone, 1996
 Medication use
  Morrell, Park, & Poon, 1989
 Financial planning
  Hershey, Walsh, Read, & Chulef, 1990
 Consumerism
  Capon, Kuhn, & Carretero, 1989
  Johnson, 1990
 Political decision making
  Riggle & Johnson, 1996

Footnotes

1An initially larger pool of stimuli and questions was administered to a small pilot sample, and only items with difficulties ranging from .20 to .80 (i.e., more than 20% and less than 80% of the sample responded correctly to the item) were retained.
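For concreteness, the following minimal Python sketch (with a fabricated pilot response matrix; this is not the authors' scoring code) applies the item-retention rule just described:

    import numpy as np

    def screen_items(responses, low=0.20, high=0.80):
        # responses: pilot matrix (n_persons x n_items), coded 1 = correct, 0 = incorrect
        difficulty = responses.mean(axis=0)              # proportion of the pilot sample answering each item correctly
        keep = (difficulty > low) & (difficulty < high)  # retain items answered correctly by more than 20% and less than 80%
        return np.flatnonzero(keep)                      # column indices of retained items

    # hypothetical pilot data, for illustration only
    rng = np.random.default_rng(0)
    pilot_responses = rng.integers(0, 2, size=(30, 12))
    retained_items = screen_items(pilot_responses)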

2Two raters coded 10% (15) of the protocols. For each of the 42 items, the average agreement was 97% (range = 73–100%), and the average kappa was .98 (range = .63–1.00). Summing across the 42 items, the correlation between scores produced by Rater 1 and Rater 2 was r = .98. The ratings of the two raters did not differ significantly (M of Rater 1 = 56.27; M of Rater 2 = 56.60).
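A brief sketch of this kind of reliability summary follows, assuming hypothetical item-level score matrices for the two raters rather than the study's actual protocol data:

    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    def rater_reliability(r1, r2):
        # r1, r2: item-level scores from Rater 1 and Rater 2, shape (n_protocols, n_items)
        pct_agreement = (r1 == r2).mean(axis=0) * 100                  # percent agreement per item
        kappas = np.array([cohen_kappa_score(r1[:, j], r2[:, j])
                           for j in range(r1.shape[1])])               # Cohen's kappa per item
        total_r = np.corrcoef(r1.sum(axis=1), r2.sum(axis=1))[0, 1]    # correlation between raters' summed scores
        return pct_agreement.mean(), np.nanmean(kappas), total_r       # nanmean skips items with undefined kappa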

3In our pilot work, we had initially designed this measure as a recall task more analogous to our HVLT-list recall measure. Because of floor effects in both our older and younger pilot participants, we converted the measure into a recognition test.

4Comparison of the complete and incomplete data subsamples indicated that the two subsamples significantly differed in mean education, t(98.4) = 4.91, p < .05, and income, t(152.4) = 4.51, p < .05, with the complete data subsample having higher mean levels of education (M = 13.94) and income ($20,945) than the incomplete data subsample (M = 11.56; $13,275). The mean ratings of physical health, hearing, and vision did not differ statistically between the two groups.
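The fractional degrees of freedom reported here (e.g., 98.4) are consistent with Welch-corrected t tests, which do not assume equal group variances. A minimal sketch, using simulated education scores rather than the study data:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    educ_complete = rng.normal(loc=13.9, scale=2.5, size=114)    # simulated education, complete-data subsample
    educ_incomplete = rng.normal(loc=11.6, scale=3.5, size=60)   # simulated education, incomplete-data subsample

    # Welch's t test (equal_var=False) yields non-integer degrees of freedom
    t_stat, p_value = stats.ttest_ind(educ_complete, educ_incomplete, equal_var=False)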

5All structural equation models in this study were estimated with the LISREL VIII program (Jöreskog & Sörbom, 1993) and were evaluated with several overall fit indexes: the goodness-of-fit index (GFI); the root mean square error of approximation (RMSEA), which indexes the discrepancy between the observed and model-reproduced covariance matrices per degree of freedom and for which values of .05 or lower indicate adequate fit; and the standardized root mean square residual (RMR; Jöreskog & Sörbom, 1993). The comparative fit index (CFI), normed fit index (NFI), nonnormed fit index (NNFI), relative fit index (RFI), and incremental fit index (IFI) were also examined (Bentler, 1989; Bentler & Bonett, 1980; Bollen, 1989; Marsh, Balla, & McDonald, 1988), as was the chi-square estimate (Akaike, 1987; Carmines & McIver, 1981). A model with most overall fit indexes above .90 and a chi-square estimate less than twice the degrees of freedom was considered to show adequate fit.
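To make two of these decision rules concrete, the sketch below (standard formulas, not LISREL output) computes the RMSEA point estimate from a model's chi-square, degrees of freedom, and sample size and applies the chi-square-less-than-twice-the-degrees-of-freedom heuristic, using the four-factor basic-ability solution from Footnote 6 as the example:

    import math

    def rmsea(chi2, df, n):
        # standard point estimate: sqrt(max((chi2 - df) / (df * (n - 1)), 0))
        return math.sqrt(max((chi2 - df) / (df * (n - 1)), 0.0))

    def adequate_fit(chi2, df, n):
        # RMSEA at or below .05 and chi-square less than twice the degrees of freedom
        return rmsea(chi2, df, n) <= .05 and chi2 < 2 * df

    # four-factor basic-ability model from Footnote 6: chi-square(21, N = 174) = 29.09
    print(round(rmsea(29.09, 21, 174), 3))   # approximately .05, matching the reported RMSEA
    print(adequate_fit(29.09, 21, 174))      # True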

6Two four-factor solutions were also examined separately for the basic ability and ECB tests. The fit of a four-factor solution (Basic Inductive Reasoning, Basic Verbal Knowledge, Basic Declarative Memory, and Basic Working Memory) was adequate for the basic ability tests, χ2(21, N = 174) = 29.09, p > .05 (GFI = .97, RMSEA = .05, RMR = .04, CFI = .99, NFI = .97, NNFI = .99, RFI = .95, IFI = .99), as was a four-factor model (ECB Inductive Reasoning, ECB Knowledge, ECB Declarative Memory, and ECB Working Memory) for the ECB tests, χ2(38, N = 174) = 67.69, p < .05 (GFI = .93, RMSEA = .07, RMR = .04, CFI = .97, NFI = .94, NNFI = .96, RFI = .91, IFI = .97). For both the ECB and the basic abilities tests, we also tried more parsimonious three-, two-, and one-factor models. Each model fit significantly worse than our four-factor model (p < .05). The fit of the three-factor model (Inductive Reasoning and Working Memory, Knowledge, and Declarative Memory) for the ECB tests was χ2(41, N = 174) = 112.50, p < .05, and for the basic abilities was χ2(24, N = 174) = 64.19, p < .05. The fit of the two-factor model (fluid: Inductive Reasoning, Declarative Memory, and Working Memory; Knowledge) for the ECB tests was χ2(43, N = 174) = 150.68, p < .05, and for the basic abilities tests was χ2(26, N = 174) = 272.63, p < .05. Finally, the fit of a single general-factor model for the ECB tests was χ2(44, N = 174) = 169.14, p < .05, and for the basic abilities tests was χ2(36, N = 174) = 1,077.26, p < .05.

7The pattern of relationships among the basic and everyday factors for the full sample (N = 174) was not statistically different from the pattern of relationships among factors in the complete data subsample (n = 114). In a model in which we fixed the factor correlations in the complete data subsample to be identical to those estimated in the full sample, the fit was χ2(170, N = 114) = 217.30, p < .05. In a second model, we relaxed these constraints and freely estimated the correlations in the complete data subsample, and the fit was χ2(142, N = 114) = 190.45, p < .05. The difference between these two models was not significant, χ2 diff (28, N = 114) = 26.85, p > .05.
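The nested-model comparisons in Footnotes 6 and 7 rest on the standard chi-square difference (likelihood ratio) test; a brief sketch follows, with the Footnote 7 values plugged in for illustration:

    from scipy.stats import chi2

    def chi_square_difference(chi2_constrained, df_constrained, chi2_free, df_free):
        # difference in chi-square and degrees of freedom between nested models
        d_chi2 = chi2_constrained - chi2_free
        d_df = df_constrained - df_free
        p_value = chi2.sf(d_chi2, d_df)    # upper-tail p value of the difference test
        return d_chi2, d_df, p_value

    # Footnote 7: constrained chi-square(170) = 217.30 versus freely estimated chi-square(142) = 190.45
    d_chi2, d_df, p = chi_square_difference(217.30, 170, 190.45, 142)
    # d_chi2 = 26.85, d_df = 28, p > .05, so the equality constraints are tenable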

References

  • Akaike H. Factor analysis and AIC. Psychometrika. 1987;52:317–332.
  • Baird A, Brines B, Stoor E. What tasks are the most important for successful independent living in elders? annual scientific meeting of the Gerontological Society of America; Washington, DC. Nov, 1992.
  • Baltes MM, Mayr U, Borchelt M, Maas I, Wilms H-U. Everyday competence in old and very old age: An inter-disciplinary perspective. Ageing and Society. 1993;13:657–680.
  • Baltes MM, Wahl H-W, Schmid-Furstoss U. The daily life of elderly Germans: Activity patterns, personal control, and functional health. Journal of Gerontology: Psychological Sciences. 1990;45:173–179. [PubMed]
  • Baltes PB. Theoretical propositions of life-span developmental psychology: On the dynamics between growth and decline. Developmental Psychology. 1987;23:611–626.
  • Baltes PB. The aging mind: Potentials and limits. Gerontologist. 1993;33:580–594. [PubMed]
  • Baltes PB. On the incomplete architecture of human ontogeny: Selection, optimization, and compensation as foundation of developmental theory. American Psychologist. 1997;52:366–380. [PubMed]
  • Baltes PB, Cornelius SW, Spiro A, Nesselroade JR, Willis SL. Integration versus differentiation of fluid/crystallized intelligence in old age. Developmental Psychology. 1980;16:625–635.
  • Baltes PB, Mayer KU, editors. The Berlin aging study: Aging from 70 to 100. Cambridge University Press; New York: 1999.
  • Baltes PB, Smith J. The psychology of wisdom and its ontogenesis. In: Sternberg RJ, editor. Wisdom: Its nature, origins, and development. Cambridge University Press; New York: 1990. pp. 87–120.
  • Baltes PB, Staudinger UM. The search for a psychology of wisdom. Current Directions in Psychological Science. 1993;2:1–6.
  • Baltes PB, Willis SL. Plasticity and enhancement of intellectual functioning in old age: Penn State's Adult Development and Enrichment Project (ADEPT) In: Craik FIM, Trehub SE, editors. Aging and cognition processes. Plenum; New York: 1982. pp. 353–389.
  • Bentler PM. EQS structural equations manual. BMDP Statistical Software; Los Angeles: 1989.
  • Bentler PM, Bonett DG. Significance tests and goodness of fit in the analysis of covariance structures. Psychological Bulletin. 1980;88:588–606.
  • Berg CA. Knowledge of strategies for dealing with everyday problems from childhood through adolescence. Developmental Psychology. 1989;25:607–618.
  • Berg CA, Klaczynski P. Practical intelligence and problem solving. In: Blanchard-Fields F, Hess TM, editors. Perspectives on cognitive change in adulthood and aging. McGraw-Hill; New York: 1996. pp. 323–357.
  • Berg CA, Sternberg RJ. A triarchic theory of intellectual development during adulthood. Developmental Review. 1985;5:334–370.
  • Berg CA, Strough J, Calderone KS, Sansone C, Weir C. The role of problem definitions in understanding age and context effects on strategies for solving everyday problems. Psychology and Aging. 1998;13:29–44. [PubMed]
  • Bernard SL, Kincade JE, Konrad TR, Arcury TA, Rabiner DJ, Woomert A, DeFriese GH, Ory MG. Predicting mortality from community surveys of older adults: The importance of self-rated functional ability. Journal of Gerontology: Psychological Sciences. 1997;52B:155–163. [PubMed]
  • Berry J, Irvine S. Bricolage: Savages do it daily. In: Sternberg RJ, Wagner R, editors. Practical intelligence. Cambridge University Press; New York: 1986. pp. 236–270.
  • Blanchard-Fields F, Jahnke HC, Camp C. Age differences in problem-solving style: The role of emotional salience. Psychology and Aging. 1995;10:173–180. [PubMed]
  • Blanchard-Fields F, Chen Y. Adaptive cognition and aging. American Behavioral Scientist. 1996;39:231–248.
  • Blanchard-Fields F, Chen Y, Norris L. Everyday problem solving across the adult life span: Influence of domain specificity and cognitive appraisal. Psychology and Aging. 1997;12:684–693. [PubMed]
  • Bollen KA. Structural equations with latent variables. Wiley; New York: 1989.
  • Branch LG, Jette AM. A prospective study of long-term care institutionalization among the aged. American Journal of Public Health. 1982;72:1373–1379. [PubMed]
  • Brandt J. The Hopkins Verbal Learning Test: Development of a new memory test with six equivalent forms. Clinical Neuropsychologist. 1991;5:125–142.
  • Bronfenbrenner U. The ecology of human development. Harvard University Press; Cambridge, MA: 1979.
  • Camp CJ, Doherty K, Moody-Thomas S, Denney NW. Practical problem solving in adults: A comparison of problem types and scoring. In: Sinnott JD, editor. Everyday problem solving: Theory and application. Praeger; New York: 1989. pp. 211–228.
  • Capon N, Kuhn D. Logical reasoning in the supermarket: Adult females' use of a proportional reasoning strategy in an everyday context. Developmental Psychology. 1979;15:450–452.
  • Capon N, Kuhn D, Carretero M. Consumer reasoning. In: Sinnot JD, editor. Everyday problem solving: Theory and applications. Praeger; New York: 1989. pp. 153–174.
  • Carmines EG, McIver JP. Analyzing models with unobserved variables: Analysis of covariance structures. In: Bohrnstedt GW, Borgatta EF, editors. Social measurement: Current issues. Sage; Beverly Hills, CA: 1981. pp. 61–115.
  • Cattell RB. Abilities: Their structure, growth, and action. Houghton Mifflin; Boston: 1971.
  • Ceci SJ, Bronfenbrenner U. On the demise of everyday memory: “The rumors of my death are much exaggerated” (Mark Twain). American Psychologist. 1991;46:27–31.
  • Charness N, editor. Aging and human performance. Wiley; Chichester, England: 1985.
  • Colonia-Willner R. Practical intelligence at work: Relationship between aging and cognitive efficiency among managers in a bank environment. Psychology and Aging. 1998;13:45–57. [PubMed]
  • Conway MA. In defense of everyday memory. American Psychologist. 1991;46:19–26.
  • Cornelius SW. Aging and everyday cognitive abilities. In: Hess TM, editor. Aging and cognition: Knowledge organization and utilization. North-Holland; Amsterdam: 1990. pp. 411–459.
  • Cornelius SW, Caspi A. Everyday problem solving in adulthood and old age. Psychology and Aging. 1987;2:144–153. [PubMed]
  • Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16:297–334.
  • Demming JA, Pressey SL. Tests “indigenous” to the adult and older years. Journal of Counseling Psychology. 1957;4:144–148.
  • Denney NW. Everyday problem solving: Methodological issues, research findings, and a model. In: Poon LW, Rubin DC, Wilson BA, editors. Everyday cognition in adulthood and late life. Cambridge University Press; New York: 1989. pp. 330–351.
  • Denney NW, Pearce KA. A developmental study of practical problem solving in adults. Psychology and Aging. 1989;4:438–442. [PubMed]
  • Diehl M, Willis SL, Schaie KW. Adults' perceptions about the relevance of printed materials for elders' independent living. annual scientific meeting of the Gerontological Society of America; Boston, MA. Nov, 1990.
  • Diehl M, Willis SL, Schaie KW. Everyday problem solving in older adults: Observational assessment and cognitive correlates. Psychology and Aging. 1995;10:478–491. [PubMed]
  • Dixon RA, Baltes PB. Toward life-span research on the functions and pragmatics of intelligence. In: Sternberg RJ, Wagner RK, editors. Practical intelligence: Nature and origins of competence in the everyday world. Cambridge University Press; New York: 1986. pp. 203–235.
  • Ekstrom RB, French JW, Harman H, Derman D. Kit of factor-referenced cognitive tests. Rev. ed. Educational Testing Service; Princeton, NJ: 1976.
  • Ericsson KA, Charness N. Expert performance: Its structure and acquisition. American Psychologist. 1994;49:725–747.
  • Ericsson KA, Krampe RT, Tesch-Römer C. The role of deliberate practice in the acquisition of expert performance. Psychological Review. 1993;100:363–406.
  • Fillenbaum GG. Screening the elderly: A brief instrumental activities of daily living measure. Journal of the American Geriatrics Society. 1985;33:698–706. [PubMed]
  • Flavell JH, Wohlwill JF. Formal and functional aspects of cognitive development. In: Elkind D, Flavell JH, editors. Studies in cognitive development. Oxford University Press; London: 1969. pp. 67–120.
  • Gardner EF, Monge RH. Adult age differences in cognitive abilities and educational background. Experimental Aging Research. 1977;3:337–383. [PubMed]
  • Hartley AA. The cognitive ecology of problem solving. In: Poon LW, Rubin DC, Wilson BA, editors. Everyday cognition in adulthood and late life. Cambridge University Press; New York: 1989. pp. 300–329.
  • Hershey DA, Walsh DA, Read SJ, Chulef AS. The effects of expertise on financial problem solving: Evidence for goal-directed, problem solving scripts. Organizational Behavior and Human Decision Processes. 1990;46:77–101.
  • Horgas AL, Wilms HU, Baltes MM. A description of daily life in very old age: Findings from the Berlin aging study. Wayne State University; 1998. Manuscript submitted for publication.
  • Horn JL, Hofer SM. Major abilities and development in the adult period. In: Sternberg RJ, Berg CA, editors. Intellectual development. Cambridge University Press; New York: 1992. pp. 444–449.
  • Horn JL, McArdle JJ. A practical and theoretical guide to measurement invariance in aging research. Experimental Aging Research. 1992;18:117–144. [PubMed]
  • Johnson MM. Age differences in decision making: A process methodology for examining strategic information processing. Journals of Gerontology. 1990;45:P75–P78. [PubMed]
  • Jöreskog K, Sörbom D. LISREL VIII user's reference guide. Scientific Software; Mooresville, IN: 1993.
  • Kirasic KC, Allen GL, Dobson SH, Binder KS. Aging, cognitive resources, and declarative learning. Psychology and Aging. 1996;11:658–670. [PubMed]
  • Kliegl R, Baltes PB. Theory-guided analysis of mechanisms of development and mechanisms through testing-the-limits and research on expertise. In: Schooler C, Schaie KW, editors. Cognitive functioning and social structure over the life course. Ablex; Norwood, NJ: 1987. pp. 95–119.
  • Kyllonen PC, Christal RE. Reasoning ability is (little more than) working-memory capacity?! Intelligence. 1990;14:389–433.
  • Labouvie-Vief G. Intelligence and cognition. In: Birren IE, Schaie KW, editors. Handbook of the psychology of aging. 2nd ed. Van Nostrand Reinhold; New York: 1985. pp. 500–530.
  • Lawton MP, Brody E. Assessment of older people: Self-maintaining and instrumental activities of daily living. Gerontologist. 1969;9:179–186. [PubMed]
  • Lienert GA, Crott HW. Studies on the factor structure of intelligence in children, adolescents, and adults. Vita Humana. 1964;7:147–163. [PubMed]
  • Lindenberger U, Baltes PB. Intellectual functioning in old and very old age: Cross-sectional results from the Berlin aging study. Psychology and Aging. 1997;12:410–432. [PubMed]
  • Lindenberger U, Mayr U, Kliegl R. Speed and intelligence in old age. Psychology and Aging. 1993;8:207–220. [PubMed]
  • MacCallum RC, Wegener DT, Uchino BN, Fabrigar LR. The problem of equivalent models in applications of covariance structure analysis. Psychological Bulletin. 1993;114:185–199. [PubMed]
  • Margrett JA. Unpublished doctoral dissertation. Wayne State University; Detroit, MI: 1999. Collaborative cognition and aging: A pilot study.
  • Marsh HW, Balla JR, McDonald RP. Goodness-of-fit indexes in confirmatory factor analysis: The effect of sample size. Psychological Bulletin. 1988;103:391–410.
  • Marsiske M, Lindenberger U, Baltes PB. The de-differentiation of intelligence hypothesis revisited: From early adulthood to very old age. Fifth Cognitive Aging Conference; Atlanta, GA. Apr, 1994.
  • Marsiske M, Willis SL. Dimensionality of everyday problem solving in older adults. Psychology and Aging. 1995;10:269–283. [PMC free article] [PubMed]
  • Marsiske M, Willis SL. Practical creativity in older adults' everyday problem solving: Life-span perspectives. In: Adams-Price CE, editor. Creativity and aging: Theoretical and empirical approaches. Springer; New York: 1998.
  • McClelland DC. Testing for competence rather than for “intelligence.” American Psychologist. 1973;28:1–14. [PubMed]
  • Miller CK. Food and Diabetes Questionnaire. Pennsylvania State University, Penn State Nutrition Center; University Park: 1997.
  • Morrell RW, Park DC, Poon LW. Quality of instructions on prescription drug labels: Effects on memory and comprehension in young and old adults. Gerontologist. 1989;29:345–354. [PubMed]
  • Neisser U. Memory: What are the important questions? In: Gruneberg MM, Morris PE, Sykes RN, editors. Practical aspects of memory. Academic Press; London: 1978. pp. 3–24.
  • Neisser U. A case of misplaced nostalgia. American Psychologist. 1991;46:34–36.
  • Overton WF, Newman JL. Cognitive development: A competence-activation/utilization approach. In: Field TM, Huston A, Quay HC, Troll L, Finely GE, editors. Review of human development. Wiley; New York: 1982. pp. 217–241.
  • Riggle EDB, Johnson MMS. Age differences in political decision making: Strategies for evaluating political candidates. Political Behavior. 1996;18:99–118.
  • Rogers WA, Meyer B, Walker N, Fisk AD. Functional limitations to daily living tasks in the aged: A focus group analysis. Human Factors. 1998;40:111–125. [PubMed]
  • Rybash JM, Hoyer WJ. Characterizing adult cognitive development. Journal of Adult Development. 1994;1:7–12.
  • Salthouse TA. Cognitive competence and expertise in aging. In: Birren JE, Schaie KW, editors. Handbook of the psychology of aging. Academic Press; New York: 1990. pp. 310–319.
  • Salthouse TA. Expertise as the circumvention of human processing limitations. In: Ericsson KA, Smith J, editors. Toward a general theory of expertise. Cambridge University Press; New York: 1991. pp. 286–300.
  • Salthouse TA. Mechanisms of age cognition relations in adulthood. Erlbaum; Hillsdale, NJ: 1992.
  • Salthouse TA, Meinz EJ. Aging, inhibition, working memory, and speed. Journal of Gerontology: Psychological and Social Sciences. 1995;50:P297–P306. [PubMed]
  • Schaie KW. External validity in the assessment of intellectual development in adulthood. Journal of Gerontology. 1978;33:695–701. [PubMed]
  • Schaie KW. The course of adult intellectual development. American Psychologist. 1993;49:304–313. [PubMed]
  • Schaie KW. Intellectual development in adulthood: The Seattle longitudinal study. Cambridge University Press; New York: 1996.
  • Siegler RS. Beyond competence—Toward development. Cognitive Development. 1997;12:323–332.
  • Sinnott JD. Summary: Issues and directions for everyday problem solving research. In: Sinnot JD, editor. Everyday problem solving: Theory and application. Praeger; New York: 1989. pp. 300–306.
  • Sophian C. Beyond competence: The significance of performance for conceptual development. Cognitive Development. 1997;12:281–303.
  • Spearman C. The abilities of man: Their nature and measurement. Macmillan; New York: 1927.
  • Staudinger UM, Lopez DF, Baltes PB. The psychometric location of wisdom-related performance: Intelligence, personality, and more? Personality and Social Psychology Bulletin. 1997;23:1200–1214.
  • Sternberg RJ. GENECES: A framework for intellectual abilities and theories of them. Intelligence. 1986;10:239–250.
  • Sternberg RJ. The concept of intelligence and its role in lifelong learning and success. American Psychologist. 1997;52:1030–1037.
  • Sternberg RJ, Wagner RK. Tacit knowledge: An unspoken key to managerial success. Creativity and Innovation Management. 1992;1:5–13.
  • Strough J, Berg CA, Sansone C. Goals for solving everyday problems across the life span: Age and gender differences in the salience of interpersonal concerns. Developmental Psychology. 1996;32:1106–1115.
  • Thurstone TG. Primary mental ability for Grades 9–12. Rev. ed. Science Research Associates; Chicago, IL: 1962.
  • U.S. Census Bureau 1990 census lookup (STF3C—part 1) 1999. Retrieved February 13, 1998 from the 1990 census summary tape file (STF3) database on the World Wide Web: http://venus.census.gov/cdrom/lookup.
  • Wagner RK. The search for interterrestrial intelligence. In: Sternberg RJ, Wagner RK, editors. Practical intelligence: Nature and origins of competence in the everyday world. Cambridge University Press; New York: 1986. pp. 361–378.
  • Wagner RK, Sternberg RJ. Practical intelligence in real world pursuits: The role of tacit knowledge. Journal of Personality and Social Psychology. 1985;49:436–458.
  • Wagner RK, Sternberg RJ. Tacit knowledge and intelligence in the everyday world. In: Sternberg RJ, Wagner RK, editors. Practical intelligence: Nature and origins of competence in the everyday world. Cambridge University Press; New York: 1986. pp. 51–83.
  • Weisz J. Transcontextual validity in developmental research. Child Development. 1978;49:1–12.
  • West RL, Crook TH, Barron KL. Everyday memory performance across the life span: Effects of age and noncognitive individual differences. Psychology and Aging. 1992;7:72–82. [PubMed]
  • Whitfield KE, Seeman TE, Miles TP, Albert MS, Berkman LF, Blazer DG, Rowe JW. Health indices as predictors of cognition among older African Americans: MacArthur studies of successful aging. Ethnicity and Disease. 1997;7:127–136. [PubMed]
  • Willis SL. Cognition and everyday competence. In: Schaie KW, editor. Annual review of gerontology and geriatrics. Vol. 11. Springer; New York: 1991. pp. 80–109.
  • Willis SL. Everyday cognitive competence in elderly persons: Conceptual issues and empirical findings. Gerontologist. 1996;51:11–17. [PubMed]
  • Willis SL, Jay GM, Diehl M, Marsiske M. Longitudinal change and the prediction of everyday task competence in the elderly. Research on Aging. 1992;14:68–91. [PMC free article] [PubMed]
  • Willis SL, Marsiske M. Life span perspective on practical intelligence. In: Tupper TE, Cicerone KD, editors. The neuropsychology of everyday life: Issues in development and rehabilitation. Kluwer Academic; Boston: 1991. pp. 183–197.
  • Willis SL, Marsiske M. Manual for the Everyday Problems Test. Pennsylvania State University; University Park: 1993.
  • Willis SL, Schaie KW. Practical intelligence in later adulthood. In: Sternberg RJ, Wagner RK, editors. Practical intelligence: Nature and origins of competence in the everyday world. Cambridge University Press; New York: 1986. pp. 236–268.
  • Willis SL, Schaie KW. Everyday cognition: Taxonomic and methodological considerations. In: Puckett JM, Reese HW, editors. Mechanisms of everyday cognition. Erlbaum; Hillsdale, NJ: 1993. pp. 33–54.
  • Wolinsky FD, Callahan CM, Fitzgerald JF, Johnson RJ. Falling, health status, and the use of health services by older adults: A prospective study. Medical Care. 1992;30:587–597. [PubMed]
  • Wolinsky FD, Johnson RJ. The use of health services by older adults. Journal of Gerontology: Social Sciences. 1991;46:S345–S357. [PubMed]