The purpose of the present study was to empirically investigate the relationships between two theoretically derived approaches to the measurement of everyday cognition in older adults. In particular, the study focused on examining three research questions, using more and less well-defined measures of everyday cognition. First, findings from this study suggest that all of our measures of well- and ill-defined everyday cognition are, to some extent, interrelated. A single exception to this general pattern was the relationship between the ill-defined solution quality score and the well-defined ECB Declarative Memory test, which failed to reach significance. Second, the well- and ill-defined measures were differentially related to traditional psychometric cognitive ability tests. Specifically, all of the well-defined ECB factors related strongly and significantly to basic ability factors representing Inductive Reasoning, Verbal Knowledge, and Declarative Memory. The ill-defined factors were not related to the basic cognitive abilities, with the exception of the Solution Fluency factor, which showed a unique relationship with the Basic Knowledge factor. Third, we addressed the question of how well- and ill-defined everyday problems related to everyday functional competence. In addition, we examined the extent to which individual differences in everyday functioning that were related to basic cognitive ability could be mediated by everyday cognition. The results revealed that the Well-Defined ECB factor and the Ill-Defined Solution Quality factor were significant predictors of the everyday functioning composite, explaining more than half of its reliable variance. Follow-up analyses also showed that the everyday cognition factors accounted for most of the basic ability variance in everyday functioning, as well as substantial unique variance beyond that explained by basic abilities. 
Thus, the key findings from this study were that well- and ill-defined problem-solving factors, as assessed in this study, are not orthogonal and that, together, they accounted for substantial and unique variance in a commonly used real-world outcome measure.
In this study, we constrained the social content of the problems included in both the well- and ill-defined measures. In published practice, well-defined measures have traditionally focused on everyday instrumental domains (e.g., Allaire & Marsiske, 1999; Willis & Marsiske, 1991; Willis & Schaie, 1986), whereas ill-defined measures typically have dealt with affective or social contextual dilemmas encountered in old age (e.g., P. B. Baltes & Staudinger, 1993; Berg et al., 1998; Blanchard-Fields et al., 1995; Strough et al., 1996). Even in the domains considered in this study, social context can play an important role. For instance, Margrett (1999) found that older adults considered the activity of preparing meals with others to be a much more social domain than other IADLs (e.g., financial management or medication use). Despite the importance of social–emotional content in everyday problem solving, the problems included in this study were presented without any contextual information regarding the physical location of the problems, the social partners who might be involved, or the interpersonal factors that might guide problem solving. This exclusion of social–emotional problems was a decision made in the interest of internal validity, to remove as much of the individual differences in participant preferences and motivations from problem solutions as possible. This tight focus on problems excluding social–emotional content permitted us to more clearly compare the effects of the well- and ill-defined problem presentations, without other intervening factors differentiating the two classes of problems.
Future work should seek to determine whether adding the social–emotional content back into everyday problems would aid in accounting for individual differences in everyday functioning. On the one hand, because typical measures of everyday functioning (e.g., Lawton & Brody, 1969) often are themselves fairly well structured and context invariant, representing basic competence (M. M. Baltes, Mayr, Borchelt, Maas, & Wilms, 1993; Marsiske, Klumb, & Baltes, 1997), one might argue that measures that capture the affective and social context of everyday life would add little to predicting such outcomes. On the other hand, to the extent that ill-defined measures also capture the preferences and motivations that guide real-world decision making, such measures might actually enhance predictions. We take the additional predictive benefit gained by adding ill-defined solution quality to our models as preliminary support for this notion that including ill-defined measures might enhance predictions. Moreover, if everyday functioning were more broadly defined to include affective and social outcomes, including well-being, the predictive benefit of ill-defined measures might become even stronger. Indeed, the relative narrowness of our everyday competence outcome is a limitation of the present study. Our self-rating of everyday competence was broader than in many other studies, however, because it also included items pertaining to cognitively complex activities such as keeping appointments, keeping track of current events, and understanding media (Pfeffer et al., 1982).
In addition to the social–emotional content, we also constrained the everyday domains examined in this study to food preparation, financial management, and medicine use. As explained in Allaire and Marsiske (1999), this decision was governed, in part, by Wolinsky and Johnson’s (1991) finding that such “cognitive activities of daily living” domains might be particularly predictive of functional outcomes like institutionalization and mortality. We did not empirically validate these domains (e.g., factor analyze the scales to confirm cross-measure domain correlations or have judges rate problems to assign them to domains). Rather, in this study, domains were used as a selection criterion to ensure the “everyday-ness,” or face validity, of the problems used. They were also used to try to ensure that differences between well- and ill-defined problems could not be attributed to different content domains being tested across problem types. It was not our intent to compare functioning across everyday domains. Indeed, several studies have shown (Allaire, 1998; Marsiske & Willis, 1995) that when measures of everyday cognition include multiple domains, a single general latent factor is usually the best representation of between-domain functioning, which suggests that the underlying cognitive processes are fairly similar across domains.
Our chief intent in this study was to contrast well- and ill-defined everyday problems. To build on previous research findings, we therefore used existing measurement approaches (Allaire & Marsiske, 1999; Denney & Pearce, 1989; Marsiske & Willis, 1995). In choosing to draw on these previous approaches, our well- and ill-defined problems differed from each other in two major ways. First, our well-defined problems provided all the information (i.e., initial state, solution means, and end state) needed to solve the problem, whereas our ill-defined problems typically provided or implied an initial state and a goal (e.g., “Your doctor tells you to eat foods low in fat. What do you do?”) but provided no solution means or other information to help solve the problem. Second, our well-defined problems required a single correct answer, whereas our ill-defined problems allowed for multiple correct solutions. From problem-space theory, then, the chief way in which the two classes of problems differed was in the provision of solution means. It must be acknowledged that in using the Denney approach (Denney & Pearce, 1989), our ill-defined problems were not as ill-defined as they might have been if, for example, problem end states had also not been provided. Therefore, it may be theoretically and pragmatically appropriate to view everyday problems as falling along a continuum of definedness. A problem’s exact location along this continuum (e.g., Berg, 1999) is determined by the extent to which the initial state, solution means, and end state of a particular everyday problem are provided or defined.
Using these two approaches to everyday problem solving, both established in the research literature but seldom combined, we found that each uniquely and significantly predicted real-world outcomes above and beyond psychometric measures of cognition. Establishing the everyday cognition measures as strong predictors of older adults’ everyday competence is an important step in assessing the ecological validity of everyday cognition measures. If everyday problem-solving measures only accounted for the same variance in older adults’ everyday functioning as extant cognitive measures, there would be little justification for adding such measures to batteries containing traditional, well-understood psychometric measures of cognitive functioning. Indeed, our findings clearly suggested that our well- and ill-defined measures, separately and together, explained substantial variance in self-rated everyday functioning beyond traditional cognitive measures. The caveat to this interpretation of our findings is that a broader ability battery might have accounted for more variance in everyday functioning; our current battery included only three cognitive abilities (i.e., inductive reasoning, declarative memory, and knowledge). However, our findings lend preliminary support to the notion that there is “value added” in using everyday cognition measures to predict real-world outcomes. The predictive ability of the well-defined ECB tests and the solution quality score underscores our belief that effective everyday problem solving might be defined in terms of its ability to predict important real-world outcomes. That is, we define effective in terms of positive prediction of a desirable criterion or outcome, and in the context of the present study, higher performance on the ECB tests and higher ratings of solution quality were related to better self-rated everyday functioning.
Of particular interest was our finding that the conclusions drawn from ill-defined problems are critically dependent on the scoring approach. The two scoring strategies used with the ill-defined measure yielded different and independent results. In particular, the solution fluency score was significantly related to basic knowledge, whereas solution quality remained unrelated to basic abilities. Moreover, unlike fluency, solution quality served as a unique predictor of older adults’ self-reported everyday functioning in predictive models that also included well-defined measures. In some ways, these findings call into question the use of fluency-based scoring approaches. Although fluency has the advantage of ease and replicability, it offers little unique variance beyond more conventional well-defined measures. Moreover, fluency is conceptually problematic as an indicator of good problem solving. Although one might argue that the more ideas one has available, the more likely one is to select good solutions, a counterpossibility is that expert everyday cognition is characterized by efficiency and the tendency to filter out less optimal solutions, avoiding verbosity and inefficiency (e.g., Berg et al., 1999), so that more solutions are not necessarily better.
We acknowledge that our study still leaves important unanswered questions about what strategies or solution types are particularly useful in contributing to positive real-world functioning. Indeed, by using unstructured expert judgments of solution quality, which have the benefit of capturing some of the real-world processes by which everyday competency judgments are made, we gave up the ability to precisely identify the dimensions along which quality judgments were constructed. Speculatively, given that the solution quality score emerged from expert judgments from participants’ written solutions, it is likely that elements of communicative efficiency and person perception (by the raters) may have played a role in those judgments, as might creativity in the generation of unusual and original solutions.
This interpretation is certainly consistent with recent work on wisdom. In a review of their theoretical and empirical work on the topic of wisdom, P. B. Baltes and Staudinger (2000) argued that older adults’ wisdom-related performance reflects the integration of intellectual and personality characteristics. Evidence of this integration is provided by a recent study (Staudinger et al., 1997) in which traditional measures of intelligence uniquely explained very little of the individual differences in older adults’ wisdom-related performance. Instead, measures capturing the intersection of personality and intelligence (e.g., creativity, cognitive style, and social intelligence) accounted for the largest proportion of the explained variance in performance on their wisdom task. Extending this line of argument to the present work, it may be that the open-ended, ill-defined problems are also at the interface of cognition and personality, although in the present study we lacked the noncognitive covariate measures needed to test this assertion. Clearly, a critical next step in this literature is to integrate research on the processes, strategies, or dimensions (e.g., Watson & Blanchard-Fields, 1999) of effective problem solving with reliable protocol-based solution quality ratings (expanding on the initial effort in this study) and to link these underlying aspects directly to meaningful real-world outcomes.
Taken as a whole, the results from this study suggest that the current tendency in the research literature to use single measures of everyday problem solving, selected to reflect either a well-defined or an ill-defined approach to assessing everyday cognition, may not be the most fruitful. Our findings suggest that the well- and ill-defined measurement approaches are distinct but related, and both may be important in predicting older adults’ everyday functional competence, above and beyond the more context-free measures of cognition typically included in the adult development literature. Expressed differently, these findings can be used to further argue that everyday cognition should be considered not as a unitary construct but as a multidimensional one (e.g., Marsiske & Willis, 1995). Thus, the study of everyday cognition will further benefit if a multiple-measurement framework, including both well- and ill-defined measures, is incorporated into future studies.