Background: Emerging evidence suggests that the relationship between health literacy and health outcomes could be explained by cognitive abilities.
Objective: To investigate to what degree cognitive skills explain associations between health literacy, performance on common health tasks, and functional health status.
Design: Two face-to-face, structured interviews spaced one week apart, including three health literacy assessments and a comprehensive cognitive battery measuring ‘fluid’ abilities necessary to learn and apply new information and ‘crystallized’ abilities such as background knowledge.
Setting: An academic general internal medicine practice and three federally qualified health centers in Chicago, Illinois.
Participants: Eight hundred and thirty-two English-speaking adults ages 55 to 74.
Measurements: Health literacy was measured using the Rapid Estimate of Adult Literacy in Medicine (REALM), the Test of Functional Health Literacy in Adults (TOFHLA), and the Newest Vital Sign (NVS). Performance on common health tasks was globally assessed and categorized as 1) comprehending print information, 2) recalling spoken information, 3) recalling multimedia information, 4) dosing and organizing medication, and 5) healthcare problem-solving.
Results: Health literacy measures were strongly correlated with fluid and crystallized cognitive abilities (range: r=0.57 to 0.77, all p<0.001). Lower health literacy and weaker fluid and crystallized abilities were associated with poorer performance on healthcare tasks. In multivariable analyses, the association between health literacy and task performance was substantially reduced once fluid and crystallized cognitive abilities were entered into the models (without cognitive abilities: β=−28.9, 95 % Confidence Interval (CI) −31.4 to −26.4; with cognitive abilities: β=−8.5, 95 % CI −10.9 to −6.0).
Limitations: Cross-sectional analyses; English-speaking older adults only.
Conclusions: The most common measures used in health literacy studies detect individual differences in cognitive abilities, which may predict one’s capacity to engage in self-care and achieve desirable health outcomes. Future interventions should respond to all of the cognitive demands patients face in managing health, beyond reading and numeracy.
The relationship between adult literacy skills, health knowledge, behaviors, and clinical outcomes has been repeatedly investigated.1–3 More than 500 research publications have demonstrated associations between crude measures of reading and numeracy skills and various health-related outcomes, including risk of hospitalization and mortality.4–6 This has been the foundation for the field now known as ‘health literacy’ research.
Despite more expansive and accepted definitions, the problem of low health literacy has often been characterized as difficulties in reading and math skills. Early studies therefore responded by re-writing health materials at a simpler level or following other design principles to enhance reading comprehension; an approach found to have limited success.7,8 Still lacking a deeper understanding of the problem, recent investigations have tested comprehensive strategies with more promising results.9–11 However, as these were multi-faceted interventions targeting system complexity, it is difficult to isolate the true reason for improvement.
The role of patient requires more than the ability to read and manipulate numbers when managing health and navigating the healthcare system.12 A global set of cognitive skills is necessary to access health services, process and comprehend text and numerical information, orally express oneself, understand and recall spoken instructions, make inferences, utilize technology, critically weigh options and make decisions, and sustain often complex behaviors. Several studies over the past two decades allude to this, as they have reported strong associations between a broader set of cognitive abilities and the aforementioned health outcomes associated with literacy skills.13–17 In general, poorer cognitive performance has been linked to less health knowledge, poorer medication adherence, worse physical and mental health, and greater mortality risk.18–22
A significant relationship is already known to exist between literacy and cognitive skills.23 In fact, there is a small but growing body of literature documenting associations between measures of literacy, health literacy, and limited sets of cognitive abilities.15,24–27 These studies support the premise that the impact of health literacy on outcomes might be explained by a wide array of cognitive domains. It is imperative to explore these links, as this will expand our thinking on the true nature of the problem, and improve our ability to develop more effective strategies for identifying and responding to individuals who will struggle to learn and apply health information.28
Our research team launched a National Institute on Aging study (“Health Literacy and Cognitive Function among Older Adults”; R01 AG030611), herein referred to as ‘LitCog’. We recruited a cohort of older patients in order to investigate the association between health literacy and specific domains of cognitive function. Another objective was to examine to what degree cognitive abilities explained associations between health literacy and performance on health tasks common for self-care. We targeted widely used assessments of literacy in healthcare: the Rapid Estimate of Adult Literacy in Medicine (REALM),29 Test of Functional Health Literacy in Adults (TOFHLA),30 and the Newest Vital Sign (NVS).31 The examination of these assessments, coupled with a detailed battery of cognitive tests, afforded a unique opportunity to determine crucial, understudied relationships between literacy skills and general cognitive functioning.
English-speaking adults ages 55 to 74 who received care at an academic general internal medicine ambulatory care clinic or one of three federally qualified health centers in Chicago were recruited from August 2008 through October 2010. A total of 3176 patients were identified through electronic health records as initially eligible by age, notified of the study by mail, and contacted via phone. A total of 1904 eligible patients were invited to participate. Initial screening deemed 244 subjects ineligible due to severe cognitive or hearing impairment, limited English proficiency, or not being connected to a clinic physician (defined as <2 visits in two years). In addition, 794 refused, 14 were deceased, and 20 were eligible but had scheduling conflicts. The final sample included 832 participants, for a cooperation rate of 51 percent following American Association for Public Opinion Research guidelines.32
Subjects completed two structured interviews, 7–10 days apart, each lasting 2.5 hours. A trained research assistant guided patients through a series of assessments that, on Day 1, included basic demographic information, socioeconomic status, comorbidity, the three health literacy measures, and an assessment of performance on everyday health tasks. On Day 2, patients were administered a cognitive battery to measure processing speed, working memory, inductive reasoning, long-term memory, prospective memory, and verbal ability (see Table 1).33–42 Multiple tests were used for each cognitive domain, allowing a latent trait to be extracted. Northwestern University’s Institutional Review Board approved the study.
Health literacy was assessed by the TOFHLA, REALM, and NVS.29–31 The TOFHLA uses actual materials patients might encounter in healthcare to test reading fluency.30 A reading comprehension section includes 50 items that use the Cloze procedure; every fifth to seventh word in a passage is omitted and four multiple-choice options are provided. The numeracy section includes 17 items to assess comprehension of labeled prescription vials, an appointment slip, a chart describing eligibility for financial aid, and an example of results from a medical test. Scores are classified as inadequate (0–59), marginal (60–74), or adequate (75–100).
The REALM is a word-recognition test composed of 66 health-related words arranged in order of increasing difficulty.29 Patients are asked to read aloud as many words as they can. Scores are based on the total number of words pronounced correctly, with dictionary pronunciation as the scoring standard, and are interpreted as low (0–44), marginal (45–60), or adequate (61–66). The TOFHLA and REALM are the most common measures of literacy used in healthcare research.43 Finally, the NVS is a screening tool used to determine risk for limited health literacy.31 Patients are given a copy of a nutrition label and asked six questions about how they would interpret and act on the information. Scores are classified as high likelihood (0–1) or possibility (2–3) of limited literacy, or adequate literacy (4–6).
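For illustration, the scoring thresholds of the three instruments can be expressed as a short sketch (hypothetical helper functions, not part of any instrument’s official materials; cutoffs follow the text above):

```python
# Hypothetical helpers mapping raw scores to the categories described above.
# Function names are illustrative; cutoffs are taken directly from the text.

def classify_tofhla(score):
    """TOFHLA (0-100): inadequate 0-59, marginal 60-74, adequate 75-100."""
    if score <= 59:
        return "inadequate"
    if score <= 74:
        return "marginal"
    return "adequate"

def classify_realm(score):
    """REALM (0-66): low 0-44, marginal 45-60, adequate 61-66."""
    if score <= 44:
        return "low"
    if score <= 60:
        return "marginal"
    return "adequate"

def classify_nvs(score):
    """NVS (0-6): high likelihood 0-1, possibility 2-3, adequate 4-6."""
    if score <= 1:
        return "high likelihood of limited literacy"
    if score <= 3:
        return "possibility of limited literacy"
    return "adequate literacy"
```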
A set of 16 cognitive tests were used to assess six cognitive domains and to derive latent traits for them (Table 1). Cognitive abilities were broadly classified as fluid (processing speed, working memory, inductive reasoning, long-term memory, prospective memory) or crystallized (verbal ability). Fluid abilities refer to cognitive traits associated with active information processing in which prior knowledge is of relatively little help, whereas crystallized abilities embody stored information in long-term memory, or general background knowledge. With the exception of verbal ability, we included tests that were independent of reading skills.
The Comprehensive Health Activities Scale (CHAS) was developed for the LitCog study, and its psychometric properties are described elsewhere.44 Ten different health scenarios involving print, video, and spoken health communications, as well as common ‘artifacts’ requiring navigation (pill bottles and labels), are presented to patients, followed by a series of questions asking them to demonstrate comprehension and/or use the materials and artifacts. This methodology was adapted from prior studies by the research team, cognitive psychology approaches, and national literacy and health literacy assessments.45–47 In brief, an initial pool of 150 items was developed across the scenarios and 80 items were kept in the final assessment; all loaded on one common latent variable, resulting in 94 % reliable variance (Ω total).48,49 Scores were standardized (0–100), and item subscales were created, including: 1) comprehending print information (9 items), 2) recalling spoken information (11 items), 3) recalling multimedia information (20 items), 4) organizing and dosing medication (18 items), and 5) healthcare problem-solving (19 items). Higher scores translate to better performance on the specified health tasks. Internal consistency was acceptable to high across categories (Cronbach’s α=0.73, 0.63, 0.78, 0.76, 0.76, respectively).
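The internal-consistency statistic reported above, Cronbach’s α, follows the standard formula α = k/(k−1) · (1 − Σ item variances / variance of totals). A minimal sketch on made-up item responses (illustrative only; not CHAS data):

```python
# Minimal Cronbach's alpha from first principles, demonstrated on toy data.

def cronbach_alpha(items):
    """items: one list of scores per item, all of equal length (respondents)."""
    k = len(items)           # number of items
    n = len(items[0])        # number of respondents

    def variance(xs):
        # Sample variance (n - 1 denominator).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_var = sum(variance(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum_item_var / variance(totals))
```

Two perfectly correlated items yield α=1.0; weaker inter-item agreement pulls α down, as with the reported subscale values.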
Descriptive statistics were calculated for each variable. ANOVA was used to compare mean performance on health tasks and functional health status by health literacy categories. Pearson product–moment (TOFHLA, REALM) and Spearman (NVS) correlations were used to examine associations between health literacy measures and cognitive tests.
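As a sketch of the two correlation statistics used (standard formulas on toy data, not study data), Spearman’s coefficient is simply Pearson’s coefficient computed on ranks, with ties assigned average ranks:

```python
# Pearson and Spearman correlations from first principles (toy data only).

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(xs):
    # 1-based ranks; tied values receive the average of their positions.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    return pearson(ranks(x), ranks(y))
```

Spearman’s rank-based version suits the NVS because its 0–6 score is ordinal with few distinct values, whereas the TOFHLA and REALM are scored on finer continuous-like scales.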
Domain-specific and general cognitive ability (crystallized, fluid) scores were created to reduce cognitive skills to one measure per category and to avoid multicollinearity in subsequent regression models. Univariate imputation sampling methods were used to estimate any missing values (n=98) on cognitive measures by regressing each variable on age and variables from the same cognitive category in a bootstrapped sample of non-missing observations. The category-specific and domain summary scores were then calculated by estimating a single factor score with maximum likelihood estimation.
Multivariable linear regression models were used to examine the independent associations of health literacy and fluid or crystallized cognitive abilities with overall performance on health tasks. Age, gender, race, and number of comorbid chronic conditions were included in all models as covariates. A final model included all three variables, and the extent to which the effect of health literacy was attenuated by cognitive abilities was then evaluated. The Vuong test, a likelihood-ratio-based approach for non-nested models, was used to determine whether the variance explained by models (R2) significantly changed when health literacy and fluid or crystallized abilities were included or omitted.50 Analyses were performed using Stata 11.2 (StataCorp, College Station, TX).
The sample is described in Table 2. Participants were socially and economically diverse by years of schooling, household income, employment, marital status, and living situation. Individuals on average had two chronic conditions (M=1.9, SD=1.4), and were taking a mean of 3.6 prescription medications (SD=3.1). According to the TOFHLA, estimates of marginal and inadequate health literacy were 16.8 % and 12.5 %, respectively. This compared with 15.4 % and 8.9 % as determined by the REALM, and 22.9 % and 28.9 % according to the NVS. Across all measures, lower health literacy was associated with older age, African-American race, less education, and lower household income.
Correlations among the three health literacy measures were 0.76 (TOFHLA–REALM), 0.62 (TOFHLA–NVS), and 0.47 (NVS–REALM; all p<0.001). Health literacy measures were strongly correlated with all cognitive abilities (Table 3). Fluid abilities were more strongly correlated with the TOFHLA and NVS than with the REALM (0.76 and 0.73 vs. 0.57, respectively), and crystallized abilities correlated similarly with all health literacy measures (TOFHLA r=0.77, REALM r=0.74, NVS r=0.71). Fluid and crystallized abilities were also strongly correlated with one another (r=0.78).
In bivariate analyses, inadequate health literacy was significantly correlated with worse performance on healthcare tasks, with a gradient decline in performance across decreasing levels of literacy skills (TOFHLA r=0.81, REALM r=0.68, NVS r=0.74; all p<0.001, Table 4). Associations between fluid and crystallized cognitive abilities and overall task performance were equally strong (r=0.84 for both sets of abilities; p<0.001).
In multivariable models, inadequate health literacy, whether measured by the TOFHLA, REALM, or NVS, was a significant independent predictor of worse overall task performance (Table 5). Similarly, both fluid and crystallized cognitive abilities were significantly associated with the outcome. When both were entered into models that included health literacy, the relationship between health literacy and task performance was attenuated by 70.6 % for the TOFHLA (without cognitive abilities: β=−28.9, 95 % Confidence Interval (CI) −31.4 to −26.4; with cognitive abilities: β=−8.5, 95 % CI −10.9 to −6.0). This reduction was similar for the REALM (77.7 % attenuation; without cognitive abilities: β=−27.8, 95 % CI −30.8 to −24.7; with cognitive abilities: β=−6.2, 95 % CI −9.0 to −3.4) and the NVS (73.4 % attenuation; without cognitive abilities: β=−22.8, 95 % CI −24.9 to −20.7; with cognitive abilities: β=−6.0, 95 % CI −7.9 to −4.1).
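The attenuation percentages follow directly from the coefficients, 100 × (β_without − β_with)/β_without. Applying this to the rounded betas reproduces the TOFHLA and REALM figures; for the NVS the rounded betas give 73.7 %, slightly off the reported 73.4 %, which presumably reflects unrounded coefficients:

```python
# Percent attenuation of the health literacy coefficient once cognitive
# abilities enter the model: 100 * (b_without - b_with) / b_without.
# The negative signs cancel in the ratio.

def attenuation(b_without, b_with):
    return 100 * (b_without - b_with) / b_without

tofhla = attenuation(-28.9, -8.5)  # ~70.6 %
realm = attenuation(-27.8, -6.2)   # ~77.7 %
nvs = attenuation(-22.8, -6.0)     # ~73.7 % from rounded betas
```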
Independent associations between fluid and crystallized abilities and task performance were reduced by approximately half when they were both included together in the models, and health literacy by any measure provided little to no further attenuation of these relationships when added to models. The variance explained by multivariable models was significantly greater with fluid and/or crystallized cognitive abilities compared to health literacy as measured by the TOFHLA (R2=0.73 and 0.74 vs. R2=0.66, both p<0.001; combined R2=0.80, p<0.001). However, the explanatory power of the model including health literacy, fluid and crystallized abilities (R2=0.82) was greatest compared to models including only health literacy (p<0.001) or both fluid and crystallized abilities (p=0.003). This was also true for the REALM and NVS, with models explaining 60 % and 62 % of the variance, respectively. Including fluid and/or crystallized abilities significantly increased the variance explained by models (all p<0.001, Table 5). When health literacy (measured by REALM or NVS), fluid and crystallized abilities were all included in models (REALM: R2=0.81; NVS: R2=0.81), the variance explained was greater than models only including health literacy (both p<0.001) or fluid and crystallized abilities (p=0.01).
Health literacy has been the subject of multiple reports, and the U.S. Department of Health and Human Services, Institute of Medicine, and World Health Organization promote improving health literacy as a public health goal.1,51,52 Proposals have even been made recently to recognize low health literacy as a risk factor warranting clinical screening.53–55 Despite this, controversy remains as to the definition of health literacy: whether it is an individual risk factor or asset, a reflection of healthcare providers’ skills and health systems’ accessibility, or all of these.56 Clearly, the term has sparked unprecedented interest around simplifying healthcare and helping individuals manage their health.
The intent of the LitCog study was to revisit the measures previously used, almost exclusively, in health literacy research and better understand the latent psychological traits being evaluated. Our findings strongly suggest that the problem of limited health literacy mostly reflects individual differences across a broad set of cognitive skills that include but are not limited to reading and numeracy. Associations between health literacy and performance on common health tasks were substantially explained by 1) fluid abilities necessary to actively learn and apply new information, and 2) crystallized abilities such as background knowledge.
The TOFHLA and NVS were more strongly correlated with one another and aligned with fluid abilities. REALM associations with the various health tasks were explained more by crystallized abilities. In multivariable models, assessments of fluid and crystallized abilities together with a health literacy test best explained performance on everyday health tasks. There is likely not a limited set of skills that can be isolated as most important in managing one’s health. The roles individuals assume in healthcare require reading and numeracy, but also health-related knowledge, speed and efficiency of thought, critical thinking, multi-tasking, and memory, among other abilities. It is therefore not surprising that cognitive traits explained such a large degree of the relationship between health literacy and task performance, or that health literacy measures also provided an independent contribution.
Our findings are limited in that our sample was English-speaking only and predominantly female. We also included more assessments of fluid abilities than crystallized tests. Since fluid and crystallized abilities were comparable in explaining health literacy associations, our findings might under-estimate the importance of background knowledge. In addition, performance on everyday health tasks was measured using hypothetical scenarios. Participants might have applied greater effort to tasks if they had been more salient to their current personal health. However, when comparing scores between those with and without experience with the task or condition in each scenario, differences were not found. LitCog participants are now being followed as a cohort, allowing for opportunities to prospectively study relationships between health literacy, cognitive abilities, and outcomes including risk of hospitalization and mortality.
A general critique of our findings might be that the assessment of task performance is similar to health literacy measures. However, we required individuals to demonstrate functional skills across a wide array of health scenarios beyond solely reviewing print materials, which is the basis of existing health literacy tests. This criticism can also be directed at many seminal health literacy studies that have examined associations with the ability to perform common self-management tasks.4,57–59 In fact, assessments of health literacy closely resemble cognitive tests, supporting the primary assertion of the LitCog study. The most notable similarity can be seen between the REALM and the American National Adult Reading Test (AMNART); both require individuals to correctly pronounce lists of words (r=0.73). The strength of correlations among health literacy, cognitive tests, and performance on health tasks should be understandable and expected.
These crude literacy assessments have proven to be useful research tools, and it is possible they may eventually demonstrate equivalent clinical utility. All are highly predictive of an individual’s ability to perform routine healthcare tasks; the choice to use one versus another should depend more on test attributes (e.g., availability in Spanish, time to administer) and less on concerns for misclassification. In this case, the NVS might be a logical choice, as it is as brief as the REALM but available in Spanish, where the REALM is not.
Brief measures that assess global cognitive function, such as the Mini-Mental State Examination (MMSE), might also serve as proxies for health literacy. Strong associations between health literacy measures, such as the TOFHLA, and the MMSE have previously been established.25,26,60 One advantage of this approach is that many of these cognitive screeners are already administered in clinical settings. In addition, their face validity makes it less likely that a researcher or clinician would fall victim to superficial interpretations of the problem and its solution. Yet these tools would likely need revised scoring thresholds, as individuals could be free of a clinically defined cognitive impairment yet still have limited health literacy.
The proposition that the most common measures of health literacy are actually crude assessments of general cognitive abilities should not distract attention from broader efforts to redesign health materials, improve clinician communication skills, enhance the navigability of health systems, or engage communities to assume public health roles.27 Our findings affirm the need to help patients build appropriate background knowledge and skills, but also to reduce the cognitive demands health systems impose through unnecessarily complex health tasks. As a start, health literacy interventions should move beyond plain-language approaches and deconstruct the tasks required of patients within a particular healthcare context. Depending on the task, steps could be taken that follow cognitive and human factors principles to improve performance. This might include giving individuals more time to accurately process information, limiting and layering new information to reduce cognitive burden, using increasingly available technologies or ‘external aids’ (e.g., pill box organizers) to enhance recall and prompt health behaviors, or eliminating tasks altogether if the health system could assume responsibility instead.
Future evaluations of interventions should always collect sufficient data to determine if a strategy mitigates the impact of ‘low health literacy’ on outcomes.61 A modified perspective of health literacy that includes an expansive view of cognitive skills necessary to manage health could also inform the development of more precise clinical assessments to identify those at risk.12 Despite calls for clinicians to follow universal precautions and assume all patients may have health literacy concerns, remediating inadequate cognitive skills for self-care might require clinical screening. This would then allow a greater allocation of resources, in terms of education and follow-up, to those struggling to learn and apply health information and instructions.
We would like to thank Elizabeth Bojarski, Rachel O’Conor, Emily Ross, and Rina Sobel for their determination and effort in recruiting and collecting data for the LitCog study. This project was supported by the National Institute on Aging (R01 AG030611; PI: Wolf).
The authors declare that they do not have a conflict of interest.