Past research suggests that semantic and numerical medical risk descriptors may lead to miscommunication and misinterpretation of risk. However, little research has been conducted on systematic features of this bias, and the resulting potential risks to people contemplating or receiving treatment. Three studies explore the influence of verbal versus numerical medical risk descriptions. In Study 1a, San Francisco Bay area residents (N = 59) were presented with semantic descriptors for low-likelihood events and reported their perceived quantitative risk for the events. In Study 1b, undergraduates (N = 29) were presented with semantic versus numerical information about side effects for a prescribed medication and reported their perceived risk and adherence intentions. In Study 1c, San Francisco Bay area residents (N = 125) were presented with semantic versus numerical information about their risk for a disease and reported their perceived risk and intention to adhere to a prescribed treatment. The results of the first study suggest that people systematically overestimate the likelihood of low-probability events described in semantic terms such as “low risk” or “people may occasionally experience.” The results of the second and third experiments suggest that presenting semantic information about the risks of engaging in a new behavior makes people less likely to engage in that behavior, whereas presenting semantic information about the risks of not engaging in a new behavior makes people more likely to engage in the behavior. The decision to present semantic versus probabilistic information is tantamount to a decision about whether to encourage risk acceptance versus risk avoidance.
During the 2003 SARS epidemic, news reports flooded the airwaves in an attempt to assuage people's fears about contracting SARS on an airplane. One broadcast reported “infinitesimal” risks associated with SARS and flying (National Broadcasting Company, 2003). Another stated that people should not worry about contracting SARS because the risk is “low” (Centers for Disease Control and Prevention, 2005a). These reports were most likely trying to communicate that there were only 16 known cases of people contracting SARS on a plane, and these cases occurred before health screening procedures began (National Broadcasting Company, 2003). The public, however, appeared to react to such descriptors with alarm rather than reassurance. Evidence of significant reductions in flying suggests that people substantially overestimated their risk of contracting SARS from a fellow airline passenger (Centers for Disease Control and Prevention, 2005a). Why did this risk communication, which had been intended to reassure rather than alarm potential flyers, backfire so badly?
This article focuses on the systematic bias that results from using semantic risk labels for low-probability events. We propose that people presented with semantic risk descriptors will misperceive the intended message and tend to overestimate their risk for low-probability events. This bias can discourage adherence to a recommended course of action when the risks of engaging in a new behavior are presented (e.g., taking a prescribed medication or undergoing an invasive test or exploratory surgery). However, the same bias can encourage adherence to a prescribed regimen, when the risks of failing to engage in a new behavior are presented (e.g., information about the risks of not changing one's lifestyle to avoid heart disease).
Many government agencies require that risk information be provided to improve joint decision-making between information providers and decision-makers (Food and Drug Administration, 2004). Yet, as in the above case with SARS, informing people about risks may have unintended consequences. For example, the mere fact that information is presented could lead people to assume that a behavior involves substantial risk, because of pragmatic conversational norms (Grice, 1975) (i.e., if it were unlikely, then the communicator wouldn't waste my time by telling me about it). If information to counter this perception is not properly presented, people might misinterpret the intended meaning of the information.
In addition to the pragmatics of presenting risk information, psychological research has demonstrated that methods of conveying risk affect risk perception, decision-making, and subsequent behavioral responses (Bryant & Norman, 1980; Cohn, Schydlower, Foley, & Copeland, 1995; Nakao & Axelrod, 1983; Olson & Budescu, 1997; Redelmeier, Koehler, Liberman, & Tversky, 1995; Tavana, Kennedy, & Mohebbi, 1997; Wallsten, 1990). For instance, verbal (i.e., semantic) and quantitative communication methods affect risk perceptions differently (Berry, Michas, & Bersellini, 2003; Edwards, 2004; Edwards, Elwyn, Covey, Matthews, & Pill, 2001; Epstein, Alper, & Quill, 2004; Kahneman & Tversky, 1982; Schechtman, 2002), leading to confusion over the preferred presentation format (Marteau et al., 2000; Marteau, Senior, & Sasieni, 2001; Windschitl & Weber, 1999).
Some studies suggest that numerically expressed risk is often viewed as the preferred communication format because it may lead to a greater understanding of information (Brase, 2002; Brase, Cosmides, & Tooby, 1998; Cosmides & Tooby, 1996; Gigerenzer, 1998). However, other studies suggest that people have trouble understanding numerically expressed risk information because it can require rigorous mathematical skills (Schwartz, Woloshin, Black, & Welch, 1997; Wallsten, 1990), and almost half of North Americans lack the minimum skills needed to process such information (International Adult Literacy Survey, 2000).
Semantically presented information also has benefits and drawbacks (Budescu, Weinberg, & Wallsten, 1988; Burkell, 2004; Wallsten, Budescu, Zwick, & Kemp, 1993). For example, verbal labels are widely used for communicating risk (Erev & Cohen, 1990; Merz, Druzdzel, & Mazur, 1991) because they are easier to use, and more natural than numerical risk descriptors (Wallsten et al., 1993). However, conveying information in this vague manner may lead to miscommunication and misinterpretation. Although the information provider intends for the semantic descriptor to indicate one particular value, the receiver of the information may have a different numerical translation. Using semantic descriptors may therefore hinder communication because people can have vastly different interpretations of the meaning of the risk descriptor.
This article builds on past research on risk communication and predicts that people are systematically biased and tend to overestimate the intended meaning of verbal descriptors used for low probability medical risk events. As a result people may be overly fearful of low-likelihood, negative outcomes. Communicating risk information semantically may have real behavioral consequences: presenting semantic information about the risks of engaging in a behavior may make a person less likely to engage in that behavior, whereas presenting semantic information about the risks of not engaging in a behavior may make a person more likely to engage in the behavior.
For instance, Young and Oppenheimer (2006) found that semantic risk descriptors about medication side effects reduced intended medication adherence. They presented participants with pharmaceutical advertisements that disclosed side effect risk information in semantic form (as is standard in pharmaceutical advertisements), or in numerical form (according to the actual clinically reported frequency of side effects). Although side effect information is included to increase medical literacy (Food and Drug Administration, 2004), participants in the semantic condition vastly overestimated the intended meaning of the semantic risk descriptors, were more fearful of the side effects, and were less likely to intend to take the medication compared with those in the numerical risk condition.
Likewise, presenting people with semantic information about the risks of failing to change behavior may lead people to be overly fearful and more likely to change that behavior. For example, reports such as the National Cancer Institute's ambiguous statement that smoking is “a known risk factor” (National Cancer Institute, 2005) may have led people to overestimate the link between smoking and death, and increase attempts to quit smoking (Viscusi, 1995).
The following studies have three objectives: first, to demonstrate in several domains that people are systematically biased toward overestimating semantically presented, low-probability medical risks (Study 1a); second, to determine whether people respond differently to verbal versus quantitative risk communication methods (Studies 1a–1c); and third, to demonstrate that semantic presentation of risk information can either discourage adherence to a prescribed regimen, when the risks of engaging in a new behavior are presented, or encourage adherence, when the risks of not engaging in a new behavior are presented (Studies 1b and 1c).
Study 1a was designed to test whether people overestimate their risk for low-likelihood events conveyed with words. We asked seven physicians to list semantic descriptors that they would use for communicating risk to a patient. From this list, we chose six frequently used medical risk descriptors (Mosteller & Youtz, 1990). Fifty-nine San Francisco Bay area residents were recruited at a local shopping center and asked to complete questionnaires about medical risk. The labels (“some people may experience,” “there is a low risk of,” “occasionally people may experience,” “at times people may experience,” “infrequently people may experience,” “rarely people may experience”) were presented to participants to describe risk for various events: contracting Aspergillosis, Botulism, Cryptococcosis, Rift Valley Fever (RVF), or a bacterial infection, experiencing side effects from medication, and having a child with Down's syndrome. To test the robustness and generality of the effect, we used a factorial design in which the six semantic descriptors were crossed with the seven health outcomes (e.g., “some people may experience Botulism”; “there is a low risk of having a child with Down's syndrome”). Participants were asked to specify the relevant (low probability) percentage risk associated with each of these events. Each participant provided 42 responses (i.e., one percentage risk estimate for each of the seven medical conditions × six semantic descriptors).
Actual event risk was gathered from various sources (e.g., mean risk of contracting Aspergillosis, Botulism, or Cryptococcosis (Centers for Disease Control and Prevention, 2005b,c,d); mean risk of having side effects from medication (Young & Oppenheimer, 2006); mean risk of having a child with Down's syndrome (Stoll, Alembik, Dott, & Roth, 2005); mean risk of getting an infection from a bacterium (Mayo Clinic Health Information, 2005); and the mean risk of contracting RVF (Iowa State University, 2006)).
We ran one-sample t-tests to determine differences between actual and perceived risk. As predicted, participants overestimated the actual risk in every one of their 42 risk estimates (Aspergillosis, t(350) = 11.8, p < 0.01; Botulism, t(352) = 13.29, p < 0.01; Cryptococcosis, t(350) = 12.66, p < 0.01; medication side effects, t(352) = 15.4, p < 0.01; having a child with Down's syndrome, t(352) = 8.5, p < 0.01; infection from any given bacterium, t(352) = 17.8, p < 0.01; RVF, t(350) = 4.9, p < 0.01). The “low risk” descriptor led to the least bias but still led to overestimations by several orders of magnitude in some cases (see Table 1).
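The comparison above is a standard one-sample t-test of mean perceived risk against the actual risk value. A minimal sketch of the computation, using made-up perceived-risk percentages rather than the study's data, might look like:

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(estimates, actual_risk):
    """t statistic testing whether the mean perceived risk differs from the actual risk."""
    n = len(estimates)
    return (mean(estimates) - actual_risk) / (stdev(estimates) / sqrt(n))

# Hypothetical perceived-risk percentages for one descriptor-event pairing
perceived = [5.0, 12.0, 8.5, 20.0, 3.0, 15.0, 9.0, 11.0]
t = one_sample_t(perceived, 0.1)  # an actual risk of roughly 0.1%
```

A large positive t here indicates that mean perceived risk exceeds the actual risk, mirroring the overestimation pattern reported for all seven outcomes.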
This first study suggests that people overestimate risk for low-probability events when they are presented with semantic descriptors. We found that people overestimated their risk for every one of the semantic risk labels, sometimes by a factor of up to 300,000 relative to the intended meaning of the descriptor.
Two different mechanisms may underlie our finding. One possibility is that people may already be poorly calibrated to their actual risk before receiving the verbal descriptors, and imprecise verbal descriptors simply fail to improve their assessment. Another possibility is that verbal descriptors themselves cause people to be poorly calibrated and overestimate their risk for events. Preliminary evidence collected in our lab suggests that both of these factors may play a role in people's miscalibration to semantic risk information. Thirty-three participants waiting at a train station were either told, “the Centers for Disease Control has said that people are at a low risk for contracting Aspergillosis during their lifetimes” (verbal descriptor group) or were told nothing. Both groups then gave estimates of the likelihood that they would develop Aspergillosis. Although the “no descriptor” group's average estimate of 4.10% (N = 18, SD = 5.09) was greater than the actual risk of less than 0.00002%, the group given the “low risk” descriptor overestimated their risk to an even greater degree (N = 15, M = 7.31%, SD = 12.87). This suggests that while knowledge of risk is inaccurate when no verbal descriptor is given, it is even worse when the descriptor “low risk” is added.
The verbal descriptors used in Study 1a are frequently used in health settings. They were validated by both a sample of physicians and in previous literature (Mosteller & Youtz, 1990), making the results highly relevant to actual medical situations. However, one possibility is that semantic labels may have little effect on actual behaviors or behavioral intentions. Another possibility is that the systematic bias associated with semantic risk descriptions not only affects behavioral intentions, but has predictable consequences for that behavior. Sometimes these consequences might seem benign or even desirable (e.g., avoiding ill-advised activities and complying with physician recommendations to reduce risk) but at other times the consequences might be quite serious (e.g., avoiding medications and diagnostic procedures that pose risks, but have advantages that outweigh those risks).
In the next set of studies, we seek to determine the behavioral consequences associated with using verbal risk descriptors. Because of people's overestimation of semantic risk descriptors, we predict that presenting people with semantic risk descriptors may either reduce intended adherence for a prescribed regimen, when the risks of engaging in a new behavior are presented (e.g., presenting information about side effects from taking a medication), or encourage intended adherence for a prescribed regimen, when the risks of failing to engage in a new behavior are presented (e.g., the risks of not changing one's lifestyle to avoid disease). Studies 1b and 1c seek to test this hypothesis.
In this study, we present participants with information about their risk for side effects from a medication; information that is meant to increase consumer awareness. However, presenting risk information semantically, versus quantitatively, may make people overly fearful and reduce intentions to adhere to the prescription.
Twenty-nine undergraduates were asked to complete questionnaires. Participants were randomly assigned to one of two risk-framing groups (percentage versus semantic). They were asked to imagine visiting a physician who had prescribed Flonase to relieve their allergy symptoms. Participants were shown advertisements for Flonase. The actual advertisement stated that people might experience headache, nosebleed, and sore throat. Clinical-trial adverse event data for people on the highest reported dosage indicate that headache occurs 1.5 percentage points more often than placebo, nosebleed 1.5 percentage points more often, and sore throat 0.6 percentage points more often. We used a more conservative estimate of 4.1 percentage points greater than placebo for the numerical condition.
Participants in the semantic condition were told there is a “low risk of experiencing side effects from this medication”; those in the percentage condition were told that they have a “4.1% greater than placebo risk of experiencing side effects from this medication.” Participants were then asked (on 9-point scales) to indicate their likelihood of taking the drug (“Very unlikely” to “Very likely”) and their fear of side effects (“Not fearful” to “Very fearful”), and to state which side effects they were thinking of when assessing their risk. Participants in the percentage condition were also asked whether they perceived themselves at high or low risk and to give a quantitative risk estimate (9-point scale). People in the semantic group were asked to state a percentage risk estimate (compared to placebo).
As predicted, people were less likely to intend to adhere to prescribed treatment when given semantic information (M = 7.0, SD = 1.69) about their adverse event risk than when given quantitative risk information (M = 8.14, SD = 0.95). This difference was statistically reliable, t(27) = 2.22, p < 0.04. As in Study 1a, people presented with semantic risk information were poorly calibrated to the intended meaning of the side effect risk information and perceived they had an 11.4% risk of side effects from the medication, approximately five times the likelihood intended by the semantic descriptor. There was no reliable difference in fearfulness ratings between the percentage group (M = 2.29, SD = 1.49) and the semantic group (M = 2.47, SD = 0.92), t(27) = 0.40, ns. However, there was a strong negative relationship between fear of side effects and compliance intentions, r(29) = −0.48, p < 0.01, demonstrating that the participants who were the most fearful of side effects were the least likely to intend to adhere. Participants in the percentage group all perceived their risk to be low (N = 14) rather than high (N = 0) and rated their quantitative risk as low (M = 2.77, SD = 1.09). Those in the semantic group reported an average percentage risk of 11.4% (SD = 7.33).
We recorded perceptions of side effects associated with the drug to make sure they reflected the nature of the actual side effects. Indeed, responses conformed to common side effects associated with these medications. The most popular responses were nausea, headache, rash, and diarrhea.
Study 1b suggested that presenting people with semantic information about the risk of engaging in a behavior would reduce intentions to engage in that behavior. However, if people presented with semantic risk information over-estimate their risk of developing a disease, this presentational form may increase people's likelihood of engaging in a prescribed preventative behavior. To test this hypothesis, we presented people with either semantic or quantitative information about their risk for a disease to determine whether different communication methods would affect intended adherence rates for behaviors aimed at disease prevention and treatment.
One hundred and twenty-five participants were approached at a local shopping center and asked to complete questionnaires. Participants were told to imagine that their physician had informed them that they were at risk for Aspergillosis and had recommended lifestyle changes to reduce their risk. As reported in Study 1a, the mean risk for Aspergillosis is less than 0.1% (Centers for Disease Control and Prevention, 2005b). However, we used a more conservative estimate of a 1% chance for the percentage group to more accurately reflect a sample at risk for the disease.
People were randomly assigned to receive either semantic or percentage information about their risk for Aspergillosis. People in the semantic group were asked to imagine that they had been diagnosed as being at low risk for Aspergillosis. Those in the percentage group were asked to imagine that they had been told they have a 1% chance of developing the disease. Both groups were asked to respond to four items: likelihood of changing behavior to avoid contracting the disease, fear of contracting the disease, likelihood to test, and severity of Aspergillosis (9-point scales). The semantic group was also asked to estimate their percentage risk associated with the verbal label “low risk.”
Participants in the semantic group (M = 5.16, SD = 2.49) indicated greater intentions than the percentage group (M = 3.79, SD = 2.34) to change their behavior to avoid contracting the disease, t(124) = 3.17, p < 0.01 (see Figure 1). The semantic group (M = 4.17, SD = 2.29) also indicated being more fearful of contracting the disease than the percentage group (M = 2.94, SD = 2.16), t(124) = 3.12, p < 0.01. The semantic group (M = 5.16, SD = 2.35) reported a trend toward greater likelihood of getting tested for the disease than the percentage group (M = 4.29, SD = 2.71), t(124) = 1.92, p < 0.06. Participants in the semantic condition (M = 2.95, SD = 1.44) rated the disease as more severe than those in the percentage group (M = 2.24, SD = 1.53), t(124) = 1.92, p < 0.01. The semantic group attributed a 16.02% mean risk (SD = 11.94%) to the verbal label “low risk.” People who were more fearful and who rated the disease as more severe were more likely to intend to test and to comply with prescribed changes (see Table 2).
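The between-group comparisons in Studies 1b and 1c are independent-samples t-tests. Assuming pooled variances (the article does not state which variant was used), the statistic can be sketched with hypothetical 9-point ratings, not the study's data:

```python
from math import sqrt
from statistics import mean, stdev

def pooled_t(group_a, group_b):
    """Independent-samples t statistic with pooled variance."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / sqrt(pooled_var * (1 / na + 1 / nb))

# Made-up behavior-change intention ratings (9-point scale) for two framing groups
semantic_ratings = [5, 7, 4, 6, 8, 5, 6]
percentage_ratings = [3, 4, 2, 5, 4, 3, 4]
t = pooled_t(semantic_ratings, percentage_ratings)
```

A positive t here would indicate higher mean intentions in the semantic group, the direction of the effect reported above.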
The results of three studies suggest that people overestimate the likelihood of low-risk events when they are presented with verbal descriptors, and that people's behavioral intentions reflect this lack of calibration. The studies suggest that people overestimate the degree of risk that the verbal descriptors were intended to convey (Study 1a) and that this bias can discourage (Study 1b) or encourage (Study 1c) intentions to adhere to a prescribed behavior, depending on whether the risks pertain to engaging or failing to engage in the stipulated behavior. The consequences of this bias might at times seem benign or even desirable (e.g., avoiding ill-advised activities and complying with physician recommendations to reduce risk); at other times, however, the effects may be quite serious (e.g., avoiding medications and diagnostic procedures that pose risks, but have advantages that outweigh those risks).
Both quantitative and verbal methods of communicating risk information have benefits and negative consequences associated with their use. For this reason, researchers have stated that there is no preferred form of communication (Gonzalez & Frenck-Mestre, 1993; Merz et al., 1991; Stone & Schkade, 1991). Although some studies have suggested that quantitative descriptors are superior (Nakao & Axelrod, 1983), others have shown that semantic, non-quantitative descriptors are easier to use and thus can be more effective (Wallsten et al., 1993). The current studies do not speak to whether a quantitative or semantic presentation method communicates more effectively. Instead, they attempt to demonstrate that verbally presented information produces a systematic bias in decision-making. Because the American health care system is shifting away from a paternalistic structure towards one that emphasizes “consumer” decision-making (Lyles, 2002), our present findings are offered not as a recipe for manipulation of risk-relevant behavior, but as an illustration that consumer understanding, and wise decision-making, depend on presentation of information in a manner that contributes to such understanding.
One limitation of the present studies is that participants were provided with risk information for low-probability events, and therefore the results might not generalize to a patient population at risk for higher-probability events. Although Study 1b attempted to use higher-probability events by providing participants with side effect risk information about a common medication, we acknowledge that these studies specifically focused on the systematic bias that results from providing semantic risk information about low-probability events. Another possible limitation is that hypothetical situations were provided instead of an actual clinical situation. While future research on this topic in a clinical setting can build upon this work, we did pilot-test physicians' communication methods in order to present participants with the most realistic communication styles. Furthermore, past work within medical decision making has provided participants with hypothetical situations that attempt to generalize to clinical settings (Gurmankin, Baron, & Armstrong, 2004; Timmermans, 1994).
The present research has attempted to show how understanding systematic biases can help improve medical decision making. Future work might focus on investigating other factors that interact with message format in biasing risk communication. For example, because people are more willing to accept health risk information that is consistent with their preferences (Ditto, Jacobson, Smucker, Danks, & Fagerlin, 2006; Ditto & Lopez, 2005), understanding treatment preferences may help the communicator decide the most suitable risk presentation methods. People who prefer to avoid medication may be disposed to overestimate risks pertaining to a prescribed medication's side effects compared to people who choose to take medication. Future research can take into account, and counteract, the likely interpretation bias produced by people's treatment preferences.
In the introductory example, news and health reports about SARS were misinterpreted and led to unintended, negative behavioral outcomes. This article has attempted to address such problems in communicating risk information. The present findings suggest that most people grossly misestimate the types of risks associated with treatment options and that the lack of precision in verbal descriptors allows those misestimates to endure and, in some cases, even to exacerbate the problem. Understanding how people process risk information may help to reduce these communication errors in the future, allow decision-makers to make more informed judgments, and improve the overall process of joint decision-making.
The author wishes to thank Dr. Terry Blaschke, Lee Ross, Kristin Cobb, Benoit Monin, and the Ross lab for support and feedback.
1. 4.1% was used as a conservative percentage. We used 4.1 (rather than 4.0) to increase the credibility of our story.