A core logic of cancer control and prevention, like much in public health, turns on the notion of decision-making under conditions of uncertainty. Population-level data are increasingly used to develop risk profiles, or estimates, that clinicians and the consumer public may use to guide individual decisions about cancer screening. Individual risk perception forms a piece of a larger social economy of decision-making and choice that makes population screening possible. Individual decision-making depends on accessing and interpreting available clinical information, filtered through the lens of personal values and both cognitive and affective behavioral processes. That process is also mediated by changing social roles and interpersonal relationships. This paper begins to elucidate the influence of this “social context” within the complexity of cancer screening. Reflecting on current work in risk and health, I consider how ethnographic narrative methods can enrich this model.
There has been great consternation about the advent of a risk society in various literatures, some more critical than others but many greatly oversimplifying the processes by which individuals make sense of “risk information” as it is presented to the general public (Forde 1998). Much of science has focused on mechanistic pathways emphasizing cognitive rationality, while under-valuing the contributing dynamics of affect and other non-rational processes. Cancer screenings, and the attendant technologies that make screening possible, represent something new and different insofar as this social practice emerges out of specific medico-scientific techniques (DNA amplification, for example), growing knowledge of the carcinogenic process itself that enables clinical scientists to identify pre-neoplastic lesions earlier, and the interaction of this knowledge and technique with organized social bureaucracies like managed care, cancer registries or national health systems. The process of risk determination itself, however, remains a complex human behavior involving the interaction of a primarily cognitive assessment of component factors with affective and attitudinal mediators that feed into that ordered, rational assessment. We make sense of this new information and prioritize attributions in the context of existing behaviors and social contexts. Thus, independent of access and accuracy of screening modality, the success of cancer screening depends on how people recognize and relate to ideas about the possibility of developing cancer at some point in their future. Those attitudes affect actual screening behaviors (Han, Kobrin et al. 2007). The key conceptual operation behind a strategy of early detection pivots on the successful translation of risk estimation from population to individual. However, while population screening rates are an aggregate of individual behaviors, individual behavior is not an equivalent distillation from the group.
Medical and public health literatures tend to approach knowledge and beliefs as two separate phenomena where the lay public “knows” about biomedical information but “believes” in folk models of how illness manifests or might be addressed (Pelto and Pelto 1997). Similarly, commentary about how people approach risk information, particularly as it has been framed in the growing literature on personal risk estimation, replicates this artificial binary. Risk information is a particular distillation of various elements given significance in the course of medical care. Lay confusion about how this information is assembled and what it “means” persists, in part, because individualized risk estimates are abstractions inferred from population data, though they are presented to individuals as personal. The very notion of an individual risk estimate could be considered an oxymoron: the models that generate an estimate are based on retrospective population data. Indeed, risk factors (i.e. White/Black, smoker/non-smoker) are themselves derived from population analyses; the whole notion of individual risk is thus tied to population surveillance (Armstrong 1995). And yet these risk models have been used to educate lay people, to identify those who might benefit from prevention programs, as well as in patient counseling and clinical decision-making (Han, Lehman et al. 2009).
This reflection paper builds on a qualitative study initiated to inform the development by National Cancer Institute investigators of a new colon cancer risk prediction model (Freedman, Slattery et al. 2009).i That study recruited 48 adults in two US metropolitan areas to participate in 8 focus groups. Participants were selected to represent individuals age-appropriate for most cancer screening interventions: we sought out people with average levels of exposure to health information, but who lacked any extraordinary concern or expertise regarding cancer risk. Focus groups began with open-ended discussion about the meaning of risk and of cancer risk. Facilitators provided a brief explanation of the new cancer risk model that could calculate lifetime risk of colon cancer for an individual, based on nine listed risk factors. Participants were presented with the case of a hypothetical friend who had received a risk estimate of 9%, and asked to explore what this risk estimate meant. A moderator facilitated discussion of participant interpretations of a numeric point estimate, a numeric range, and a verbal comparison of individual relative risk to population average. Not unexpectedly, participants indicated that risk was associated with danger and emotional threat, and were less inclined to think mathematically about risk. Participants explored their uncertainty about the model and about point estimates, noting in particular concerns about missing data, limitations in accuracy and source credibility. Detailed findings are reported elsewhere (Han, Klein et al. 2009; Han, Lehman et al. 2009).
In this paper, I want to explore further the ideas that participants in these focus groups raised concerning discontinuities between what has come to be thought of as subjective and objective conceptions of risk, that is, between how risk is perceived and the risk information that is provided. These discontinuities persist in part because belief-type risk and frequency-type risk do not seem to form discrete, bounded concepts. As we documented in our focus group analyses, it is common to hold both notions of risk simultaneously. Lay people, not only patients, manipulate different mental representations of risk; they move back and forth between types. In my own anthropological work, I have increasingly observed that these mental representations are filtered through people's changing social roles and interpersonal relationships, real and imagined. What I am calling “social context” then is inflected with personal projections into the past, present or future, with accompanying affect and values. These color how a person then interprets a risk representation. This paper begins to elucidate the influence of this “social context” within the complexity of cancer screening as one particularly challenging instance of medical decision-making. Reflecting on current work in risk and health, I consider how future research with ethnographic/narrative methods can enrich this model.
The contemporary ubiquity of notions of risk has been well-documented across literatures, including anthropology. Mary Douglas has noted that the ways in which people understand this central concept have changed and shifted (Douglas and Wildavsky 1982; Douglas 1985; Douglas 1990). Risk was initially a value-neutral construct reflecting raw probability; it later became associated with consequences and now increasingly connotes only adverse consequences. For example, the socket of the lamp on my desk reads: WARNING, Risk of Fire, Use Only Type A, 150 Watt Lamp(s) Maximum. In the clinical domain, risk refers mainly to possible morbidity or mortality, “adverse events” or side-effects, and is also used as a marker of attributes as in “risk factors.”
That risk has become an almost ubiquitous notion is not to say that there is agreement or specificity within this common-sense construct. Past studies have sought to ascertain how lay people translate verbal descriptors of probability -- always, usually, likely, less common, occasionally, small chance, rare, never: terms commonly used in clinical conversation and written communication -- into numerical quantities. On examination, we find that patients do not agree about the numerical meaning of such words and, in fact, each word elicits a wide range of interpretations (Sutherland, Lockwood et al. 1991). Perception of risk is also subject to bias that results from heuristic thinking. First, heuristics are pragmatic cognitive short-cuts formed on the basis of past experience, or in the case of cancer, the prior absence of a cancer experience in an individual's life (Finucane, Alhakami et al. 2000). Second, cancer is perceived as a catastrophic hazard, a condition that is difficult for an individual to estimate accurately and that triggers our cognitive efforts to cope by asserting a sense of control over this uncertain possibility (Kahneman, Slovic et al. 1982). Thus, fear of cancer can produce a range of reactions in different people, ranging from outright denial to action-oriented compensation like care-seeking, or improved adherence or compliance (Consedine, Magai et al. 2004; Carpenter 2005). What is not clear is how earlier awareness (that creates fear) acts to prime sensitivity to risk. Proximity to such a vivid hazard, however, such as cases of cancer in family, friends or co-workers, may impact that sense of perceived risk. Proximity increases salience and, as an earlier exposure, would affect awareness. And indeed, some studies indicate that having a female relative with breast cancer makes respondents nearly half as likely to estimate themselves at lower risk than other women (Facione 2002).
This type of finding becomes increasingly important in light of the range of lay knowledge and attitudes about familial (inherited) cancer, both actual and perceived.
For example, in the course of fieldwork in an unrelated pilot study into the cancer care delivery system in North Texas, I recently encountered a middle-aged man whom I'll call Jorge.ii Jorge had no personal experience of serious disease, but lived with a complicated history of minor ailments that he seemed to manage well despite depending on relatives to compensate for continuously unstable employment. When he talked with me at the bus stop outside of the county hospital, Jorge was mulling over a clinician's comment about colorectal cancer screening.
Jorge: well, I used to think you can't worry about something you can't do nothing about…. But my brother-in-law, he had cancer. They told him last year. He died, you know…. They treated it [with chemo] but it moved fast…. I don't know; it's all a mystery to me.
Over the few exchanges we had, I learned that Jorge was worried because he had not thought about something as serious as cancer before. He juggled a lot of other concerns that were more pressing. But the conversation with his clinician about routine colorectal screening had become a source of worry because Jorge's brother-in-law had been diagnosed with lung cancer the prior year. Though recognizing these were somehow different diseases, Jorge now perceives cancer to be more salient in a way it had not been before – again, suggesting the availability heuristic (Kahneman and Tversky 1973). The offer to screen re-personalized an awareness of cancer risk made proximate by the death of his in-law.
Perceived risk is strongly affected by proximity in the case of purported “cancer clusters”, though in this instance, operating at the level of community. The persistent belief in “cancer clusters” demonstrates how the uptake of risk information depends on the social context in which the information is communicated. Perceived cases draw attention to a locale and beg for a common explanation, but the public often disregards disease heterogeneity and how common many cancers are, and often does not understand the effects of random chance and correlative factors, as well as the retrospective nature of cluster identification (Benowitz 2008). Similarly, when notions of risk are propagated through simple percentage point estimates (e.g. 7% risk) without additional explanation of how such a number was generated, public health recommendations and programs concentrate the locus of risk at the level of the individual cum decision-maker. This not only obscures the multi-causal complexity of carcinogenesis, it renders opaque the broader context through which risk information is filtered.
Earlier studies have raised the possibility that there are context effects, for example, variation in the clinical setting, or indeed in certain types of medical conditions, that influence how both lay individuals and physicians assign numerical meaning to verbal descriptors (Mapes 1979; Sutherland, Lockwood et al. 1991). The point of thinking about the meaning that people ascribe to notions of risk is to improve our grasp of how such understandings shape what people do with respect to preventing cancer. The concept of “risk factors” most commonly presupposes a differentiation (who has them, who doesn't) between who is at risk, which is short-hand for a comparative assessment of defined groups of individuals. In this sense, risk depends on probabilities that are only possible because they are derived from collectives, or populations (Spasoff and McDowell 1987; Hayes 1992). Let us consider how, then, an individual applies information to make sense of “risk of cancer” that might shape decisions to engage in preventive screening.
Cancer registries permit an epidemiology that produces excellent population risk estimates, and the surveying and statistical calculations secure generalizability. However, at an individual level, inference is still founded on correlation not causation in determining the risk of a particular mail-carrier, whom we'll call Ms. Angeline, for developing colorectal cancer. Most people have difficulty contending with the uncertainty implicit in concepts of risk and the numerical meaning of risk estimates. A cancer event in the life of Ms. Angeline is still actually dichotomous: either she develops dysplasia or she does not.iii In this lies the crux of the problem when we engage individuals through risk estimation models. Objective probability does not apply to an individual thinking about risk for a single event. Objective probability only emerges from repeatable events, either in time (by a single individual) or in space (across a population of such individuals) – people do not necessarily understand this in these terms, but we see that they reflect this problem when they correctly explain a 10% risk of cancer as “one in ten” but also wonder whether or not they are “one of those ten people” (Han, Lehman et al. 2009). Lay people were very willing to engage second-order risk, a dual understanding of risk. For example, in our focus group discussion, Angeline understood an estimate to express both the proportion of a given population in which an event occurs (frequentist) and a degree of confidence about the future occurrence of an event (Bayesian):
Q: Why do you think that's low, Angeline?
Angeline: It's low; I don't know how to explain it. But 9% of 100 would be 9 people out of 100; so you would be one of the 9 possibly. But I don't know the answer. That's why I said it appears to me that you have a slight chance.
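The tension Angeline articulates can be made concrete with a minimal simulation (my own illustration, not part of the study data): at the population level a 9% estimate behaves as a frequency, while for any single person the outcome remains dichotomous.

```python
import random

random.seed(42)

RISK = 0.09           # the 9% lifetime risk estimate discussed in the focus groups
POPULATION = 100_000  # hypothetical population of people "like" the recipient

# Frequentist reading: across many similar people, roughly 9% experience the event.
outcomes = [random.random() < RISK for _ in range(POPULATION)]
observed_rate = sum(outcomes) / POPULATION
print(f"Population rate: {observed_rate:.3f}")  # close to 0.09

# Individual reading: for any one person the event either happens or it does not.
one_person = outcomes[0]
print(f"This individual develops the disease: {one_person}")  # True or False, never 9%
```

The simulation recovers the 9% only by aggregating many repeatable trials; no single draw is ever "9% cancer," which is precisely the gap between population estimate and individual experience that participants wrestled with.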
The earlier example of the salience dynamic exemplified by Jorge's new concern following his brother-in-law's death is further complicated by findings from a study of first-degree relatives conducted by a research team in Bristol, suggesting that families tend to explain away risk, even within a family history of cancer, with appeals to lifestyle differences and other behavioral traits (Sanders, Campbell et al. 2003). Thus, though familial proximity can create salience that influences the availability heuristic, additional defense mechanisms can encourage the exception-seeking that our focus groups suggest accompanies lay people's cognitive efforts to interpret numerical estimates. Angeline's comments, then, are not only about conceptualizing probability; her interpretation may also interact with how a given probability relates to an individual. For example, Owen explains:
Owen: It's a matter of getting so much data that you don't have to, that it automatically breaks down. Jones is a 67-year-old, you know, black male who stopped smoking 10 years ago. Well, there were thousands of other 67-year-old black males who live next door to Dave who stopped smoking 10 years ago. There are enough of them in the sample, if there are enough of them in the sampling, you come up with stats where I don't think you have to worry about being so narrow, where I think on the contrary it has been so wide that it automatically limits to that particular type person, that particular individual. He's got these characteristics. History shows everybody of that same type over the last 50 years that we have records, of those 9% developed colon cancer. …. It's not just demographics, it's the number of demographics. Just make it narrow so that you make sure I'm in this group, and don't include a bunch of people who have nothing to do with my life. I don't do that [lifestyle behavior], I don't do that [lifestyle behavior], I don't do that [lifestyle behavior].
Most people don't see risk as a neutral statistical statement but think of risk as indicating danger and emotional threat. To most of us, cancer risk is not about mathematical probability but is about concrete, if not immediately tangible, risk factors. Thus, in many cases, the possibility (risk) of something happening (outcome) derives some of its meaning from whether or not there is anything Jorge or Ms. Angeline can do about it. That is, assessment depends on whether the risk factor is modifiable like diet or exercise, or non-modifiable like particular family history or, perhaps, race. As we saw in our focus groups, the salience of a proffered risk estimate depends on the relevance of the explanatory model that Ms. Angeline understands produced that estimate. If she determines she shares risk factors with that model, that is, if she perceives the source of risk as similar to actual factors active in her world, then the risk estimate gains significance. Put another way, if she imagines the “population” used to model the estimate as having things in common with her, Angeline is more likely to perceive the estimate as “true” (Han, Lehman et al. 2009).
Anthropologists and sociologists have thought a great deal about the sick role (Parsons 1951), the emergence of a patient identity and, increasingly, its relation to notions of risk and susceptibility. Some scholars have proposed that the idea of being “at risk” creates an intermediate identity between healthy and sick, that is, the not-yet-sick or not-yet-patient. When we think about the role of disease surveillance in the lived experience of people trying to understand their personal relationship to cancer risk, we have to recognize the dynamic between individual psychological and affective perception of cognitive notions of risk and the ways in which that inchoate risk has very concrete implications for individual behaviors and social systems of care (Joseph, Burke et al. 2009). In our own studies, similarly, we saw respondents like Owen deploying contextual variables in their efforts to volunteer exceptions that might exempt them from the hypothetical risk estimate proffered to lay focus groups (Han, Klein et al. 2009; Han, Lehman et al. 2009). Similarly, Fred and Mike argue about the relative applicability of an estimate based on a population that might or might not be like them, as follows:
Facilitator: This is your [estimate], this isn't everybody's. This is yours.
Fred: No, no, no, no. Because this study is going to give a percentage of people over 50. I'm one of those. So that's not me.
Mike: Yes, it is.
Fred: Yeah, it is to a degree, but my mindset is that it's not me because I escaped that, from other tests. I've had tests. I know I fall within that range. I am within that range, I'm in that population that is capable of having it.
Taken together, such data remind us that risk information is rarely taken up as value-neutral objective truth; rather, risk information is deeply subjective, interiorized against a pre-existing sense of self. This might range from broad psychological characteristics, like the general way in which we perceive newness or change as threat or how we process any new information, to roles shaped by social positionality: “I am a caretaker, not someone people take care of”; “I am the decider, I need to act and lead”; “I'm not sure; my wife makes these kinds of decisions” (Washington, Burke et al. 2009). These phrases are loose colloquialisms, but we might think further about how social expectations set up behavioral character conditions. The introduction of new risk information then either aligns with that conditioned response or ruptures it.
A study in Britain serves as a good example for elucidating this dynamic in the context of the health care delivery system. The study examined individuals who had been referred to a British regional cancer genetics service to receive a risk assessment by either their general practitioner or a secondary care physician (Scott, Prior et al. 2005). In Britain, the result of the risk assessment serves a triage function: only “high” and some “moderate” risk individuals go on to gain face-to-face consultations with clinical geneticists and possibly additional screening services.
As Scott and colleagues argue, the cancer genetics testing service seems to serve as a mediating agent between the anticipation of becoming-patients and the expectation of attention and services from the healthcare system. Their more challenging observation, however, is that those individuals who suspected themselves of being at high risk for inheritable cancer prior to clinical assessment are dissatisfied, almost disappointed, upon learning that their estimate revealed them to be at only low or moderate risk. The researchers interpret this to reflect the individual's efforts to redefine herself within the purview of the healthcare system. They suggest the dissatisfaction is a result of the common desire to assert control in the face of uncertainty/new information, in that addressing perceived risk of heritable cancer creates the expected claim on perceptual resources to address that risk. The notion of being “at risk” is received against a pre-existing expectation, the authors argue, that it would be, in effect, helpful to be “at risk” because that would elicit special attention in the form of referral to a specialist. Thus, the risk of cancer itself is also interpreted through a larger, pre-existing framework in which British people recognize their need to advocate for care within the bureaucracy of the NHS or to increase personal vigilance to compensate for a system that does not perceive their risk as warranting attention. Moreover, the technological imperative is increasingly acculturated – not only are risk estimates available, but health and medical care services are now often re-organized to support the application of risk estimation, as cost-benefit data are used to rationalize available services, whether or not any individual actually chooses to act on this new risk information (Aronowitz 2009).
Larger structural differences, like the nature of health systems, then also contribute to the social context within which an individual is exposed to risk information. In the absence of an organized, universal system, as in the US, screening falls in a social realm of “choice” where screening policy takes the form of recommendations to physicians and may influence insurance reimbursement designs. Of course, a screening procedure for cancer is not given involuntarily in either societal setting, but universal systems routinize education, thus affecting awareness, and generally increase access and uptake more than market-driven situations do.
The different ways that “risk” is perceived or interpreted highlight the significance of lay theorizing about both cause and effect. As we think about surveillance as a structural vehicle that instantiates ideas about risk in policy, it is all the more crucial to recognize that “risk” is not an independent construct but a perceptual process by which we interpret information through already-operating understandings of our life-world. We actively form the meanings of abstractions like risk through our standing priorities and our broader moral geographies, both informed by the emotional valence we attribute to them (Moscovici 1984). In this way, health information about risk is caught up in pre-existing frameworks of good/bad, danger/safety or clean/dirty and the other various layers of contrast and opposition a society uses to organize our mental schema (Douglas 2002; Pasick, Barker et al. 2009).
To further develop the contribution of affect or emotion to how people relate to risk information would engage a much larger literature than is possible here (Halpern and Little 2008). Suffice it to set out a broad continuum of possible cognitive sets. If Ms. Angeline were facing “news” of her risk of cancer, she might hold an unwavering conviction that she would develop the disease, making her unresponsive to appreciating the downsides of screening (finding a polyp or dysplasia requiring a treatment regimen, for example). Such unwavering conviction in the outcome can be contrasted with a fixation on the good outcome, without any understanding of the likelihood of that outcome. What is often easily framed as optimism is, phenomenologically, a much more complex and idiosyncratic amalgam of hope, fear and denial (Van Ness 2001). Each of these tangents is refracted through implications Ms. Angeline has drawn from the initial presentation of the risk construct, say, at her annual check-up with her primary care physician.
Clearly, this complexity is a function of compartmentalization, if you will -- selective “spotlighting on the theatre stage” of individual awareness. Generally, compartmentalizing is a useful coping mechanism that enables an individual to get on with daily functioning in an environment of constant flux and shifting information by imposing an algorithm that lets him parse the flow of sensory data down to actionable units. It is not clear, in this sense, whether people's emotions actually incapacitate their ability to understand risk/evidence (prevention) or actively help them avoid understanding (protection, as in denial) or actively re-align their values such that understanding is irrelevant because another issue takes precedence (pre-emption) (see also Halpern and Arnold 2008). In any case, risk information is mediated by this set of mental mechanisms that may well be automatic but which nonetheless depend on the particular situation of a given individual at the time that risk information is introduced. However, to the extent that all three response constructs (prevention, protection, pre-emption) engage aspects of uncertain futures and accompanying notions of risk, this complexity can inform how we think through the dynamic relationship between individual risk perception, future uncertainty, and population-level screening for cancer.
Decision-making practices are not simple dyadic, single-event phenomena though it is easy enough to address them that way. Though medical care structures may not be conducive to recognizing them as such, decisions are actually the product of an iterative process of information assessment over a series of encounters with both human and non-human actors, including the medical pronouncements that precede or follow diagnoses (e.g. risk estimates of 8% or 1 in 3 chance) and the care relationships experienced over time between a clinician and a patient (Rapley 2008). Individual risk estimates are only one such component of this larger process as individuals negotiate their relationship to their bodies here and now, and in the uncertain future that might contain cancer that might be prevented. Just as we are not atomistic in our lived experience as autonomous subjects (Tauber 2003), our estimation of risk or our more complex relationship to possibility and to the future cannot be reduced to one encounter with a personal risk estimate.
Surveillance medicine links notions of risk to our individual lives through the enumeration of risk factors and a calculus of probability. But these dynamics operate within a larger moral economy of relationships between knowledge and action that are framed in and through social norms (Pasick, Barker et al. 2009). As others have argued, the contemporary relationship between life and risk may be a defining characteristic of our modern world. People's reactions to risk estimates and discussion of cancer risk more broadly are part of a moral economy that positions health and disease as products of choices: the choice to exercise, to eat right, to periodically examine parts of our bodies, to see a physician, to screen for cancer.
Providers and health educators need to understand that there are real limits to our capacity to “improve” patient risk perceptions. Risk information does not always lend itself to changing people's actual behaviors with regard to cancer screening decisions and thus cannot directly impact population rates. Although a given patient like Ms. Angeline may appreciate the cognitive dimension of what her provider is telling her, she will have her emotional response both to the information and to the clinician and the visit itself, just as Jorge did. That set of affective/attitudinal responses will frame how those cognitions (about the information) are processed. It is the framing that seems to lend meaning to decisions about her future.
Across industrialized nations, although not uniformly, the public uptake of medical testing for the purposes of screening for cancers has been relatively rapid and steady. In part, this is due to a confluence of factors within health systems in each national milieu: concerted efforts by medical professionals and public health officials, increased visibility promulgated by disease advocacy groups and various media campaigns, the advent of the internet, as well as reimbursement infrastructures like managed care or national insurance. In the United States, even in the absence of organized national screening programs, early detection has been well received. However, these high rates come with unintended consequences and considerable social cost, including unnecessary follow-up testing driven by false positives, over-diagnosis (e.g. lead-time and length-sojourn bias), higher prices for screening, and inefficiencies in delivery that contribute to population disparities (Breen and Meissner 2005; Welch, Schwartz et al. 2006; Welch 2009).
A core logic of cancer control and prevention then, like much in public health, turns on the notion of decision-making under conditions of uncertainty—specifically, the capacity to predict the likelihood of future states given current trends, deduced from the aggregation of relevant traits in a defined population. Within these abstracted aggregate populations are actual individuals, perhaps a patient like Jorge, seeking to make sense of individual circumstances and the role of any given technological “advance” to contribute to his personal, and potential, well-being. Here, let's make the pragmatic assumption that scientific or mathematical expertise renders some “true nature of risk” accessible to scientists and the task at hand is simply one of public health and medical application: communicating that risk to people in order for them to make informed decisions to protect their health.
Such decision-making depends on accessing and interpreting available information, clinical and otherwise, filtered through the lens of individual values and both cognitive and affective behavioral processes. “Personalized risk estimation” is being promoted as an effective vehicle for physicians to communicate risk to their patients, particularly as people expect ever more personal responsibility for involvement in prevention and care activities. However, personalized risk estimates do not consistently increase the number of decisions to be screened nor do they seem uniformly to improve informed decision-making itself (Edwards, Evans et al. 2006). Further, screening outcomes have consequences and many patients manifest a strong aversion to even the possibility of “side effects” from subsequent treatment regimens (Waters, Weinstein et al. 2007). This aversion can be strong enough to powerfully bias decision-making about possible future states.
Choices -- that is, acts -- require patient and provider to relate abstractions about the future to concrete current conditions of the self and life experience. The emergence of individual risk estimates calls us to consider how individuals relate their own possible future to a population's past performance (for, as we know, past performance is no guarantee of future return). Comprehending risk information is constrained by numeracy and probabilistic reasoning (Schwartz, Woloshin et al. 1997), but risk comprehension is strongly influenced by notions of agency and social role. How might such factors contribute to decision-making dynamics with respect to screening for cancer?
Public health would hope to screen as many people “at risk for disease” as possible (those who should be screened) while minimizing those who are screened unnecessarily (Welch 2009). For example, a 37 year old woman might be screened for breast cancer and her mammogram is negative. For the individual, that mammogram produced a good result. As a population health measure, that screening case was unnecessary because accumulated data suggest there is minimal overall benefit for screening women under 40. We seek to maximize the cost-benefit of screening by identifying sufficient numbers to reduce future cases of cancer without introducing avoidable harm and expense for people who would not have gone on to develop cancer at all but who are nonetheless swept up in screening efforts. The problem of discriminating between who should and who should not be screened, of course, comes back to individual case decision-making between a person and her physician: perhaps, first, whether a test is offered, and then, whether a patient opts to take it. This is further complicated by the policy environment: where there is strong evidence of a population benefit for screening, a public health approach either promotes the test or makes the offer routine. Broad age-based cut-offs provide the first filter, followed by an algorithm of risk factors we have already touched on, such as cancer in a first-degree relative or smoking. Demonstrated disparities in certain populations may also produce recommendations to screen at younger ages, for example, among certain ethnic or racial minorities. These might be sub-population correlates linked to disease physiology, for example “triple negative” breast cancer, a variant found at higher frequencies among young African American women as compared to White women, or limited-access/later-diagnosis effects that have encouraged national guidelines calling for earlier age-point screening for colorectal cancer in African American men.
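The population arithmetic behind age cut-offs can be sketched with a small, purely illustrative calculation (the sensitivity, specificity, and prevalence figures below are assumptions chosen only to show the effect, not data from any study): at low disease prevalence, even a fairly accurate test yields mostly false positives, which is one reason extending screening to low-risk groups multiplies unnecessary follow-up.

```python
# Illustrative positive predictive value (PPV) calculation via Bayes' rule.
# All numbers are hypothetical, chosen only to show how prevalence drives PPV.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Probability that a positive screen reflects true disease."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# The same hypothetical test applied to two groups with different prevalence:
low_risk = ppv(sensitivity=0.85, specificity=0.90, prevalence=0.001)
higher_risk = ppv(sensitivity=0.85, specificity=0.90, prevalence=0.01)

print(f"PPV at 0.1% prevalence: {low_risk:.1%}")   # most positives are false alarms
print(f"PPV at 1% prevalence:   {higher_risk:.1%}")
```

Under these assumed numbers, fewer than one in ten positive results in the low-prevalence group reflects actual disease, illustrating why "minimal overall benefit" at the population level can coexist with an individually reassuring negative result.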
Below the policy level, clinicians are still expected to endorse public health measures and help patients determine what screening is appropriate to their individual and personal circumstances, including whether insurance will cover the screening or whether it must be paid out-of-pocket.
Interestingly, individuals' efforts to rationalize how risk estimates do or do not apply to their particular cases follow lines of reasoning parallel to those of public policy formation. In setting public health policy, this reasoning often hinges on how the precautionary principle is employed (Weed and Gorelic 1996; Weed 2004). While the specific conditions that would demonstrate conclusively that correlation is causation may not be met, the pragmatist seeking to protect the public health uses causal inference to assess the risk relationship as it pertains to his target, the population (Weed 1997). In a parallel fashion, Fred and Mike will each try to determine the intrinsic value of a risk estimate to his own domain of action, his individual situation, and the choice to screen and then, sometimes, to treat. Colonoscopy is arguably both screening and treatment, since polyp removal cannot happen without scoping and no physician will scope and ignore a polyp.
More recently, larger sample analyses have shown significant differences in how cancer-associated risks are perceived. For example, in one large study, my colleagues found that Black women more accurately perceived breast cancer survival and the benefit of screening mammography than white women (Haggstrom and Schapira 2006). Further, Black women were more pessimistic about the relationship between screening and survival. There is significant debate in the literature about how these perceptions may be shaped by larger cultural frameworks specific to the sub-population in question. For example, some studies have indicated that attitudes, like the behavioral construct known as fatalism, have a negative effect on cancer screening behaviors, while others suggest there is no measurable effect (Chavez, Hubbell et al. 1997; Laws and Mayo 1998). However, important caveats remain concerning how we understand such attitudes given the variation in social location impacting access to services or awareness of differential sub-population morbidity and mortality. In this sense, what we are picking up as pessimism or fatalism may reflect structural realities that translate into different attitudes and behaviors toward screening rather than inherent differences between cultural groups. This argument is important because it demonstrates the challenge of the aggregation fallacy-- that is, thinking that this population information is a unidirectional sum of individual behaviors. In fact, individual attitudes determine individual healthcare decisions but they exist within a larger feedback loop across both time and space. Past behaviors roll up to shape group attributes that, in turn, inform population health outcomes that are used to set health policies that guide individual behaviors. So, when Ms. Angeline comes in to see her physician later this year with questions about mammography, she will hear guideline recommendations based on analyses of “women like her” but Ms. 
Angeline will process her decision to screen through the lens of her own experience and those of her family and friends, as well as her clinician's firm recommendation. It is worth noting that lay people, clinicians, and epidemiologists can mean different things by the phrase “women like her.”
We might also consider medical guidelines and doctors' recommendations as contributing factors to determining proximity and thus salience. As people reach for objective information to anchor their efforts to re-assert control over uncertain futures, competing guidelines impact how individuals judge risk and subsequent decision-making. For example, we know that conflicting guidelines from various medical sources create ambiguity that is associated with both lower population uptake of mammography and lower intentions to screen in the future. Further, higher rates of perceived ambiguity also predict greater worry related to mammography (Han, Kobrin et al. 2007).
Decisions to participate in screening are fundamentally different from decisions to undertake clinical treatment or clinical research. They differ in their relation to an uncertain future shaped by, or more vaguely gestured to by, the presence or absence of risk factors or the broader suggestion that someone is more or less “at-risk.” Screening requires an individual to crystallize that uncertain future: the determination to act on the decision to screen appears to bring that future uncertainty into being in the present. Maybe you were at risk for colon polyps given your age and diet, but actually using the SDT-2 fecal DNA test will solidify that risk in the sense that the test will indicate the presence of both cancer and pre-neoplastic lesions with clinical (pre-determined) reliability. In my conversation with Jorge at the bus stop, his clinician's invitation to screen brought the “risk of cancer” into his present, weighed down by the spectre of his brother-in-law's death last spring.
The tendency to equate knowledge with certitude is particularly characteristic of our time (Giddens 1990); with regard to cancer risk, it is easy to mistake statistical knowledge for a sense of control. The promise of liberation through more choices, like the choice to screen, or of otherwise reducing uncertainty, only serves to further reify our sense of self as autonomous subjects (Tauber 2003). But this sense of “knowing your risk” can also be protective in its ability to pre-empt an adverse future. As the work of Scott and colleagues suggests, we might consider screening surveillance as a comfort in the sense that preventive screening acts to reduce the sense of “chance” as random hazard by providing a definitive action in the future that would indicate greater certainty. That is, the unknown quality of an uncertain future (cancer) is mediated by the promise of identifying it early (Aronowitz: 434). Knowing that you are being monitored through screening can anchor a modicum of certainty in the face of the broader uncertainty of the possibility of developing cancer: “I might get it, but if I do, I will know when I do.”
For the purposes of this paper, my interest in understanding the dynamics of risk and population surveillance sets aside the question of the true reality of risk. That is, I have bracketed questioning the objectivist or realist stance that risk is really out there as a characteristic of the natural world, and that its nature is therefore independent of human perception. In other epistemological discussions, we might explore constructivist critiques to argue that context or culture is entirely responsible for setting in motion a normative system by which events are organized in a values calculus, such that risk may be entirely culture-bound (Malaby 2002; O'Byrne 2008). That said, patients and their providers are clearly bound up in culture: people manage risk information and screening decisions through behavioral mechanisms and social pathways that can be studied, theorized, and intervened upon (Hay and Craddock Lee 2009).
As expert knowledge purveyors, medical practitioners may be well-regarded authorities. But the information they pass on, while objectively legitimate in its abstraction, will still be interpreted through the lens of patient experience and given meaning in the context of that patient's life. Thus, it would be a mistake to interpret public ambivalence toward public health efforts like screening as “willful ignorance” (Wynne 1995). Instead, each member of “the public” is engaged in translation as we try to determine whether information reported for a group matters to each of us. Prevention efforts will be stymied if we persist in thinking of lay responses to risk as either accurate or misguided. This is complicated by our tendency to see clinical information about risk as wholly objective (McGoey 2009). In fact, an estimate is even more explicitly constructed and, when applied to an individual case, it has no meaning without interpretation. That interpretation is both intra-psychological and social but it is not a distortion of “truth” or the really real. Rather, that assignment of meaning represents reality for the would-be patient.
Various other lines of investigation into risk and uncertainty, including several appearing in this journal, have sought to unpack the more taken-for-granted, even objectivist, set of assumptions about the nature of risk in people's lives. Psychologist Paul Slovic began early to dismantle the binary of rational/irrational, elucidating a nuanced view of risk assessment in which power and institutional authority are mediated by trust, and pointing the field toward more democratic engagements with the construction of risk that increasingly engage social dynamics (Slovic 1999; Slovic, Finucane et al. 2004). Empirical studies have further sought to examine the intermediary ground between rational and irrational poles by testing social strategies like trust, intuition and other affective dimensions (Zinn 2008) as well as how emotion cues retrieval and colors the qualitative “bottom-line gist” of how risk information is summarized (Reyna 2008). Tim Harries tested the emotion (affect) undergirding a construct he derives from Giddens' “ontological security.” This need for stability, Harries argues, is reflected in efforts to maintain social representations of safety, perhaps also complicit in denial and the avoidance of cognitive dissonance, and to reduce anxiety produced by growing suspicions that otherwise expected structures of authority which support personal well-being are themselves at risk of failing (Harries 2008).
Broader social contexts and various factors of local situated-ness interact with larger cultural meaning, as well as the values that shape people's identities and their moral commitments (Sanders Thompson 2009). Ethnographic methods, particularly qualitative inquiry extending beyond formal focus groups, are crucial to elucidating the complex processes that enable individuals to arrive at judgments, on their own terms, about what risk and possibility mean to them. Though the focus groups my colleagues and I conducted were invaluable to advancing our analysis of numeracy and risk perception in the face of individualized risk estimation, they fall short of the depth that more open-ended, extended interviewing could achieve when informants are encouraged to share narratives about their own lives and circumstances (Henwood, Pidgeon et al. 2008).
The ethnographic interview can be particularly useful for elucidating how individuals make sense of information in the context of the values and obligations that shape their social lives. Henwood looks to anthropologist Teresa Satterfield's techniques of eliciting narratives to reveal what particular survey responses actually mean through the analysis of subjective experience, especially those emotion-driven commentaries that reveal the particularity inherent in a personal point-of-view (Gregory and Satterfield 2002). Further research has taken advantage of narrative form, finding that informants assimilate background detail and are better able to assess values leading to judgments, that is, potential decision-making (Satterfield, Slovic et al. 2000). Thus ethnographic interviewing may not only provide greater information about social context to the interviewer, but may also present an intervention, for example, when risk information is presented and the informant is prompted to incorporate this new information into their life story (emplotment).
Similar techniques have been used to examine, for example, early breast cancer symptoms and other domains of cancer survivorship research (Facione and Giancarlo 1998; Kreuter, Green et al. 2007). Though some work has been done in the public health context, particularly in the field of environmental/ecological risk, we have not seen a comparable application of ethnographic narrative analytics to understanding risk with respect to cancer screening. While others have focused on the relationships between risk and genetic potential I would suggest that cancer screening is the site of similar challenging uncertainties (Press, Fishman et al. 2000). New technologies, like the SDT-2 fecal DNA test that may supersede Fecal Occult Blood Testing (FOBT) for detecting colorectal cancer, could increase accuracy and be less invasive than other modalities such as colonoscopy. However, greater resolution and precision may only add to the complexity that adheres to the lived experience of increased risk awareness. As the risk paradigm expands to ever-broader populations available for screening, public health policy makers need to appreciate the dynamics of coping, self-realization, individual psychological functioning and social interaction that impact participation in cancer screening, even as those dynamics are themselves effects of individual engagement with risk-acceptance and/or risk tolerance (McCaul, Branstetter et al. 1996; Brewer, Chapman et al. 2007).
A randomized trial of a tailored system to facilitate risk-appropriate cancer screening could mount sub-studies in which qualitative researchers conduct one-on-one ethnographic interviews to draw out risk narratives. Behavioral interventions that target minority Medicaid managed care populations, for example, offer rich opportunities to use ethnographic interviewing to better understand the ways in which structural deterrents, social location, and resource availability interact with personal values, as well as constructs like self-efficacy, in diverse and vulnerable patient communities. Importantly, narrative techniques can draw out how respondents perceive and respond to risk in the course of daily lived experience. Data derived from ethnographic interviews could be used to test the effects of particular social roles (e.g. parent, care-giver, dependent) against constructs like locus of control or prevention-protection-pre-emption, or could be integrated with existing models of informed decision-making, especially numeracy (Rothman 2009). Understanding the mediators and moderators of affect within social models of risk perception would contribute to a more robust understanding of the inter-personal dimensions of the dynamics that inform decisions to screen. Measurable constructs from behavior change theories could be used to structure ethnographic interview strategies to integrate the complexities of how people engage with risk in their lives within existing models of screening uptake behavior.
Prevention and public health are not just rules and regulations about right choices or right behaviors. Like clinical guidelines, they are values concepts that reflect what kind of society, what kind of health system, even what kind of individual health professional, we want to be. These values shape the advice and counsel that patients are looking for when they want to make decisions about their care. At the societal level, we want not just to reduce morbidity and mortality but to reduce and alleviate suffering. Cancer screening is a critical component of that agenda but it cannot be pursued at the price of reductionism. We need to understand what ties individual decision-making to the outcomes we gather to create risk estimation models in the first place, if we are to ask a patient, like Jorge or Ms. Angeline, to understand his or her personal risk of cancer. If we are to succeed in implementing societal cancer prevention objectives, we need to understand the intersubjective nature of risk and decision-making through the application of theory and methods that will elicit “social context” factors.
i. The author acknowledges extensive discussions with Dr. Paul Han and other study co-authors in conjunction with the research collaboration supported by the NCI Division of Cancer Control and Population Sciences through Dr. Andrew Freedman (see Han et al 2009, 2009). The author is grateful for past individual support from the NCI Office of the Director, Cancer Prevention Fellowship Program (2004-08).
ii. The author acknowledges on-going support, in part, from NIH/NCRR grant UL1RR024982-03 (Packer), which also sponsors the North and Central Texas Clinical and Translational Sciences pilot award to the author for fieldwork discussed here.
iii. Ms. Angeline, and later Owen, Fred, and Mike, are pseudonyms of focus group participants, adapted for this paper. Off-set dialogue constitutes actual quotes from focus group data; character descriptions are comparable to the demographics of lay sample participants.