Health decision making is both the lynchpin and the least developed aspect of evidence-based practice. The evidence-based practice process requires integrating the evidence with consideration of practical resources and patient preferences and doing so via a process that is genuinely collaborative. Yet, the literature is largely silent about how to accomplish integrative, shared decision making. Implications for evidence-based practice are discussed for 2 theories of clinician decision making (expected utility and fuzzy trace) and 2 theories of patient health decision making (transtheoretical model and reasoned action). Three suggestions are offered. First, it would be advantageous to have theory-based algorithms that weight and integrate the 3 data strands (evidence, resources, preferences) in different decisional contexts. Second, patients, not providers, make the decisions of greatest impact on public health, and those decisions are behavioral. Consequently, theory explicating how provider-patient collaboration can influence patient lifestyle decisions made miles from the provider's office is greatly needed. Third, although the preponderance of data on complex decisions supports a computational approach, such an approach to evidence-based practice is too impractical to be widely applied at present. More troubling, until patients come to trust decisions made computationally more than they trust their providers’ intuitions, patient adherence will remain problematic. A good theory of integrative, collaborative health decision making remains needed.
Health decision making is both the lynchpin and the least developed aspect of evidence-based practice. Systematic reviews and practice guidelines are well-developed cornerstones of evidence-based practice, as are instructional practices to teach critical appraisal. Full evidence-based decision making, however, requires integrating the evidence with consideration of practical resources and patient preferences and doing so via a process that is genuinely collaborative.1 Yet the literature on evidence-based practice is mostly silent about how to accomplish integrative, shared decision making. With so little known, there is great need for theory that characterizes evidence-based decision making either normatively or descriptively. Here, I consider 2 theories of clinician decision making (expected utility and fuzzy trace) and 2 theories of patient health decision making (transtheoretical model and reasoned action).2-4 I suggest that although these theories do shed light on decisional processes, theory will offer limited help for evidence-based practice until it connects the decisional processes of the provider with those of the patient.
Behavioral scientists and medical professionals have partnered to study medical decision making for more than 30 years.5,6 Enormous progress has been made in developing infrastructure (e.g., online information resources, practice guidelines, decision support systems, professional competency standards) to ground health decision making more firmly on research. So, how are we doing in research to practice translation? By most appraisals, not well. The Institute of Medicine notes that a chasm persists between what we know scientifically and what we apply to health care practice.7 One estimate is that only 14% of new medical discoveries are taken up into clinical practice, even 17 years after their publication.8 The average American receives only 50% of recommended preventive, acute, and long-term health care.9 Clearly, the body of research knowledge exerts too little influence on clinical practice.10,11
Why don't clinicians apply the evidence? Are they unaware of it? Do they find research evidence invalid or inapplicable? Why would a practitioner choose to engage in anything other than evidence-based practice? The answers to these questions prove to be complex.
Many impediments curtail day-to-day implementation of best practices.12,13 Unfamiliarity with current research evidence is part of the problem.14-16 The glacial rate of conversion of research-derived knowledge into practice results, in part, from the fact that clinicians have trouble keeping up with the exponentially proliferating research evidence base.15 Dissemination of practice guidelines has been marginally useful for helping keep practitioners up to date,17 but guidelines have failed to be a panacea for several reasons.18 First, guidelines are mixed in the consistency with which they weight research evidence over clinical consensus—the “eminence-based practice”19 that systematic reviews were designed to supplant. Also, the multiplicity and constant evolution of guidelines overloads clinical decision makers and creates new challenges, such as “dueling” (conflicting) guidelines.20
The range of available well-validated algorithmic decision support tools remains quite limited.21 Even when available, actuarial methods and research-validated treatments are rarely used in clinical practice.21-23 Additional top-down encouragement of evidence-based practice is emerging from payers and insurers. It seems unlikely, however, that any top-down nomothetic approach will fully close the chasm between current practice and evidence-based practice.
Practitioners also cite concern about research relevance as a barrier to implementing evidence-based practices.22 They worry about whether treatments developed in different contexts can be expected to work for their own settings, populations, and clinical skills.24 That is an appropriate concern but also one that can be pressed too far. An opinion being voiced with increasing frequency is that nothing can be inferred about a treatment's utility until a trial has been conducted with the exact target sociocultural population and context of interest.25 Pressed to an extreme, an unwillingness to generalize any aspect of the research evidence base deprives marginalized, understudied populations of access to evidence-based treatment. It would do a great disservice to disparate populations if, in the interim while data are being collected, clinical lore and local custom were seen as the sole basis to determine their care.
A fundamental challenge for a nomothetically guided practice approach is that health decision making occurs in a context that involves considerations beyond just the research evidence base.26 To support shared decision making and adherence, the patient's unique characteristics and circumstances need to be taken into account, and his or her values and preferences need to be engaged. The impact of resource considerations also looms large, including whether accessible practitioners are trained to perform the procedures best supported by research evidence and whether there is institutional support and funds to pay for treatment. To address these complexities, Sackett and others27 proposed the first “3 circles model” of evidence-based practice. Other variants have followed.28-41 All define evidence-based practice as involving the integration of 3 data sources: best research evidence, resources including clinical expertise, and patient values, characteristics, state, circumstances, and preferences.1,27-33 But how, exactly, is that integration to be accomplished?
Decisions “are the acts that turn information into action.”42 No matter whether the health condition is medical and life threatening or psychological and quality of life threatening; no matter whether the research evidence is robust or lacking, the need to make health decisions is inescapable.34-36 Uncertainty nearly always enters the equation.
Given its importance, one might expect to find coursework on decision making at the core of every health profession's training curriculum. Yet, with some exceptions, course offerings on clinical decision making are in short supply.28,37-39 And if doctoral-level training in decision making is scarce, postgraduate continuing education offerings in decision making are rarer still. Croskerry's survey6 of career emergency medicine physicians is illustrative. When asked how important these emergency physicians found decision making to their practice, 100% said “very important.” But only 3% read the journal Medical Decision Making, and only 20% had read a book or article on decision making in the past 5 years.
In this author's opinion, the greatest gap in the armamentarium of resources available to support evidence-based practice is guidance about how to perform evidence-based decision making. Interestingly, coverage of formal decision analysis comprised the first half of Sackett, Haynes, and Tugwell's early text40 on clinical epidemiology. Critical appraisal comprised the second half of the book.41 Over time, critical appraisal came to represent the core teaching thrust of evidence-based medicine, and decision analysis assumed a more peripheral position. To lay groundwork for evidence-based care, we must first disseminate general training in appraisal skills and build a nomothetic research infrastructure (e.g., syntheses, synopses, summaries, guidelines). But the ultimate goal of those investments in training and infrastructure development remains critical. The aim is to help practitioners apply the evidence to make patient care decisions—a process that is neither intuitively self-evident nor best left to chance. The evidence-based practice movement faces many challenges but none more central than addressing its original goal: to support decision making in a manner that integrates evidence, patient preferences, and resource considerations.
To pave the road toward evidence-based decision making, we need to learn more about complex decisions that are the staples of clinical care. We need a knowledge base that informs optimal decision making to initiate, alter, and stop treatment; prioritize treatment when comorbidities are present; determine whether to treat multimorbidities simultaneously or sequentially; and judge how to integrate medical and behavioral treatments. To add further interest and complexity, we need to learn how to engage the patient in the decision-making process. It no longer suffices to make health decisions correctly in accordance with the research literature and our appraisal of the patient's circumstances. For care to be collaborative, it matters as well that patients genuinely participate throughout the decision-making process. As noted by Street43 and Epstein and Street,44 collaborative care requires the preconditions of communication, comprehension, and trust.43 But good communication will not, in and of itself, guarantee good decision making. The challenge of integrating evidence, patient preferences, and resources still remains.
Does behavioral science theory tell us how to make good, integrative health decisions? Thus phrased, the question is prescriptive: it asks how to bring actual human decision making into closer accord with a normative ideal.45,46 According to expected utility theory,47,48 the normative ideal is an idealized, fully informed, entirely rational decision maker who computes with perfect accuracy to make the choice that maximizes subjective expected value. We can hold no illusions that clinicians ordinarily make perfectly rational decisions.49 They do not. Like other humans, they apply cognitive heuristics that simplify but also distort the decision maker's appraisal of information.49 The goal of evidence-based medicine is to curtail such biased decisional processes by substituting rational computation.
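The computational core of expected utility theory can be made concrete with a small sketch. The options, outcome probabilities, and utilities below are hypothetical illustrations, not clinical data; the point is only the mechanics of the normative rule, which selects the option whose probability-weighted sum of utilities is largest:

```python
def expected_utility(option):
    """Sum of probability-weighted utilities over the option's possible outcomes."""
    return sum(p * u for p, u in option["outcomes"])

# Hypothetical options; each outcome is a (probability, utility) pair,
# with utilities on a 0-1 scale. Probabilities within an option sum to 1.
options = [
    {"name": "treatment A",
     "outcomes": [(0.7, 0.9), (0.3, 0.2)]},   # EU = 0.63 + 0.06 = 0.69
    {"name": "treatment B",
     "outcomes": [(0.5, 1.0), (0.5, 0.3)]},   # EU = 0.50 + 0.15 = 0.65
]

# The normative rule: choose the option maximizing expected utility.
best = max(options, key=expected_utility)
print(best["name"])  # treatment A
```

The arithmetic is trivial; what the theory demands of the idealized decision maker is complete, accurate probabilities and utilities for every outcome, which is exactly where real clinicians fall short.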
Valerie Reyna's fuzzy trace theory,3,46,50,51 however, offers a radically different, even antagonistic, view. Evidence-based medicine assumes that decisions made by computation are inevitably superior to those made by intuition. Fuzzy trace theory assumes the opposite: that intuitive processing is more sophisticated and better able to make decisions that fit the context.
Fuzzy trace theory posits that people form 2 kinds of memory representations (verbatim and gist), and they rely chiefly on the fuzzier, less precise gist to reason and make decisions. At first glance, fuzzy trace theory appears to be a descriptive theory, but closer scrutiny reveals fuzzy trace theory's normative aspect. Because cognitive development and increasing expertise are both associated with increasing reliance on gist processing, fuzzy trace theory casts intuitive processing as the apex of development.
There is agreement that decision making by medical experts often does rely on intuitive gist processing and pattern recognition.22,52 But disagreement arises about whether that is a good or a bad thing. To strong proponents of evidence-based medicine, intuitive decision making by experts illustrates exactly the bad state of affairs that evidence-based practice was designed to remediate. Isaacs and Fitzgerald19 call such practice “eminence-based medicine,” characterized by making the same mistakes with increasing confidence over an impressive number of years.
The contrary premise of fuzzy trace theory is that intuitive processing is to be admired rather than denigrated as mere clinical opinion. Fuzzy trace theory suggests that in high-stakes situations, experienced physicians benefit from using intuitive decision making rather than a more deliberative strategy.22,53 In such circumstances, experts approach decisions via recognition primed pattern matching and choose a course of action immediately, without weighing alternatives. Paring away detail enables clinical data to be processed in parallel and decisional processes to be partially automatized. Disattention to nonessential information leaves spare capacity to be allocated flexibly if important new information arises.
The evidence is mixed regarding whether intuitive processing results in good decisions for patients. It suggests that intuitive processing works well when the decision involves simple pattern matching42,52 or when information can only be obtained at great cost.54 However, it also suggests that intuitive processing works poorly in situations that involve less costly data of uncertain validity.54 A fundamental problem, however, is that the evidence base is derived almost entirely from diagnostic decision making in internal medicine.5
To find an evidence base on more complex, sequential clinical management decisions, we turn to the extensive literature that characterizes decision making in clinical psychology. Those research results consistently show that computational decisions outperform intuitive ones.23,55-58 When making predictions freely, psychologists tend to perceive too many extraneous conditions as exceptions to the rules.59 Robin Dawes57 concludes that experts in clinical psychology are good at determining what variables should be in a prediction formula, and they are also good at assessing those variables. However, the kinds of decisions needed in psychological practice are too complex to be made intuitively.
It can be argued that the decisional context in medicine differs so greatly from that in psychology that no generalization can be drawn. Indeed, there are important differences, including that medical practice usually entails much greater time pressure. In both psychology and medicine, however, a preponderance of research fails to show a beneficial effect of experience on decisions about patient care.42,52,57,60 As disappointing and puzzling as that observation is, the findings support a systematic, deliberative, computational approach to complex decision making over and above an approach based solely on intuition and experience.
Several challenges remain, however. First, it is not feasible to compute analyses in real time for most clinical decisions. Second, too few decision support systems exist. Third, and most problematically, many patients find cold comfort in the normative model endorsed by evidence-based medicine. At least currently, more patients trust and prefer the decisions made by their all too human doctors, as compared with more accurate and less biased decisions derived by a computer.61 Moreover, patients sometimes persuade providers to accede to their decisional preferences even when those contradict evidentiary best practice.13 Until medical decision-making theory can capture and integrate the mental models that both experts and patients hold about decisional best practices, we will have only half of the conceptualization needed to guide collaborative care.
Thus far, we have focused chiefly on clinicians’ decisions about whether to perform medical procedures. Clearly, though, in this era of shared decision making, patients hold a key place at the table. The move toward genuinely collaborative care reflects a belief that shared decision making enhances patient satisfaction and improves health outcomes.62,63 The patient-centered care movement also reflects certain inescapable realities. One is that patients’ decisions about lifestyle behaviors explain the lion's share of variance in whether they will fall ill or recover.64-68 Moreover, the likelihood that a treatment will be successful in any given case depends critically on whether the patient decides to accept or adhere to it.69,70 Thus, for both philosophical and practical reasons, the patient holds many cards in most health decisions, and his or her preferences need to be engaged. How individuals conceptualize and make decisions about their own health behaviors has been the topic of decades of research by Drs. Prochaska and Fishbein.62,63
For many clinicians, the transtheoretical model (TTM) offered a breakthrough for conceptualizing clients’ decision making about behavior change. Labeling the patient who claims, without taking action, that he or she wants to quit smoking a “precontemplator” rather than a “liar” lessened the provider's frustration and eased doctor-patient communication.
The TTM helped catalyze an expansion of the manner in which behaviorally at-risk populations are recruited into intervention trials. Originally, most behavior change intervention studies recruited volunteer samples. The procedure was to develop what was usually a clinic-based treatment and to advertise for interested patients to participate. Consequently, almost the entire evidence base about successful behavioral treatments became based on samples of highly selected, well-motivated volunteers. Such samples represented only a small minority (1%−20%)71,72 of those who possessed the behavioral risk factor and needed intervention, raising questions about how well the efficacy of the developed treatments would generalize to less ideal, more typical contexts.
Stage-of-change thinking has become an accepted, appreciated convention in clinical practice. Yet, as Whitelaw and others73 note, the need for critique may be greatest under such circumstances. Despite many trials, few findings indicate that stage-based interventions produce outcomes superior to nonstage-based ones.74 West75 reminds us that people can change their behavior with great suddenness and without evidence of prior motivated deliberation. Motivation to change appears fluid, and findings show scant evidence of sequential movement through discrete stages.76,77 Serious adverse consequences could result if implementing the TTM caused treatment to be withheld from precontemplators/contemplators who might benefit if treated.78 After all, environmental and policy changes (e.g., increased cigarette taxation, smoke-free workplaces) have prompted healthful behavior changes by even unmotivated individuals.79-84
Fishbein and Ajzen's theory of reasoned action (TRA) posits that a person's intention to perform a behavior is the best indicator of his or her motivational readiness to act.85,86 Intention is, in turn, determined by the person's attitude toward the specific behavior, subjective norms (beliefs about how significant others feel about the behavior), and self-efficacy (sense of personal control) about being able to engage in the behavior.
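The TRA's determinant structure can be sketched as a weighted additive function. The linear form, the weights, and the scores below are hypothetical illustrations, not empirical estimates; in the theory, the relative weights of attitude, subjective norm, and self-efficacy are estimated separately for each behavior, which is why the same person can show quite different intentions across behaviors:

```python
def predicted_intention(attitude, subjective_norm, self_efficacy, weights):
    """Weighted additive sketch of the TRA's determinants of intention.
    All inputs are on a 0-1 scale; weights are behavior-specific."""
    w_att, w_norm, w_se = weights
    return w_att * attitude + w_norm * subjective_norm + w_se * self_efficacy

# Hypothetical behavior-specific weights: attitude might dominate for
# exercise, while subjective norms might matter more for colonoscopy.
exercise_weights = (0.6, 0.1, 0.3)
colonoscopy_weights = (0.2, 0.5, 0.3)

# One person's (hypothetical) determinant scores.
scores = dict(attitude=0.8, subjective_norm=0.3, self_efficacy=0.6)

print(predicted_intention(weights=exercise_weights, **scores))      # 0.69
print(predicted_intention(weights=colonoscopy_weights, **scores))   # 0.49
```

The differing outputs for the same inputs illustrate Fishbein's claim that "one does not perform the same behavior in different contexts, but, instead performs different behaviors."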
Like the transtheoretical model, the theory of reasoned action has been widely applied in studies of health behavior change.87 The 2 theories exhibit some important differences in underlying assumptions, however. The TTM assumes that the stages and processes of change generalize and function in the same manner across many different behaviors.2 The TRA, in contrast, assumes that every behavior is different and has distinctive determinants. As Fishbein3 states, “from our perspective, one does not perform the same behavior in different contexts, but, instead performs different behaviors.” According to the TRA, for an intention to predict behavior, the intention must involve the same elements as the behavior itself: the same action, target, context, and time elements. Fishbein illustrates his point by presenting data that demonstrate a very different impact of the several main TRA constructs on such varied health behaviors as exercising, practicing safe sex, and obtaining a colonoscopy.
The TRA's emphasis on the power of intentions to predict behavior has led to an acceptance of goal setting as a technique to improve performance. The practice of goal setting has been widely adopted in both behavioral clinical practice and organizational management.88-90 Fishbein's paper offers some intriguing insights about goal setting. He suggests that the most effective interventions will be those directed at changing specific behaviors, rather than those directed at broader behavioral categories or goals. For example, he proposes that stating broad goals such as improving “quality of care” or “evidence-based medicine” is unlikely to enhance actual clinical practice. For such aspirations to have a positive effect, he argues, it is necessary to translate the overarching goals into explicit, concrete behavioral intentions. To illustrate an intention specific enough to promote behavioral implementation in practice, he gives the example of recommending daily aspirin to diabetic patients older than age 40 (a procedure endorsed by many practice guidelines).91
Proponents of evidence-based practice guidelines strongly endorse Fishbein's point. The function of practice guidelines is to explicate exactly which specific health-promoting actions are sufficiently well supported by high-quality research evidence to be recommended as best practices for most people. Guidelines are a tool that translates the generalized exhortation to perform evidence-based practice into detailed recommendations regarding what specific assessment and intervention actions and policies are warranted. Guidelines exist for clinical specialty practices,92,93 primary care,94 and community or policy contexts.95
Just as the TRA reminds us of what is good about practice guidelines, the theory also suggests why guidelines will probably never, in and of themselves, be sufficient to entirely determine best practices. The reason, to repeat Fishbein, is that “one does not perform the same behavior in different contexts, but, instead performs different behaviors.”3 As Eddy42 notes, guidelines represent a nomothetic, top-down, average approach to evidence-based practice rather than an idiographic, bottom-up stance. The best guidelines, based on systematic research review, prescribe the best treatment for the average patient under usual conditions. Guidelines largely ignore the full range of the response distribution and neglect the reality that a patient only really cares about which treatment will work best for his or her particular N = 1.96 In actual practice, decision making to determine the best practice for a specific presenting problem depends integrally on the context. Even though guidelines endorse daily aspirin for the 40-year-old patient with diabetes, in certain contexts, aspirin prescription will not be the best practice. For example, aspirin will be actively contraindicated in contexts where the patient has hemophilia, a known allergy to aspirin, or active gastrointestinal bleeding.
A significant criticism of practice guidelines is that they offer little advice regarding how to contextualize best practices.28,52,97 Conversely, one strength of a more idiographic quantitative decision-analytic approach is its potential to integrate contextual information. The decisional tension between the nomothetic features of the evidence base and the idiographic contextualized features of particular cases may be the greatest single challenge faced by contemporary evidence-based practice.28
Health decision making is both the lynchpin and the least developed aspect of evidence-based practice. Trainees in evidence-based medicine learn a stepwise process1 whereby they ask questions, acquire the evidence, appraise it critically, apply the evidence, analyze the outcome, and adjust practice accordingly. Applying the evidence sounds simple enough. But application is the step in evidence-based practice that requires integration of all 3 circles: research evidence, resources, and patient characteristics and preferences. The triangulation does not spring fully formed like Athena from the head of Zeus. Decisional algorithms are needed to weight and integrate the 3 data strands (evidence, resources, preferences). The decision process is complex enough when being performed from the perspective of one person—the clinician. Now consider that the weighting and sifting of elements need also to be recomputed from the patient's perspective. Moreover, collaboration (even negotiation) is needed to balance things out into a shared decision regarding which action (or watchful waiting) to choose.
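What such a weighting-and-integration algorithm might look like can be sketched minimally. This is a hypothetical illustration, not an established method: the candidate options, the 0-1 scores on each strand, and the weights are all invented. The sketch does capture one point made above, that clinician and patient may weight the same 3 strands differently and so rank the same options differently, which is precisely what collaboration must reconcile:

```python
def integrated_score(option, weights):
    """Weighted combination of evidence strength, resource fit, and
    patient preference, each scored on a hypothetical 0-1 scale."""
    return (weights["evidence"] * option["evidence"]
            + weights["resources"] * option["resources"]
            + weights["preference"] * option["preference"])

# Hypothetical candidate actions with illustrative strand scores.
options = [
    {"name": "guideline treatment", "evidence": 0.9, "resources": 0.6, "preference": 0.4},
    {"name": "watchful waiting",    "evidence": 0.5, "resources": 0.9, "preference": 0.8},
]

# Hypothetical weightings: the clinician leans on evidence,
# the patient on preference.
clinician_weights = {"evidence": 0.6, "resources": 0.2, "preference": 0.2}
patient_weights = {"evidence": 0.2, "resources": 0.2, "preference": 0.6}

for label, w in [("clinician", clinician_weights), ("patient", patient_weights)]:
    best = max(options, key=lambda o: integrated_score(o, w))
    print(label, "->", best["name"])
```

With these invented numbers the clinician's weighting favors the guideline treatment while the patient's favors watchful waiting, so the algorithm alone does not settle the matter; it only makes the disagreement explicit and negotiable.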
Great need persists for a more thorough conceptualization of the decision-making processes needed to actually apply evidence and perform evidence-based practice. Different decisional contexts need to be spelled out, along with consideration of where they fall on the spectrum of appropriate fidelity v. adaptation of research evidence, or how to proceed when, as is often the case, evidence is lacking.
In creating needed theory to conceptualize shared decision making, it may make sense to begin, as some have,98 by drawing an analogy between provider-patient communication and a couple's relationship. In the long run, though, we should probably not kid ourselves into thinking that the provider holds much sway over the patient's behavior outside the doctor's visit. Patients are continually and in real time making lifestyle decisions that exert greater impact on public health than those decisions discussed with the provider. We urgently need theory that explicates how to make the provider-patient collaboration stickier—how to influence patients to make healthful decisions when they are miles from the provider's office.
Finally, we need to recognize that the patient decisions of greatest importance for health are behavioral ones. Myriad daily choices about whether to engage in risky actions or practice health-promoting ones exert powerful effects on public health. The provider has a shot at influencing those individual health decisions. So do manufacturers, policy makers, insurers, payers, and other people and institutions in the patient's environment. A good theory of integrative, collaborative health decision making is needed to support evidence-based practice. We have our work cut out for us; the journey is a worthy one; the Society for Medical Decision Making is up to the task.
Supported in part by N01-LM-6-3512: Resources for Training in Evidence Based Behavioral Practice awarded by the National Institutes of Health (NIH) Office of Behavioral and Social Science Research to Dr. Spring at Northwestern University. Portions of this article were presented at the annual meeting of the Society for Medical Decision Making, Boston, Massachusetts, October 2006. The author expresses appreciation to Kristin Hitchcock for editorial, technical, and library assistance.