Med Decis Making. Author manuscript; available in PMC 2009 January 4.
Published in final edited form as:
PMCID: PMC2613489

Health Decision Making: Lynchpin of Evidence-Based Practice

Bonnie Spring, PhD, ABPP


Health decision making is both the lynchpin and the least developed aspect of evidence-based practice. The evidence-based practice process requires integrating the evidence with consideration of practical resources and patient preferences and doing so via a process that is genuinely collaborative. Yet, the literature is largely silent about how to accomplish integrative, shared decision making. Implications for evidence-based practice are discussed for 2 theories of clinician decision making (expected utility and fuzzy trace) and 2 theories of patient health decision making (transtheoretical model and reasoned action). Three suggestions are offered. First, it would be advantageous to have theory-based algorithms that weight and integrate the 3 data strands (evidence, resources, preferences) in different decisional contexts. Second, patients, not providers, make the decisions of greatest impact on public health, and those decisions are behavioral. Consequently, theory explicating how provider-patient collaboration can influence patient lifestyle decisions made miles from the provider's office is greatly needed. Third, although the preponderance of data on complex decisions supports a computational approach, such an approach to evidence-based practice is too impractical to be widely applied at present. More troubling, until patients come to trust decisions made computationally more than they trust their providers’ intuitions, patient adherence will remain problematic. A good theory of integrative, collaborative health decision making remains needed.

Keywords: decision making, decision theory, evidence-based practice, evidence-based medicine, clinical competence, practice guidelines, clinical psychology

Health decision making is both the lynchpin and the least developed aspect of evidence-based practice. Systematic reviews and practice guidelines are well-developed cornerstones of evidence-based practice, as are instructional practices to teach critical appraisal. Full evidence-based decision making, however, requires integrating the evidence with consideration of practical resources and patient preferences and doing so via a process that is genuinely collaborative.1 Yet the literature on evidence-based practice is mostly silent about how to accomplish integrative, shared decision making. With so little known, there is great need for theory that characterizes evidence-based decision making either normatively or descriptively. Here, I consider 2 theories of clinician decision making (expected utility and fuzzy trace) and 2 theories of patient health decision making (transtheoretical model and reasoned action).2-4 I suggest that although these theories do shed light on decisional processes, theory will offer limited help for evidence-based practice until it connects the decisional processes of the provider with those of the patient.


Behavioral scientists and medical professionals have partnered to study medical decision making for more than 30 years.5,6 Enormous progress has been made in developing infrastructure (e.g., online information resources, practice guidelines, decision support systems, professional competency standards) to ground health decision making more firmly on research. So, how are we doing in research to practice translation? By most appraisals, not well. The Institute of Medicine notes that a chasm persists between what we know scientifically and what we apply to health care practice.7 One estimate is that uptake of new medical discoveries into clinical practice still proceeds at a rate of only 14% after 17 years.8 The average American receives only 50% of recommended preventive, acute, and long-term health care.9 Clearly, the body of research knowledge exerts too little influence on clinical practice.10,11

Why don't clinicians apply the evidence? Are they unaware of it? Do they find research evidence invalid or inapplicable? Why would a practitioner choose to engage in anything other than evidence-based practice? The answers to these questions prove to be complex.

Many impediments curtail day-to-day implementation of best practices.12,13 Unfamiliarity with current research evidence is part of the problem.14-16 The glacial rate of conversion of research-derived knowledge into practice results, in part, from the fact that clinicians have trouble keeping up with the exponentially proliferating research evidence base.15 Dissemination of practice guidelines has been marginally useful for helping keep practitioners up to date,17 but guidelines have failed to be a panacea for several reasons.18 First, guidelines are mixed in the consistency with which they weight research evidence over clinical consensus—the “eminence-based practice”19 that systematic reviews were designed to supplant. Also, the multiplicity and constant evolution of guidelines overloads clinical decision makers and creates new challenges, such as “dueling” (conflicting) guidelines.20

The range of available well-validated algorithmic decision support tools remains quite limited.21 Even when available, actuarial methods and research-validated treatments are rarely used in clinical practice.21-23 Additional top-down encouragement of evidence-based practice is emerging from payers and insurers. It seems unlikely, however, that any top-down nomothetic approach will fully close the chasm between current practice and evidence-based practice.

Practitioners also cite concern about research relevance as a barrier to implementing evidence-based practices.22 They worry about whether treatments developed in different contexts can be expected to work for their own settings, populations, and clinical skills.24 That is an appropriate concern but also one that can be pressed too far. An opinion being voiced with increasing frequency is that nothing can be inferred about a treatment's utility until a trial has been conducted with the exact target sociocultural population and context of interest.25 Pressed to an extreme, an unwillingness to generalize any aspect of the research evidence base deprives marginalized, understudied populations of access to evidence-based treatment. It would do a great disservice to disparate populations if, in the interim while data are being collected, clinical lore and local custom were seen as the sole basis to determine their care.

A fundamental challenge for a nomothetically guided practice approach is that health decision making occurs in a context that involves considerations beyond just the research evidence base.26 To support shared decision making and adherence, the patient's unique characteristics and circumstances need to be taken into account, and his or her values and preferences need to be engaged. The impact of resource considerations also looms large, including whether accessible practitioners are trained to perform the procedures best supported by research evidence and whether there is institutional support and funds to pay for treatment. To address these complexities, Sackett and others27 proposed the first “3 circles model” of evidence-based practice. Other variants have followed.28-41 All define evidence-based practice as involving the integration of 3 data sources: best research evidence, resources including clinical expertise, and patient values, characteristics, state, circumstances, and preferences.1,27-33 But how, exactly, is that integration to be accomplished?


Decisions “are the acts that turn information into action.”42 No matter whether the health condition is medical and life-threatening or psychological and quality-of-life-threatening; no matter whether the research evidence is robust or lacking, the need to make health decisions is inescapable.34-36 Uncertainty nearly always enters the equation.

Given its importance, one might expect to find coursework on decision making at the core of every health profession's training curriculum. Yet, with some exceptions, course offerings on clinical decision making are in short supply.28,37-39 And if doctoral-level training in decision making is scarce, postgraduate continuing education offerings in decision making are rarer still. Croskerry's survey6 of career emergency medicine physicians is illustrative. When asked how important these emergency physicians found decision making to their practice, 100% said “very important.” But only 3% read the journal Medical Decision Making, and only 20% had read a book or article on decision making in the past 5 years.

In this author's opinion, the greatest gap in the armamentarium of resources available to support evidence-based practice is guidance about how to perform evidence-based decision making. Interestingly, coverage of formal decision analysis comprised the first half of Sackett, Haynes, and Tugwell's early text40 on clinical epidemiology. Critical appraisal comprised the second half of the book.41 Over time, critical appraisal came to represent the core teaching thrust of evidence-based medicine, and decision analysis assumed a more peripheral position. To lay groundwork for evidence-based care, we must first disseminate general training in appraisal skills and build a nomothetic research infrastructure (e.g., syntheses, synopses, summaries, guidelines). But the ultimate goal of those investments in training and infrastructure development remains critical. The aim is to help practitioners apply the evidence to make patient care decisions—a process that is neither intuitively self-evident nor best left to chance. The evidence-based practice movement faces many challenges but none more central than addressing its original goal: to support decision making in a manner that integrates evidence, patient preferences, and resource considerations.

To pave the road toward evidence-based decision-making, we need to learn more about complex decisions that are the staples of clinical care. We need a knowledge base that informs optimal decision making to initiate, alter, and stop treatment; prioritize treatment when comorbidities are present; determine whether to treat multimorbidities simultaneously or sequentially; and judge how to integrate medical and behavioral treatments. To add further interest and complexity, we need to learn how to engage the patient into the decision-making process. It no longer suffices to make health decisions correctly in accordance with the research literature and our appraisal of the patient's circumstances. For care to be collaborative, it matters as well that patients genuinely participate throughout the decision-making process. As noted by Street43 and Epstein and Street,44 collaborative care requires the preconditions of communication, comprehension, and trust.43 But good communication will not, in and of itself, guarantee good decision making. The challenge of integrating evidence, patient preferences, and resources still remains.


Does behavioral science theory tell us how to make good, integrative health decisions? Thus phrased, the question is prescriptive: it asks how to bring actual human decision making into closer accord with a normative ideal.45,46 According to expected utility theory,47,48 the normative ideal is an idealized, fully informed, entirely rational decision maker who computes with perfect accuracy to make the choice that maximizes subjective expected value. We can hold no illusions that clinicians ordinarily make perfectly rational decisions.49 They do not. Like other humans, they apply cognitive heuristics that simplify but also distort the decision maker's appraisal of information.49 The goal of evidence-based medicine is to curtail such biased decisional processes by substituting rational computation.
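The expected-utility calculus that this normative ideal prescribes can be sketched briefly. The options, outcome probabilities, and utilities below are hypothetical numbers chosen only to show the computation, not data from any study cited here:

```python
# Minimal sketch of a normative expected-utility choice.
# Each option maps to a list of (probability, utility) outcome pairs;
# the idealized decision maker picks the option whose probability-
# weighted utility sum is highest.

def expected_utility(outcomes):
    """Sum of probability * utility over an option's outcomes."""
    return sum(p * u for p, u in outcomes)

# Hypothetical illustration: two courses of action, two outcomes each.
options = {
    "treat":      [(0.70, 0.9), (0.30, 0.2)],  # success / side effects
    "watch_wait": [(0.40, 0.8), (0.60, 0.5)],  # remission / progression
}

best = max(options, key=lambda o: expected_utility(options[o]))
print(best)  # prints "treat": 0.69 expected utility beats 0.62
```

In practice, of course, the probabilities must come from the research evidence and the utilities from the patient, which is precisely where the integration problem discussed below arises.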

A radically different, indeed antagonistic, view is offered by Valerie Reyna's fuzzy trace theory.3,46,50,51 Evidence-based medicine assumes that decisions made by computation are inevitably superior to those made by intuition. Fuzzy trace theory assumes the opposite: that intuitive processing is more sophisticated and better able to make decisions that fit the context.


Fuzzy trace theory posits that people form 2 kinds of memory representations (verbatim and gist), and they rely chiefly on the fuzzier, less precise gist to reason and make decisions. At first glance, fuzzy trace theory appears to be a descriptive theory, but closer scrutiny reveals fuzzy trace theory's normative aspect. Because cognitive development and increasing expertise are both associated with increasing reliance on gist processing, fuzzy trace theory casts intuitive processing as the apex of development.

There is agreement that decision making by medical experts often does rely on intuitive gist processing and pattern recognition.22,52 But disagreement arises about whether that is a good or a bad thing. To strong proponents of evidence-based medicine, intuitive decision making by experts illustrates exactly the bad state of affairs that evidence-based practice was designed to remediate. Isaacs and Fitzgerald19 call such practice “eminence-based medicine,” characterized by making the same mistakes with increasing confidence over an impressive number of years.

The contrary premise of fuzzy trace theory is that intuitive processing is to be admired rather than denigrated as mere clinical opinion. Fuzzy trace theory suggests that in high-stakes situations, experienced physicians benefit from using intuitive decision making rather than a more deliberative strategy.22,53 In such circumstances, experts approach decisions via recognition-primed pattern matching and choose a course of action immediately, without weighing alternatives. Paring away detail enables clinical data to be processed in parallel and decisional processes to be partially automatized. Inattention to nonessential information leaves spare capacity to be allocated flexibly if important new information arises.


The evidence is mixed regarding whether intuitive processing results in good decisions for patients. It suggests that intuitive processing works well when the decision involves simple pattern matching42,52 or when information can only be obtained at great cost.54 However, it also suggests that intuitive processing works poorly in situations that involve less costly data of uncertain validity.54 A fundamental problem, however, is that the evidence base is derived almost entirely from diagnostic decision making in internal medicine.5

To find an evidence base on more complex, sequential clinical management decisions, we turn to the extensive literature that characterizes decision making in clinical psychology. Those research results consistently show that computational decisions outperform intuitive ones.23,55-58 When making predictions freely, psychologists tend to perceive too many extraneous conditions as exceptions to the rules.59 Robin Dawes57 concludes that experts in clinical psychology are good at determining what variables should be in a prediction formula, and they are also good at assessing those variables. However, the kinds of decisions needed in psychological practice are too complex to be made intuitively.

It can be argued that the decisional context in medicine differs so greatly from that in psychology that no generalization can be drawn. Indeed, there are important differences, including that medical practice usually entails much greater time pressure. In both psychology and medicine, however, a preponderance of research fails to show a beneficial effect of experience on decisions about patient care.42,52,57,60 As disappointing and puzzling as that observation is, the findings support a systematic, deliberative, computational approach to complex decision making over and above an approach based solely on intuition and experience.

Several challenges remain, however. First, it is not feasible to compute analyses in real time for most clinical decisions. Second, too few decision support systems exist. Third, and most problematically, many patients find cold comfort in the normative model endorsed by evidence-based medicine. At least currently, more patients trust and prefer the decisions made by their all too human doctors, as compared with more accurate and less biased decisions derived by a computer.61 Moreover, patients sometimes persuade providers to accede to their decisional preferences even when those contradict evidentiary best practice.13 Until medical decision-making theory can capture and integrate the mental models that both experts and patients hold about decisional best practices, we will have only half of the conceptualization needed to guide collaborative care.


Thus far, we have focused chiefly on clinicians’ decisions about whether to perform medical procedures. Clearly, though, in this era of shared decision making, patients hold a key place at the table. The move toward genuinely collaborative care reflects a belief that shared decision making enhances patient satisfaction and improves health outcomes.62,63 The patient-centered care movement also reflects certain inescapable realities. One is that patients’ decisions about lifestyle behaviors explain the lion's share of variance in whether they will fall ill or recover.64-68 Moreover, the likelihood that a treatment will be successful in any given case depends critically on whether the patient decides to accept or adhere to it.69,70 Thus, for both philosophical and practical reasons, the patient holds many cards in most health decisions, and his or her preferences need to be engaged. How individuals conceptualize and make decisions about their own health behaviors has been the topic of decades of research by Drs. Prochaska and Fishbein.62,63


For many clinicians, the transtheoretical model (TTM) offered a breakthrough for conceptualizing clients’ decision making about behavior change. Encoding as “precontemplator” rather than “liar” the patient who claims, without taking action, that he or she wants to quit smoking lessened the provider's frustration and eased doctor-patient communication.

Proactive v. reactive recruitment

The TTM helped catalyze an expansion of the manner in which behaviorally at-risk populations are recruited into intervention trials. Originally, most behavior change intervention studies recruited volunteer samples. The procedure was to develop what was usually a clinic-based treatment and to advertise for interested patients to participate. Consequently, almost the entire evidence base about successful behavioral treatments came to be based on samples of highly selected, well-motivated volunteers. Such samples represented only a small minority (1%–20%)71,72 of those who possessed the behavioral risk factor and needed intervention, raising questions about how well the efficacy of the developed treatments would generalize to less ideal, more typical contexts.

Current research on the TTM

Stage-of-change thinking has become an accepted, appreciated convention in clinical practice. Yet, as Whitelaw and others73 note, the need for critique may be greatest under such circumstances. Despite many trials, few findings indicate that stage-based interventions produce outcomes superior to nonstage-based ones.74 West75 reminds us that people can change their behavior with great suddenness and without evidence of prior motivated deliberation. Motivation to change appears fluid, and findings show scant evidence of sequential movement through discrete stages.76,77 Serious adverse consequences could result if implementing the TTM caused treatment to be withheld from precontemplators/contemplators who might benefit if treated.78 After all, environmental and policy changes (e.g., increased cigarette taxation, smoke-free workplaces) have prompted healthful behavior changes by even unmotivated individuals.79-84


Fishbein and Ajzen's theory of reasoned action (TRA) posits that a person's intention to perform a behavior is the best indicator of his or her motivational readiness to act.85,86 Intention is, in turn, determined by the person's attitude toward the specific behavior, subjective norms (beliefs about how significant others feel about the behavior), and self-efficacy (sense of personal control) about being able to engage in the behavior.
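The structure just described can be sketched as a weighted combination of the three determinants. The weighted-sum form and every number below are illustrative assumptions only; in TRA research the relative weights are estimated empirically (e.g., by regression) and differ for each specific behavior:

```python
# Hedged sketch of the reasoned-action structure: intention as a
# weighted combination of attitude toward the behavior, subjective
# norm, and self-efficacy. Weights and scores here are hypothetical.

def intention(attitude, norm, efficacy, weights=(0.5, 0.3, 0.2)):
    """Predicted intention on a 0-1 scale from three 0-1 determinants."""
    w_a, w_n, w_e = weights
    return w_a * attitude + w_n * norm + w_e * efficacy

# Hypothetical scores for one specific behavior (e.g., daily walking):
# favorable attitude, moderately supportive norms, high self-efficacy.
i = intention(attitude=0.8, norm=0.6, efficacy=0.9)
print(round(i, 2))  # prints 0.76
```

Because the TRA holds that every behavior has distinctive determinants, the weights in such a model would need to be refit for each action, target, context, and time.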

Encouraging healthful behavior change by individuals

Like the transtheoretical model, the theory of reasoned action has been widely applied in studies of health behavior change.87 The 2 theories exhibit some important differences in underlying assumptions, however. The TTM assumes that the stages and processes of change generalize and function in the same manner across many different behaviors.2 The TRA, in contrast, assumes that every behavior is different and has distinctive determinants. As Fishbein3 states, “from our perspective, one does not perform the same behavior in different contexts, but, instead performs different behaviors.” According to the TRA, for an intention to predict behavior, the intention must involve the same elements as the behavior itself: the same action, target, context, and time elements. Fishbein illustrates his point by presenting data that demonstrate a very different impact of the several main TRA constructs on such varied health behaviors as exercising, practicing safe sex, and obtaining a colonoscopy.

Promoting practitioner best practices

The TRA's emphasis on the power of intentions to predict behavior has led to an acceptance of goal setting as a technique to improve performance. The practice of goal setting has been widely adopted in both behavioral clinical practice and organizational management.88-90 Fishbein's paper offers some intriguing insights about goal setting. He suggests that the most effective interventions will be those directed at changing specific behaviors, rather than those directed at broader behavioral categories or goals. For example, he proposes that stating broad goals such as improving “quality of care” or “evidence-based medicine” is unlikely to enhance actual clinical practice. For such aspirations to have a positive effect, he argues, it is necessary to translate the over-arching goals into explicit, concrete behavioral intentions. To illustrate an intention specific enough to promote behavioral implementation in practice, he gives the example of recommending daily aspirin to diabetic patients older than age 40 (a procedure endorsed by many practice guidelines).91

Practice guidelines

Proponents of evidence-based practice guidelines strongly endorse Fishbein's point. The function of practice guidelines is to explicate exactly which specific health-promoting actions are sufficiently well supported by high-quality research evidence to be recommended as best practices for most people. Guidelines are a tool that translates the generalized exhortation to perform evidence-based practice into detailed recommendations regarding what specific assessment and intervention actions and policies are warranted. Guidelines exist for clinical specialty practices,92,93 primary care,94 and community or policy contexts.95

Decision making in a context

Just as the TRA reminds us of what is good about practice guidelines, the theory also suggests why guidelines will probably never, in and of themselves, be sufficient to entirely determine best practices. The reason, to repeat Fishbein, is that “one does not perform the same behavior in different contexts, but, instead performs different behaviors.”3 As Eddy42 notes, guidelines represent a nomothetic, top-down, average approach to evidence-based practice rather than an idiographic, bottom-up stance. The best guidelines, based on systematic research review, prescribe the best treatment for the average patient under usual conditions. Guidelines largely ignore the full range of the response distribution and neglect the reality that a patient only really cares about which treatment will work best for his or her particular N = 1.96 In actual practice, decision making to determine the best practice for a specific presenting problem depends integrally on the context. Even though guidelines endorse daily aspirin for the 40-year-old patient with diabetes, in certain contexts, aspirin prescription will not be the best practice. For example, aspirin will be actively contraindicated in contexts where the patient has hemophilia, a known allergy to aspirin, or active gastrointestinal bleeding.

A significant criticism of practice guidelines is that they offer little advice regarding how to contextualize best practices.28,52,97 Conversely, one strength of a more idiographic quantitative decision-analytic approach is its potential to integrate contextual information. The decisional tension between the nomothetic features of the evidence base and the idiographic contextualized features of particular cases may be the greatest single challenge faced by contemporary evidence-based practice.28


Health decision making is both the lynchpin and the least developed aspect of evidence-based practice. Trainees in evidence-based medicine learn a stepwise process1 whereby they ask questions, acquire the evidence, appraise it critically, apply the evidence, analyze the outcome, and adjust practice accordingly. Applying the evidence sounds simple enough. But application is the step in evidence-based practice that requires integration of all 3 circles: research evidence, resources, and patient characteristics and preferences. The triangulation does not spring fully formed like Athena from the head of Zeus. Decisional algorithms are needed to weight and integrate the 3 data strands (evidence, resources, preferences). The decision process is complex enough when being performed from the perspective of one person—the clinician. Now consider that the weighting and sifting of elements need also to be recomputed from the patient's perspective. Moreover, collaboration (even negotiation) is needed to balance things out into a shared decision regarding which action (or watchful waiting) to choose.
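No such weighting algorithm is specified here; the point is that one is needed. Purely as a hypothetical sketch, an integrative decisional algorithm of the kind called for might score each candidate action on the 3 strands and combine the scores with context-dependent weights (all names, scores, and weights below are invented for illustration):

```python
# Hypothetical sketch of a 3-strand integrative decision rule.
# Each candidate action gets a 0-1 score on evidence, resources, and
# preferences; context-dependent weights combine them. A shared
# decision might weight preferences more heavily than a
# clinician-only decision would.

def integrate(scores, weights):
    """Weighted sum across the 3 data strands for one candidate action."""
    return sum(weights[k] * scores[k] for k in scores)

candidates = {
    "guideline_treatment": {"evidence": 0.9, "resources": 0.5, "preferences": 0.4},
    "behavioral_program":  {"evidence": 0.7, "resources": 0.8, "preferences": 0.8},
}
weights = {"evidence": 0.4, "resources": 0.2, "preferences": 0.4}

ranked = sorted(candidates,
                key=lambda c: integrate(candidates[c], weights),
                reverse=True)
print(ranked[0])  # prints "behavioral_program" under these weights
```

Even this toy version makes the open questions concrete: who sets the weights, how they shift across decisional contexts, and how the same computation is rerun from the patient's perspective before a shared decision is negotiated.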

Great need persists for a more thorough conceptualization of the decision-making processes needed to actually apply evidence and perform evidence-based practice. Different decisional contexts need to be spelled out, along with consideration of where they fall on the spectrum of appropriate fidelity v. adaptation of research evidence, or how to proceed when, as is often the case, evidence is lacking.

In creating needed theory to conceptualize shared decision making, it may make sense to begin, as some have,98 by drawing an analogy between provider-patient communication and a couple's relationship. In the long run, though, we should probably not kid ourselves into thinking that the provider holds much sway over the patient's behavior outside the doctor's visit. Patients are continually and in real time making lifestyle decisions that exert greater impact on public health than those decisions discussed with the provider. We urgently need theory that explicates how to make the provider-patient collaboration stickier—how to influence patients to make healthful decisions when they are miles from the provider's office.

Finally, we need to recognize that the patient decisions of greatest importance for health are behavioral ones. Myriad daily choices about whether to engage in risky actions or practice health-promoting ones exert powerful effects on public health. The provider has a shot at influencing those individual health decisions. So do manufacturers, policy makers, insurers, payers, and other people and institutions in the patient's environment. A good theory of integrative, collaborative health decision making is needed to support evidence-based practice. We have our work cut out for us; the journey is a worthy one; the Society for Medical Decision Making is up to the task.


Supported in part by N01-LM-6-3512: Resources for Training in Evidence Based Behavioral Practice awarded by the National Institutes of Health (NIH) Office of Behavioral and Social Science Research to Dr. Spring at Northwestern University. Portions of this article were presented at the annual meeting of the Society of Medical Decision Making, Boston, Massachusetts, October 2006. The author expresses appreciation to Kristin Hitchcock for editorial, technical, and library assistance.


1. Council for Training in Evidence-Based Behavioral Practice. Definition and Competencies for Evidence-Based Behavioral Practice (EBBP). March 2008. Available from:
2. Prochaska JO. Decision making in the transtheoretical model of behavior change. Med Decis Making. In press. [PubMed]
3. Fishbein M. A reasoned action approach to health promotion. Med Decis Making. In press. [PMC free article] [PubMed]
4. Reyna VF. A theory of medical decision making and health: fuzzy-trace theory. Med Decis Making. In press. [PMC free article] [PubMed]
5. Norman G. Research in clinical reasoning: past history and current trends. Med Educ. 2005;39:418–27. [PubMed]
6. Croskerry P. The theory and practice of clinical decision-making. Can J Anesth. 2005;52:R1–8.
7. Institute of Medicine . Crossing the Quality Chasm: A New Health System for the 21st Century. National Academy of Science Press; Washington, DC: 2001.
8. Balas EA, Boren SA. Yearbook of Medical Informatics: Managing Clinical Knowledge for Health Care Improvement. Schattauer Verlagsgesellschaft mbH; Stuttgart, Germany: 2000.
9. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348:2635–45. [PubMed]
10. Rohrbach LA, Grana R, Sussman S, Valente TW. Type II translation: transporting prevention interventions from research to real-world settings. Eval Health Prof. 2006;29:302–33. [PubMed]