J Consum Psychol. Author manuscript; available in PMC 2014 January 1.
Published in final edited form as:
J Consum Psychol. 2013 January 1; 23(1): 154–157.
Published online 2012 October 12. doi:  10.1016/j.jcps.2012.10.004
PMCID: PMC3685847

Choice theories: What are they good for?[star]


Simonson et al. present an ambitious sketch of an integrative theory of context. Provoked by this thoughtful proposal, I discuss the function of theories of choice in the coming decades. Traditionally, choice models and theory have attempted to predict choices as a function of the attributes of options. I argue that to be truly useful, they need to generate specific and quantitative predictions of the effect of the choice environment upon choice probability. To do this, we need to focus on rigorously modeling and measuring the underlying processes causing these effects, and I use the Simonson et al. proposal to provide some examples. I also present some examples from research in decision-making and decision neuroscience, and argue that models that fail, and fail spectacularly, are particularly useful. I close with a challenge: How would consumer researchers aid the design of real-world choice environments such as the health exchanges under the Patient Protection and Affordable Care Act?

Keywords: Choice models, Context effects, Choice architecture, Process tracing, Process models


What should a choice model do?

At one time, the job of choice theories was simple: given a set of attributes, predict what choices would be made. Now the demands on accounts of choice are much greater, for several reasons:

  • As noted by Simonson et al., the range of possible influences on choice has increased. While the values of options on attributes clearly contribute to choice, other factors have come to the fore, both in theory and particularly in the choice sets commonly studied: those with few options, most of which lie on the efficient frontier. The authors bravely attempt to provide a framework that could include many of these influences, but the list is getting long and unruly.
  • Many of these factors have concrete real-world effects. For example, the selection of defaults has a marked influence on outcomes that affect lives (Johnson & Goldstein, 2003) and pension savings (Carroll, Choi, Laibson, Madrian, & Metrick, 2009; Madrian & Shea, 2001; Thaler & Benartzi, 2004). The order of attributes and their sorting (Lynch & Ariely, 2000), and even whether alternatives appear on the first or second screen of a web site, have real marketplace influences. Increasingly, it appears that value maximization considerations are far from sufficient to predict what is chosen, and that even the simple goal of prediction requires the incorporation of context.
  • But more importantly, these factors matter because many important choices are made not with the alternatives in front of the decision-maker, but on some abstraction like a webpage or, more quaintly, a mail-order catalog. These choice environments have been termed marketplaces of the artificial (Johnson, Bellman, Lohse, & Mandel, 2005) or choice engines. Choice engines allow us to display information in many ways, unbounded by the physical product, providing greater latitude in what Thaler and Sunstein call choice architecture (Thaler & Sunstein, 2008; see Johnson et al., 2012 for a recent review). This creates a new role for choice modeling: advising a choice architect who has many degrees of freedom and the potential for great impact upon choice, an impact that can be, in some cases, independent of the values of the options. Thus, choice theories must not only tell us what is being chosen, but also predict the effect of many of these design decisions. If choice modeling is to be truly useful, it needs to embrace these challenges.


In this paper, Simonson et al. bravely take on part of this task, trying to unify a number of diverse results in the judgment and decision-making literature using a circumscribed yet psychologically plausible set of constructs. To do this, they make two major moves:

  • They posit that the comparison of alternatives on attributes is an essential component of choice.
  • They argue that decisions are often made by attending to a subset of the potential comparisons, neglecting others, and they propose some preliminary ideas about the mechanisms that determine which comparisons become salient.

The first tactic, focusing on comparisons of options on attributes, puts them in good company, part of a long tradition in judgment and decision-making research. Such comparison models date back at least to the additive difference model (Tversky, 1969) and its progeny, such as the majority of confirming dimensions rule (Russo & Dosher, 1983). More recent examples of similar ideas include the work of González-Vallejo (González-Vallejo, 2002; González-Vallejo, Reid, & Schiltz, 2003) and Brandstätter et al. (Brandstätter, Gigerenzer, & Hertwig, 2006).

While attribute-wise comparisons are potentially quite fruitful, and produce choices with increased cognitive ease, they face challenges of their own, and any comparison-based choice theory needs to answer two questions:

  • What kind of information is produced by the comparison? This could range from simply noting ordinal information (identifying which alternative is better on the attribute) to encoding interval or even ratio differences in utility, consistent with a model like additive differences.
  • How are these comparisons integrated across attributes? Some differences are larger than others, and some attributes are more important than others. At one extreme, these differences could be ignored: One could simply count the number of winners (Alba & Marmorstein, 1987). At the other extreme, one can weigh the differences in utility, producing a model that can, in the aggregate, be indistinguishable from value maximization (Tversky, 1969).
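The contrast between these two integration rules is easy to make concrete. The sketch below is purely illustrative (the attribute names, values, and weights are hypothetical, not drawn from any study): counting winners and weighting utility differences can disagree on the very same pair of options.

```python
# Hypothetical illustration: two ways of integrating attribute-wise
# comparisons in a binary choice. Options are dicts of attribute -> utility.

def count_winners(a, b):
    """Majority-of-confirming-dimensions style rule: pick the option
    that wins on more attributes, ignoring the size of each difference."""
    wins_a = sum(a[attr] > b[attr] for attr in a)
    wins_b = sum(b[attr] > a[attr] for attr in a)
    return "A" if wins_a > wins_b else "B"

def weighted_differences(a, b, weights):
    """Additive-differences style rule: sum the weighted utility
    differences; in aggregate this can mimic value maximization."""
    total = sum(weights[attr] * (a[attr] - b[attr]) for attr in a)
    return "A" if total > 0 else "B"

# A case where the rules disagree: A wins on two minor attributes,
# while B wins decisively on the attribute that matters most.
option_a = {"price": 0.6, "quality": 0.2, "style": 0.7}
option_b = {"price": 0.5, "quality": 0.9, "style": 0.6}
weights  = {"price": 0.2, "quality": 0.7, "style": 0.1}

print(count_winners(option_a, option_b))                      # A
print(weighted_differences(option_a, option_b, weights))      # B
```

The divergence is the point: choice data alone cannot tell us which integration rule produced a given decision unless the stimuli are constructed so the rules disagree.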

The second insight, that not all comparisons are salient, is very important and quite challenging: The task of identifying which of the information presented enters the decision is daunting, and, as the authors observe, salience seems to reflect both goal-driven (top-down) and data-driven (bottom-up) influences. The authors start this hard work by sketching a two-factor model, but it would be useful to take a parametric stand on both of these issues, which would, of course, require a more formal model.

Where do we go from here? Process models still deserve process data

Given the demands presented by these new uses of choice models, how should we proceed? Let me say that the road will be particularly daunting if we find ourselves relying on the paradigm of manipulating independent variables and observing choices, a paradigm that has dominated much recent consumer research. It has been argued that models that depend only on inputs and choices are not well specified and not easily falsified (Otter, Allenby, & Van Zandt, 2008; Ratcliff & McKoon, 2008). Psychology offers a classic example: the debate concerning the nature of visual mental imagery. One side argued that visual images are represented in the same propositional code as language (Pylyshyn, 1973), while the other argued that the representation actually depicts the elements of an image (Kosslyn & Pomerantz, 1977). Anderson (1978) showed that such debates are, in themselves, fruitless because there is an unrestricted tradeoff between the properties of a representation and the complexity of the accompanying processes.

Progress in these debates, when it occurs, usually comes from the introduction of new constraints in the form of new data about either the process or the representation. For example, showing that patterns of activation in the visual cortex corresponded to a pattern shown to respondents provided strong support for the depictive view (Kosslyn, Thompson, Kim, & Alpert, 1995). By producing theories that make predictions for characteristics of the choice process other than choice itself, we may produce models that are both more falsifiable and more easily distinguished from one another. Since our toolbox of possible measures has increased markedly in the last two decades, perhaps our theories should embrace this richness.

This suggests a radical proposition: that models that fail, and fail spectacularly, will best serve the enterprise of understanding choice. By this I mean models that make very clear predictions, for multiple dependent measures, that can be cleanly tested. Just as the demands on choice models have grown to include the effects of many factors unrelated to the value-maximizing appeal of the options, the predictions made by choice models need to include characteristics other than observed choice.

Given that the model proposed by Simonson et al. emphasizes comparisons, it seems fitting to suggest that observing comparisons might be an essential component of effective model development. Observation of specific comparisons can easily be accomplished through eye tracking (a technology increasingly common in behavioral research), web-based information monitoring, verbal reports, or verification tasks. For example, there are now many studies that examine what Weber and Johnson (2009; see Brownstein, 2003 for a review) call decision by distortion: the observation that attribute values are distorted in favor of the initial leader in a choice. In Willemsen, Böckenholt, and Johnson (2011), we argue that the current leader serves as a reference alternative, and that comparisons of that reference alternative to other options should be more common; that is, current leaders, in the terms of Simonson et al., become more salient comparisons. In that paper, we found that differences in attention not only predicted choices but also partially mediated the effect.

Another example reinforces the observation that predicting choices alone may not help us identify the underlying 'best' model. Brandstätter et al. developed a model, the priority heuristic (Brandstätter et al., 2006; see also Birnbaum, 2008; Brandstätter, Gigerenzer, & Hertwig, 2008; Rieger & Wang, 2008), in which comparisons play a critical role. They report that the model predicts the choices made by various groups of respondents and items as well as, or better than, comparable theories, including prospect theory. As a process model, the theory is particularly well specified: It suggests that people first compare the minimum gains of the alternatives; if one minimum gain exceeds the other by at least 10% of the maximum gain, search stops and that option is selected. Otherwise, choice proceeds by a series of similar sequential comparisons of other properties of the gambles. What is remarkable about the model is that it never multiplies probabilities by payoffs, yet predicts as well as models, such as prospect theory, that do.
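Because the heuristic is stated as an ordered sequence of comparisons, it can be sketched in a few lines. The sketch below is a simplified rendering for two-outcome gambles in the gain domain (the published heuristic also rounds the aspiration level to prominent numbers and handles losses and mixed gambles, all omitted here; the example gambles are invented for illustration):

```python
# Simplified sketch of the priority heuristic for two-outcome gambles
# in the gain domain (after Brandstätter et al., 2006).
# Each gamble is a tuple: (min_gain, p_of_min_gain, max_gain).

def priority_heuristic(g1, g2):
    min1, pmin1, max1 = g1
    min2, pmin2, max2 = g2
    # Aspiration level: one tenth of the largest maximum gain
    # (the published model additionally rounds to prominent numbers).
    aspiration = 0.1 * max(max1, max2)
    # Step 1: compare minimum gains; stop if the difference is large enough.
    if abs(min1 - min2) >= aspiration:
        return g1 if min1 > min2 else g2
    # Step 2: compare probabilities of the minimum gains
    # (a lower probability of the worst outcome is better).
    if abs(pmin1 - pmin2) >= 0.10:
        return g1 if pmin1 < pmin2 else g2
    # Step 3: otherwise, choose the gamble with the higher maximum gain.
    return g1 if max1 > max2 else g2

# Note that no step ever multiplies a probability by a payoff.
sure_thing = (100, 1.0, 100)   # 100 with certainty
risky      = (0, 0.2, 200)     # 200 with p = .8, else 0
print(priority_heuristic(sure_thing, risky))  # decided at step 1
```

The clarity of this processing sequence is exactly what makes the model testable with process data: each step dictates which cells of the information display should be compared, and in what order.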

Because it makes such clear predictions about search, the model is eminently testable: one can see whether the underlying cognitive processes correspond to the theory. Unfortunately, the model's predictions about what is examined do not correspond to what is observed in a computer-based information acquisition study (Johnson, Schulte-Mecklenbeck, & Willemsen, 2008).

Fig. 1 shows the critical result as an icon graph, a convenient way of displaying information acquisition data (see Willemsen & Johnson, 2010 for details and an introduction to process tracing analysis). The length of each arrow indicates the frequency of a comparison, the width of each box indicates how long that cell was examined, and the height of each box indicates how often it was examined. The box on the right is a legend showing the size of each unit. Note that while there are many transitions between the outcomes (the W's in the graph) and their probabilities (the P's), there are virtually none comparing the payoffs as predicted by the proposed heuristic. This result suggests that the specific form of processing assumed by the priority heuristic may not account for the data, and that a model that somehow weights the outcomes by the probabilities might be more useful. This process analysis thus closes the door on one class of models but opens another. The implications for the Simonson et al. proposal are straightforward: By observing the frequency of such transitions, we could directly observe the central construct of the model and directly estimate the latitude-of-acceptance curve.

Fig. 1
Comparisons in risky choice.

A similarly well-specified set of models, not yet having a strong influence on the marketing literature, are those termed, generically, stimulus-sampling models (examples include Johnson & Busemeyer, 2005; Ratcliff & McKoon, 2008; Roe, Busemeyer, & Townsend, 2001; Usher & McClelland, 2004). These models make very specific predictions about several characteristics of the choice process. Not only do they predict choice probabilities, but also the time required to make a choice, the distribution of attention, the probabilities of transitions among the characteristics of the options, and, most tellingly, some models (specifically see Krajbich & Rangel, 2011; Krajbich, Armel, & Rangel, 2010) suggest that the last acquisition should typically be on the chosen option. Such models are likely to generate inconsistent data quickly, and that inconsistent data can be used to modify these models.
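A minimal random-walk sketch shows why these models are so constraining: a single set of parameters (the drift, noise, and bound values below are illustrative, not estimated from data) jointly generates both choice probabilities and response times, so each measure can falsify the same parameterization.

```python
# Minimal random-walk sketch of a diffusion-style sampling model:
# noisy evidence accumulates toward one of two bounds, so the same
# parameters jointly predict choices AND response times.
import random

def simulate_trial(drift=0.1, noise=1.0, bound=10.0):
    """One trial: accumulate evidence until a bound is crossed.
    Returns the chosen option and the number of sampling steps."""
    evidence, steps = 0.0, 0
    while abs(evidence) < bound:
        evidence += drift + random.gauss(0.0, noise)
        steps += 1
    return ("A" if evidence > 0 else "B", steps)

random.seed(1)
trials = [simulate_trial() for _ in range(2000)]
p_a = sum(choice == "A" for choice, _ in trials) / len(trials)
mean_rt = sum(steps for _, steps in trials) / len(trials)
print(f"P(choose A) = {p_a:.2f}, mean steps to decide = {mean_rt:.1f}")
```

With a positive drift toward option A, the simulation produces both a choice probability above one half and a full distribution of decision times; a fitted model must match both at once, which is precisely what makes failure easy to detect.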


The very applicability and relevance of work in choice creates new challenges: The definition of a useful model is changing, from simply mapping attributes to choices to serving as an active partner in the design of choice engines. If we are to be relevant and useful, our models must move beyond predicting choice as a function of attributes; they must, taking steps consistent with the ideas of Simonson et al., explain how the non-attribute elements of the choice situation will affect choice through a generally applicable mechanism.

Imagine, for example, a firm or policy maker designing the web site that might be used to present alternative health insurance policies to buyers. Clearly, price and quality will affect choices, but so will many of the choices made in the site's design: How many options should be offered? How should they be sorted? Which attributes should be on the first page, and which available only after a click? Should the site precalculate expected costs? Should there be a default option? Should choices be presented as a hierarchy (first pick a deductible, then be presented with plans)? How should that hierarchy be organized (quality or cost first)? Should this display be the same for all decision-makers or customized? There have been some attempts to answer these questions, but they usually emphasize empirical answers, not choice models (Hanoch, Wood, Barnes, Liu, & Rice, 2011; Hibbard, Slovic, Peters, Finucane, & Tusler, 2001; Johnson, Baker, Hassin, Bajger, & Treur, 2012).

As a field, we have had significant success in demonstrating that all these things can have a great impact upon choice. The challenge, as stated by Simonson et al., is to develop a common framework that would lead to good advice.


The author thanks Elke Weber for helpful suggestions and comments.


[star]NIA grant R01AG027934-04S1 has supported preparation of this manuscript.


References

  • Alba JW, Marmorstein H. The effects of frequency knowledge on consumer decision making. The Journal of Consumer Research. 1987;14(1):14–25.
  • Anderson JR. Arguments concerning representations for mental imagery. Psychological Review. 1978;85(4):249–277.
  • Birnbaum MH. Evaluation of the priority heuristic as a descriptive model of risky decision making: Comment on Brandstätter, Gigerenzer, and Hertwig (2006). Psychological Review. 2008;115(1):253–262.
  • Brandstätter E, Gigerenzer G, Hertwig R. The priority heuristic: Making choices without trade-offs. Psychological Review. 2006;113(2):409–432.
  • Brandstätter E, Gigerenzer G, Hertwig R. Risky choice with heuristics: Reply to Birnbaum (2008), Johnson, Schulte-Mecklenbeck, and Willemsen (2008), and Rieger and Wang (2008). Psychological Review. 2008;115(1):281–289.
  • Brownstein AL. Biased predecision processing. Psychological Bulletin. 2003;129(4):545–568.
  • Carroll GD, Choi JJ, Laibson D, Madrian BC, Metrick A. Optimal defaults and active decisions. Quarterly Journal of Economics. 2009;124(4):1639–1674.
  • González-Vallejo C. Making trade-offs: A probabilistic and context-sensitive model of choice behavior. Psychological Review. 2002;109(1):137–154.
  • González-Vallejo C, Reid AA, Schiltz J. Context effects: The proportional difference model and the reflection of preference. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2003;29(5):942–953.
  • Hanoch Y, Wood S, Barnes A, Liu P, Rice T. Choosing the right Medicare prescription drug plan: The effect of age, strategy selection, and choice set size. Health Psychology. 2011;30(6):719–727.
  • Hibbard J, Slovic P, Peters E, Finucane M, Tusler M. Is the informed-choice policy approach appropriate for Medicare beneficiaries? Health Affairs. 2001;20:199–203.
  • Johnson EJ, Baker T, Hassin R, Bajger A, Treur G. Can consumers make affordable care affordable? The value of choice architecture. Working Paper, Columbia Business School, Columbia University; 2012.
  • Johnson EJ, Bellman S, Lohse GL, Mandel N. Designing marketplaces of the artificial: Four approaches to understanding consumer behavior in electronic environments. Journal of Interactive Marketing. 2005;20(1):21–33.
  • Johnson J, Busemeyer J. A dynamic, stochastic, computational model of preference reversal phenomena. Psychological Review. 2005;112(4):841–861.
  • Johnson EJ, Dellaert BGC, Fox C, Goldstein DG, Haubl G, Larrick RP, et al. Beyond nudges: Tools of a choice architecture. Marketing Letters. 2012;23(2):487–504.
  • Johnson EJ, Goldstein D. Do defaults save lives? Science. 2003;302:1338–1339.
  • Johnson EJ, Schulte-Mecklenbeck M, Willemsen MC. Process models deserve process data: Comment on Brandstätter, Gigerenzer, and Hertwig (2006). Psychological Review. 2008;115(1):263–273.
  • Kosslyn SM, Pomerantz JR. Imagery, propositions, and the form of internal representations. Cognitive Psychology. 1977;9(1):52–76.
  • Kosslyn SM, Thompson WL, Kim IJ, Alpert NM. Topographical representations of mental images in primary visual cortex. Nature. 1995;378(6556):496–498.
  • Krajbich I, Armel C, Rangel A. Visual fixations and the computation and comparison of value in simple choice. Nature Neuroscience. 2010;13(10):1292–1298.
  • Krajbich I, Rangel A. Multialternative drift-diffusion model predicts the relationship between visual fixations and choice in value-based decisions. Proceedings of the National Academy of Sciences of the United States of America. 2011;108(33):13852–13857.
  • Lynch J, Ariely D. Wine online: Search costs affect competition on price, quality, and distribution. Marketing Science. 2000;19(1):83–103.
  • Madrian BC, Shea DF. The power of suggestion: Inertia in 401(k) participation and savings behavior. Quarterly Journal of Economics. 2001;116(4):1149–1187.
  • Otter T, Allenby GM, Van Zandt T. An integrated model of discrete choice and response time. Journal of Marketing Research. 2008;45(5):593–607.
  • Pylyshyn ZW. What the mind's eye tells the mind's brain: A critique of mental imagery. Psychological Bulletin. 1973;80(1):1–24.
  • Ratcliff R, McKoon G. The diffusion decision model: Theory and data for two-choice decision tasks. Neural Computation. 2008;20(4):873–922.
  • Rieger MO, Wang M. What is behind the priority heuristic? A mathematical analysis and comment on Brandstätter, Gigerenzer, and Hertwig (2006). Psychological Review. 2008;115(1):274–280.
  • Roe RM, Busemeyer JR, Townsend JT. Multialternative decision field theory: A dynamic connectionist model of decision making. Psychological Review. 2001;108(2):370–392.
  • Russo JE, Dosher BA. Strategies for multiattribute binary choice. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1983;9:676–696.
  • Thaler R, Benartzi S. Save More Tomorrow (TM): Using behavioral economics to increase employee saving. Journal of Political Economy. 2004;112(S1):S164–S187.
  • Thaler RH, Sunstein CR. Nudge: Improving decisions about health, wealth, and happiness. Yale University Press; 2008.
  • Tversky A. Intransitivity of preferences. Psychological Review. 1969;76(1):31–48.
  • Usher M, McClelland JL. Loss aversion and inhibition in dynamical models of multialternative choice. Psychological Review. 2004;111(3):757–769.
  • Willemsen MC, Böckenholt U, Johnson EJ. Choice by value encoding and value construction: Processes of loss aversion. Journal of Experimental Psychology: General. 2011;140(3):303–324.
  • Willemsen MC, Johnson EJ. Visiting the decision factory: Observing cognition with MouselabWEB and other information acquisition methods. A Handbook of Process Tracing Methods for Decision Making. 2010:21–42.