Simonson et al. present an ambitious sketch of an integrative theory of context. Provoked by this thoughtful proposal, I discuss what the function of theories of choice should be in the coming decades. Traditionally, choice models and theory have attempted to predict choices as a function of the attributes of options. I argue that to be truly useful, they need to generate specific and quantitative predictions of the effect of the choice environment upon choice probability. To do this, we need to focus on rigorously modeling and measuring the underlying processes causing these effects, and I use the Simonson et al. proposal to provide some examples. I also present some examples from research in decision-making and decision neuroscience, and argue that models that fail, and fail spectacularly, are particularly useful. I close with a challenge: How would consumer researchers aid the design of real-world choice environments such as the health exchanges under the Patient Protection and Affordable Care Act?
At one time, the job of choice theories was simple: given a set of attributes, they needed only to predict what choices would be made. Now the demands on accounts of choice are much greater, for several reasons:
In this paper, Simonson et al. bravely take on part of this task, trying to unify a number of diverse results in the judgment and decision-making literature using a circumscribed yet psychologically plausible set of constructs. To do this, they make two major moves:
The first tactic, focusing on comparisons of options on attributes, puts them in good company, part of a long tradition of judgment and decision-making research. These comparison models historically date back at least to the additive difference model (Tversky, 1969) and its progeny, such as the majority of confirming dimensions heuristic (Russo & Dosher, 1983). More recent examples of similar ideas include the work of González-Vallejo (González-Vallejo, 2002; González-Vallejo, Reid, & Schiltz, 2003) and Brandstätter et al. (Brandstätter, Gigerenzer, & Hertwig, 2006).
While attribute-wise comparisons are potentially quite fruitful, and produce choices with increased cognitive ease, they face challenges of their own, and any choice theory needs to respond to both of these:
The second insight, that not all comparisons are salient, is very important and quite challenging: the task of identifying which of the information presented in a decision will actually be used is daunting and, as the authors observe, seems to be governed by both goal-driven (top-down) and data-driven (bottom-up) influences. The authors start this hard work by sketching a two-factor model, but it would be useful to take a parametric stand on both these issues, which would, of course, require a more formal model.
Given the demands presented by these new uses of choice models, how should we proceed? Let me say that the road will be particularly daunting if we find ourselves relying on the paradigm of manipulating independent variables and observing choices, a paradigm that has dominated much recent consumer research. It has been argued that models that depend only on inputs and choices are not well specified and not easily falsified (Otter, Allenby, & Van Zandt, 2008; Ratcliff & McKoon, 2008). Psychology offers a classic example: the debate concerning the nature of visual mental imagery. One side argued that visual images are represented by the same code as language (Pylyshyn, 1973), while the other argued that the representation actually depicts the elements of an image (Kosslyn & Pomerantz, 1977). Anderson (1978) showed that such debates are, in themselves, fruitless because there is an unrestricted tradeoff between the properties of a representation and the complexity of the accompanying processes.
Progress in these debates, when it occurs, usually comes from the introduction of new constraints in the form of new data about either the process or the representation. For example, showing that patterns of activation in the visual cortex corresponded to a pattern shown to respondents provided strong support for the image-representation view (Kosslyn, Thompson, Kim, & Alpert, 1995). By producing theories that make predictions for characteristics of the choice process other than the choice itself, we may produce models that are both more falsifiable and more easily distinguished from one another. Since our toolbox of possible measures has expanded markedly in the last two decades, perhaps our theories should embrace this richness.
This suggests a radical proposition: models that fail, and fail spectacularly, will best serve the enterprise of understanding choice. By this I mean models that make very clear predictions, for multiple dependent measures, that can be cleanly tested. Just as the demands on choice models have grown to include the effects of many factors unrelated to value maximization over the options, the predictions made by choice models need to include characteristics other than observed choice.
Given that the model proposed by Simonson et al. emphasizes comparisons, it might be fitting to suggest that observing comparisons is an essential component of effective model development. Observation of specific comparisons can easily be accomplished through eye tracking (a technology increasingly common in behavioral research), web-based information monitoring, verbal reports, or the use of a verification task. For example, there are now many studies that examine what Weber and Johnson call Decision by Distortion (Weber & Johnson, 2009; see Brownstein, 2003 for a review), the observation that attribute values are distorted in favor of the initial leader in a choice. In Willemsen, Böckenholt, and Johnson (2011), we argue that the current leader serves as a reference alternative, and that comparisons of that reference alternative to other options should be more common; that is, current leaders, in the terms of Simonson et al., will become more salient comparisons. In that paper, we found that differences in attention not only predicted choices, but also partially mediated the effect.
Another example reinforces the observation that predicting choices alone may not help us identify the underlying 'best' model. Brandstätter et al. developed a model, the priority heuristic (Brandstätter et al., 2006; see also Birnbaum, 2008; Brandstätter, Gigerenzer, & Hertwig, 2008; Rieger & Wang, 2008), in which comparisons play a critical role. They report that the model predicts the choices made by various groups of respondents and items as well as, or better than, comparable theories, including prospect theory. As a process model, the theory is particularly well specified: it suggests that people first compare the minimum gains of the alternatives; if one is 10% greater than the other, search stops and that option is selected. Otherwise, choice proceeds by a series of similar sequential comparisons of other properties of the gambles. What is remarkable about the model is that it does not multiply probabilities by payoffs, yet it predicts as well as models, such as prospect theory, that do.
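Because the heuristic is just a short sequence of discrete comparisons, it can be written down in a few lines. The sketch below (in Python; the function name and data format are my own) illustrates one common statement of the heuristic for two-outcome gain gambles, with stopping thresholds following Brandstätter et al.'s published aspiration levels (10% of the maximum gain for payoffs, 0.10 for probabilities); it is a simplified illustration, not the authors' full specification.

```python
def priority_heuristic(a, b):
    """Choose between two gain gambles, each given as a list of
    (payoff, probability) pairs. Simplified illustration of the
    priority heuristic's sequential comparisons; the full model
    also covers losses and mixed gambles."""
    min_a = min(p for p, _ in a)
    min_b = min(p for p, _ in b)
    max_gain = max(p for g in (a, b) for p, _ in g)
    # Step 1: compare minimum gains; stop if they differ by at least
    # 10% of the maximum gain (the paper's aspiration level).
    if abs(min_a - min_b) >= 0.10 * max_gain:
        return 'A' if min_a > min_b else 'B'
    # Step 2: compare the probabilities of the minimum gains; stop if
    # they differ by at least 0.10 (choose the lower probability).
    p_min_a = sum(pr for p, pr in a if p == min_a)
    p_min_b = sum(pr for p, pr in b if p == min_b)
    if abs(p_min_a - p_min_b) >= 0.10:
        return 'A' if p_min_a < p_min_b else 'B'
    # Step 3: fall back to comparing maximum gains.
    return 'A' if max(p for p, _ in a) > max(p for p, _ in b) else 'B'
```

Note that no probability is ever multiplied by a payoff: a sure 50 is chosen over a 50-50 chance at 100 simply because its minimum gain is larger.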
Because it makes such clear predictions about search, the model is eminently testable: we can see whether the underlying cognitive processes correspond to the theory. Unfortunately, the model's predictions about what is examined do not correspond to what is observed in a computer-based information acquisition study (Johnson, Schulte-Mecklenbeck, & Willemsen, 2008).
Fig. 1 shows the critical result as an icon graph, a convenient way of displaying information acquisition data (see Willemsen & Johnson, 2010 for details and an introduction to process-tracing analysis). The arrows indicate, by their length, the frequency of each comparison, while the width of each box indicates how long it was examined and the height indicates how often it was examined. The box on the right is a legend showing the size of each unit. Note that while there are many transitions between the outcomes (the W's in the graph) and their probabilities (the P's), there are virtually none comparing the payoffs, as predicted by the proposed heuristic. This result suggests that the specific form of processing assumed by the priority heuristic may not account for the data, and that a model that somehow weights the outcomes by the probabilities might be more useful. This process analysis thus closes the door on one class of models, but opens another. The implications for the Simonson et al. proposal are straightforward: by observing the frequency of such transitions, we could directly observe the central construct of the model and estimate, directly, the latitude-of-acceptance curve.
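Classifying transitions of this kind is mechanically simple. The sketch below (Python; the log format and labels are hypothetical, not those of the 2008 study) counts within-gamble payoff-probability transitions, the weighting-style pattern actually observed, against between-gamble payoff comparisons, the pattern the priority heuristic predicts should dominate.

```python
from collections import Counter

def transition_counts(acquisitions):
    """Classify consecutive acquisitions in a process-tracing log.
    Each acquisition is a (gamble, cell) pair, e.g. ('A', 'W') for a
    payoff cell of gamble A or ('A', 'P') for a probability cell."""
    counts = Counter()
    for (g1, t1), (g2, t2) in zip(acquisitions, acquisitions[1:]):
        if g1 == g2 and {t1, t2} == {'W', 'P'}:
            # Within-gamble payoff-probability transition:
            # the weighting-style pattern observed in the data.
            counts['within W-P'] += 1
        elif g1 != g2 and t1 == t2 == 'W':
            # Between-gamble payoff comparison: the pattern the
            # priority heuristic predicts.
            counts['between W-W'] += 1
        else:
            counts['other'] += 1
    return counts
```

Run on a full acquisition log, the ratio of the first two counts gives a direct process-level test of the heuristic's central claim.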
A similarly well-specified set of models, which has not yet had a strong influence on the marketing literature, is termed, generically, stimulus-sampling models (examples include Johnson & Busemeyer, 2005; Ratcliff & McKoon, 2008; Roe, Busemeyer, & Townsend, 2001; Usher & McClelland, 2004). These models make very specific predictions about several characteristics of the choice process. Not only do they predict choice probabilities, but also the time required to make a choice, the distribution of attention, and the probabilities of transitions among the characteristics of the options; most tellingly, some models (see specifically Krajbich & Rangel, 2011; Krajbich, Armel, & Rangel, 2010) suggest that the last acquisition should be to the eventually chosen option. Such models are likely to generate inconsistent data quickly, and that inconsistent data can be used to modify them.
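These models are also compact enough to simulate directly. The sketch below (Python; the function and its parameter values are my own illustrative choices, not any published model's specification) shows their defining feature: a single evidence-accumulation process that yields both a choice and a response time, two dependent measures against which a model can fail.

```python
import random

def sample_trial(drift=0.1, noise=1.0, threshold=10.0, seed=None):
    """One trial of a minimal sequential-sampling (random-walk) model:
    evidence for option A over option B accumulates with a constant
    drift plus Gaussian noise until either threshold is crossed.
    Parameters are illustrative, not fitted to data."""
    rng = random.Random(seed)
    evidence = 0.0
    steps = 0
    while abs(evidence) < threshold:
        evidence += drift + rng.gauss(0.0, noise)
        steps += 1
    # The model jointly predicts the choice and a response time
    # (the number of sampling steps), not the choice alone.
    return ('A' if evidence > 0 else 'B'), steps
```

Simulating many such trials produces joint distributions of choices and response times that can be compared against observed data, which is exactly what makes this class of models so cleanly falsifiable.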
The very applicability and relevance of work on choice creates new challenges: the definition of a useful model is changing, from one that simply maps attributes to choices to one that is an active partner in the design of choice engines. If we are to be relevant and useful, our models must move beyond predicting choice as a function of attributes and, taking steps consistent with the ideas of Simonson et al., explain how different non-attribute-related elements of the choice situation will affect choice through a generally applicable mechanism.
Imagine, for example, a firm or policy maker designing the web site that might be used to present alternative health insurance policies to buyers. Clearly, price and quality will affect choices, but so will many of the choices made in the site's design: How many options should be offered? How should they be sorted? Which attributes should be on the first page, and which available only after a click? Should the site precalculate expected costs? Should there be a default option? Should choices be presented as a hierarchy (first pick a deductible, then be presented with plans)? How should that hierarchy be organized (quality or cost first)? Should this display be the same for all decision-makers, or customized? There have been some attempts to answer these questions, but they usually emphasize empirical answers, not choice models (Hanoch, Wood, Barnes, Liu, & Rice, 2011; Hibbard, Slovic, Peters, Finucane, & Tusler, 2001; Johnson, Baker, Hassin, Bajger, & Treur, 2012).
As a field, we have had significant success in demonstrating that all these things can have a great impact upon choice. The challenge, as stated by Simonson et al., is to develop a common framework that would lead to good advice.
The author thanks Elke Weber for helpful suggestions and comments.
NIA grant R01AG027934-04S1 has supported preparation of this manuscript.