Q J Exp Psychol (Hove). Author manuscript; available in PMC 2010 July 22.

Published in final edited form as:

Q J Exp Psychol (Hove). 2010 May; 63(5): 863–891.

doi: 10.1080/17470210903091643

PMCID: PMC2908414

NIHMSID: NIHMS141879

Correspondence to: Rafal Bogacz, Department of Computer Science, University of Bristol, Bristol, BS8 1UB, UK, Tel. +44-117-954-5141, Fax. +44-117-954-5208, Email: R.Bogacz@bristol.ac.uk


In this paper we investigate tradeoffs between speed and accuracy that are produced by humans when confronted with a sequence of choices between two alternatives. We assume that the choice process is described by the drift diffusion model, in which the speed-accuracy tradeoff is primarily controlled by the value of the decision threshold. We test the hypothesis that participants choose the decision threshold that maximizes reward rate, defined as an average number of rewards per unit of time. In particular, we test four predictions derived on the basis of this hypothesis in two behavioural experiments. The data from all participants of our experiments provide support only for some of the predictions, and on average the participants are slower and more accurate than predicted by reward rate maximization. However, when we limit our analysis to subgroups of 30-50% of participants who earned the highest overall rewards, all the predictions are satisfied by the data. This suggests that a substantial subset of participants do select decision thresholds that maximize reward rate. We also discuss possible reasons why the remaining participants select thresholds higher than optimal, including the possibility that participants optimize a combination of reward rate and accuracy or that they compensate for the influence of timing uncertainty, or both.

During decision making in natural environments, as well as in experimental settings, humans and animals can select to be either fast or accurate. When accuracy is emphasized, their decisions are slower, but when speed is emphasized, they make more mistakes. This phenomenon is known as the speed-accuracy tradeoff (Franks, Dornhaus, Fitzsimmons, & Stevens, 2003; Pachella, 1974; Wickelgren, 1977).

The existence of the speed-accuracy tradeoff can be explained within the framework of sequential sampling models of decision making (Busemeyer & Townsend, 1993; Laming, 1968; Ratcliff, 1978; Stone, 1960; Usher & McClelland, 2001; Vickers, 1970). In this paper we focus on one of these models, the drift diffusion model (DDM), which is a continuous version of the Sequential Probability Ratio Test (Wald & Wolfowitz, 1948), and has been shown to fit behavioural data from human choice tasks (Ratcliff, 2006; Ratcliff, Gomez, & McKoon, 2004; Ratcliff & Rouder, 2000; Ratcliff & Smith, 2004; Ratcliff, Thapar, & McKoon, 2003). The DDM describes choice between two alternatives. It makes the following assumptions: (i) The sensory evidence supporting the alternatives is noisy. (ii) During the decision process the difference between the evidence supporting the two alternatives is integrated over time. (iii) When this integrated difference reaches a certain positive or negative threshold value, the choice is made in favour of the corresponding alternative. In the DDM, the height of the threshold controls the speed-accuracy tradeoff: If the threshold is lower, it can be reached more quickly, so decisions are faster, but they are less accurate because they are based on a smaller amount of noisy evidence. Conversely, if the threshold is higher, decisions are slower, but they are also more accurate because they are based on the integration of more evidence.
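To make assumptions (ii) and (iii) concrete, the integration-to-threshold process can be simulated directly. The sketch below uses a simple Euler discretization of the diffusion with illustrative (not fitted) parameter values, and shows the speed-accuracy tradeoff: lowering the threshold speeds decisions but raises the error rate.

```python
import math
import random

def simulate_ddm(A, c, z, rng, dt=0.005):
    """Simulate one trial of the DDM by Euler steps; return (correct, decision_time)."""
    x, t = 0.0, 0.0
    while abs(x) < z:
        # dx = A*dt + c*dW, where dW ~ Normal(0, dt)
        x += A * dt + c * rng.gauss(0.0, math.sqrt(dt))
        t += dt
    return x >= z, t  # for A > 0 the upper threshold is the correct one

def run(A, c, z, n=1000, seed=1):
    """Monte-Carlo estimates of ER and mean DT for a given threshold z."""
    rng = random.Random(seed)
    trials = [simulate_ddm(A, c, z, rng) for _ in range(n)]
    er = sum(1 for ok, _ in trials if not ok) / n
    mean_dt = sum(t for _, t in trials) / n
    return er, mean_dt

# A lower threshold gives faster but less accurate decisions.
er_low, dt_low = run(A=1.0, c=1.0, z=0.5)
er_high, dt_high = run(A=1.0, c=1.0, z=1.5)
```

The discretization step `dt` and the parameter values here are arbitrary choices for illustration; finer steps approximate the continuous process more closely.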

However, an important question remains unanswered: What decision thresholds do people select, especially when they are not explicitly told to emphasize speed or accuracy, or are encouraged to favor some ill-defined combination of both (as in many experimental tasks)? A number of theories addressing this question have been suggested (Busemeyer & Rapoport, 1988; Edwards, 1965; Mozer, Colagrosso, & Huber, 2002; Myung & Busemeyer, 1989; Rapoport & Burkheimer, 1971). Recently, Gold and Shadlen (2002) proposed that participants select the threshold that maximizes the reward rate (RR), defined as an average number of rewards per unit of time. How the RR can be maximized depends on the paradigm.

Gold and Shadlen (2002) considered the following experimental paradigm: a participant is presented with a choice stimulus and is free to indicate his/her choice at any time. After the response, there is a fixed interval *D* between the response and the onset of the next trial. If the participant's choice is correct, he/she receives a reward, and if it is an error, the response-stimulus interval *D* is increased by an additional penalty delay *D*_{p}. This paradigm is often used in animal studies of decision making, as it captures some key aspects of decision making in natural environments (Gold & Shadlen, 2002).

In the above paradigm, RR depends on threshold height. If the threshold is too low, the participant makes fewer correct choices and receives fewer rewards; if the threshold is too high, response durations are so long that the participant receives fewer rewards per unit of time (even if most choices are correct). As shown by Bogacz et al. (2006), there exists a unique threshold for the DDM that maximizes RR. The ability to select this optimal threshold presumably conveys an evolutionary advantage, providing an animal more rewards than its competitors.

In our previous theoretical work (Bogacz et al., 2006), we have shown how the optimal threshold depends on task parameters, such as choice difficulty and experimental delays *D*, *D _{p}*. We also made experimental predictions regarding error rates (ER) and reaction times (RT) that must hold if participants indeed select the threshold maximizing RR. This article describes experiments that test the following four predictions:

- The optimal threshold is higher for longer *D*, and hence the participants should have lower ER and longer RT in blocks of trials with longer *D*.
- The optimal threshold depends only on *D*+*D*_{p}, rather than on *D* and *D*_{p} individually, and hence ER and mean RT should be the same on blocks of trials with equal *D*+*D*_{p}, even if *D* and *D*_{p} themselves differ between blocks.
- The values of decision thresholds estimated by fitting the DDM to the distributions of RT should match the optimal values that maximize RR.
- The mean (normalized) RT, plotted as a function of ER, should follow a particular relationship defined by the optimal performance curve (described in detail in the next section).

In Bogacz et al. (2006), we also derived optimal thresholds under the assumptions that participants not only maximize RR, but also combinations of RR and accuracy. Here, we compare predictions of these analyses with experimental data.

This article is organized as follows. In the next section we review the DDM and the behavioural predictions following the assumptions that the participants select thresholds maximizing RR or combinations of RR and accuracy. Then we present two experiments and analyze their results based on all participants, showing that they provide partial support for the theoretical predictions. Participants, on average, set their thresholds higher than predicted by RR maximization. However, for the 30-50% of participants who achieved the highest reward over the entire experiment, all theoretical predictions are confirmed by the data. We also present data describing how quickly participants learn their thresholds, and then conclude with a general discussion of the experimental results and the relation of our work to other studies. Some initial results of this work were reported by Holmes et al. (2005).

We begin by briefly describing two versions of the DDM: the *pure* DDM, which captures the main features of the decision process and is easier to analyse mathematically, and the *extended* DDM, which fits more details of behavioural data from choice tasks (for more complete reviews see Bogacz et al., 2006; Ratcliff & Smith, 2004). All symbols used are listed in Table 1 for reference.

Let *x*(*t*) denote the difference between evidence supporting the first and the second alternatives accumulated until time *t*. The pure DDM assumes that at the beginning of the decision process there is no bias towards either alternative, so *x*(0)=0, and that when the signal appears, *x*(*t*) is integrated according to the following equation (Ratcliff, 1978):

$$\mathit{\text{dx}}=\mathit{\text{Adt}}+\mathit{\text{cdW}}.$$

(1)

In Equation 1, *dx* denotes the change in *x* during a small time interval *dt*. This change in *x* includes two parts: constant drift *Adt* representing the average increase in *x* during interval *dt*, and noise *cdW* which has a normal distribution with mean 0 and variance *c*^{2}*dt*, reflecting the assumption that sensory evidence, internal processing, or both are noisy. The sign of *A* represents which alternative is correct. For simplicity we consider the case *A*>0 for which the first alternative is correct. The ratio of the parameters *A* and *c* represents how easy the task is (i.e., the signal-to-noise ratio).

The DDM assumes that as soon as the value of *x* reaches a positive threshold *z* or a negative threshold −*z*, the choice is made in favour of the corresponding alternative. A trial is considered correct if the threshold corresponding to the correct alternative (i.e., the upper threshold for *A*>0) is reached, and an error when the other threshold is reached (due to noise). For the pure DDM, the expected value of ER is given by (Ratcliff, 1978):

$$\text{ER}=\frac{1}{1+{e}^{\frac{2Az}{{c}^{2}}}}.$$

(2)

The mean decision time (DT), defined as the mean time of integration before reaching a decision threshold, is given by (Ratcliff, 1978):

$$\text{DT}=\frac{z}{A}\text{tanh}\left(\frac{Az}{{c}^{2}}\right).$$

(3)

The DDM assumes that the RT comprises the DT and an additional interval *T*_{0} due to non-decision processes (e.g., visual and motor), i.e.,

$$\text{RT}=\text{DT}+{T}_{0}.$$

(4)

The pure DDM assumes that *T*_{0} does not differ between trials, although it may differ among subjects. It is useful to note that if the parameters *A*, *c*, and *z* of the pure DDM are all scaled by the same constant, ER and DT do not change. Thus, instead of considering these parameters individually, it is simpler to consider their ratios. For simplicity of calculations we consider the following new parameters: the normalized threshold *z̃* (which corresponds to the DT in the absence of noise) and the squared signal-to-noise ratio *ã*:

$$\stackrel{\sim}{z}=\frac{z}{A},\phantom{\rule{0.3em}{0ex}}\text{and}\phantom{\rule{0.5em}{0ex}}\stackrel{\sim}{a}={\left(\frac{A}{c}\right)}^{2}.$$

(5)

For these new parameters^{1} Equations 2 and 3 simplify to (Bogacz et al., 2006):

$$\text{ER}=\frac{1}{1+{e}^{2\stackrel{\sim}{a}\stackrel{\sim}{z}}},\text{and}\phantom{\rule{0.5em}{0ex}}\text{DT}\phantom{\rule{0.5em}{0ex}}=\text{}\stackrel{\sim}{z}\phantom{\rule{0.3em}{0ex}}\text{tanh}\phantom{\rule{0.1em}{0ex}}(\stackrel{\sim}{a}\stackrel{\sim}{z}).$$

(6)
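Equation 6 is straightforward to evaluate. The snippet below (an illustrative sketch with arbitrary parameter values) computes ER and DT from the normalized parameters, and also checks the scale-invariance noted above: multiplying *A*, *c*, and *z* by a common constant leaves *z̃* and *ã*, and hence ER and DT, unchanged.

```python
import math

def er_dt(z_norm, a_norm):
    """ER and DT of the pure DDM from Equation 6, given the normalized
    threshold z_norm = z/A and squared signal-to-noise ratio a_norm = (A/c)**2."""
    er = 1.0 / (1.0 + math.exp(2.0 * a_norm * z_norm))
    dt = z_norm * math.tanh(a_norm * z_norm)
    return er, dt

# Scale invariance: scaling A, c and z by the same constant k leaves
# z/A and (A/c)**2 unchanged, hence also ER and DT.
A, c, z, k = 1.0, 1.0, 0.8, 3.0
er1, dt1 = er_dt(z / A, (A / c) ** 2)
er2, dt2 = er_dt((k * z) / (k * A), ((k * A) / (k * c)) ** 2)
assert abs(er1 - er2) < 1e-12 and abs(dt1 - dt2) < 1e-12
```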

The extended DDM differs from the pure model in three assumptions (Ratcliff & Smith, 2004). First, in the extended model *T*_{0} may differ between trials, and it is assumed to come from a uniform distribution with range [*T*_{0}−*s*_{t}, *T*_{0}+*s*_{t}]. Second, the starting point of integration *x*(0) may also differ between trials, and is assumed to come from a uniform distribution with range [−*s*_{x}, *s*_{x}].

Third, in the extended model the drift is not the same from trial to trial, but instead is chosen randomly on each trial from a normal distribution with mean *A* and standard deviation *s _{A}*. Trial-to-trial drift variability may reflect differences in difficulty or attention between trials. The last two forms of variability allow the extended DDM to account for the differences between RT on correct and error trials (Ratcliff & Rouder, 2000).

For the sequential choice task described in the Introduction, RR is given by the ratio of the fraction of correct trials to the average duration of a trial (Gold & Shadlen, 2002):

$$\text{RR}=\frac{1-\text{ER}}{\text{DT}+{T}_{0}+D+\text{ER}\cdot {D}_{p}}.$$

(7)

Let us first examine the predictions made by RR maximization, when it is assumed that the pure DDM provides a sufficient approximation of the decision process (we will return to the extended model below). To find the threshold maximizing RR, we substitute Equations 6 into 7, calculate the derivative of RR with respect to *z̃*, and find that, for this derivative to vanish, the normalized threshold must satisfy:

$${e}^{2\stackrel{\sim}{z}\stackrel{\sim}{a}}-1=2\stackrel{\sim}{a}\left(D+{D}_{p}+{T}_{0}-\stackrel{\sim}{z}\right).$$

(8)
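Equation 8 has no closed-form solution, but its left side increases and its right side decreases with *z̃*, so the root can be found by bisection. The sketch below (with arbitrary illustrative parameters) solves Equation 8 and checks numerically that the resulting threshold indeed maximizes the reward rate of Equation 7.

```python
import math

def reward_rate(z, a, D, Dp, T0):
    """Reward rate (Equation 7) for the pure DDM, with ER and DT from Equation 6."""
    er = 1.0 / (1.0 + math.exp(2.0 * a * z))
    dt = z * math.tanh(a * z)
    return (1.0 - er) / (dt + T0 + D + er * Dp)

def optimal_threshold(a, D, Dp, T0):
    """Solve Equation 8, exp(2*a*z) - 1 = 2*a*(D + Dp + T0 - z), by bisection.
    The left side minus the right side is increasing in z, so the root is unique."""
    lo, hi = 0.0, D + Dp + T0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        g = math.exp(2.0 * a * mid) - 1.0 - 2.0 * a * (D + Dp + T0 - mid)
        if g < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative parameter values (not fitted to any participant).
a, D, Dp, T0 = 1.0, 1.0, 0.5, 0.3
z_opt = optimal_threshold(a, D, Dp, T0)
# The reward rate at z_opt is at least as high as at nearby thresholds.
assert all(reward_rate(z_opt, a, D, Dp, T0) >= reward_rate(z, a, D, Dp, T0)
           for z in [z_opt * f for f in (0.5, 0.8, 1.2, 2.0)])
```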

We refer to the solution of this transcendental equation as the *optimal* normalized threshold and denote it by *z̃*_{o}. Although it does not admit an explicit solution in terms of elementary functions, Equation 8 has a unique solution (Bogacz et al., 2006), which forms the basis for the predictions outlined above; each prediction is labelled below to remind the reader of its content.

The optimal threshold increases as *D* lengthens. It follows intuitively that, as opportunities to receive reward become less frequent, the accuracy of each choice becomes more important. This may also be shown formally^{2}.

The optimal threshold depends only on *D*+*D*_{p}, rather than on *D* and *D*_{p} individually. Indeed, in Equation 8 the delays enter only through the total delay:

$${D}_{\mathit{\text{total}}}=D+{D}_{p}+{T}_{0}.$$

(9)

The optimal threshold *z̃*_{o} satisfying Equation 8 can be found numerically (since it is known that there is only one) and directly compared against thresholds chosen by a participant. In order to perform such a comparison for experimental data collected for given delays *D* and *D*_{p}, the remaining quantities in Equation 8 (*ã* and *T*_{0}), as well as the participant's own normalized threshold *z̃*, must be estimated by fitting the DDM to the behavioural data (as described for Experiment 1 below).

As we show in Appendix A, Equation 8 can be used to find the relationship between ER and DT that holds under the optimal threshold, to which we refer as the *optimal performance curve* (Bogacz et al., 2006). This relationship has the following form:

$$\frac{\text{DT}}{{D}_{\mathit{\text{total}}}}=f(\text{ER}).$$

(10)

The left side of the above equation expresses the ratio of the time in the trial used on decision processes to the maximum interval between the end of one decision process and the start of the next, while the right side is a function only of ER. Hence, the equation describes the relationship between ER and DT normalized as a fraction of *D*_{total}. This relationship contains only behavioural observables (DT and ER) and the total delay *D*_{total}, and does not depend on the signal-to-noise ratio *ã*.

Figure 1. Optimal performance curves. Horizontal axes show the error rate and vertical axes show the decision time (DT) normalized by the total delay *D*_{total}. The thick line (identical in both panels) is the optimal performance curve for the reward rate (RR). The thin lines show optimal performance curves for the criteria RA (a) and RR_{m} (b) for different values of the weight *q*.

The above relationship should be satisfied for all values of the signal-to-noise ratio *ã* and for all values of the total delay *D*_{total}. Thus the left end of the curve corresponds to very easy tasks, on which the participants should be both fast and accurate. The right end of the curve corresponds to tasks which are so difficult that the strategy maximizing RR is to guess without integrating any evidence at all. The optimal performance curve predicts that the longest normalized DT should be observed for an ER of around 18%, at which point DT should be equal to approximately 20% of *D*_{total}.
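This shape can be reproduced numerically: sweeping the signal-to-noise ratio *ã* from very easy to very hard tasks while solving Equation 8 for the optimal threshold traces out the optimal performance curve point by point. In the sketch below (with an arbitrary fixed *D*_{total}, since the normalized curve does not depend on it), the peak falls near ER ≈ 18% with DT ≈ 20% of *D*_{total}.

```python
import math

def optimal_threshold(a, d_total):
    """Solve Equation 8, exp(2*a*z) - 1 = 2*a*(d_total - z), by bisection."""
    lo, hi = 0.0, d_total
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.exp(2.0 * a * mid) - 1.0 < 2.0 * a * (d_total - mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def opc_point(a, d_total=2.0):
    """One point (ER, DT / D_total) of the optimal performance curve."""
    z = optimal_threshold(a, d_total)
    er = 1.0 / (1.0 + math.exp(2.0 * a * z))
    return er, z * math.tanh(a * z) / d_total

# Sweep the squared signal-to-noise ratio over several orders of magnitude.
points = [opc_point(10 ** (k / 20.0)) for k in range(-60, 41)]
er_peak, dt_peak = max(points, key=lambda p: p[1])
```

Easy tasks (large *ã*) land on the left end of the curve with low ER and short DT; hard tasks (small *ã*) approach ER = 50% with DT near zero, the guessing regime described above.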

While the pure DDM is more tractable to analysis than the extended DDM, as noted above the latter has been used to account for a wider range of empirical phenomena. Therefore, we also consider whether the four predictions described above hold if it is assumed that the decision process conforms to the extended DDM.

Numerical explorations of the extended DDM indicate that, as in the pure model, the optimal threshold increases as the delay *D* is lengthened.

Bogacz et al. (2006) have shown that, as for the pure DDM, the optimal threshold depends on *D*_{total} (rather than on *D* and *D*_{p} individually), so prediction *z(D*+*D*_{p}*)* carries over to the extended model.

The optimal threshold for the extended DDM, though different than for the pure DDM, can also be found numerically (by finding the threshold maximizing Equation 7 ^{3}), yielding alternative versions of prediction *z*=*z _{o}*.

Unlike for the pure DDM, an optimal performance curve independent of model parameters cannot be defined for the extended DDM. This is because the relationship between DT and ER under the optimal threshold depends on the variabilities *s*_{x}, *s*_{t}, and *s*_{A}, which may differ between participants.

It has been suggested that when choosing the decision threshold, participants optimize not only RR but also accuracy (Maddox & Bohil, 1998). We review two criteria in which accuracy is explicitly included (Bogacz et al., 2006). In both we assume for simplicity that *D*_{p}=0. The first criterion is a weighted sum of RR and accuracy, which may be written as:

$$\text{RA}=\text{RR}-\frac{q}{{D}_{\mathit{\text{total}}}}\text{ER},$$

(11)

where *q* denotes the weight accorded to accuracy, and the second term is normalized by *D*_{total} to keep the units [1/time] consistent. The second criterion assumes that each error incurs a penalty weighted by *q*, giving a modified reward rate:

$${\text{RR}}_{\text{m}}=\frac{(1-\text{ER})-q\text{ER}}{\text{DT}+{D}_{\mathit{\text{total}}}}.$$

(12)

Assuming that the decision process is well approximated by the pure DDM, optimal performance curves can be derived describing the relationships between DT and ER for optimizing the threshold for the RA and RR_{m} criteria (see Appendix A). These have the following form:

$$\frac{\text{DT}}{{D}_{\mathit{\text{total}}}}=f(\text{ER},q).$$

(13)

In both cases the right hand sides contain the additional weight parameter *q*, hence there exist families of optimal performance curves for different values of *q*. As *q* approaches 0, optimal performance curves for RA and RR_{m} converge to the optimal performance curve for RR, since in this case both criteria simplify to RR. These families are shown in Figures 1a and 1b. They are similar in that, for both criteria, DT for the optimal threshold is greater than for optimizing RR (this reflects the emphasis on accuracy). However, note also that these criteria differ in their predictions: For RA the value of *q* does not influence the ER corresponding to maximum DT (i.e., the position of the peak), while for RR_{m} increasing *q* moves the peak to the right. Furthermore, as *q* increases, the peak of the curve for RA becomes relatively narrower than for RR_{m}.
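The qualitative effect of the accuracy weight can be checked numerically: for any positive *q*, the threshold maximizing RA exceeds the RR-maximizing threshold, so the corresponding DT is longer. The grid-search sketch below uses illustrative parameter values and *D*_{p}=0, as assumed above.

```python
import math

def er_dt(z, a):
    """Pure-DDM error rate and decision time (Equation 6)."""
    er = 1.0 / (1.0 + math.exp(2.0 * a * z))
    return er, z * math.tanh(a * z)

def ra(z, a, d_total, q):
    """Weighted criterion RA = RR - (q / D_total) * ER (Equation 11, D_p = 0)."""
    er, dt = er_dt(z, a)
    return (1.0 - er) / (dt + d_total) - q / d_total * er

def best_threshold(a, d_total, q, grid=10000, z_max=5.0):
    """Maximize RA over a fine grid of candidate thresholds (numeric sketch)."""
    return max((i * z_max / grid for i in range(1, grid + 1)),
               key=lambda z: ra(z, a, d_total, q))

a, d_total = 1.0, 2.0
z_rr = best_threshold(a, d_total, q=0.0)   # q = 0 reduces RA to RR
z_ra = best_threshold(a, d_total, q=0.5)   # extra weight on accuracy
# Weighting accuracy pushes the optimal threshold, and hence DT, upward.
assert z_ra > z_rr
assert er_dt(z_ra, a)[1] > er_dt(z_rr, a)[1]
```

This reproduces the pattern visible in Figure 1: the accuracy-weighted curves lie above the RR curve, reflecting longer decision times for the same error rate.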

We note that the free parameter *q* in the criterion RA (or RR_{m}) can be adjusted so that *any single* combination of ER and DT (in a certain range) lies on the corresponding optimal performance curve. It is, however, not true that arbitrary *multiple combinations* of ER and DT must all lie on one such curve. Each criterion and choice of *q* predicts a curve with a particular shape, as shown in Figure 1, and multiple (ER, DT) data points may or may not follow this shape (e.g., if DT decreased monotonically across conditions as ER goes from 0 to 0.5, no curve could match the data). Hence these criteria must be assessed by comparing with ERs and DTs from multiple conditions and/or multiple participants, under the assumption that the participant(s) maximize the same criterion, with the same *q*, in all the conditions.

We present the results of two experiments designed to test whether human decision makers set their thresholds in an optimal manner. In both experiments participants performed sequences of choices between two alternatives, but the experiments differed in the stimuli used. In the first experiment participants discriminated the direction of movement of dots (stimuli often used in decision studies in monkeys, e.g., by Shadlen and Newsome (2001)). In the second experiment participants discriminated whether the fraction of an array occupied by stars was greater or less than 50% (these stimuli were previously used by Ratcliff et al. (1999)). Both experiments were approved by the Institutional Review Panel for Human Subjects of Princeton University. Below we describe aspects common to both experiments; the Methods sections of Experiments 1 and 2 describe only aspects specific to the individual experiments^{5}.

Participants were adults recruited via announcements posted around the Princeton University campus, predominantly undergraduate and graduate students. They were paid 1 cent for each correct choice. To further increase motivation, participants were informed that the one who earned the most overall (in each group of 20 participants) would receive an additional prize of $100 at the end of the experiment. All participants gave written consent for participation.

Participants were instructed to gain as many points as possible on a computerized two-alternative forced-choice task by making correct choices. In each trial, a stimulus was presented and remained visible until the participant responded by pressing a corresponding key. After each correct response, a short beep informed the participant that the response was correct and a point had been scored. No feedback was given following incorrect responses. After each response there was a delay *D* before presentation of the next stimulus (*D* was kept constant within each block, but varied across blocks). On some blocks (see below) an additional penalty delay *D*_{p} was imposed after error responses.

The length of a block of trials was limited by fixing overall block duration – rather than by fixing the number of trials completed within it – in order to enforce the importance of the speed-accuracy tradeoff. Trials were blocked by delay condition. There were four delay conditions: (1) *D*=0.5s; (2) *D*=1s; (3) *D*=2s; and (4) *D*=0.5s and *D*_{p}=1.5s (in the first three conditions *D*_{p}=0; note that conditions (3) and (4) have the same total delay *D*+*D*_{p}=2s).

After finishing the experiment, participants were asked to complete a questionnaire in which they rated (from 1 to 5) the difficulty of the experiment, the degree to which they were motivated by the one cent reward after each correct decision, and by the $100 prize. Participants were also asked if their strategies differed between blocks (answer yes or no), and if so to describe how their strategies differed.

Participants were 20 adults (9 males and 11 females; average age: 20 years).

We used the same stimuli that were used in other studies of decision making (e.g., Gold & Shadlen, 2003; Palmer, Huk, & Shadlen, 2005; Ratcliff & McKoon, 2008). The display was a field of randomly moving dots all of which appeared within a 5° circular aperture in the center of the screen. Dots were white squares 2 by 2 pixels (0.7° square) displayed against a black background, with a density of 16.7 dots/degree^{2}/s (6 dots per frame). On each trial, a fraction of the dots moved in a single direction over time, corresponding to that trial's correct direction, while the remaining dots were randomly repositioned over time. On each frame, 11% of the dots were independently selected as the coherently-moving dots, and were shifted 0.2° from their position every 40 ms (3 video frames), corresponding to a speed of 5 deg/sec (either leftward or rightward). The remaining dots were re-plotted in random positions on each frame. The display was generated in MATLAB on a Macintosh computer using the Psychophysics Toolbox extension (Brainard, 1997; Pelli, 1997) and software written by Joshua Gold.

Participants were asked to decide whether the prevailing motion of the dots was left or right, and had to indicate their responses by pressing “M” (rightward motion) or “Z” (leftward motion) on a standard keyboard (the mapping of keys to the right and left responses was not counterbalanced across participants). Participants were required to release the key after each response in order to initiate the next trial. The current score was displayed in the center of the screen during delay intervals.

The experiment consisted of 5 blocks, each lasting 7 minutes. One block of trials was run for each delay condition, except for condition *D*=2s, for which two blocks were run because a single 7-minute block yielded too few trials for analysis.

Predictions *z~D* and *z(D*+*D*_{p}*)* stated in the Introduction were tested directly by comparing ERs and RTs in the different conditions of Experiment 1 (see Figure 2). A one-way analysis of variance (ANOVA) on delay condition (with participant as a random effect) revealed significant differences between delay conditions in ERs (F(

Figure 2. Average error rate (ER) and reaction time (RT) in units of seconds for all twenty participants of Experiment 1 and all delay conditions. Delay conditions are indicated on x-axes – labels correspond to conditions (1) *D*=0.5s; (2) *D*=1s; (3) *D*=2s; **...**

In the first three conditions (*D*=0.5, *D*=1, *D*=2) there was a significant negative correlation between *D* and ER (after subtracting each participant's mean ER across conditions from their ER in each condition; r=−0.53, p<10^{-4}), and a significant positive correlation between *D* and RT (after subtracting each participant's mean RT; r=0.29, p=0.03). Since we know that the ER and RT of the DDM are monotonic functions of the decision threshold, we can infer from Figure 2 that, on average, in the first three conditions (*D*=0.5, *D*=1, *D*=2) the participants chose higher thresholds in the longer delay conditions. This pattern is qualitatively consistent with prediction *z~D*.

To investigate the prediction that participants should choose the same threshold in delay conditions 3 (*D*=2s) and 4 (*D*=0.5s, *D*_{p}=1.5s, so that *D*+*D*_{p}=2s in both), we compared ERs and RTs between these two conditions.

Testing predictions *z*=*z*_{o} and the optimal performance curve requires estimates of the parameters of the DDM for individual participants.

There are at least two tenable approaches to estimating the parameters of the DDM: (i) Fit a separate DDM to each participant and delay condition; or (ii) Assume that certain parameters (e.g., the non-decision part of reaction time, *T*_{0}) do not vary across conditions for a given participant, and fit the DDM with the constraint that these fixed parameters are constant across conditions. If the assumption of approach (ii) is correct, then this more constrained approach will give better estimates of parameters. Therefore, we first estimated the parameters of the pure DDM using the unconstrained method (i) and identified those parameters that did not differ systematically across conditions; we then treated these as fixed parameters using the more constrained method (ii), as described below.

For each participant and condition we used the unconstrained method to estimate the following parameters: non-decision time *T*_{0}, signal-to-noise ratio *ã*, and normalized decision threshold *z̃* (the threshold divided by the drift)^{6}.

Following Ratcliff and Tuerlinckx (2002), we characterize each RT distribution by its five quantiles 0.1, 0.3, 0.5, 0.7, 0.9, and denote the corresponding RT values (in units of seconds) by RT_{q}^{th} for the model and RT_{q}^{ex} for the experimental data.

The subplex optimization algorithm (Rowan, 1990) was used to find parameters minimizing the cost function describing the weighted difference between ERs and RT distributions of the model and from the experiment (Ratcliff & Tuerlinckx, 2002):

$$\mathit{\text{Cost}}={w}_{\text{ER}}{({\text{ER}}^{\mathit{\text{th}}}-{\text{ER}}^{\mathit{\text{ex}}})}^{2}+\sum _{q=1}^{5}{w}_{\text{RT},q}\phantom{\rule{0.1em}{0ex}}{({\text{RT}}_{q}^{\mathit{\text{th}}}-{\text{RT}}_{q}^{\mathit{\text{ex}}})}^{2}.$$

(14)

In the above equation, the *w*'s denote the weights of the fitted statistics. We choose the weight of a given statistic close to 1 / (the estimated variance of this experimental statistic), as proposed by Bogacz and Cohen (2004). In particular we take:

$${w}_{\text{ER}}=\frac{n}{{\text{ER}}^{\mathit{\text{av}}}\left(1-{\text{ER}}^{\mathit{\text{av}}}\right)},\phantom{\rule{0.5em}{0ex}}\text{and}\phantom{\rule{0.5em}{0ex}}{w}_{\text{RT},q}=\frac{n}{{\text{RT}}_{q}^{\mathit{\text{av}}}},$$

where ER^{av} and RT_{q}^{av} denote averages of the corresponding experimental statistics, and *n* denotes the number of trials.
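The structure of this weighted fit can be illustrated as follows. The sketch assumes the variance-based weight formulas given above, and all numbers are hypothetical; the paper minimizes this cost with the subplex algorithm rather than by hand.

```python
def quantiles(rts, probs=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Empirical RT quantiles at the five probabilities used in the fits."""
    xs = sorted(rts)
    return [xs[min(int(p * len(xs)), len(xs) - 1)] for p in probs]

def cost(er_th, rt_q_th, er_ex, rt_q_ex, n, er_av, rt_q_av):
    """Weighted cost of Equation 14, with weights w_ER = n/(ER_av*(1-ER_av))
    and w_RT,q = n/RT_q_av as described in the text (a sketch)."""
    c = n / (er_av * (1.0 - er_av)) * (er_th - er_ex) ** 2
    for q in range(5):
        c += n / rt_q_av[q] * (rt_q_th[q] - rt_q_ex[q]) ** 2
    return c

# Hypothetical data: 25 reaction times between 0.40 s and 1.12 s.
rt_ex = quantiles([0.40 + 0.03 * i for i in range(25)])
er_ex = 0.2
# Model statistics close to the data incur a lower cost than distant ones.
close = cost(0.21, [r + 0.01 for r in rt_ex], er_ex, rt_ex,
             n=200, er_av=er_ex, rt_q_av=rt_ex)
far = cost(0.35, [r + 0.20 for r in rt_ex], er_ex, rt_ex,
           n=200, er_av=er_ex, rt_q_av=rt_ex)
assert close < far
```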

Using the unconstrained method described above, we fitted the pure DDM to all conditions of all 20 participants. The ANOVA did not reveal significant differences in either *T*_{0} (F(*df*=3) = 0.7, p=0.55) or *ã* (F(*df*=3) = 0.92, p=0.44) between delay conditions. To further investigate the differences in *T*_{0} and *ã* between delay conditions, we performed paired t-tests between all pairs of conditions, and none of the tests were significant (p > 0.1 for all tests). Therefore, in the constrained estimation method we assumed that a given participant has the same *T*_{0} and *ã* for all conditions. Accordingly, for a given participant we estimated six parameters: non-decision time *T*_{0}, signal to noise ratio *ã*, and four normalized decision thresholds (one for each of the four delay conditions). We found parameters minimizing the following cost function (analogous to that of Equation 14).

$$\mathit{\text{Cost}}=\sum _{d=1}^{4}\left({w}_{\text{ER}}{({\text{ER}}_{d}^{th}-{\text{ER}}_{d}^{\mathit{\text{ex}}})}^{2}+\sum _{q=1}^{5}{w}_{\text{RT},q}{({\text{RT}}_{d,q}^{\mathit{\text{th}}}-{\text{RT}}_{d,q}^{\mathit{\text{ex}}})}^{2}\right).$$

(15)

In the above equation, ER and RT denote the error rates and reaction times, indexed additionally by the delay condition *d*. The parameters of the pure DDM found in this way were used in the analyses in the following sections.

To estimate parameters of the extended DDM from the data we used the DMAT toolbox (Vandekerckhove & Tuerlinckx, 2007) that finds parameters that maximize a (multinomial) likelihood function (Ratcliff & Tuerlinckx, 2002). In fitting the extended DDM we followed our treatment of the pure DDM by constraining all parameters except the decision threshold to be constant across conditions for any given participant.

Figure 3 compares the fits of the pure and the extended DDMs to data from a representative participant (see the caption for an explanation of quantile probability plots). This participant produced different RTs on error and correct trials (note that the open circles are in different positions in the left and right parts of the panels). As could be expected, this difference was captured by the extended DDM, but not by the pure DDM, which always produces the same RT distribution for error and correct trials (note that the lines in Figure 3a are symmetric). Nevertheless, the pure DDM was able to capture the speed-accuracy tradeoff produced by the participant: The lines in the right part of Figure 3a are increasing, indicating that as accuracy increases, RT also increases. Thus, the pure DDM fit this tradeoff quite well on correct trials (which constitute a great majority of trials).

Figure 3. Quantile probability plots showing the fits of the pure (a) and the extended (b) drift diffusion model (DDM) to behavioural data from a sample participant. Open circles correspond to the behavioural data. The horizontal axes indicate the probability of **...**

As could be expected, across the participants, there was a strong correlation between the parameter values estimated by fitting the pure DDM and the corresponding parameters estimated by fitting the extended DDM, as shown in Figure 4 (for *T*_{0}, r = 0.67; for *ã*, r = 0.61; for *z̃*, r = 0.92). Such strong correlations have been reported before (Wagenmakers, van der Maas, & Grasman, 2007). Nevertheless, Figure 4 also shows that there are systematic differences between parameters, namely the pure DDM underestimates *ã* and overestimates *z̃*, in agreement with observations of Wagenmakers et al. (2007)^{8}. Overall, however, our findings suggest that the simpler pure DDM is a useful tool in the analysis of the speed-accuracy tradeoff.

Prediction *z*=*z*_{o} is tested in Figure 5, which compares the values of participants' thresholds (estimated by fitting the DDM) with the optimal ones (found as described in the section Reward Rate Maximization), for the pure and extended DDM.

Comparison of participants' and optimal thresholds in Experiment 1. The left column of panels (a, c, e) compares estimates and predictions of the pure drift diffusion model (DDM), while the right column of panels (b, d, f) – of the extended DDM. **...**

Figures 5a and 5b show data for a sample participant, whose thresholds are very close to the optimal values. To quantify the similarity between the normalized participants' and optimal thresholds, we do not use Pearson's correlation, as it can give high values even if the mean of one set of thresholds is higher than that of the other (as long as there is a linear relationship between them). Thus, to test Prediction *z*=*z _{o}*, Appendix B defines an alternative correlation measure r_{1}.

Figures 5c and 5d show the data for all 20 participants. While not all participants set their thresholds close to the optimal values, there is nevertheless a strong correlation between normalized participants' and optimal thresholds: for the pure DDM, r_{1} = 0.44 (p < 10^{−5}), and for the extended DDM, r_{1} = 0.62 (p < 10^{−5})^{10}.

Figures 5e and 5f compare the participants' and optimal normalized thresholds for the four delay conditions, each averaged across all 20 participants. On average, participants appear to set their thresholds higher than the optimal values, an effect that is significant in the short delay conditions (*D*=0.5s and *D*=1s) for the pure DDM and in all conditions for the extended DDM.

Comparing the left and right columns of panels in Figure 5 suggests that fitting the pure and the extended DDM reveals very similar relationships between the participants' and optimal thresholds. This similarity can be traced to the strong correlation between those parameter values estimated by fitting the pure DDM and those estimated by fitting the extended DDM (Figure 4).

In summary, while the optimal threshold theory explains a significant proportion of the variance in participants' thresholds, on average they tend to set their thresholds to values higher than optimal for maximizing RR – especially in the short delay conditions. We will return to this observation below.

Participants were 60 adults (30 males and 30 females). In addition to the payments described in General Methods, participants were paid $8 for participation. Participants in Experiment 2 did not take part in Experiment 1.

Experiment 2 used the same stimuli that were used in a previous study of the DDM by Ratcliff et al. (1999). Participants were presented with a 10 × 10 grid in the upper left corner of a VGA monitor, subtending a visual angle of 4.30° horizontally and 7.20° vertically. Random cells within the grid were filled with asterisks; others were empty. On each trial, participants had to decide if the majority of locations in the grid were empty or filled with asterisks. The grid and asterisks appeared as light characters against a dark background, were presented with high brightness and contrast, and were clearly visible. The VGA monitors were driven by a PC, and the stimuli were displayed using the Psychophysics Toolbox extensions (Brainard, 1997; Pelli, 1997). The number of displayed asterisks was either 40 or 60 during blocks referred to as *easy*, and 47 or 53 during blocks referred to as *difficult*.
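
The stimulus logic is straightforward to reconstruct. The sketch below is an illustration only (not the original Psychophysics Toolbox code): it generates a 10 × 10 grid with a given number of asterisks placed at random cells, e.g. 53 for a "high" stimulus in a difficult block.

```python
import numpy as np

def make_grid_stimulus(n_asterisks, rng=None):
    """Return a 10x10 character grid with exactly n_asterisks cells
    filled with '*' at random positions; remaining cells are blank."""
    if rng is None:
        rng = np.random.default_rng()
    cells = np.array(['*'] * n_asterisks + [' '] * (100 - n_asterisks))
    rng.shuffle(cells)
    return cells.reshape(10, 10)

grid = make_grid_stimulus(53)               # 'difficult' block, majority filled
majority_filled = (grid == '*').sum() > 50  # the correct response is "high"
```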

Participants had to indicate their choices by pressing “M” or “Z” key on a standard keyboard, and the mapping of keys to the “low” and “high” responses was counterbalanced across participants. The score was continuously displayed in the center of the screen. In the first series of 20 participants, some participants held one of the buttons down continuously on some blocks of trials (which resulted in RT = 0, and ER close to 50%). To prevent such behavior, the remaining participants were required to release both buttons in order to initiate the next trial.

The experiment consisted of 10 blocks lasting 4 minutes each. For each delay condition there was an easy block and a difficult block, except for condition *D*=2s for which there were four blocks (two easy and two difficult) to collect a sufficient number of trials for analysis.

A two-way ANOVA (delay × difficulty) revealed that ER was significantly influenced by difficulty (F(*df*=1) = 103, p < 10^{-4}) and delay (F(*df*=3) = 3.79, p = 0.01), but not by their interaction (F(*df*=3) = 0.67, p = 0.55). Similarly, RT was significantly influenced by both difficulty (F(*df*=1) = 870, p < 10^{-4}) and delay (F(*df*=3) = 9.01, p < 10^{-4}), but not by their interaction (F(*df*=3) = 0.52, p = 0.62). Since there was no interaction between difficulty and delay conditions, in order to visualize the overall effect of delays, Figure 6 shows ER and RT averaged across difficulty conditions. The behavioral data from Experiment 2 follow the same pattern as in Experiment 1.

In the first three conditions (*D*=0.5, *D*=1, *D*=2), longer delays were associated with lower ER and higher RT (the correlation between *D* and participants' ER, after subtraction of the mean ER for a given participant and difficulty condition, was r=−0.32, p<10^{-4}; the correlation between *D* and participants' RT, after subtraction of the mean RT for a given participant and difficulty condition, was r=0.31, p<10^{-4}). Hence, in the first three conditions (*D*=0.5, *D*=1, *D*=2), the longer the delay, the higher the threshold participants chose, in agreement with prediction *z~D*.

To test the prediction that participants should choose the same threshold in delay condition 3 (*D*=2s) and 4 (*D*=0.5, *D _{p}*=1.5s), we compared the ER and RT between these conditions. ER did not differ significantly (t(

We only fit the pure DDM to the data from Experiment 2, because the data were too few to constrain the extended DDM (which involves three more parameters); there were fewer trials per condition in Experiment 2 than in Experiment 1 (4 minutes instead of 7 minutes). More importantly, many participants made very few or no errors in the easy conditions of Experiment 2, and information on the RT distribution on error trials is necessary to accurately estimate the parameters of the extended DDM. We note, however, that the pure and extended DDM produced highly correlated parameter estimates (Figure 4) and provided similar information on the relationship between participants' and optimal thresholds (Figure 5). Hence we feel it is worthwhile to use the pure DDM in the analysis of threshold setting in Experiment 2.

The parameters of the pure DDM were estimated using the same method as in Experiment 1. Namely, for every participant and difficulty condition we estimated: *T*_{0}, *ã* and four normalized decision thresholds (different values of both *T*_{0} and *ã* were estimated for each level of difficulty since these parameters differed significantly between difficulty conditions).

Figures 7a and 7b compare the estimated values of participants' normalized thresholds with the optimal ones. The correlation between the participants' and the optimal normalized thresholds is r_{1} = 0.67 (p < 10^{−5}) in the easy condition, and r_{1} = 0.25 (p = 10^{−4}) in the difficult condition.

Comparison of participants' and optimal thresholds in Experiment 2. Left panels (a and c) show findings for the easy condition, right panels (b and d) for the difficult condition. In panels a and b, participants' normalized thresholds are shown on horizontal **...**

Figures 7c and 7d compare the participants' and the optimal normalized thresholds averaged across all 60 participants. Again, participants tend to set their thresholds to higher-than-optimal values. The difference between participants' and optimal thresholds is significant in all conditions, except the easy condition in delay condition 4 (*D* = 0.5, *D _{p}* = 1.5).

Overall, the optimal threshold theory explains a significant proportion of variance in participants' thresholds, but participants consistently set their thresholds higher than the optimal values.

The match of empirical results to theory in Experiments 1 and 2 was short of perfect. Here, we explore factors that may have influenced how closely participants set their thresholds to the optimal value. We focus on the results of Experiment 2, for which there were enough participants to conduct reliable analyses.

To quantify the match between participants' thresholds and the theory, for each participant we compute a *theory match error*, defined as the Euclidean distance between the participant's thresholds in the eight conditions of Experiment 2 and the optimal thresholds for those conditions:

$$E=\sqrt{\sum _{i=1}^{8}{({\stackrel{\sim}{z}}_{i}-{\stackrel{\sim}{z}}_{o,i})}^{2}}.$$

(16)
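
Equation 16 is straightforward to compute; a minimal sketch (with hypothetical threshold values chosen purely for illustration):

```python
import numpy as np

def theory_match_error(z_participant, z_optimal):
    """Theory match error (Eq. 16): Euclidean distance between a
    participant's normalized thresholds and the optimal ones across
    the eight conditions of Experiment 2."""
    z_p = np.asarray(z_participant, dtype=float)
    z_o = np.asarray(z_optimal, dtype=float)
    return float(np.sqrt(np.sum((z_p - z_o) ** 2)))

# Hypothetical example: a participant whose normalized threshold is
# 0.1 too high in every one of the eight conditions.
E = theory_match_error([0.3] * 8, [0.2] * 8)   # sqrt(8 * 0.01), about 0.283
```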

We found that this measure was not correlated with participants' responses to any of the questions in the debriefing questionnaire. However, it was strongly correlated (r = −0.71) with *reward score* (total amount of money earned), as shown in Figure 8a. This may not seem surprising: The theory identifies thresholds that maximize RR, and so those participants who approximate these thresholds should be expected to earn the highest rewards. The theory match error of Equation 16 was weakly dependent on gender (two-tailed unpaired t-test, t(*df*=58) = 2.13, p < 0.04); i.e., on average, males chose thresholds closer to the values maximizing RR than females did. We will come back to a possible interpretation of this dependence in the General Discussion.

Performance of participants with reward scores higher than the median reward score for Experiment 2. a) Relationship between theory match error (x-axis) and achieved reward score (y-axis). Each symbol corresponds to one participant; genders as indicated **...**

Let us now examine whether the performance of those participants who achieved reward scores above the median satisfies predictions *z(D*+*D _{p})* and *z*=*z _{o}*.

Figures 8b and 8c show the resulting ER and RT for each delay condition averaged over this subset of participants. Paired t-tests showed that ER and RT did *not* differ significantly between delay conditions 3 (*D* = 2s) and 4 (*D* = 0.5s, *D _{p}* = 1.5s) (ER: t(

Figures 8d and 8e show the mean (normalized) participants' and optimal thresholds, averaged across the same set of participants, for the easy and difficult conditions. In the easy condition, the mean participants' threshold does not differ significantly from optimal in *any* of the delay conditions (Figure 8d). In the difficult condition, the mean participants' threshold is significantly different from optimal only in delay condition *D* = 0.5 (Figure 8e). Thus, participants with higher reward scores set their thresholds much closer to the optimal value than other participants (compare with Figures 7c and 7d).

The optimal performance curve describes the relationship between ER and DT that holds for optimal performance across various task parameters. Here we examine how well this relationship describes actual human performance. The relationship is expressed in Equation 10, which relates ERs to the normalized DT (i.e., DT as a fraction of *D _{total}*). Therefore, for both experiments, we calculated the normalized DT for each participant in each task condition as (RT − *T*_{0})/*D _{total}*, with *T*_{0} estimated from the DDM fit.
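
As a concrete sketch (assuming the standard decomposition RT = DT + *T*_{0}, with *T*_{0} the non-decision time estimated from the DDM fit), the normalized DT for one condition is:

```python
def normalized_dt(mean_rt, t0, d_total):
    """Normalized decision time: (RT - T0) / D_total, i.e. the decision
    time expressed as a fraction of the total delay D_total."""
    return (mean_rt - t0) / d_total

# Hypothetical values: mean RT of 0.8 s, non-decision time 0.3 s,
# total delay 2.0 s -> normalized DT of 0.25.
ndt = normalized_dt(0.8, 0.3, 2.0)
```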

Optimal performance curves. In all panels except b, horizontal axes show the error rate (ER) and vertical axes show the normalized decision time (DT), i.e. DT divided by total delay *D*_{total}. Note the differences in scales between panels. Solid black curves **...**

For the majority of participants, however, the match between the normalized DT and the optimal performance curve is not as good as that shown in Figure 9a. For each participant of Experiment 2, we computed the correlation between the normalized DTs for the 8 experimental conditions and the normalized DTs predicted by the optimal performance curve. Figure 9b shows the histogram of these correlations. Although these correlations are usually positive, they are statistically significant for only 12 participants.

To evaluate how well the optimal performance curve describes the data averaged across participants, we divided the possible range of ER (0-50%) into ten equal intervals and, for each interval, calculated the mean normalized DT over all participants and conditions with ERs falling into that interval. The results of this analysis are shown in Figure 9c. Participants' mean normalized DTs are higher than those predicted by the optimal performance curve for RR (black curve in Figure 9c). This is consistent with the results of the analyses in the Section Decision Thresholds (see Figures 5e, 5f, 7c, and 7d) indicating that, in aggregate, participants set their thresholds higher than is optimal to earn maximum reward. Note, however, that the predicted relationship provides a good qualitative description of the relationship between ER and DT. In addition to showing the same inverted-U shape, participants have the highest normalized DT for blocks in which their ERs range from 15% to 20%, bracketing the predicted value of 18%.
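
The binning procedure just described can be sketched as follows (illustrative only; bin edges follow the ten equal ER intervals on [0, 50%), and empty bins are returned as NaN):

```python
import numpy as np

def bin_dt_by_er(er, ndt, n_bins=10):
    """Divide the ER range [0, 0.5) into n_bins equal intervals and
    return the mean normalized DT of the observations in each interval
    (NaN for intervals containing no observations)."""
    er, ndt = np.asarray(er, dtype=float), np.asarray(ndt, dtype=float)
    edges = np.linspace(0.0, 0.5, n_bins + 1)
    idx = np.digitize(er, edges) - 1      # interval index per observation
    means = np.full(n_bins, np.nan)
    for b in range(n_bins):
        in_bin = idx == b
        if in_bin.any():
            means[b] = ndt[in_bin].mean()
    return means

# Hypothetical (ER, normalized DT) observations from several blocks:
m = bin_dt_by_er([0.02, 0.07, 0.07, 0.48], [1.0, 2.0, 4.0, 10.0])
```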

As demonstrated above, there is a strong correlation between participants' reward scores and how closely their thresholds match the theoretical optima. Therefore, we compared the optimal performance curves with the data from participants with different levels of reward score separately. In particular, some participants with the lowest reward scores had DTs an order of magnitude higher than other participants, and thus we separately analyzed the 10% of participants with the lowest RRs^{12} (their DTs are shown in the right panel of Figure 9d). We split the remaining 90% of participants into 3 equal groups according to RR, but there was no significant difference in normalized DTs between the 30-60% and 60-90% groups (paired t-test across 10 bins of ER: p > 0.2), hence we pooled the data from these groups (constituting the middle 60% of participants). The normalized DTs for the remaining top 30% group and the middle 60% group are shown in left and middle panels of Figure 9d.

The left panel of Figure 9d indicates that the predictions of the optimal performance curve for RR match the data from the participants with reward scores in the top 30% quantitatively, and indeed their performance is statistically indistinguishable from the theoretical predictions except at one ER interval^{13}. The other two groups of participants (middle 60% and bottom 10%) shown in Figure 9d have significantly higher normalized DTs than predicted by RR maximization.

We investigated whether the optimal performance curves predicted by maximization of the RA and RR_{m} criteria (combining RR and accuracy) can quantitatively capture the relationship between the ER and DT of the middle 60% and the bottom 10% of participants. Both performance curves include a free parameter *q* describing the emphasis placed on accuracy. For both curves, *q* was estimated via the maximum likelihood method, using the standard assumption that normalized DTs are normally distributed around the performance curve being fitted with a variance estimated from the data. Additionally, in fitting the curve for RA, we constrained *q* ≤ 1.096, and in fitting the curve for RR_{m} we constrained *q* ≤ 1 (see Appendix A). The values of estimated parameters are given in Table 2. Note that the groups of participants with lower performance have higher estimated values of *q*, which could be interpreted as greater emphasis on accuracy.
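
With Gaussian residuals, maximizing the likelihood in *q* reduces to minimizing the sum of squared deviations from the performance curve. The sketch below fits *q* for the RA curve (Equation 19 in Appendix A) under this assumption; the data here are synthetic, generated from the curve itself, purely to show that the estimator recovers *q*.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ra_curve(er, q):
    """Optimal performance curve for the RA criterion (Eq. 19)."""
    er = np.asarray(er, dtype=float)
    E = 1 / (er * np.log((1 - er) / er)) + 1 / (1 - 2 * er)
    return (E - 2 * q - np.sqrt(E ** 2 - 4 * q * (E + 1))) / (2 * q)

def fit_q(er, ndt, q_max=1.096):
    """ML estimate of q, assuming normalized DTs are normally distributed
    around the curve; equivalent to least squares under that assumption."""
    sse = lambda q: np.sum((np.asarray(ndt) - ra_curve(er, q)) ** 2)
    return minimize_scalar(sse, bounds=(1e-6, q_max), method='bounded').x

er_obs = np.array([0.05, 0.10, 0.20, 0.30])
ndt_obs = ra_curve(er_obs, 0.5)        # synthetic data with true q = 0.5
q_hat = fit_q(er_obs, ndt_obs)
```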

The estimated values of parameter *q* of the optimal performance curves for RA and RR_{m} for three groups of participants sorted by reward score.

Figure 9e shows the best-fitting performance curves for the three groups of participants. Both the performance curves for RA and RR_{m} are able to describe the DTs of the middle 60% of participants. In the case of the bottom 10%, the curve for RA fits the data better than the curve for RR_{m}, as the latter predicts that the longest normalized DT should be obtained for higher ERs, which is inconsistent with the data. These observations are quantified in Table 3, which lists the ratios of the likelihoods of the data given the different curves. The first row shows that the data from each group of participants are more likely given the curve for RA than given the curve for RR_{m}; in particular, the data from the bottom 10% are more than 43 times more likely given the RA than the RR_{m} curve. Table 3 also shows that the curves for RA and RR_{m} provide a much better fit to the data than the curve for RR for the bottom 70% of participants.

Ratios of the likelihoods of experimental data given the pairs of criteria listed in the left columns for groups of participants listed in top row. To account for the fact that the performance curves for RA and RR_{m} have one free parameter and the curve **...**

In summary, although the relationship between ER and normalized DT observed in the experiments for the middle and the bottom groups of participants cannot be described quantitatively by the optimal performance curve for RR, it can be described by the performance curve based on the assumption that participants maximize RA, i.e., the weighted difference between RR and ER. This suggests that the "conservatism" of these participants may reflect a greater emphasis on accuracy than is optimal for maximizing reward rate. In the General Discussion we consider alternative accounts of this apparent emphasis on accuracy.

In this brief section we investigate how quickly participants adjust their thresholds after block onset. Maximization of RR requires higher decision thresholds for longer *D*, and to visualize how participants adjust thresholds depending on *D*, Figure 10 shows differences between RTs in conditions *D* = 2 and *D* = 0.5. These differences are averaged over 5 trials in a given bin, over all blocks, and over all participants of Experiments 1 and 2. Figure 10a shows that RT differences develop and stabilize within the first ~20 trials for participants with reward scores above the median. RT differences for the remaining participants are shown in Figure 10b, but unfortunately the data are too noisy to identify when they stabilize. We will return to the issue of learning decision thresholds in the General Discussion.

In the Introduction we stated four predictions that should be satisfied if participants select thresholds maximizing RR. We performed two experiments testing these predictions. The support provided by the experimental data for our predictions is summarized in Table 4. In general, the predictions were only partially consistent with the data from all participants, but virtually all of them were satisfied by the data from the participants with the highest reward scores.

Summary of the support for the predictions stated in the Introduction in the data from Experiments 1 and 2.

The prediction that the increase in response-stimulus interval *D* should increase the decision threshold, and thus decrease ER and increase RT, was satisfied in the behavioural data averaged over all participants in both experiments.

The prediction that the threshold (and thus ER and RT) should depend only on the total delay *D* + *D _{p}*, rather than on *D* and *D _{p}* separately, was tested by comparing ER and RT between delay conditions 3 (*D* = 2s) and 4 (*D* = 0.5s, *D _{p}* = 1.5s).

The prediction that the threshold estimated from fitting the DDM should be equal to the optimal threshold was only partially satisfied by the data from all participants. There was a significant correlation between the participants' and optimal thresholds, but in each of the experimental conditions the average participants' thresholds were higher than optimal. However, the estimated thresholds of the participants of Experiment 2 with reward scores higher than the median were not significantly different from the optimal thresholds in 7 out of 8 experimental conditions.

The predicted relationship between DT and ER was only qualitatively satisfied by the experimental data from all participants, for whom the experimental DTs were higher than optimal. However, the data from the top 30% of participants with the highest reward scores satisfied the predicted relationship quantitatively.

Taken together, the above results suggest that, on average, participants set thresholds higher than required to maximize RR, but that a significant proportion of participants indeed select a speed-accuracy tradeoff that approximately maximizes RR. One could ask how surprising it is that the participants with the highest RR follow the predictions of our theory, since our theory assumes RR maximization. Note, however, that the predictions were also based on the assumption that the choice process can be described by the DDM. Different models of information processing during choice would require different speed-accuracy tradeoffs to maximize RR. However, our data suggest that a significant proportion of participants select the decision threshold required by the DDM to maximize RR; thus our results also provide support for the DDM.

It is important to note that predictions *z~D*, *z(D*+*D _{p})*,

It is interesting to ask why the predictions of the pure DDM matched well with the behaviour of many participants despite large systematic differences in the parameter values estimated by the pure and extended DDM (Figure 4). A possible reason may be that the maximization of RR under pure and extended DDM predicts similar speed-accuracy tradeoffs for given task parameters (difficulty and *D _{total}*) (Bogacz et al., 2006). We have previously shown that introducing the variability of drift slightly decreases DT predicted for given ER, while the variability of starting point increases the predicted DT (see Figure 14 in Bogacz et al., 2006). Since the effects of the two forms of variability on predicted DT are in opposite directions, it is possible that the speed-accuracy tradeoff predicted by the extended DDM is similar to that predicted when both forms of variability are ignored.

A parallel study (Simen, Buck, Hu, Holmes, & Cohen, submitted) has also tested some of the predictions of our previous theoretical work (Bogacz et al., 2006), but its focus was different. It tested predictions regarding tasks in which one of the alternatives is either more probable or more rewarded: in such situations accuracy needs to be sacrificed to maximize reward to an even greater extent than in the paradigm considered here (Bogacz et al., 2006; Maddox, 2002). Simen et al. (submitted) also tested predictions *z~D* and *z*=*z _{o}*, but not the other two predictions.

Edwards (1965) proposed an alternative theory suggesting that participants choose decision thresholds that minimize the weighted sum RT + *q*ER (where *q* is a parameter determining the relative weight of ER, cf. Equations 11 and 12). Busemeyer and Rapoport (1988) designed an experiment in which the participants were explicitly required to minimize this cost function in order to maximize their rewards, and they found that the participants chose decision thresholds close to those minimizing this function. Their results and ours taken together suggest that participants are able to adjust their decision thresholds to increase reward in *various* experimental paradigms. Such an ability could come from an adaptive reinforcement learning mechanism aiming to maximize reward rate or reward in any task. We will return to this issue below.

We considered the possibility that some participants select thresholds higher than optimal because they maximize a weighted combination of RR and accuracy, and we showed that this theory is able to predict the shape of the relationship between ER and DT. This theory has been widely applied to data from perceptual discrimination tasks in which stimuli belonging to two classes differ on a continuous perceptual dimension (e.g., length of a line). In this context a discrimination threshold corresponds to a value on this dimension such that stimuli below and above this value belong to two different classes. If one of the alternatives is more rewarded, the discrimination threshold maximizing reward is different from that maximizing accuracy. The theory that assumes maximization of a combination of reward and accuracy quantitatively accounts for the discrimination threshold used by participants, and for the observed dependence of the discrimination threshold on accuracy (Maddox, 2002; Maddox & Bohil, 1998).

Other explanations for why participants select thresholds higher than optimal have also been suggested. Recently, Zackenhouse et al. (submitted) investigated the assumption that participants are unable to precisely estimate the duration of the response-stimulus interval *D* and the value of the signal-to-noise ratio *ã*, and analyzed the predictions of four theories proposing that participants seek to achieve performance robust to the uncertainties in *D* and *ã*. One of these theories assumes that participants maximize a guaranteed level of performance under a presumed level of uncertainty in the estimation of interval *D*. Zackenhouse et al. show that the data in Figure 9e can be fit slightly better by a performance curve based on this assumption than by the performance curve for RA.

Alternatively, thresholds higher than optimal are also predicted under the assumption that participants try to maximize RR using an adaptive threshold adjustment procedure (e.g., driven by a reinforcement learning mechanism): In previous work we have shown that if participants do not have a priori knowledge of the threshold that maximizes RR, and therefore must discover this through adaptive approximation, on average they will achieve a higher RR by overestimation of the optimal threshold than by underestimation. This is based on the observation that, for the DDM, there is a greater drop off in RR for thresholds below the optimum than above it (Bogacz et al., 2006). Timing uncertainty (contributing to variability in reward rate estimation for a given threshold) could further contribute to this effect. This would force participants to optimize a distribution of thresholds which, for the reasons stated above, would have a mean value that is greater than the single optimal threshold.
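
The asymmetry invoked here is easy to verify numerically. The sketch below assumes the standard reward-rate definition RR = (1 − ER)/(DT + *T*_{0} + *D*) (no penalty delay) together with the normalized pure-DDM expressions for ER and DT; the parameter values are arbitrary.

```python
import numpy as np

def reward_rate(z, a=1.0, t0=0.3, d=1.0):
    """Reward rate RR = (1 - ER)/(DT + T0 + D) under the pure DDM,
    with ER = 1/(1 + exp(2*a*z)) and DT = z * tanh(a*z)."""
    er = 1 / (1 + np.exp(2 * a * z))
    dt = z * np.tanh(a * z)
    return (1 - er) / (dt + t0 + d)

z_grid = np.linspace(0.01, 3.0, 1000)
z_opt = z_grid[np.argmax(reward_rate(z_grid))]

# RR falls off more steeply below the optimal threshold than above it:
drop_below = reward_rate(z_opt) - reward_rate(z_opt - 0.4)
drop_above = reward_rate(z_opt) - reward_rate(z_opt + 0.4)
```

Because underestimating the threshold costs more RR than overestimating it by the same amount, an adaptive learner that cannot know the optimum exactly does better, on average, by erring on the high side, as argued above.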

Thresholds higher than optimal can also be expected under the standard economic assumption that the participants discount future rewards, i.e. they value a present reward more than the same reward in the future. If so, then participants value the reward on the current trial more than rewards on succeeding trials, and to increase the probability of reward on the current trial, they increase the threshold (Yakov Ben-Haim, personal communication, 20^{th} July 2007). Adjudicating between these alternative hypotheses will require further research.

It has been demonstrated in a perceptual discrimination task that if participants receive rewards for correct choices and no penalty for errors, then the possibility of winning a prize for the best-performing participants tends to move participants' discrimination thresholds towards those maximizing reward (Markman, Baldwin, & Maddox, 2005). Thus it is possible that the reward of $100 in our experiments also encouraged participants to set decision thresholds closer to those maximizing RR, which may partially explain why so many participants were nearly optimal.

Motivational effects of the reward for best participants could also explain the gender effects in Figure 8a. Namely, it is possible that the reward of $100 in our experiments was on average more motivating for males than females, since males have been reported to be more motivated by competition (Gneezy & Rustichini, 2004) and have a higher tendency to gamble (Desai, Maciejewski, Pantalon, & Potenza 2005).

Figure 7 shows that there is a better match between optimal and participants' thresholds in the easy task condition than in the difficult one, and a number of speculations are possible about why this is the case. First, for a difficult task or long *D _{total}* (or for both), the optimal threshold may be high and correspond to a firing rate of decision neurons above the range in which their firing rates are linear functions of input. Since our theory is based on the simple linear DDM, it cannot predict the value of the optimal threshold outside of the linear range. Simple phase plane methods show that drift rates will slow down as firing rates exceed the linear range (Brown & Holmes, 2001), yielding longer integration times for a given level of accuracy. Attempting to capture these effects with a (constant-drift) DDM may result in the higher-than-optimal threshold predictions. The analysis required to investigate this is beyond the scope of this article.

At the same time, for a very difficult task and *short D _{total}*, the optimal threshold may be very low, corresponding to near-random responding. However, participants may feel uncomfortable producing random responses. There may also be a computational disadvantage in keeping the threshold fixed near zero: if the information content of the input increases (e.g., the task becomes easier), this would go undetected (since no integration is occurring). Thus, higher-level considerations (e.g., knowledge about the non-stationarity of the environment) may also influence threshold setting and drive participants to raise thresholds above zero (at least from time to time), providing another possible explanation for thresholds higher than optimal.

It is also possible that if the task is very difficult, participants may explore strategies other than simple diffusion-type accumulation of information. For example, in Experiment 2 they may have counted the asterisks. Indeed, one participant reported adopting such a strategy in the questionnaire: "If it is hard to identify, I will roughly count each row." This participant produced very long RTs – up to 9.1s. Two participants reported a different change in strategy: "On the tougher ones […] I started looking for empty spaces instead of stars." A consideration of such alternate strategies is also beyond the scope of this article, although it is certainly an important target for cognitive and neuroscientific research and indeed has received much attention in cognitive science (e.g., Newell & Simon, 1988). In any event, the engagement of such alternate strategies would of course weaken the relationship between observed performance and that predicted by the DDM. However, this observation does suggest an intriguing possibility: If the DDM, because of its intrinsic simplicity and its optimality for simple decision making, is highly conserved, then it should provide a progressively more accurate account of decision-making performance for organisms that have progressively simpler cognitive architectures, and therefore less access to more complex strategies. Similarly, it should be possible to promote optimal performance in human participants by interfering with higher cognitive functions subserving complex decision-making strategies.

The observation that some participants were able to find decision thresholds close to the optimal values raises the question of how this was accomplished. It is most likely that they determine the threshold using an adaptive procedure: Different values of threshold are set, their effect on a criterion is observed, and the threshold is adjusted to optimize the criterion. Several models of such adaptation have been proposed (Erev, 1998; Myung & Busemeyer, 1989; Simen et al., 2006).

Myung and Busemeyer (1989) tested the predictions of threshold adaptation models in an experiment in which the participants were required to minimize a weighted difference between ER and RT. In their experiment the effects of learning were observed on a much longer time-scale than in ours: Their analysis suggested that the threshold converged to the optimal value within 100-200 trials, while we observed much faster convergence (within ~20 trials). A number of factors may contribute to this difference. First, Myung and Busemeyer plotted the precise threshold value on each trial (which was allowed by their experimental design), rather than noisy functions of thresholds; from this point of view their analysis was more precise. On the other hand, our study generated substantially more data: Myung and Busemeyer calculated the value of the threshold on a given trial as the average over only 6 learning episodes for one condition, and 4 episodes for the other condition. By contrast, in Figure 10a we calculated the difference in RT between delay conditions on a given trial as the average over 700 learning episodes (20 participants of Experiment 1 × 5 blocks + 60 participants of Experiment 2 × 10 blocks).

Furthermore, it can be speculated that we observed faster threshold convergence because RR is a more ecologically relevant criterion than the weighted difference between ER and RT, and because humans' threshold adjustment mechanisms are tuned to maximize RR. Recently Simen et al. (2006) proposed such a specialized mechanism that is able to rapidly find the threshold maximizing RR. Future work could compare the predictions of different models of threshold adaptation with experimental data.

This work was supported by EPSRC grant EP/C514416/1 and PHS grants MH58480 and MH62196 (Cognitive and Neural Mechanisms of Conflict and Control, Silvio M. Conte Center). We thank Joshua Gold for providing us with the program to generate the “moving dots” stimuli, and for discussions, Patrick Simen for reading the manuscript and very useful comments, Eric Shea-Brown and Jeff Moehlis for advice on the planning and analysis of the experimental data, and Miriam Zackenhouse, Yakov Ben-Haim, and Paul Howard-Jones for discussion.

Here we show how to obtain the relationship between ER and DT that holds for thresholds that optimize RR. First, note that Equations 6 can be solved for *ã* and *z̃* (cf. Wagenmakers et al., 2007):

$$\stackrel{\sim}{a}=\frac{1-2\text{ER}}{2\text{DT}}\text{log}\left(\frac{1-\text{ER}}{\text{ER}}\right),\phantom{\rule{0.5em}{0ex}}\text{and}\phantom{\rule{0.5em}{0ex}}\stackrel{\sim}{z}=\frac{\text{DT}}{1-2\text{ER}}.$$

(17)

Substituting Equations 17 into 8 and rearranging terms, we obtain the optimal performance curve for RR (Bogacz et al., 2006):

$$\frac{\text{DT}}{{D}_{\mathit{\text{total}}}}={\left(\frac{1}{\text{ER ln}\frac{1-\text{ER}}{\text{ER}}}+\frac{1}{1-2\text{ER}}\right)}^{-1}.$$

(18)
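The inversion in Equation 17 and the optimal performance curve in Equation 18 are straightforward to compute. Below is a minimal Python sketch (the function names are our own); the forward expressions used in the round-trip check, ER = 1/(1 + e^{2ãz̃}) and DT = z̃ tanh(ãz̃), are the standard pure-DDM formulas that Equation 17 inverts.

```python
import math

def invert_ddm(er, dt):
    """Equation 17: recover the normalized drift a~ and threshold z~
    from an error rate ER (0 < ER < 0.5) and mean decision time DT > 0."""
    a_tilde = (1 - 2 * er) / (2 * dt) * math.log((1 - er) / er)
    z_tilde = dt / (1 - 2 * er)
    return a_tilde, z_tilde

def opc_rr(er):
    """Equation 18: normalized decision time DT / D_total on the
    reward-rate optimal performance curve, as a function of ER."""
    return 1 / (1 / (er * math.log((1 - er) / er)) + 1 / (1 - 2 * er))

# Round-trip check: generate ER and DT from the forward pure-DDM
# expressions, ER = 1/(1 + exp(2*a*z)) and DT = z * tanh(a*z),
# then recover a~ and z~ with Equation 17.
a_true, z_true = 2.0, 0.15
er = 1 / (1 + math.exp(2 * a_true * z_true))
dt = z_true * math.tanh(a_true * z_true)
a_rec, z_rec = invert_ddm(er, dt)
```

Because Equation 17 inverts the forward expressions exactly, `a_rec` and `z_rec` recover `a_true` and `z_true` up to floating-point error.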

The curves for RA and RR_{m} can be obtained in an analogous way and are given by Equations 19 and 20 respectively (Bogacz et al., 2006):

$$\frac{\text{DT}}{{D}_{\mathit{\text{total}}}}=\frac{E-2q-\sqrt{{E}^{2}-4q(E+1)}}{2q},\text{where}\phantom{\rule{0.3em}{0ex}}E=\frac{1}{\text{ER ln}\frac{1-\text{ER}}{\text{ER}}}+\frac{1}{1-2\text{ER}},$$

(19)

$$\frac{\text{DT}}{{D}_{\mathit{\text{total}}}}=(1+q){\left(\frac{\frac{1}{\text{ER}}-\frac{q}{1-\text{ER}}}{\text{ln}\frac{1-\text{ER}}{\text{ER}}}+\frac{1-q}{1-2\text{ER}}\right)}^{-1}.$$

(20)

Note that for the normalized decision time to be well defined in Equation 19, the weight *q* must satisfy:

$$q\le \underset{\text{ER}}{\text{min}}\left\{\frac{{E}^{2}}{4(E+1)}\right\}=\frac{{E}_{\text{min}}^{2}}{4({E}_{\text{min}}+1)}.$$

(21)

Numerically, we find that the lowest value *E* can take is *E _{min}* ≈ 5.224, implying that the weight must satisfy *q* ≤ *E _{min}*^{2}/(4(*E _{min}* + 1)) ≈ 1.10.
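The value of *E _{min}* and the resulting bound on *q* in Equation 21 can be checked numerically with a plain grid search (a Python sketch; variable names are ours):

```python
import math

def E(er):
    """E(ER) as defined in Equation 19 (also the bracketed sum
    whose inverse appears in Equation 18)."""
    return 1 / (er * math.log((1 - er) / er)) + 1 / (1 - 2 * er)

# Grid search for the minimum of E over 0 < ER < 0.5;
# E diverges at both endpoints, so the minimum is interior.
e_min = min(E(i * 1e-5) for i in range(1, 50000))

# Upper bound on the weight q from Equation 21.
q_max = e_min ** 2 / (4 * (e_min + 1))
```

This yields *E*_min ≈ 5.224 (attained near ER ≈ 0.17) and a bound *q* ≲ 1.10.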

Pearson's correlation coefficient between two samples *x*_{1}, …, *x _{n}* and *y*_{1}, …, *y _{n}* is defined as:

$$\text{r}=\frac{Av((x-Av(x))(y-Av(y)))}{\sqrt{Av\left({(x-Av(x))}^{2}\right)Av\left({(y-Av(y))}^{2}\right)}},$$

(22)

where *Av* denotes the average. This coefficient allows the two random variables generating the samples to have different means – note that in Equation 22 *Av*(*x*) and *Av*(*y*) are estimated separately. If we assume that the two variables have equal means, we can estimate the mean from both sets of samples, *m* = *Av*(*x*, *y*), and use it in the definition of the modified correlation coefficient:

$${\text{r}}_{\text{1}}=\frac{Av((x-m)(y-m))}{\sqrt{Av\left({(x-m)}^{2}\right)Av\left({(y-m)}^{2}\right)}}.$$

(23)

The significance of a particular value *R* of r_{1} for a given sample size *n* was found using the following Monte-Carlo simulation. In each of 10,000 iterations, *n* pairs of samples were generated from independent standard normal distributions, and r_{1} was computed. The significance was taken as the fraction of iterations in which |r_{1}| > |*R*| (corresponding to a two-tailed test).
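For concreteness, the modified coefficient of Equation 23 and the Monte-Carlo significance test can be sketched in Python as follows (function names and the fixed seed are our own choices):

```python
import math
import random

def r1(x, y):
    """Modified correlation coefficient (Equation 23): both samples
    share a single pooled mean m = Av(x, y) instead of separate means."""
    n = len(x)
    m = (sum(x) + sum(y)) / (2 * n)
    num = sum((xi - m) * (yi - m) for xi, yi in zip(x, y)) / n
    den = math.sqrt(sum((xi - m) ** 2 for xi in x) / n
                    * sum((yi - m) ** 2 for yi in y) / n)
    return num / den

def r1_significance(R, n, iterations=10_000, seed=0):
    """Two-tailed Monte-Carlo p-value: the fraction of iterations in
    which |r1| of two independent standard-normal samples of size n
    exceeds |R|."""
    rng = random.Random(seed)
    count = 0
    for _ in range(iterations):
        x = [rng.gauss(0, 1) for _ in range(n)]
        y = [rng.gauss(0, 1) for _ in range(n)]
        if abs(r1(x, y)) > abs(R):
            count += 1
    return count / iterations
```

As with Pearson's r, the Cauchy-Schwarz inequality guarantees |r_{1}| ≤ 1, with equality when the mean-centred samples are proportional.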

^{1}Although these new parameters are ratios, they have units: *z̃* has units of time and *ã* has units of 1/time (Bogacz et al., 2006); when listing their values we take the unit of time to be seconds.

^{2}By differentiating Equation 8 with respect to *D* and noting that the derivative of *z̃*_{o} is always positive.

^{3}Equation 7 includes ER and DT, for which there are no analytic expressions for the extended DDM; we evaluated them by numerically integrating, over the distribution of *A*, the expressions for ER and DT for a DDM with variability of *x*(0) (given in Bogacz et al., 2006).

^{4}RR_{m} would describe reward rate in a modified paradigm in which a participant receives a unit of reward for a correct choice and loses *q* units of reward for an error. Thus in the standard paradigm *q* describes the relative negative utility of committing an error.

^{5}Complete behavioural data from both experiments and the estimated parameters of DDM for all participants can be downloaded from: http://www.cs.bris.ac.uk/home/rafal/optimal/data.html

^{6}We estimate the ratios *ã* and *z̃* rather than the original parameters of the DDM *A*, *c*, and *z*, because it is not possible to estimate *A*, *c*, and *z* uniquely (see Section Drift Diffusion Model). One could also fit different ratios (e.g. *A/c* and *z/c*), but we use *ã* and *z̃* for consistency with the theoretical treatment, in which these ratios best simplified the mathematical analysis. Although we estimate *z̃* (= *z/A*) rather than *z*, we will still be able to determine whether a participant chooses the optimal threshold *z _{o}*: recall that Equation 5 implies that the optimal normalized threshold is *z̃*_{o} = *z _{o}*/*A*.

^{7}The variance of ER can be estimated as follows: Assume, for a given condition, that a participant has a probability of making an error equal to *p _{e}* and completes *n* trials. The number of errors then follows a binomial distribution, so the variance of the observed ER is *p _{e}*(1 − *p _{e}*)/*n*.

^{8}Wagenmakers et al. fitted DDMs under the assumption that parameter *c* is fixed. They observed that the pure DDM underestimates parameter *A* in comparison to the extended model. Since the definitions of *ã* and *z̃* involve ratios of *A* (see Equations 5), this implies that the pure DDM underestimates *ã* (proportional to *A*^{2}) and overestimates *z̃* (inversely proportional to *A*).

^{9}Note that since we found no significant differences in signal to noise ratios across conditions within participants (see Section Estimating Parameters of the DDM), it is reasonable to assume that drift *A* is also the same in all delay conditions, and all thresholds in Figure 5a are therefore normalized by the same value of drift. The plot comparing non-normalized thresholds of this participant with non-normalized optimal thresholds would look exactly the same as Figure 5a, but both scales of the axes would be divided by the value of the drift. So the non-normalized thresholds of this participant are also very close to the optimal values (and there is the same correlation r_{1}=0.89 between the participants' and optimal non-normalized thresholds).

^{10}Note however that this does not imply that there is equal correlation between non-normalized participants' and optimal thresholds, because Figure 5c includes data from all participants and each participant has thresholds normalized by a different value of drift; unfortunately it is more difficult to say what the correlation would be between the non-normalized thresholds.

^{11}However, both ER and DT did vary significantly for all other pairwise comparisons between conditions, with p ranging from p = 0.018 to p<10^{-4} (without correcting for multiple comparisons).

^{12}Two participants from Experiment 1, and six participants from Experiment 2.

^{13}For each participant and condition we took the empirically observed ER and used Equation 18 to calculate the theoretically predicted optimal normalized DT. Then, for each of the ER intervals, we used paired t-tests to compare the distribution of experimentally observed mean normalized DTs with the distribution of predicted values. The difference was significant only in the first ER interval [0-5%], p = 0.003, and non-significant (p > 0.05) for all other ER intervals. The reason for the difference in the interval with the lowest ER may be that this interval contained many blocks with ER = 0, for which Equation 18 predicts DT = 0, but participants had DT > 0.

^{14}In the experiment of Simen et al. (submitted) the delays between the response and the onset of the next trial were the same after correct and incorrect responses (i.e. *D _{p}*=0) in all blocks.

- Akaike H. Likelihood of a model and information criteria. Journal of Econometrics. 1981;16:3–14.
- Bogacz R, Brown E, Moehlis J, Holmes P, Cohen JD. The physics of optimal decision making: A formal analysis of models of performance in two-alternative forced choice tasks. Psychological Review. 2006;113:700–765. [PubMed]
- Bogacz R, Cohen JD. Parameterization of connectionist models. Behavior Research Methods, Instruments, & Computers. 2004;36:732–741. [PubMed]
- Brainard DH. The Psychophysics Toolbox. Spatial Vision. 1997;10:433–436. [PubMed]
- Brown E, Holmes P. Modeling a simple choice task: stochastic dynamics of mutually inhibitory neural groups. Stochastics and Dynamics. 2001;1:159–191.
- Busemeyer JR, Rapoport A. Psychological models of deferred decision making. Journal of Mathematical Psychology. 1988;32:91–134.
- Busemeyer JR, Townsend JT. Decision field theory: A dynamic-cognitive approach to decision making in an uncertain environment. Psychological Review. 1993;100:432–459. [PubMed]
- Desai RA, Maciejewski PK, Pantalon MV, Potenza MN. Gender Differences in Adolescent Gambling. Annals of Clinical Psychiatry. 2005;17:249–258. [PubMed]
- Edwards W. Optimal strategies for seeking information: Models for statistics, choice reaction times, and human information processing. Journal of Mathematical Psychology. 1965;2:312–329.
- Erev I. Signal detection by human observers: a cut-off reinforcement learning model of categorization decisions under uncertainty. Psychological Review. 1998;105:280–298. [PubMed]
- Feller W. An introduction to probability theory and its applications. New York: Wiley; 1968.
- Franks NR, Dornhaus A, Fitzsimmons JP, Stevens M. Speed versus accuracy in collective decision making. Proceedings of the Royal Society B. 2003;270:2457–2463. [PMC free article] [PubMed]
- Gneezy U, Rustichini A. Gender and Competition at a Young Age. The American Economic Review. 2004;94:377–381.
- Gold JI, Shadlen MN. Banburismus and the brain: decoding the relationship between sensory stimuli, decisions, and reward. Neuron. 2002;36(2):299–308. [PubMed]
- Gold JI, Shadlen MN. The influence of behavioral context on the representation of a perceptual decision in developing oculomotor commands. Journal of Neuroscience. 2003;23:632–651. [PubMed]
- Holmes P, Shea-Brown E, Moehlis J, Bogacz R, Gao J, Aston-Jones G, et al. Optimal decisions: From neural spikes, through stochastic differential equations, to behavior. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences. 2005;88:2496–2503.
- Laming DRJ. Information theory of choice reaction time. New York: Wiley; 1968.
- Laming DRJ. Choice reaction performance following an error. Acta Psychologica. 1979;43:199–224.
- Maddox WT. Toward a unified theory of decision criterion learning in perceptual categorization. Journal of the Experimental Analysis of Behavior. 2002;78:567–595. [PMC free article] [PubMed]
- Maddox WT, Bohil CJ. Base-rate and payoff effects in multidimensional perceptual categorization. Journal of Experimental Psychology: Learning, Memory and Cognition. 1998;24:1459–1482. [PubMed]
- Maritz JS, Jarrett RG. A Note on Estimating the Variance of the Sample Median. Journal of the American Statistical Association. 1978;73:194–196.
- Markman AB, Baldwin GC, Maddox WT. The interaction of payoff structure and regulatory focus in classification. Psychological Science. 2005;16:852–855. [PubMed]
- Masson MEJ, Loftus GR. Using confidence intervals for graphically based data interpretation. Canadian Journal of Experimental Psychology. 2003;57:203–220. [PubMed]
- Mozer MC, Colagrosso MD, Huber DH. A rational analysis of cognitive control in a speeded discrimination task. In: Dietterich T, Becker S, Ghahramani Z, editors. Advances in Neural Information Processing Systems. Vol. 14. Cambridge MA: MIT Press; 2002. pp. 51–57.
- Myung IJ, Busemeyer JR. Criterion learning in a deferred decision making task. American Journal of Psychology. 1989;102:1–16.
- Newell A, Simon HA. The theory of human problem solving. In: Collins AM, Smith EE, editors. Readings in cognitive science: A perspective from psychology and artificial intelligence. San Mateo, CA: Morgan Kaufmann; 1988. pp. 33–51.
- Pachella RG. The interpretation of reaction time in information-processing research. In: Kantowitz B, editor. Human information processing: Tutorials in performance and cognition. Hillsdale, NJ: Erlbaum; 1974. pp. 41–82.
- Palmer J, Huk AC, Shadlen MN. The effect of stimulus strength on the speed and accuracy of a perceptual decision. Journal of Vision. 2005;5:376–404. [PubMed]
- Pelli DG. The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision. 1997;10:437–442. [PubMed]
- Rapoport A, Burkheimer GB. Models for deferred decision making. Journal of Mathematical Psychology. 1971;8:508–538.
- Ratcliff R. A theory of memory retrieval. Psychological Review. 1978;85:59–108.
- Ratcliff R. Modeling response signal and response time data. Cognitive Psychology. 2006;53:195–237. [PMC free article] [PubMed]
- Ratcliff R, Gomez R, McKoon G. A diffusion model account of the lexical decision task. Psychological Review. 2004;111:159–182. [PMC free article] [PubMed]
- Ratcliff R, McKoon G. The diffusion decision model: theory and data for two-choice decision tasks. Neural Computation. 2008;20(4):873–922. [PMC free article] [PubMed]
- Ratcliff R, Rouder JN. A diffusion model account of masking in two-choice letter identification. Journal of Experimental Psychology: Human Perception and Performance. 2000;26:127–140. [PubMed]
- Ratcliff R, Smith PL. Comparison of sequential sampling models for two-choice reaction time. Psychological Review. 2004;111(2):333–367. [PMC free article] [PubMed]
- Ratcliff R, Thapar A, McKoon G. A diffusion model analysis of the effects of aging on brightness discrimination. Perception and Psychophysics. 2003;65:523–535. [PMC free article] [PubMed]
- Ratcliff R, Tuerlinckx F. Estimating parameters of the diffusion model: Approaches to dealing with contaminant reaction times and parameter variability. Psychonomic Bulletin & Review. 2002;9:438–481. [PMC free article] [PubMed]
- Ratcliff R, Van Zandt T, McKoon G. Connectionist and diffusion models of reaction time. Psychological Review. 1999;106:261–300. [PubMed]
- Rowan T. Functional Stability Analysis of Numerical Algorithms. University of Texas; Austin: 1990.
- Shadlen MN, Newsome WT. Neural basis of a perceptual decision in the parietal cortex (area LIP) of the rhesus monkey. Journal of Neurophysiology. 2001;86(4):1916–1936. [PubMed]
- Simen PA, Buck C, Hu P, Holmes P, Cohen JD. Reward rate optimization in two-alternative decision making: empirical tests of theoretical predictions. Manuscript submitted for publication. [PMC free article] [PubMed]
- Simen PA, Cohen JD, Holmes P. Rapid decision threshold modulation by reward rate in a neural network. Neural Networks. 2006;19:1013–1026. [PMC free article] [PubMed]
- Stone M. Models for choice reaction time. Psychometrika. 1960;25:251–260.
- Usher M, McClelland JL. The time course of perceptual choice: the leaky, competing accumulator model. Psychological Review. 2001;108(3):550–592. [PubMed]
- Vandekerckhove J, Tuerlinckx F. Fitting the Ratcliff diffusion model to experimental data. Psychonomic Bulletin & Review. 2007;14:1011–1026. [PubMed]
- Vickers D. Evidence for an accumulator model of psychophysical discrimination. Ergonomics. 1970;13:37–58. [PubMed]
- Wagenmakers EJ, van der Maas HLJ, Grasman RPPP. An EZ-diffusion model for response time and accuracy. Psychonomic Bulletin & Review. 2007;14:3–22. [PubMed]
- Wald A, Wolfowitz J. Optimum character of the sequential probability ratio test. Annals of Mathematical Statistics. 1948;19:326–339.
- Wickelgren WA. Speed-accuracy tradeoff and information processing dynamics. Acta Psychologica. 1977;41:67–85.
- Zackenhouse M, Holmes P, Bogacz R. Robust versus optimal strategies for two-alternative forced choice tasks. Manuscript submitted for publication. [PMC free article] [PubMed]
