We developed a task that assessed the degree to which risk of punishment (footshock) influenced reward choice. This risky decision-making task differs from previous conflict paradigms such as the Geller-Seifter conflict and the thirsty-rat conflict tasks (File et al., 2003
) in that (1) rats were given a choice between the potentially punished response and a second, safe alternative response, and (2) the risky response was accompanied by punishment only with a specific probability that shifted within each session. We found that rats reliably shifted preference from a large, risky reward to a small, safe reward as the risk of punishment accompanying the large reward increased. This preference was modulated by the magnitude of the punishment, as preference for the small, safe reward increased with greater shock intensity. Importantly, rats were able to recognize changes in punishment risk within sessions, as reward preference shifted when risk was altered. During initial testing, the risky reward began with a 0% probability of shock that systematically ascended to 100%. It might be argued that the observed performance curves resulted from a loss of motivation, due to satiation or frustration, as trial blocks progressed. However, reversing the order of risk presentations did not alter performance (i.e., rats continued to show increased preference for the small, safe reward with greater risks of punishment); thus, it is unlikely that the order of risk presentations caused the observed pattern of reward preference.
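The within-session structure described above can be sketched as a minimal simulation. The block probabilities and the trial function below are illustrative assumptions for exposition, not the exact parameters of the task:

```python
import random

# Illustrative ascending-risk block schedule: the probability of footshock
# on the large-reward lever rises from 0% to 100% across blocks (the exact
# probabilities and number of blocks here are assumptions).
SHOCK_PROBS = [0.0, 0.25, 0.5, 0.75, 1.0]

def risky_choice_trial(shock_prob, rng=random):
    """One choice of the large, risky lever: the large reward is always
    delivered, and footshock accompanies it with probability shock_prob."""
    shocked = rng.random() < shock_prob
    return {"reward": "large", "shocked": shocked}

# In the reversed-order control described above, the same schedule would
# simply be traversed from 100% down to 0%:
REVERSED_PROBS = list(reversed(SHOCK_PROBS))
```

At the 0% block the risky lever is never punished, and at the 100% block it always is, reproducing the two endpoints of the schedule; intermediate blocks deliver shock probabilistically.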
Performance in the risky decision-making task showed large between-subjects variability (at least at lower shock intensities), but was stable across multiple test sessions over the course of several months, and in rats trained at the 0.35 mA shock intensity it was not related to differences in body weight or shock reactivity. In rats trained at the 0.4 mA shock intensity, however, there was a correlation between body weight and risk-taking behavior such that heavier rats preferred the large, risky reward. This may have been because differences in body weight altered the experience of the footshock (e.g., the shock was less aversive to heavier rats), although there was no correlation between weight and shock reactivity in the 0.4 mA group. It is also possible that rats with higher baseline weights were simply more motivated to obtain food, rather than less influenced by the shock; however, this alternative explanation also seems unlikely, as rats showed no shift in reward preference after a 24-hour satiation period (which caused an increase in body weight). This latter finding was somewhat surprising, as satiation can reduce food’s motivational value, resulting in reduced choice of the devalued food (Johnson et al., 2009
). While we did observe an increase in overall omitted trials after satiation (indicating the effectiveness of this procedure), satiation did not significantly affect preference for the large, risky reward. This finding could indicate that choice behavior in this task (but not overall responding) is only minimally controlled by reinforcer value. Alternatively, it is possible that the long duration of testing experienced by the rats by that point in the experiment resulted in choice behavior being mediated by “stimulus-response”-type mechanisms, and thus less controlled by reinforcer value (Balleine and Dickinson, 1998).
Further analysis of the individual variability demonstrated that risky decision-making performance was related to each subject’s selection of the large reward during the initial block, even though the reward was accompanied by a 0% probability of footshock during this block. Despite this observation, the patterns of decision-making observed here likely were not solely a function of response perseveration from baseline levels of responding. Rats were able to adjust their baseline responding throughout the 20 days of training (such that the relationship between baseline responding and overall responding shifted throughout training) (Simon et al., unpublished observations). Additionally, a separate experiment showed that rats were able to adjust responding when the shock intensity was increased (Simon et al., 2008a
). Although the fact that some rats failed to choose the large reward even under 0% risk conditions is somewhat surprising, it can likely be accounted for by “carryover” effects from the previous day’s training (i.e., because rats were always shocked when choosing the large reward in the final block of trials, this experience likely biased their choices on the following day). In support of this possibility, choice of the large reward in the 0% risk block varied directly with shock intensity, even though no shocks were received in this block.
The individual differences in risky decision-making observed in this task may mimic the diversity in propensity for risk-taking observed in human subjects (DeVito et al., 2008
; Gianotti et al., 2009
; Lejuez et al., 2003
; Reyna and Farley, 2006
; Sobanski et al., 2008
; Taylor Tavares et al., 2007
; Weber et al., 2004
) but, importantly, the use of an animal model allows a degree of experimental control that is not possible in human studies. Thus, any behavioral differences observed are more likely due to intrinsic rather than experiential factors. This variability should prove useful in future studies for identifying behavioral and neurobiological correlates of risky decision-making.
Risky decision-making behavior was compared to behavior in two other reward-related decision-making tasks: delay discounting (commonly used to measure impulsive choice (Ainslie, 1975
; Evenden and Ryan, 1996
; Simon et al., 2007b
; Winstanley et al., 2006a
)) and probability discounting (characterized as an assessment of risky behavior (Cardinal and Howes, 2005
; St Onge and Floresco, 2009
)). Correlational evidence suggests that rats with a preference for the large, risky reward in the risky decision-making task also demonstrate preference for the large reward in the probability discounting task (preference for the large, probabilistic reward over the small, certain reward). These data suggest that either the assessment of probabilities (of punishment and reward omission, respectively) or the integration of probabilistic information with reward value may be mediated by similar neurobiological mechanisms. Conversely, rats with a greater propensity for risky choice did not consistently demonstrate greater impulsive choice in the delay discounting task (preference for the small, immediate reward over the large, delayed reward). Although this runs counter to some theoretical and experimental data suggesting similarities between the influence of delay and probability on reward value (see Hayden and Platt, 2007
; Yi et al., 2006
), other findings suggest that integration of delays with reward value requires a different set of neural substrates than probability assessment (Cardinal, 2006
; Floresco et al., 2008
; Kobayashi and Schultz, 2008
; Mobini et al., 2000
; Schultz et al., 2008
). It could be argued that the correlation between performance in the risky decision-making and probability discounting tasks was a result of response perseveration, as both tasks used the same levers to produce the small and large rewards. However, rats were tested in the delay discounting task after the risky decision-making task, and demonstrated a sizable shift in behavior. Rats then shifted their behavior again when tested in the probability discounting task. Were perseveration the only explanation for the similarity between the risky decision-making and probability discounting tasks, the performance curves would be expected to follow similar trends for all three tasks.
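The formal parallel between delay and probability discounting discussed above is often expressed with two hyperbolic equations from the discounting literature: a Mazur-style delay discounting function and an odds-against form for probability discounting. The sketch below illustrates that shared form; the parameter values are arbitrary and are not estimates from the present study:

```python
def delay_discounted_value(amount, delay, k):
    """Hyperbolic delay discounting (Mazur-style): V = A / (1 + k*D),
    where D is the delay and k is a subject-specific discount rate."""
    return amount / (1 + k * delay)

def probability_discounted_value(amount, p, h):
    """Probability discounting in odds-against form: V = A / (1 + h*theta),
    where theta = (1 - p) / p is the odds against reward delivery and h is
    a subject-specific discount rate."""
    theta = (1 - p) / p
    return amount / (1 + h * theta)

# Example with arbitrary parameters: a reward of 4 pellets delayed by 10 s
# (k = 0.2), versus the same reward delivered with probability 0.25 (h = 1).
v_delay = delay_discounted_value(4.0, delay=10, k=0.2)      # 4 / (1 + 2)
v_prob = probability_discounted_value(4.0, p=0.25, h=1.0)   # 4 / (1 + 3)
```

Because the two equations share the same hyperbolic form (with odds-against playing the role of delay), delay and probability discounting are often treated as related processes; the behavioral dissociation reported here is consistent instead with distinct underlying substrates.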
Systemic amphetamine administration produced a dose-dependent increase in risk aversion, shifting rats’ preference toward the “safe” reward. It is possible that this shift in behavior, which led to an overall reduction in food consumption, was a result of amphetamine-induced suppression of food intake (Wellman et al., 2008
). However, neither 1- nor 24-hour periods of free feeding prior to testing had an effect on reward choice, although 24-hour free-feeding did increase the number of trials omitted (an effect that was also observed under amphetamine). Additionally, when acute amphetamine was tested in the delay discounting task, reward preference was shifted in the opposite direction, toward greater choice of the large, delayed reward, resulting in greater food consumption (Simon et al., 2007a
). Thus, it seems unlikely that amphetamine altered reward choice simply by altering hunger levels or food motivation. Another possibility is that the increased preference for the small, safe reward induced by amphetamine was a result of hypersensitivity to footshock. This explanation seems unlikely for two reasons: first, amphetamine has been characterized as an analgesic agent (Connor et al., 2000
; Drago et al., 1984
). If pain sensitivity were indeed the critical mediator of reward selection in this task, rats given amphetamine would be expected to find the shock less aversive and, owing to a higher pain threshold, shift their preference toward the large, risky reward. Second, amphetamine did not alter locomotion during the footshock, which can be used as a behavioral marker for pain/shock sensitivity (Chhatwal et al., 2004).
Interestingly, results similar to those found here with amphetamine have been obtained in human subjects with various psychopathological disorders. Children with ADHD and patients with fronto-temporal dementia show reduced risky choices in the Cambridge gambling task when treated with methylphenidate, a monoamine reuptake inhibitor with effects similar to amphetamine (DeVito et al., 2008
; Rahman et al., 2005
). As methylphenidate is thought primarily to affect decision-making through actions on prefrontal cortex (Berridge et al., 2006
), a structure that has been implicated in risky decision-making (Bechara et al., 2000
; Clark et al., 2008
; St. Onge and Floresco, 2008
), it is possible that alterations in prefrontal cortex activity are responsible for the changes in risk-taking behavior observed after administration of amphetamine (in rats) or methylphenidate (in humans).
The amphetamine-induced decrease in risk-taking behavior observed in this study contrasts with the increased risk-taking behavior in rats tested in a probability discounting task observed by St. Onge & Floresco (2009)
. Although both tasks involve assessment of probabilities (indeed, we observed that reward choice was correlated between these two tasks), it is possible that amphetamine affects these types of decision-making in different ways. The risky decision-making task utilized in the current study used probabilities of punishment rather than reward omission as the discounting factor associated with the large reward. The difference in amphetamine’s effects may be a result of dopaminergic mediation of aversive states induced by expectation of footshock. The same mesolimbic dopaminergic structures implicated in reward (such as nucleus accumbens and ventral tegmental area) also appear to be involved with emotional reactions to aversive stimuli (Carlezon and Thomas, 2009
; Liu et al., 2008
; Setlow et al., 2003
). Thus, it is possible that amphetamine-induced enhancements in dopamine transmission increase the ability of aversive stimuli to control behavior (rather than solely enhancing the influence of rewarding stimuli), which could explain the amphetamine-induced shift in reward choice away from the large, risky reward. This explanation is consistent with previous findings showing that acute amphetamine administration, at doses similar to those used here, increased the degree to which rats avoided making a response that produced an aversive conditioned stimulus previously associated with footshock (i.e., amphetamine increased control over responding by the aversive conditioned stimulus) (Killcross et al., 1997).
Somewhat surprisingly, cocaine administration did not affect risky decision-making in the same manner as amphetamine. Subjects given cocaine at relatively high doses, although not high enough to confound performance with excessive stereotypy (Wellman et al., 2002
), no longer demonstrated a shift in reward choice with increasing risk of punishment. This may be a result of a cocaine-induced enhancement in response perseveration (i.e., an inability to shift choice from the large reward to the smaller reward across the course of the session). Indeed, enhancements in perseverative behavior have been observed in human cocaine but not amphetamine abusers (Ersche et al., 2008
). Interestingly, the dissociation between the effects of amphetamine and cocaine may be due in part to cocaine’s relatively higher affinity for the serotonin (5-HT) transporter (White and Kalivas, 1998
). It has been suggested that 5-HT signaling may be critically involved in prediction of punishment (Daw et al., 2002
). As acute depletion of the 5-HT precursor tryptophan enhances predictions of punishment in human subjects (Cools et al., 2008
), it is possible that enhancements in 5-HT neurotransmission by cocaine might impair such predictions, resulting in apparent insensitivity to risk of punishment.
Another possibility is that the previous exposure to amphetamine influenced the subjects’ response to subsequent acute cocaine administration. A previous regimen of chronic cocaine administration can produce tolerance to cocaine’s acute effects on decision-making (Winstanley et al., 2007
), although chronic amphetamine fails to influence the acute effects of amphetamine in a similar manner (Stanis et al., 2008
). While this possibility cannot be entirely ruled out, it seems unlikely that behavioral tolerance would manifest given the short amphetamine regimen administered to the subjects (three injections of ≤ 1.5 mg/kg across six days), as tolerance to the effects of psychostimulants on cognition has only been demonstrated with considerably higher doses and much longer regimens (Dalley et al., 2005
; Simon et al., 2007a
; Winstanley et al., 2007).
An interesting aspect of performance during this experiment is the discrepancy in reward choice between cocaine- and saline-exposed trials during the first block (0% risk). While there were no statistically significant differences between treatments, the lowest dose of cocaine caused a near-significant reduction in selection of the large reward during this block. This maladaptive shift in decision-making could be a result of an impaired ability to discriminate between the response levers, perhaps due to the anxiogenic properties of acute cocaine (Goeders, 1997).
Elevated risk-taking is characteristic of many psychopathological disorders, and can lead to persisting financial, social, and medical problems. A better understanding of the behavioral and neural substrates underlying risky decision-making will allow more efficacious treatment of patients affected adversely by excessive risk-taking. The risky decision-making task described here offers a novel method of assessing the role of punishment risk in decision-making. Given the large between-subjects variability and high test-retest reliability, this task may have great utility as a model of human risk-taking behavior, and for further investigation of its neurobiological substrates.