Cereb Cortex. 2009 May; 19(5): 1019–1027.
Published online 2008 October 8. doi: 10.1093/cercor/bhn147
PMCID: PMC2665154

Functional Dissociations of Risk and Reward Processing in the Medial Prefrontal Cortex

Abstract

Making a risky decision is a complex process that involves evaluation of both the value of the options and the associated risk level. Yet the neural processes underlying these evaluations have not so far been clearly identified. Using functional magnetic resonance imaging and a task that simulates risky decisions, we found that the dorsal region of the medial prefrontal cortex (MPFC) was activated whenever a risky decision was made, but the degree of this activity across subjects was negatively correlated with their risk preference. In contrast, the ventral MPFC was parametrically modulated by the received gain/loss, and the activation in this region was positively correlated with an individual's risk preference. These results extend existing neurological evidence by showing that the dorsal and ventral MPFC convey different decision signals (i.e., aversion to uncertainty vs. approach to rewarding outcomes), with the relative strengths of these signals determining behavioral decisions involving risk and uncertainty.

Keywords: decision making, fMRI, neuroeconomics, reward, risk

Introduction

For decisions under uncertainty, such as whether or not to buy a lottery ticket or insurance, each option yields various possible outcomes (i.e., values) with different probabilities (i.e., risks). Successful risky decision making, defined as choosing the risky option when reward contingencies favor risk and choosing the riskless option when reward contingencies favor playing it safe, relies on evaluation of both the values of the options and the associated risk levels. Together they determine whether the risk is taken or avoided. Although the medial prefrontal cortex (including Brodmann Area [BA] 10, BA 32, and BA 25) has been identified as one critical structure in a neural system subserving risky decision making (Damasio 1994; Glimcher and Rustichini 2004; Bechara and Damasio 2005), the neural processes subserving these motivational tendencies in decision making have not been identified. Specifically, although medial prefrontal cortex (MPFC) lesions lead to overly risky and disadvantageous behaviors (Bechara et al. 2000; Fellows and Farah 2005, 2007; Weller et al. 2007), the lesion studies conducted to date have not permitted the separation of the 2 putative processes that support the evaluation of risk versus the evaluation of outcomes. Therefore, the primary aim of this study is to use a functional magnetic resonance imaging (fMRI) approach to elucidate the neural processes subserving the fear of uncertainty versus the lure of reward in risky decision making.

Neuroeconomic studies have mainly focused on the role of reward processing in decision making under risk. The mesolimbic dopaminergic (DA) system, including the ventromedial prefrontal cortex and ventral striatum/nucleus accumbens (NAcc), is critically involved in the computation of the value of outcomes (Delgado et al. 2000; Elliott et al. 2000, 2003; Rolls 2000; O'Doherty et al. 2001, 2003; Spiro 2001; Ernst et al. 2004; Kable and Glimcher 2007; Liu et al. 2007; Tom et al. 2007). This system is tuned up with increasing gain and tuned down (with a much steeper slope) with increasing punishment (Tom et al. 2007). Patients with ventromedial prefrontal cortex (VMPFC) lesions are less consistent in their choices in very simple preference judgment tasks (Fellows and Farah 2007), as are individuals with substance abuse problems, who show abnormal sensitivity to reward and elevated risk-seeking behavior in comparison to healthy controls (Bechara et al. 2002; Bechara 2005; Tanabe et al. 2007).

Despite its significant explanatory power, a single reward processing mechanism has not led to a satisfactory understanding of the decision impairments observed in some patients with VMPFC lesions who display intact processing of reward levels (Bechara et al. 2002; Bechara 2005; Tanabe et al. 2007). This points to the possibility that other factors, such as risk sensitivity, might also be important in understanding individuals’ risky decision making (Holt and Laury 2002; Fiorillo et al. 2003, 2005; Hsu et al. 2005; McCoy and Platt 2005; Huettel 2006; Huettel et al. 2006; Preuschoff et al. 2006; Tobler et al. 2007). The midbrain dopamine neurons (Fiorillo et al. 2003, 2005), ventral striatum (Preuschoff et al. 2006), posterior cingulate cortex (McCoy and Platt 2005), and the parietal (Huettel et al. 2006) and lateral orbitofrontal cortices (Hsu et al. 2005; Tobler et al. 2007) have all been found to be involved in the processing of risk. Building on previous lesion studies with MPFC patients (Bechara et al. 2000; Fellows and Farah 2005, 2007; Weller et al. 2007), the present study aims at obtaining a better understanding of the role of the MPFC in risky decision making in healthy adults. To do this, we used fMRI and a novel risky decision task (i.e., the Cups task) that has been shown to detect the risky decision-making impairments that MPFC patients exhibit (Weller et al. 2007).

Materials and Methods

Subjects

Thirteen healthy adults participated in this study (8 males, 23.6 ± 6 years of age, ranging from 18 to 39). All subjects had normal or corrected-to-normal vision. They were free of neurological or psychiatric history and gave informed consent to the experimental procedure, which was approved by the University of Southern California Institutional Review Board.

The Cups Task

To assess the neural mechanisms of risky decisions, we used a computerized version of the Cups task (Weller et al. 2007). The Cups task includes a Gain domain and a Loss domain. Subjects were to win as much money as possible in the Gain domain and to lose as little money as possible in the Loss domain. On both Gain domain and Loss domain trials, subjects were required to choose between a risky option and a safe option. The safe option was to win or lose $1 for sure, whereas the risky option offered a probability (0.20, 0.33, or 0.50) of a larger win ($2, $3, or $5), and otherwise no win, in the Gain domain, and a probability of a larger loss ($2, $3, or $5), and otherwise no loss, in the Loss domain. Within each domain, the probability and outcome magnitude of the risky option were manipulated such that some combinations create equal expected value (EQEV) for the risky and safe options: 0.20 × 5, 0.33 × 3, and 0.50 × 2 on both gain and loss trials; these trials provide an ideal measure of participants’ risk preference. Some combinations are slightly risk advantageous (RA), meaning that the expected value (EV) is more favorable for the risky option than for the safe (riskless) option: 0.33 × 5 and 0.50 × 3 in the Gain domain; 0.20 × 3 and 0.33 × 2 in the Loss domain. Some combinations are slightly risk disadvantageous (RD), meaning that the EV is more favorable for the safe option: 0.20 × 3 and 0.33 × 2 in the Gain domain; 0.33 × 5 and 0.50 × 3 in the Loss domain. The 2 combinations with the biggest differences in EV between the risky and safe options (i.e., 0.20 × 2 and 0.50 × 5), originally included in the patient study, were excluded in the present study because pilot data on healthy young adults indicated that these trial types exhibited no sensitivity to individuals’ attitude toward risk.
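
The following sketch (ours, not the authors' task or analysis code) makes this classification concrete: it enumerates the 7 risky-option combinations used per domain and labels each as EQEV, RA, or RD relative to the sure $1 option.

    # Illustrative sketch: classify Cups task risky-option combinations
    # relative to the $1 safe option. Probabilities follow the number of cups
    # (1/5, 1/3, 1/2); exact fractions avoid rounding issues.
    from fractions import Fraction

    SAFE_EV = Fraction(1)  # the safe option wins or loses $1 for sure

    def classify(prob, mag, domain):
        ev = prob * mag
        if ev == SAFE_EV:
            return "EQEV"                       # equal expected value
        if domain == "Gain":
            return "RA" if ev > SAFE_EV else "RD"
        # Loss domain: a larger expected loss makes the risky option disadvantageous.
        return "RD" if ev > SAFE_EV else "RA"

    # The 7 combinations used per domain (1/5 x $2 and 1/2 x $5 were excluded).
    combos = [(Fraction(1, 5), 3), (Fraction(1, 5), 5),
              (Fraction(1, 3), 2), (Fraction(1, 3), 3), (Fraction(1, 3), 5),
              (Fraction(1, 2), 2), (Fraction(1, 2), 3)]

    for domain in ("Gain", "Loss"):
        for prob, mag in combos:
            print(f"{domain}  p={float(prob):.2f}  ${mag}  ->  {classify(prob, mag, domain)}")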

On each trial, an array of 2, 3, or 5 cups is shown on one side of the screen, with the possible gain (+) or loss (−) shown on top (Fig. 1). This array is identified as the risky side, where selection of one cup leads to the designated number of dollars being gained (or lost) and selection of any other cup leads to no gain (or loss). The other side is identified as the certain side, where only one cup and $1 are shown. To make the task easier to implement in the scanner, subjects were not asked to choose a specific cup within the risky option, as they were in the previous study. Instead, subjects were asked only to decide whether or not to take the risk, by pressing the left or right button. After participants made their choice, the gamble was resolved immediately, allowing them to experience the consequence of the risky or safe choice. The cumulative consequences over trials determined their final monetary payoff. Participants were told that their actual earnings depended on their gains and losses in the Cups task and thus were motivated to maximize earnings.

Figure 1.
The Cups task and the behavioral data. The Cups includes a Gain domain (A) and a Loss domain (B). Each trial consists of a safe option with $1 in one cup, and a risky option with a probability of 1/2–1/5 (as determined by the number of cups) of ...

MRI Procedure

A mixed design was used in this fMRI study. In each run, 2 blocks of Gain domain trials and 2 blocks of Loss domain trials were pseudo-randomly ordered and counterbalanced across runs. Within each block, trials from the 7 combinations of probability and outcome and the null events (i.e., fixation, mean 3 s, range 0.5 to 5 s) were presented in a designated order, specified using OPTSEQ (Dale 1999) to achieve better design efficiency. On each trial, the 2 options were shown simultaneously on the screen for 2.5 s, during which subjects were required to make a choice. Once a choice was made, feedback was presented for 0.5 s to indicate the amount of money they had won or lost. A happy or a frowning face was also presented to indicate a win or a loss, respectively. Subjects were asked to respond quickly (i.e., within 2.5 s) or they would receive the worst possible outcome on that trial. Each block contained 35 trials, consisting of 5 repetitions of each of the 7 combinations. In total, each run included 140 trials and lasted 572 s. Accumulated gains were shown only at the end of each run. Two functional runs were collected for each subject. Subjects’ final payment was determined by the combined results from both runs.
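
As a rough illustration of this trial structure, the sketch below (an assumption-laden stand-in; the real ordering and null-event placement were generated with OPTSEQ) builds one run of 4 blocks × 35 trials and checks the trial count.

    # Minimal sketch of the block/trial bookkeeping described above; the actual
    # trial order and null-event placement were generated with OPTSEQ, not here.
    import random

    COMBOS = [(0.20, 3), (0.20, 5), (0.33, 2), (0.33, 3),
              (0.33, 5), (0.50, 2), (0.50, 3)]        # risky-option (prob, $) pairs
    REPS_PER_BLOCK = 5                                # 5 x 7 = 35 trials per block
    BLOCKS_PER_RUN = ["Gain", "Gain", "Loss", "Loss"]

    def build_run(seed=0):
        rng = random.Random(seed)
        blocks = BLOCKS_PER_RUN[:]
        rng.shuffle(blocks)                           # pseudo-random block order
        run = []
        for domain in blocks:
            trials = [(domain, p, m)
                      for (p, m) in COMBOS for _ in range(REPS_PER_BLOCK)]
            rng.shuffle(trials)                       # placeholder for the OPTSEQ order
            run.extend(trials)
        return run

    assert len(build_run()) == 140                    # 4 blocks x 35 trials per run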

Subjects lay supine on the scanner bed and viewed visual stimuli back-projected onto a screen through a mirror attached to the head coil. Foam pads were used to minimize head motion. At the beginning of each block, an instruction indicating whether it was a Gain or a Loss block was shown to the subjects for 3 s. Stimulus presentation and the timing of all stimuli and response events were controlled using Matlab (Mathworks, Natick, MA) and Psychtoolbox (www.psychtoolbox.org) on an IBM-compatible PC. Participants’ responses were collected online using an MRI-compatible button box.

MRI Data Acquisition

fMRI scanning was conducted on a 3T Siemens MAGNETOM Tim/Trio scanner in the Dana and David Dornsife Cognitive Neuroscience Imaging Center at the University of Southern California. Functional scanning used a z-shim gradient echo Echo Planar Imaging (EPI) sequence with prospective acquisition correction (PACE). This sequence is designed to reduce signal loss in the prefrontal and orbitofrontal areas, and the PACE option helps reduce the impact of head motion during data acquisition. The parameters were: time repetition (TR) = 2000 ms; time echo (TE) = 25 ms; flip angle = 90°; 64 × 64 matrix with an in-plane resolution of 3 × 3 mm². Thirty-one 3.5-mm axial slices were used to cover the whole cerebrum and most of the cerebellum with no gap. The slices were tilted about 30° clockwise relative to the anterior commissure–posterior commissure (AC–PC) plane to obtain better signal in the orbitofrontal cortex. The anatomical T1-weighted structural scan was acquired using a magnetization-prepared rapid gradient echo (MP-RAGE) sequence (inversion time [TI] = 800 ms; TR = 2530 ms; TE = 3.1 ms; flip angle = 10°; 208 sagittal slices; 256 × 256 matrix with a spatial resolution of 1 × 1 × 1 mm³).

Image Preprocessing and Statistical Analysis

Image preprocessing and statistical analysis were carried out using tools from the FMRIB software library (www.fmrib.ox.ac.uk/fsl). The first 4 volumes before the task were automatically discarded by the scanner to allow for T1 equilibrium. The remaining images were then realigned to compensate for small residual head movements that were not captured by the PACE sequence (Jenkinson and Smith 2001). Translational movement parameters never exceeded 1 voxel in any direction for any subject or session. All images were denoised using MELODIC independent components analysis within FSL (Tohka et al. 2008). Data were spatially smoothed using a 5-mm full-width-half-maximum Gaussian kernel. The data were filtered in the temporal domain using a nonlinear highpass filter with a 280-s cut-off. A 2-step registration procedure was used whereby EPI images were first registered to the MPRAGE structural image, and then transformed into the standard (Montreal Neurological Institute [MNI]) space, using a 12-parameter affine transformation (Jenkinson and Smith 2001). Statistical analyses were performed in the native image space, with the statistical maps normalized to the standard space prior to higher-level analysis.

A general linear model (GLM) was used to analyze the contributions of the different experimental factors to the blood oxygenation level–dependent (BOLD) responses, using both a parametric analysis and a category analysis. The parametric analysis was used to quantitatively describe the relationship between brain activation and decision parameters. The following 5 parameters were generated for each trial and entered into the GLM: the magnitude of the possible outcome of the risky choice (Mag), the probability (Prob, as determined by the number of cups), the relative EV of the risky option, the experienced reward, and the experienced risk. The relative EV of the risky option was calculated by subtracting the EV of the safe choice ($1 or −$1 for the Gain and Loss domains, respectively) from that of the risky choice. Following the existing literature (Holt and Laury 2002; Preuschoff et al. 2006; Tobler et al. 2007), risk in the present study is defined as the variance of the outcome, calculated using the following formula: Risk = (1 − Prob) × (0 − EV)² + Prob × (Mag − EV)². The experienced risk reflects the outcome uncertainty that is generated by making a risky choice: if subjects chose the risky option, they experienced this risk; if they chose not to risk, they experienced no risk. To model experienced risk, we multiplied the decision risk parameter by the subject's choice (coded 1 for the risky choice and 0 for the sure-thing or riskless choice). Because of the factorial design and the nature of the gambling task, the 2 parameters associated with experienced risk and experienced reward were largely orthogonal, and the GLM allowed us to examine their unique contributions.
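
A minimal sketch of these trial-wise regressors follows; it assumes signed magnitudes (positive for the Gain domain, negative for the Loss domain), and the function and variable names are ours, not taken from the authors' analysis code.

    # Illustrative computation of the 5 trial-wise parametric regressors described
    # above; the sign convention (negative magnitudes for losses) is an assumption.

    def trial_regressors(prob, mag, chose_risky, outcome):
        """prob, mag: risky option; chose_risky: 1 (risky) or 0 (safe);
        outcome: amount actually received on the trial (can be negative)."""
        ev_risky = prob * mag                     # EV of the risky option
        ev_safe = 1.0 if mag > 0 else -1.0        # sure $1 gain or sure $1 loss
        relative_ev = ev_risky - ev_safe          # relative EV of the risky option
        # Risk defined as the outcome variance of the risky option:
        risk = (1 - prob) * (0 - ev_risky) ** 2 + prob * (mag - ev_risky) ** 2
        experienced_risk = risk * chose_risky     # zero whenever the safe option was chosen
        experienced_reward = outcome              # the gain/loss actually experienced
        return dict(Mag=mag, Prob=prob, RelEV=relative_ev,
                    ExpReward=experienced_reward, ExpRisk=experienced_risk)

    # Example: a risky Gain-domain choice with a 1/3 chance of $3 that paid off.
    print(trial_regressors(prob=1/3, mag=3, chose_risky=1, outcome=3))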

For the category analysis, the following events were modeled based on participants’ responses: RiskyLoss_Gain-domain, RiskyWin_Gain-domain, NoRisk_Gain-domain, RiskyLoss_Loss-domain, RiskyWin_Loss-domain, and NoRisk_Loss-domain, plus nuisance events consisting of the 4 instructions. It should be noted that although subjects did not actually lose money in the RiskyLoss_Gain-domain condition, the outcome was treated as a loss relative to the sure win of $1. Similarly, although subjects did not actually win money in the RiskyWin_Loss-domain condition, the outcome was treated as a win relative to the sure loss of $1 (Kim et al. 2006). For both analyses, each regressor was convolved with a canonical (double-gamma) hemodynamic response function. Temporal derivatives were included as covariates of no interest to improve statistical sensitivity. Null events were not explicitly modeled and therefore constituted an implicit baseline.
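
A hypothetical labelling helper (ours, not the authors') that makes this 6-way event coding explicit:

    # Hypothetical helper mirroring the six event categories described above.
    def event_label(domain, chose_risky, outcome):
        """domain: 'Gain' or 'Loss'; chose_risky: True/False;
        outcome: amount received on the trial (0 means nothing won or lost)."""
        if not chose_risky:
            return f"NoRisk_{domain}-domain"
        if domain == "Gain":
            # A risky Gain-domain trial that pays $0 counts as a loss
            # relative to the sure $1 win.
            return "RiskyWin_Gain-domain" if outcome > 0 else "RiskyLoss_Gain-domain"
        # A risky Loss-domain trial that loses $0 counts as a win
        # relative to the sure $1 loss.
        return "RiskyLoss_Loss-domain" if outcome < 0 else "RiskyWin_Loss-domain"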

A higher-level analysis combined contrast images across runs within each subject using a fixed-effects model. These subject-level contrasts were then entered into a random-effects group analysis using FMRIB's Local Analysis of Mixed Effects (FLAME), stage 1 only (Beckmann et al. 2003; Woolrich et al. 2004). Group images were thresholded using cluster detection statistics, with a height threshold of z > 2.3 and a cluster probability of P < 0.05, corrected for whole-brain multiple comparisons using Gaussian Random Field Theory (GRFT).

To examine correlations between neural activity in the medial frontal regions and behavioral risk preference across participants, a voxelwise correlation was conducted for each of the 2 major contrasts. An uncorrected threshold of P < 0.001 was used for this analysis. The BOLD response amplitudes of the dorsomedial prefrontal cortex (DMPFC) and the VMPFC were then entered into a single regression model: Risk Preference = −a × DMPFC + b × VMPFC + e (a > 0, b > 0).
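
A sketch of that across-subject regression using ordinary least squares; dmpfc, vmpfc, and risk_pref would hold the 13 per-subject values, which are not reproduced here, so this is a template rather than the reported fit.

    # Ordinary-least-squares sketch of: Risk Preference = -a*DMPFC + b*VMPFC + e
    import numpy as np

    def fit_risk_preference(dmpfc, vmpfc, risk_pref):
        """dmpfc, vmpfc: per-subject BOLD response amplitudes (1-D arrays);
        risk_pref: per-subject behavioral risk preference (1-D array)."""
        X = np.column_stack([np.ones_like(dmpfc), dmpfc, vmpfc])  # intercept + 2 predictors
        beta, *_ = np.linalg.lstsq(X, risk_pref, rcond=None)
        pred = X @ beta
        r2 = 1 - np.sum((risk_pref - pred) ** 2) / np.sum((risk_pref - risk_pref.mean()) ** 2)
        # Expect beta[1] < 0 (risk signal in DMPFC) and beta[2] > 0 (reward signal in VMPFC).
        return beta, r2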

Region-of-Interest Analyses

To examine whether regions identified in each contrast were also modulated by other factors, regions of interest (ROIs) were created from clusters of voxels with significant activation in the voxelwise analyses. Using these regions of interest, ROI analyses were performed by extracting parameter estimates (betas) of each event type from the fitted model and averaging across all voxels in the cluster for each subject. Percent signal changes were calculated using the following formula: [contrast image/(mean of run)] × ppheight × 100%, where ppheight is the peak height of the hemodynamic response versus the baseline level of activity (Mumford 2007). Correlations between behavioral and ROI data were based on Pearson product–moment correlations.
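
The percent-signal-change formula above can be sketched as follows (after Mumford 2007); the array names are placeholders for images and values extracted from the fitted FSL model, not the authors' code.

    # Sketch of percent signal change: [contrast image / (mean of run)] x ppheight x 100%
    import numpy as np

    def roi_percent_signal_change(cope, mean_func, ppheight, roi_mask):
        """cope: contrast-of-parameter-estimates image; mean_func: run-mean image;
        ppheight: peak height of the regressor's fitted response vs. baseline;
        roi_mask: boolean array marking the cluster of interest."""
        psc = cope / mean_func * ppheight * 100.0   # voxelwise percent signal change
        return psc[roi_mask].mean()                 # average over all voxels in the cluster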

Results

Risk Preference Varied Across Participants

Overall, the relatively young, healthy subjects in the present study were appropriately sensitive to changes in EV (rate of risky choices: RA > EQEV > RD, F2,24 = 46.97, P < 0.001) (Supplementary Fig. 1A). Although previous studies have applied economic models to estimate risk preference parameters (e.g., value function parameters) (Hsu et al. 2005, 2006; Tom et al. 2007), in the present study the rate of risky choices under the EQEV condition provides a sensitive measure of an individual's risk preference when the environment favors neither risk seeking nor risk avoiding. Using this measure, rates of risky choice were found to vary significantly across individual participants within both the Gain domain (range 0.22 to 0.88) and the Loss domain (range 0.23 to 0.92) (Fig. 1C). Responses were then combined across domains to represent an individual's overall risk preference, which was correlated with the neural responses in the MPFC. It should be noted that the risk preferences based on EQEV trials were highly correlated with those calculated over all three trial types (RA, EQEV, and RD) (r = 0.95).
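
For concreteness, a toy version of this behavioral index is given below (our own illustration; whether domains were averaged or pooled when "combined" is an assumption here).

    # Toy illustration of the risk-preference index: the proportion of risky
    # choices on EQEV trials, combined (here, averaged) across the two domains.
    def risk_preference(eqev_trials):
        """eqev_trials: iterable of (domain, chose_risky) pairs, EQEV trials only."""
        by_domain = {"Gain": [], "Loss": []}
        for domain, chose_risky in eqev_trials:
            by_domain[domain].append(int(chose_risky))
        rates = {d: sum(v) / len(v) for d, v in by_domain.items()}
        return (rates["Gain"] + rates["Loss"]) / 2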

Although risk preference did not vary significantly across domains, interesting differences were revealed in the response time data. Responses took longer for RD trials than for RA trials (F2,24 = 11.49, P < 0.001), and longer for trials in the Loss domain than for trials in the Gain domain (F1,12 = 9.70, P = 0.009) (Supplementary Fig. 1A). There was a marginally significant EV by task domain interaction (F2,24 = 2.63, P = 0.093), reflecting significant Gain domain vs. Loss domain differences under the RD (P = 0.004) and EQEV conditions (P = 0.013), but not under the RA condition (P = 0.27). These results show that additional time was required to overcome the tendency to take a risk (e.g., when making decisions on RD trials), especially in the domain of losses.

Dorsal and Ventral MPFC were Differently Modulated by Experienced Risk Level and Outcome Magnitude

The parametric analysis revealed that the dorsal part of the MPFC (MNI coordinates [x, y, z]: 4, 48, 26, Z = 3.59), along with the thalamus (MNI: −2, −2, 4, Z = 3.29), was positively modulated by the experienced risk (Fig. 2 and Supplementary Table 1). The right parietal lobule (supramarginal gyrus and inferior parietal lobule) (MNI: 54, −42, 34, Z = 3.2) also showed this pattern, consistent with previous reports (Huettel et al. 2006; Preuschoff et al. 2006). In contrast, the ventral part of the MPFC (MNI: 0, 58, −6, Z = 3.74) was significantly modulated by the magnitude of the experienced outcome (Fig. 3 and Supplementary Table 2), consistent with its proposed role in processing more abstract rewards (i.e., money) (O'Doherty et al. 2001, 2003; Bechara and Damasio 2005; Tom et al. 2007). Also consistent with previous studies, the posterior cingulate cortex (McCoy et al. 2003) and the NAcc bilaterally (Liu et al. 2007; Tom et al. 2007) were modulated by the experienced outcome. The left orbitofrontal cortex (OFC) (MNI: −34, −56, −14, Z = 3.65) also showed this pattern, consistent with the hypothesis that positive consequences are lateralized to the left side of the ventral PFC (Bechara and Damasio 2005) (see Supplementary Results for a direct left vs. right OFC comparison).

Figure 2.
Brain regions sensitive to experienced risk. The risk is determined by the variance of outcome relative to the EV (A), and the participants’ choice (i.e., experienced risk was zero for safe choices). The dorsal MPFC (MNI: 4, 48, 26, Z = 3.59), ...
Figure 3.
Brain regions sensitive to experienced outcome. Regions (including ventral MPFC [MNI: 0, 58, −6, Z = 3.74], PCC [MNI: 0, −36, −26, Z = 3.66], left OFC [MNI: −34, −56, −14, Z = 3.65], left [MNI: −20, ...

In a parallel parametric analysis, we replaced the experienced risk regressor with the decision risk regressor (i.e., the variance of the outcomes without considering subjects’ choice). This analysis failed to reveal significant dorsal MPFC activation, suggesting that dorsal MPFC activation reflected only the experienced risk and not the decision risk.

To quantify the brain activation for each type of choice and outcome, we sorted the trials into 6 different categories according to the task domain (Gain vs. Loss), subjects’ choice (risky vs. safe) and the outcome (win vs. loss) (see Methods). Contrasting all the risky choices with the safe choices replicated the result from the parametric analysis in the dorsal MPFC, showing significantly stronger activation for risky choices than for safe choices (Fig. 4A; also see Supplementary Fig. 2 and Supplementary Table 3 for a complete list of foci in this contrast). Again replicating the parametric results, contrasting all the win trials with all the loss trials (here the safe choices in the Gain and Loss domain were treated as win and loss trials, respectively) revealed stronger activation in the ventral MPFC, which extended to the ventral part of the anterior cingulate cortex (ACC) (Fig. 4A; also see Supplementary Fig. 3 and Supplementary Table 4 for a complete list of foci in this contrast).

Figure 4.
fMRI results. (A) Risky > Safe choices (blue color), reflecting participants’ response to risk (i.e., “fear” of uncertainty), and win > loss outcome (red color), reflecting participants’ response to reward, ...

The functional dissociation between the DMPFC and the VMPFC was further confirmed by ROI analysis. A region by outcome by experienced risk 3-way repeated-measures ANOVA revealed a significant region by outcome interaction (F1,12 = 9.59, P = 0.009) and a region by experienced risk interaction (F1,12 = 6.10, P = 0.029). Further analysis indicated that the DMPFC was modulated only by experienced risk (F1,12 = 0.60, P = 0.007), but not by outcome valence (F1,12 = 0.001, P = 0.97) (Fig. 4B; also see Supplementary Fig. 4 for a similar pattern in the thalamus). In contrast, the VMPFC was modulated only by outcome valence (F1,12 = 23.61, P < 0.001), but not by risk (F1,12 = 0.107, P = 0.749) (Fig. 4C; also see Supplementary Fig. 5 for a similar pattern in the left lateral OFC, posterior cingulate cortex (PCC), and left superior/middle frontal gyrus). The interaction between risk and outcome was not significant for either region (P = 0.622 and 0.228, respectively).

DMPFC and VMPFC Activation Correlated with Individual's Risk Preference

The results reported in the previous section suggest that the DMPFC conveys a risk signal whereas the VMPFC conveys a reward signal. We next examined the correlation between brain responses and individuals’ risk preference. Specifically, if the DMPFC and VMPFC signals are truly associated with individuals’ risk preference, then participants with a stronger risk preference should show a weaker DMPFC response to risky (relative to riskless) choices, and a stronger VMPFC response to wins (relative to losses), than participants with a weaker risk preference. Both predictions were confirmed by the correlational analysis. Individuals’ risk preference was negatively correlated with DMPFC activation in the risky vs. safe choices contrast (MNI: 4, 58, 18, Z = 3.70) (Fig. 5A,B; also see Supplementary Table 5 for a complete list of foci in this analysis), whereas it was positively correlated with VMPFC activation in the win vs. loss contrast (MNI: −14, 50, −2, Z = 3.68) (Fig. 5C,D; also see Supplementary Table 6 for a complete list of foci in this analysis). Furthermore, in a regression analysis including both predictors (see Methods), both the risk-related response in the DMPFC (t = −3.93, P = 0.003) and the reward-related response in the VMPFC (t = 4.80, P = 0.001) made unique contributions to individuals’ risk preference, and the combination of the two provided an excellent prediction of individuals’ risk behaviors (r² = 0.907, P < 0.00001). These results provide compelling evidence that the dorsal and ventral MPFC carry different decision signals, both of which contribute to risky decision making and can account for individual differences in risk behavior.

Figure 5.
fMRI correlation of individual risk preference. Regions show significant negative correlations between risk preference and DMPFC activation (MNI: 4, 58, 18, Z = 3.6) in risky > safe choices (A), and positive correlation between risk preference ...

The NAcc was Modulated by Both Risk and Reward

In contrast to the MPFC, the NAcc has previously been shown to be sensitive to both risk and reward (Preuschoff et al. 2006). Consistent with this, ROI results indicated that the left NAcc (MNI: −10, 16, −6) and right NAcc (MNI: 8, 16, −4) both showed stronger activation for win trials than for loss trials (in both the Gain and Loss domains) (left NAcc: F1,12 = 40.6, P < 0.0001; right NAcc: F1,12 = 18.28, P < 0.001) and, to a lesser extent, stronger activation for risky choices than for safe choices (left NAcc: F1,12 = 12.7, P = 0.004; right NAcc: F1,12 = 10.07, P = 0.008) (Fig. 6B,C). Neither interaction was significant (P = 0.35 and 0.14, respectively). This pattern of NAcc activation contrasts with the adjacent ventral putamen, which was sensitive to reward but not risk (see Supplementary Results). The NAcc results are partially consistent with the prediction error (PE) interpretation of ventral striatum function (Schultz et al. 1997; Pagnoni et al. 2002; McClure et al. 2003; O'Doherty et al. 2004), in the sense that a positive PE (a risky choice followed by a win) elicited the strongest activation. Nevertheless, the PE interpretation cannot explain the stronger activation for a sure win than for a sure loss, nor can it explain the stronger activation for a risky choice followed by a loss (a negative PE) than for a sure loss, for which the PE interpretation would predict the opposite.

Figure 6.
fMRI results. (A) Win > loss outcome in the bilateral NAcc, shown on a coronal slice of the group mean structural image. All activations were thresholded using cluster detection statistics, with a height threshold of z > 2.3 and a cluster ...

Discussion

By combining a high-resolution functional imaging technique with a decision task that is relatively simple, yet fully capable of capturing the core decision deficits of patients with MPFC lesions, our study revealed a finer functional division of the subregions of the MPFC in risky decision making than was revealed in earlier studies. Specifically, our results are unique in showing that the dorsal part of the MPFC signals the risk, whereas the ventral part of the MPFC signals the value of an outcome, with differential tendencies to activate these 2 systems accounting for individual differences in how people make decisions involving risk and uncertainty.

Ventral MPFC and Reward Processing

Our results provide further functional delineation of the MPFC, with the dorsal and ventral parts of the MPFC carrying risk-related and reward-related decision signals, respectively. BA 10 comprises a significantly larger proportion of cerebral cortex in humans than in other species (Semendeferi et al. 2001). It can be further divided into the frontal pole (10p) and 2 other regions (10m and 10r) that occupy most of the VMPFC (Ongur et al. 2003). It has been proposed that information in the VMPFC is organized hierarchically along a concrete-to-abstract continuum, with the posterior ventral MPFC (BA 25) primarily processing basic, simple reinforcers such as smell, taste, and pain, and the anterior ventral MPFC (10m and 10r) processing more abstract rewards, such as monetary reward (Rolls 2000, 2004; Kringelbach and Rolls 2004; Bechara and Damasio 2005). Consistent with several previous studies (O'Doherty et al. 2001, 2003; Knutson et al. 2003, 2005; Kable and Glimcher 2007; Tom et al. 2007), our data show that the VMPFC (together with the bilateral NAcc) responds linearly to the magnitude of reward.

Despite the general consensus on reward processing, the neural correlates of negative reward (i.e., loss or punishment) are less clear. Several studies have suggested that the lateral OFC, the anterior insular cortex, and the amygdala are more activated when experiencing loss relative to gain (O'Doherty et al. 2001; Trepel et al. 2005; Knutson et al. 2007; Liu et al. 2007), whereas other studies suggest that the VMPFC and NAcc encode both wins and losses by increasing and decreasing activation within the same regions (Tom et al. 2007). Our results are more consistent with the latter finding, which also agrees with lesion evidence that VMPFC damage impairs both risky decisions to achieve gains and risky decisions to avoid losses (Weller et al. 2007). The left OFC was activated by gain but not by loss, which can be viewed as consistent with the notion that positive consequences are lateralized to the left regions of the PFC (Bechara and Damasio 2005). Additional studies are needed to examine whether these divergent results can be attributed to different stages of loss processing (e.g., decision or expectation vs. experience), to the availability of alternative outcomes, or perhaps to whether a loss signals a switch in behavior.

Dorsal MPFC and Risk Processing

In contrast to the VMPFC, the dorsal MPFC (10p) is a supramodal cortex with a high density of dendritic spines and a low density of cell bodies, making it particularly suitable for integrating information from different regions (i.e., sensory cortices and limbic structures) (Ramnani and Owen 2004; Bechara and Damasio 2005). The dorsal MPFC has been implicated in processing internal states (Christoff and Gabrieli 2000), in episodic memory retrieval (particularly in the right hemisphere) (Tulving 1983), and in prospective memory (Burgess et al. 2001; Ramnani and Owen 2004). Meta-analyses indicate that the DMPFC (along with the ACC) plays a general role in emotion processing, including appraisal/evaluation of emotion, emotional regulation, and emotion-driven decision making (Phan et al. 2002). According to the somatic marker hypothesis (Damasio 1994), the DMPFC is a structure that triggers somatic states from secondary inducers, which in our case is the perception/computation of the uncertainty of the outcome of the risky choice. These anatomical and functional characteristics make the DMPFC particularly well suited to respond to the uncertainty generated by making a risky choice. Consistent with this, our study revealed that the DMPFC showed a stronger response when participants were making risky choices than when they were making safe choices, and the degree of this response was modulated by the experienced risk level, that is, the uncertainty of the outcome.

The involvement of the DMPFC in risk processing is consistent with several other lines of evidence. First, dorsal MPFC activation in risk processing has been found in several previous studies using the Iowa Gambling Task (IGT) (Bolla et al. 2003; Fukui et al. 2005; Tanabe et al. 2007). For example, the DMPFC showed stronger activation when subjects were choosing from the risky decks than when they were choosing from the safe decks (Fukui et al. 2005). In a modified IGT, the choice to “play” (risky) elicited stronger activation in the dorsal MPFC than the choice to “pass” (safe) (Tanabe et al. 2007). Second, strong medial frontal gyrus activation, partially overlapping with the DMPFC, has also been found when comparing safe choices with risky choices (Matthews et al. 2004) and when participants were anticipating the results after a risky choice (Critchley et al. 2001). Finally, Hsu and colleagues manipulated the level of uncertainty in terms of ambiguity (when relevant decision information is missing) and risk (when relevant decision information is available), and found strong DMPFC (along with orbitofrontal and amygdala) activation for decisions under ambiguity relative to decisions under risk (Hsu et al. 2005).

Besides the DMPFC, the involvement of the NAcc and parietal lobule in response to experienced risk is also consistent with previous studies. For example, one study compared ambiguous vs. risky decision making and found that lateral prefrontal cortex activation (Ambiguity > Risk) was correlated with individuals’ ambiguity preference, whereas posterior parietal cortex activation (Risk > Ambiguity) was correlated with individuals’ risk preference (Huettel et al. 2006). Other studies quantitatively manipulated the risk level in terms of reward variance (Preuschoff et al. 2006; Tobler et al. 2007): one found that the NAcc was modulated by both the expected reward and the experienced risk (Preuschoff et al. 2006), whereas another found that the lateral orbitofrontal cortex is associated with risk processing (Tobler et al. 2007). Moreover, one study has shown that the insula is involved in risk processing and that its activation predicts a sure choice (i.e., a bond) over a risky choice (i.e., a stock) (Kuhnen and Knutson 2005).

Taken together, these studies demonstrate that decision making under uncertainty engages a complex neural system. It is less clear, however, why earlier studies failed to reveal DMPFC activation in risk processing. Notably, previous functional MRI studies have primarily focused on the decision stage, either by eliminating the online delivery of outcomes altogether or by introducing a long (sometimes jittered) delay between the decision and the delivery of the outcome. Although both manipulations allow researchers to isolate the BOLD responses associated with the decision stage, the psychological and neural effects of these manipulations on decision making are not well understood. In particular, it is unknown whether MPFC patients would still be impaired on these altered decision-making tasks, and whether the observed activations can be shown to be functionally necessary.

In order to provide convergent evidence from both fMRI and lesion patients, the present study resolved the gamble immediately after the subject's choice, in an effort to keep our fMRI paradigm as comparable as possible to that used in the lesion study (Weller et al. 2007). In our analysis, we used a parametric approach and focused on how brain responses were modulated by different choices (risky vs. safe) and by different outcomes (win vs. loss). Our results suggest that although the dorsal MPFC does not respond to the decision risk, its activation is significantly modulated by subjects’ choices and by the experience of risk (i.e., after making a risky choice). More importantly, the regression analysis revealed that the neural responses to experienced reward and experienced risk are not simply epiphenomena or by-products of decision making, but are indeed associated with an individual's risk behaviors. They thus provide important information for our understanding of realistic decision-making processes and of the sources of individual differences in these processes.

Despite its obvious merits in terms of ecological validity and the capacity to integrate the fMRI findings with existing neurological results, the present paradigm (with no jitter between decision and feedback) made it difficult to completely separate the neural responses associated with the decision from those associated with feedback. Further functional imaging and lesion studies are definitely required to understand more completely how neural responses at different stages of decision making contribute differently to individuals’ decisions under risk.

Neurofunctional Indicators of Individual Differences in Risk Preference

Our results also suggest that both ventral and dorsal MPFC activation are predictive of risk behaviors, although in opposite directions. A strong reward signal in the VMPFC could lead to risk-seeking behavior, whereas a strong response in the dorsal MPFC to risk is associated with fewer risky choices, suggesting that dorsal MPFC activation during risky behavior acts as a warning signal. Individuals with stronger DMPFC activation are more sensitive to risk, which would deter them from making risky choices. Critically, our regression analysis shows that sensitivity to risk and sensitivity to reward are relatively independent of each other and that each makes a unique contribution to individuals’ risk behaviors. Thus, a combination of both brain measures could provide a better account of individuals’ risk preferences than either one alone.

In sum, our results suggest that decision making under risk depends on the balance of 2 competing forces: one is the “fear” or “anxiety” of uncertainty, and the other is the “lure” of gain. Together they determine whether the risk is taken or avoided, whereas imbalances between these forces may lead to decisions that are overly guided by reward seeking or by risk aversion. Consistent with this, individuals with substance dependence problems often show abnormal VMPFC function, and one aspect of the risk-seeking behaviors observed in these patients is hypersensitivity to reward (Bechara et al. 2002). In contrast, patients with DMPFC lesions would be completely unaffected by the risk signal, resulting in overall risk-seeking behaviors. Interestingly, substance abusers and pathological gamblers also show reduced DMPFC activation to risk, which might further increase their risk-seeking behaviors (Bolla et al. 2003; Tanabe et al. 2007). Our study thus provides a useful theoretical and methodological framework for additional studies to further elucidate the mechanisms underlying impaired risky decision making in a wide range of populations, including developmental populations (at distinct stages such as adolescence or old age) and different clinical populations (e.g., drug abusers and patients with phobia, anxiety disorders, or brain lesions).

Conclusion

In studying the neural mechanisms of risky decision making, lesion and functional imaging studies have focused on the computation of the overall value of the risky option and have treated the MPFC as a single functional unit. The present study extends this work by showing that risk and reward outcome are processed separately in the dorsal and ventral MPFC, which contribute differently to risky behaviors. Our study has both methodological and theoretical implications for understanding the role of the MPFC in decision making. Methodologically, it emphasizes the combination of functional imaging techniques, ecologically valid decision tasks, and mathematical models in describing the functional anatomy of decision making. Theoretically, it suggests that risk aversion and reward seeking are 2 important decision forces that involve distinct cognitive and neural processes and that together determine individuals’ risk behaviors.

Supplementary Material

Supplementary material can be found at: http://www.cercor.oxfordjournals.org/

Funding

National Institute on Drug Abuse grants (DA11779, DA12487, and DA16708); and National Science Foundation grants (HD29891, IIS 04-42586, and SES 03-50984).

Acknowledgments

We wish to thank Dr Jiancheng Zhuang for help in fMRI data collection.

Conflict of Interest: None declared.

References

  • Bechara A. Decision making, impulse control and loss of willpower to resist drugs: a neurocognitive perspective. Nat Neurosci. 2005;8:1458–1463.
  • Bechara A, Damasio AR. The somatic marker hypothesis: a neural theory of economic decision. Games Econ Behav. 2005;52:336–372.
  • Bechara A, Dolan S, Hindes A. Decision-making and addiction (part II): myopia for the future or hypersensitivity to reward? Neuropsychologia. 2002;40:1690–1705.
  • Bechara A, Tranel D, Damasio H. Characterization of the decision-making deficit of patients with ventromedial prefrontal cortex lesions. Brain. 2000;123(Pt 11):2189–2202.
  • Beckmann CF, Jenkinson M, Smith SM. General multilevel linear modeling for group analysis in FMRI. Neuroimage. 2003;20:1052–1063.
  • Bolla KI, Eldreth DA, London ED, Kiehl KA, Mouratidis M, Contoreggi C, Matochik JA, Kurian V, Cadet JL, Kimes AS, et al. Orbitofrontal cortex dysfunction in abstinent cocaine abusers performing a decision-making task. Neuroimage. 2003;19:1085–1094.
  • Burgess PW, Quayle A, Frith CD. Brain regions involved in prospective memory as determined by positron emission tomography. Neuropsychologia. 2001;39:545–555.
  • Christoff K, Gabrieli JDE. The frontopolar cortex and human cognition: evidence for a rostrocaudal hierarchical organization within the human prefrontal cortex. Psychobiology. 2000;28:168–186.
  • Critchley HD, Mathias CJ, Dolan RJ. Neural activity in the human brain relating to uncertainty and arousal during anticipation. Neuron. 2001;29:537–545.
  • Dale AM. Optimal experimental design for event-related fMRI. Hum Brain Mapp. 1999;8:109–114.
  • Damasio AR. Descartes’ error: emotion, reason, and the human brain. New York: Grosset and Putnam; 1994.
  • Delgado MR, Nystrom LE, Fissell C, Noll DC, Fiez JA. Tracking the hemodynamic responses to reward and punishment in the striatum. J Neurophysiol. 2000;84:3072–3077.
  • Elliott R, Friston KJ, Dolan RJ. Dissociable neural responses in human reward systems. J Neurosci. 2000;20:6159–6165.
  • Elliott R, Newman JL, Longe OA, Deakin JF. Differential response patterns in the striatum and orbitofrontal cortex to financial reward in humans: a parametric functional magnetic resonance imaging study. J Neurosci. 2003;23:303–307.
  • Ernst M, Nelson EE, McClure EB, Monk CS, Munson S, Eshel N, Zarahn E, Leibenluft E, Zametkin A, Towbin K, et al. Choice selection and reward anticipation: an fMRI study. Neuropsychologia. 2004;42:1585–1597.
  • Fellows LK, Farah MJ. Different underlying impairments in decision-making following ventromedial and dorsolateral frontal lobe damage in humans. Cereb Cortex. 2005;15:58–63.
  • Fellows LK, Farah MJ. The role of ventromedial prefrontal cortex in decision making: judgment under uncertainty or judgment per se? Cereb Cortex. 2007;17:2669–2674.
  • Fiorillo CD, Tobler PN, Schultz W. Discrete coding of reward probability and uncertainty by dopamine neurons. Science. 2003;299:1898–1902.
  • Fiorillo CD, Tobler PN, Schultz W. Evidence that the delay-period activity of dopamine neurons corresponds to reward uncertainty rather than backpropagating TD errors. Behav Brain Funct. 2005;1:7.
  • Fukui H, Murai T, Fukuyama H, Hayashi T, Hanakawa T. Functional activity related to risk anticipation during performance of the Iowa Gambling Task. Neuroimage. 2005;24:253–259.
  • Glimcher PW, Rustichini A. Neuroeconomics: the consilience of brain and decision. Science. 2004;306:447–452.
  • Holt CA, Laury SK. Risk aversion and incentive effects. Am Econ Rev. 2002;92:1644–1655.
  • Hsu M, Bhatt M, Adolphs R, Tranel D, Camerer CF. Neural systems responding to degrees of uncertainty in human decision-making. Science. 2005;310:1680–1683.
  • Huettel SA. Behavioral, but not reward, risk modulates activation of prefrontal, parietal, and insular cortices. Cognit Affect Behav Neurosci. 2006;6:141–151.
  • Huettel SA, Stowe CJ, Gordon EM, Warner BT, Platt ML. Neural signatures of economic preferences for risk and ambiguity. Neuron. 2006;49:765–775.
  • Jenkinson M, Smith S. A global optimisation method for robust affine registration of brain images. Med Image Anal. 2001;5:143–156.
  • Kable JW, Glimcher PW. The neural correlates of subjective value during intertemporal choice. Nat Neurosci. 2007;10:1625–1633.
  • Kim H, Shimojo S, O'Doherty JP. Is avoiding an aversive outcome rewarding? Neural substrates of avoidance learning in the human brain. PLoS Biol. 2006;4:e233.
  • Knutson B, Fong GW, Bennett SM, Adams CM, Homme D. A region of mesial prefrontal cortex tracks monetarily rewarding outcomes: characterization with rapid event-related fMRI. Neuroimage. 2003;18:263–272.
  • Knutson B, Rick S, Wimmer GE, Prelec D, Loewenstein G. Neural predictors of purchases. Neuron. 2007;53:147–156.
  • Knutson B, Taylor J, Kaufman M, Peterson R, Glover G. Distributed neural representation of expected value. J Neurosci. 2005;25:4806–4812.
  • Kringelbach ML, Rolls ET. The functional neuroanatomy of the human orbitofrontal cortex: evidence from neuroimaging and neuropsychology. Prog Neurobiol. 2004;72:341–372.
  • Kuhnen CM, Knutson B. The neural basis of financial risk taking. Neuron. 2005;47:763–770.
  • Liu X, Powell DK, Wang H, Gold BT, Corbly CR, Joseph JE. Functional dissociation in frontal and striatal areas for processing of positive and negative reward information. J Neurosci. 2007;27:4587–4597.
  • Matthews SC, Simmons AN, Lane SD, Paulus MP. Selective activation of the nucleus accumbens during risk-taking decision making. Neuroreport. 2004;15:2123–2127.
  • McClure SM, Berns GS, Montague PR. Temporal prediction errors in a passive learning task activate human striatum. Neuron. 2003;38:339–346.
  • McCoy AN, Crowley JC, Haghighian G, Dean HL, Platt ML. Saccade reward signals in posterior cingulate cortex. Neuron. 2003;40:1031–1040.
  • McCoy AN, Platt ML. Risk-sensitive neurons in macaque posterior cingulate cortex. Nat Neurosci. 2005;8:1220–1227.
  • Mumford J. A guide to calculating percent change with featquery. Unpublished technical report. 2007. Available at: http://mumford.bol.ucla.edu/perchange_guide.pdf.
  • O'Doherty J, Critchley H, Deichmann R, Dolan RJ. Dissociating valence of outcome from behavioral control in human orbital and ventral prefrontal cortices. J Neurosci. 2003;23:7931–7939.
  • O'Doherty J, Dayan P, Schultz J, Deichmann R, Friston K, Dolan RJ. Dissociable roles of ventral and dorsal striatum in instrumental conditioning. Science. 2004;304:452–454.
  • O'Doherty J, Kringelbach ML, Rolls ET, Hornak J, Andrews C. Abstract reward and punishment representations in the human orbitofrontal cortex. Nat Neurosci. 2001;4:95–102.
  • Ongur D, Ferry AT, Price JL. Architectonic subdivision of the human orbital and medial prefrontal cortex. J Comp Neurol. 2003;460:425–449.
  • Pagnoni G, Zink CF, Montague PR, Berns GS. Activity in human ventral striatum locked to errors of reward prediction. Nat Neurosci. 2002;5:97–98.
  • Phan KL, Wager T, Taylor SF, Liberzon I. Functional neuroanatomy of emotion: a meta-analysis of emotion activation studies in PET and fMRI. Neuroimage. 2002;16:331–348.
  • Preuschoff K, Bossaerts P, Quartz SR. Neural differentiation of expected reward and risk in human subcortical structures. Neuron. 2006;51:381–390.
  • Ramnani N, Owen AM. Anterior prefrontal cortex: insights into function from anatomy and neuroimaging. Nat Rev Neurosci. 2004;5:184–194.
  • Rolls ET. The orbitofrontal cortex and reward. Cereb Cortex. 2000;10:284–294.
  • Rolls ET. The functions of the orbitofrontal cortex. Brain Cogn. 2004;55:11–29.
  • Schultz W, Dayan P, Montague PR. A neural substrate of prediction and reward. Science. 1997;275:1593.
  • Semendeferi K, Armstrong E, Schleicher A, Zilles K, Van Hoesen WG. Prefrontal cortex in humans and apes: a comparative study of area 10. Am J Phys Anthropol. 2001;114:224–241.
  • Spiro JE. Reward and punishment in orbitofrontal cortex. Nat Neurosci. 2001;4:12.
  • Tanabe J, Thompson L, Claus E, Dalwani M, Hutchison K, Banich MT. Prefrontal cortex activity is reduced in gambling and nongambling substance users during decision-making. Hum Brain Mapp. 2007;28:1276–1286.
  • Tobler PN, O'Doherty JP, Dolan RJ, Schultz W. Reward value coding distinct from risk attitude-related uncertainty coding in human reward systems. J Neurophysiol. 2007;97:1621–1632.
  • Tohka J, Foerde K, Aron AR, Tom SM, Toga AW, Poldrack RA. Automatic independent component labeling for artifact removal in fMRI. Neuroimage. 2008;39:1227–1245.
  • Tom SM, Fox CR, Trepel C, Poldrack RA. The neural basis of loss aversion in decision-making under risk. Science. 2007;315:515–518.
  • Trepel C, Fox CR, Poldrack RA. Prospect theory on the brain? Toward a cognitive neuroscience of decision under risk. Brain Res Cogn Brain Res. 2005;23:34–50.
  • Tulving E. Elements of episodic memory. Oxford: Clarendon; 1983.
  • Weller JA, Levin IP, Shiv B, Bechara A. Neural correlates of adaptive decision making for risky gains and losses. Psychol Sci. 2007;18:958–964.
  • Woolrich MW, Behrens TE, Beckmann CF, Jenkinson M, Smith SM. Multilevel linear modelling for FMRI group analysis using Bayesian inference. Neuroimage. 2004;21:1732–1747.
