Subjective values of actions are influenced by the uncertainty and immediacy of expected rewards. Multiple brain areas, including the prefrontal cortex and basal ganglia, are implicated in selecting actions according to their subjective values. Alterations in these neural circuits might therefore contribute to the impulsive choice behaviors seen in disorders such as substance abuse and attention-deficit hyperactivity disorder (ADHD). In particular, the α-2A noradrenergic system exerts a key influence on prefrontal cortical circuits, and medications that stimulate the α-2A receptor are currently used to treat ADHD.
We tested whether the preference of rhesus monkeys for delayed and uncertain reward is influenced by the α-2A adrenergic receptor agonist, guanfacine.
In each trial, the animal chose between a small, certain, and immediate reward and a larger, more delayed reward. In half of the trials, the larger reward was certain, whereas in the remaining trials, the larger reward was uncertain.
Guanfacine increased the tendency for the animal to choose the larger, more delayed reward only when it was certain. By applying an econometric model to the animal’s choice behavior, we found that guanfacine selectively reduced the animal’s time preference, increasing its choice of larger, delayed rewards, without significantly affecting its risk preference.
In combination with previous findings that guanfacine improves the efficiency of working memory and other prefrontal functions, these results suggest that impulsive choice behaviors may also be ameliorated by strengthening prefrontal functions.
temporal discounting; intertemporal choice; reward; decision making; neuroeconomics; prefrontal cortex; gambling; impulsivity; guanfacine; ADHD
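The econometric model itself is not spelled out in the abstract above; as a minimal sketch of how time preference and risk preference can be dissociated behaviorally, one can fit a hyperbolic discount rate k (time preference) and a utility curvature alpha (risk preference) to the animal's binary choices. The function names, parameter values, and the softmax choice rule below are illustrative assumptions, not the study's actual model.

```python
import numpy as np

def subjective_value(amount, delay, prob, k, alpha):
    """Hyperbolically discounted, risk-weighted value of an offer.
    k is the discount rate (time preference); alpha curves the
    utility of reward magnitude (risk preference)."""
    return prob * (amount ** alpha) / (1.0 + k * delay)

def p_choose_large(small, large, beta, k, alpha):
    """Softmax probability of choosing the larger, delayed option.
    small and large are (amount, delay, prob) tuples; beta is the
    inverse temperature of the choice rule."""
    dv = (subjective_value(*large, k=k, alpha=alpha)
          - subjective_value(*small, k=k, alpha=alpha))
    return 1.0 / (1.0 + np.exp(-beta * dv))

# A small, certain, immediate reward vs. a reward twice as large,
# delayed by 4 s and delivered with probability 0.5 (numbers illustrative).
print(p_choose_large((1.0, 0.0, 1.0), (2.0, 4.0, 0.5),
                     beta=2.0, k=0.25, alpha=0.8))
```

In a fit of this kind, a selective reduction of k with no change in alpha would correspond to the reported reduction in time preference without a change in risk preference.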
Behavioral changes driven by reinforcement and punishment are referred to as simple or model-free reinforcement learning. Animals can also change their behaviors by observing events that are neither appetitive nor aversive, when these events provide new information about payoffs available from alternative actions. This is an example of model-based reinforcement learning, and can be accomplished by incorporating hypothetical reward signals into the value functions for specific actions. Recent neuroimaging and single-neuron recording studies showed that the prefrontal cortex and the striatum are involved not only in reinforcement and punishment, but also in model-based reinforcement learning. We found evidence for both types of learning, and hence hybrid learning, in monkeys during simulated competitive games. In addition, in both the dorsolateral prefrontal cortex and orbitofrontal cortex, individual neurons heterogeneously encoded signals related to actual and hypothetical outcomes from specific actions, suggesting that both areas might contribute to hybrid learning.
belief learning; decision making; game theory; reinforcement learning; reward
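As a rough illustration of the hybrid learning scheme described above (a generic sketch, not the fitted model from the study): the chosen action is updated from its actual payoff, while unchosen actions are updated, typically with a smaller learning rate, from the hypothetical payoffs they would have yielded. All parameter values and function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions = 3                      # e.g., rock, paper, scissors
Q = np.zeros(n_actions)            # value function for each action
alpha_real, alpha_hypo = 0.3, 0.1  # learning rates (assumed values)
beta = 3.0                         # softmax inverse temperature

def choose(Q):
    """Softmax action selection over the current value functions."""
    p = np.exp(beta * (Q - Q.max()))
    p /= p.sum()
    return rng.choice(n_actions, p=p)

def update(Q, chosen, payoffs):
    """Update the chosen action from its actual payoff (model-free) and
    the unchosen actions from the hypothetical payoffs they would have
    produced (model-based); payoffs has one entry per action."""
    for a in range(n_actions):
        rate = alpha_real if a == chosen else alpha_hypo
        Q[a] += rate * (payoffs[a] - Q[a])
    return Q
```

Setting alpha_hypo to zero recovers purely model-free learning, so the relative size of the two rates indexes how strongly hypothetical outcomes shape behavior.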
Limb movement is smooth, and corrections of movement trajectory and amplitude are barely noticeable in midflight. This suggests that skeletomuscular motor commands transition smoothly, such that the rate of change of acceleration (or jerk) is minimized. Here we applied the methodology of minimum-jerk submovement decomposition to another member of the skeletomuscular family, the head. We examined the submovement composition of three types of horizontal head movements generated by nonhuman primates: head-alone tracking, head-gaze pursuit, and eye-head combined gaze shifts. The first two types of head movements tracked a moving target, whereas the last type oriented the head with rapid gaze shifts toward a target fixed in space. During head tracking, the head movement was composed of a series of episodes, each consisting of a distinct, bell-shaped velocity profile (submovement), and these submovements rarely overlapped with one another. There was no consistent ordering among the peak velocities of these submovements. In contrast, during eye-head combined gaze shifts, the head movement often comprised overlapping submovements, in which the peak velocity of the primary submovement was always higher than that of the subsequent submovement, consistent with the two-component strategy observed in goal-directed limb movements. These results extend previous submovement-composition studies from the limb to the head, suggesting that submovement composition provides a biologically plausible approach to characterizing head motor recruitment that can vary depending on task demand.
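For reference, a minimum-jerk movement of amplitude A and duration T has the bell-shaped velocity profile v(t) = (A/T)(30τ² − 60τ³ + 30τ⁴) with τ = t/T, which serves as the template submovement in decompositions like the one above. The sketch below sums two such submovements to mimic an overlapping composition; the onsets, durations, and amplitudes are illustrative, not fitted data.

```python
import numpy as np

def minimum_jerk_velocity(t, onset, duration, amplitude):
    """Bell-shaped velocity of a minimum-jerk submovement:
    v(t) = (A/T) * (30*tau**2 - 60*tau**3 + 30*tau**4), tau in [0, 1],
    and zero outside the submovement's duration."""
    tau = (t - onset) / duration
    v = (amplitude / duration) * (30*tau**2 - 60*tau**3 + 30*tau**4)
    return np.where((tau >= 0) & (tau <= 1), v, 0.0)

# A head movement modeled as the sum of two overlapping submovements,
# with a larger primary and a smaller secondary component.
t = np.linspace(0.0, 1.0, 1001)
v = (minimum_jerk_velocity(t, onset=0.00, duration=0.40, amplitude=20.0)
     + minimum_jerk_velocity(t, onset=0.25, duration=0.50, amplitude=8.0))
```

Fitting a decomposition then amounts to finding the set of onsets, durations, and amplitudes whose summed profiles best reproduce the recorded velocity trace.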
Impulsivity refers to a set of heterogeneous behaviors that are tuned suboptimally along certain temporal dimensions. Impulsive inter-temporal choice refers to the tendency to forego a large but delayed reward and to seek an inferior but more immediate reward, whereas impulsive motor responses also result when the subjects fail to suppress inappropriate automatic behaviors. In addition, impulsive actions can be produced when too much emphasis is placed on speed rather than accuracy in a wide range of behaviors, including perceptual decision making. Despite this heterogeneous nature, the prefrontal cortex and its connected areas, such as the basal ganglia, play an important role in gating impulsive actions in a variety of behavioral tasks. Here, we describe key features of computations necessary for optimal decision making, and how their failures can lead to impulsive behaviors. We also review the recent findings from neuroimaging and single-neuron recording studies on the neural mechanisms related to impulsive behaviors. Converging approaches in economics, psychology, and neuroscience provide a unique vista for better understanding the nature of behavioral impairments associated with impulsivity.
intertemporal choice; temporal discounting; basal ganglia; speed-accuracy tradeoff; response inhibition; switching
Knowledge about hypothetical outcomes from unchosen actions is beneficial only when such outcomes can be correctly attributed to specific actions. Here, we show that during a simulated rock-paper-scissors game, rhesus monkeys can adjust their choice behaviors according to both actual and hypothetical outcomes from their chosen and unchosen actions, respectively. In addition, neurons in both dorsolateral prefrontal cortex and orbitofrontal cortex encoded the signals related to actual and hypothetical outcomes immediately after they were revealed to the animal. Moreover, compared to the neurons in the orbitofrontal cortex, those in the dorsolateral prefrontal cortex were more likely to change their activity according to the hypothetical outcomes from specific actions. Conjunctive and parallel coding of multiple actions and their outcomes in the prefrontal cortex might enhance the efficiency of reinforcement learning and also contribute to their context-dependent memory.
Despite widespread neural activity related to reward values, signals related to upcoming choice have not been clearly identified in the rodent brain. Here, we examined neuronal activity in the lateral (AGl) and medial (AGm) agranular cortex, corresponding to the primary and secondary motor cortex, respectively, in rats performing a dynamic foraging task. Choice signals arose in the AGm before the behavioral manifestation of the animal’s choice, and earlier than in any other area of the rat brain previously studied under free-choice conditions. The AGm also conveyed significant neural signals for decision value and chosen value. In contrast, upcoming choice signals arose later and value signals were weaker in the AGl. We also found that AGm lesions made the animal’s choices less dependent on dynamically updated values. These results suggest that rodent secondary motor cortex might be uniquely involved in both representing and reading out value signals for flexible action selection.
Many of the cognitive deficits of normal aging (forgetfulness, distractibility, inflexibility, and impaired executive functions) involve prefrontal cortical (PFC) dysfunction [1–4]. The PFC guides behavior and thought using working memory [5], essential functions in the Information Age. Many PFC neurons hold information in working memory through excitatory networks that can maintain persistent neuronal firing in the absence of external stimulation [6]. This fragile process is highly dependent on the neurochemical environment [7]. For example, elevated cAMP signaling reduces persistent firing by opening HCN and KCNQ potassium channels [8, 9]. It is not known whether molecular changes associated with normal aging alter the physiological properties of PFC neurons during working memory, as there have been no in vivo recordings from PFC neurons of aged monkeys. Here we characterize the first recordings of this kind, revealing a marked loss of PFC persistent firing with advancing age that can be rescued by restoring an optimal neurochemical environment. Recordings showed an age-related decline in the firing rate of DELAY neurons, while the firing of CUE neurons remained unchanged with age. The memory-related firing of aged DELAY neurons was partially restored to more youthful levels by inhibiting cAMP signaling, or by blocking HCN or KCNQ channels. These findings reveal the cellular basis of age-related cognitive decline in dorsolateral PFC, and demonstrate that physiological integrity can be rescued by addressing the molecular needs of PFC circuits.
prefrontal cortex; working memory; aging; cAMP signaling; HCN channels; KCNQ channels; α2A adrenoceptors
In choosing between different rewards expected after unequal delays, humans and animals often prefer the smaller but more immediate reward, indicating that the subjective value or utility of reward is depreciated according to its delay. Here, we show that the neurons in the primate caudate nucleus and ventral striatum modulate their activity according to temporally discounted values of rewards with a similar time course. However, neurons in the caudate nucleus encoded the difference in the temporally discounted values of the two alternative targets more reliably than the neurons in the ventral striatum. In contrast, the neurons in the ventral striatum largely encoded the sum of the temporally discounted values, and therefore, the overall goodness of available options. These results suggest a more pivotal role for the dorsal striatum in action selection during intertemporal choice.
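In analyses of this kind, the temporally discounted value is commonly parameterized hyperbolically (a standard formulation; the study's exact fitted form may differ), with A_i and D_i the magnitude and delay of the reward from target i and k the discount rate. The difference term predicts the choice, whereas the sum indexes the overall goodness of the available options:

```latex
DV_i = \frac{A_i}{1 + k D_i}, \qquad
\Delta DV = DV_{\text{left}} - DV_{\text{right}}, \qquad
\Sigma DV = DV_{\text{left}} + DV_{\text{right}}
```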
The value of an object acquired by a particular action often determines the motivation to produce that action. Previous studies found neural signals related to the values of different objects or goods in the orbitofrontal cortex, while the values of outcomes expected from different actions are broadly represented in multiple brain areas implicated in movement planning. However, how the brain combines the values associated with various objects and the information about their locations is not known. In this study, we tested whether the neurons in the dorsolateral prefrontal cortex (DLPFC) and striatum in rhesus monkeys might contribute to translating the value signals between multiple frames of reference. Monkeys were trained to perform an oculomotor intertemporal choice task in which the color of a saccade target and the number of its surrounding dots signaled the magnitude of reward and its delay, respectively. In both DLPFC and striatum, temporally discounted values (DVs) associated with specific target colors and locations were encoded by partially overlapping populations of neurons. In the DLPFC, the information about reward delays and DVs of rewards available from specific target locations emerged earlier than the corresponding signals for target colors. Similar results were reproduced by a simple network model built to compute DVs of rewards in different locations. Therefore, DLPFC might play an important role in estimating the values of different actions by combining the previously learned values of objects and their present locations.
intertemporal choice; prefrontal cortex; reward; temporal discounting; utility
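A toy version of the read-out described above might look as follows: the magnitude signaled by the color at each target location is hyperbolically discounted by the delay signaled by the dots at that location, yielding one DV per location. The color-to-magnitude mapping, discount rate, and function name are hypothetical; this is not the network model from the study.

```python
import numpy as np

# Hypothetical cue-to-value mapping; the actual colors and magnitudes
# used in the experiment may differ.
MAGNITUDE = {"red": 2.0, "green": 1.0}

def location_dv(colors, delays, k=0.2):
    """Combine object and spatial information: the reward magnitude
    signaled by the color at each location is hyperbolically
    discounted by the delay signaled by the dots at that location."""
    magnitudes = np.array([MAGNITUDE[c] for c in colors])
    return magnitudes / (1.0 + k * np.asarray(delays))

# Left target: large reward after 4 s; right target: small immediate reward.
print(location_dv(colors=["red", "green"], delays=[4.0, 0.0]))
```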
According to the reinforcement learning theory of decision making, reward expectation is computed by integrating past rewards with a fixed timescale. By contrast, we found that a wide range of time constants is available across cortical neurons recorded from monkeys performing a competitive game task. By recognizing that reward modulates neural activity multiplicatively, we found that one or two time constants of reward memory can be extracted for each neuron in the prefrontal, cingulate, and parietal cortex. These timescales ranged from hundreds of milliseconds to tens of seconds, following a power-law distribution that is consistent across areas and reproduced by a “reservoir” neural network model. These neuronal memory timescales were weakly but significantly correlated with the timescales of the monkeys’ decisions. Our findings suggest a flexible memory system in which neural subpopulations with distinct sets of long or short memory timescales may be selectively deployed according to task demands.
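The notion of a reward-memory time constant can be made concrete with a leaky integrator: each past reward is weighted by exp(−age/τ), so a small τ yields a short memory and a large τ a long one. The sketch below views the same reward history through two such filters; the reward values and trial spacing are illustrative.

```python
import numpy as np

def reward_memory(rewards, tau, dt=0.5):
    """Leaky integration of past rewards with time constant tau (s):
    each past reward is weighted by exp(-age / tau), where age is the
    time elapsed since that trial. dt is the inter-trial interval."""
    ages = dt * np.arange(len(rewards))[::-1]   # most recent trial has age 0
    return np.sum(np.exp(-ages / tau) * np.asarray(rewards))

# The same reward history seen through a short and a long memory filter.
history = [1, 0, 0, 1, 1]
print(reward_memory(history, tau=0.5), reward_memory(history, tau=30.0))
```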
We investigated how different subregions of the rodent prefrontal cortex contribute to value-based decision making by comparing neural signals related to the animal’s choice, its outcome, and action value in the orbitofrontal cortex (OFC) and medial prefrontal cortex (mPFC) of rats performing a dynamic two-armed bandit task. Neural signals for upcoming action selection arose in the mPFC, including the anterior cingulate cortex, only immediately before the behavioral manifestation of the animal’s choice, suggesting that the rodent prefrontal cortex is not involved in advance action planning. Both the OFC and mPFC conveyed signals related to the animal’s past choices and their outcomes over multiple trials, but neural signals for chosen value and reward prediction error were more prevalent in the OFC. Our results suggest that the rodent OFC and mPFC serve distinct roles in value-based decision making, and that the OFC plays a prominent role in updating the values of outcomes expected from chosen actions.
Since its first discovery in the prefrontal cortex, persistent activity during the interval between a transient sensory stimulus and a subsequent behavioral response has been identified in many cortical and subcortical areas. Such persistent activity is thought to reflect the maintenance of working memory representations that bridge past events with future contingent plans. Indeed, the term persistent activity is sometimes used interchangeably with working memory. In this review, we argue that persistent activity observed broadly across many cortical and subcortical areas reflects not only working memory maintenance, but also a variety of other cognitive processes, including perceptual and reward-based decision making.
Humans and animals often must choose between rewards that differ in their qualities, magnitudes, immediacy, and likelihood, and must estimate these multiple reward parameters from their experience. However, the neural basis for such complex decision making is not well understood. To understand the role of the primate prefrontal cortex in determining the subjective value of delayed or uncertain reward, we examined the activity of individual prefrontal neurons during an inter-temporal choice task and a computer-simulated competitive game. Consistent with the findings from previous studies in humans and other animals, the monkey’s behaviors during inter-temporal choice were well accounted for by a hyperbolic discount function. In addition, the activity of many neurons in the lateral prefrontal cortex reflected the signals related to the magnitude and delay of the reward expected from a particular action, and often encoded the difference in temporally discounted values that predicted the animal’s choice. During a computerized matching pennies game, the animals approximated the optimal strategy, known as Nash equilibrium, using a reinforcement learning algorithm. We also found that many neurons in the lateral prefrontal cortex conveyed the signals related to the animal’s previous choices and their outcomes, suggesting that this cortical area might play an important role in forming associations between actions and their outcomes. These results show that the primate lateral prefrontal cortex plays a central role in estimating the values of alternative actions based on multiple sources of information.
game theory; inter-temporal choice; reinforcement learning; utility theory; temporal discounting
Game theory analyses optimal strategies for multiple decision makers interacting in a social group. However, the behaviours of individual humans and animals often deviate systematically from the optimal strategies described by game theory. The behaviours of rhesus monkeys (Macaca mulatta) in simple zero-sum games showed similar patterns, but their departures from the optimal strategies were well accounted for by a simple reinforcement-learning algorithm. During a computer-simulated zero-sum game, neurons in the dorsolateral prefrontal cortex often encoded the previous choices of the animal and its opponent as well as the animal's reward history. By contrast, the neurons in the anterior cingulate cortex predominantly encoded the animal's reward history. Using simple competitive games, therefore, we have demonstrated functional specialization between different areas of the primate frontal cortex involved in outcome monitoring and action selection. Temporally extended signals related to the animal's previous choices might facilitate the association between choices and their delayed outcomes, whereas information about the choices of the opponent might be used to estimate the reward expected from a particular action. Finally, signals related to the reward history might be used to monitor the overall success of the animal's current decision-making strategy.
prefrontal cortex; decision making; reward
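The simple reinforcement-learning account referred to above can be captured by a value update applied only to the chosen action, combined with a softmax choice rule. The sketch below assumes a matching pennies payoff in which the animal is rewarded when its choice matches the opponent's; the learning rate and inverse temperature are assumptions, not the parameters estimated in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
Q = np.zeros(2)          # value of choosing left (0) or right (1)
alpha, beta = 0.2, 5.0   # learning rate and inverse temperature (assumed)

def play_trial(opponent_choice):
    """One trial: softmax choice, matching pennies payoff, value update."""
    p_right = 1.0 / (1.0 + np.exp(-beta * (Q[1] - Q[0])))
    choice = int(rng.random() < p_right)
    reward = float(choice == opponent_choice)   # win when choices match
    Q[choice] += alpha * (reward - Q[choice])   # update chosen value only
    return choice, reward
```

Against an exploiting opponent, such a learner settles near 50/50 choice probabilities, while its trial-by-trial value fluctuations produce the small systematic biases that neural value signals can track.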
Activity of neurons in the lateral intraparietal cortex (LIP) displays a mixture of sensory, motor, and memory signals. Moreover, these neurons often encode signals reflecting the accumulation of sensory evidence that certain eye movements might lead to a desirable outcome. However, when the environment changes dynamically, animals are also required to combine information about their previously chosen actions and the resulting outcomes appropriately, so as to continually update the desirability of alternative actions. Here, we investigated whether LIP neurons encode the signals necessary to update an animal’s decision-making strategy adaptively during a computer-simulated matching pennies game. Using a reinforcement learning algorithm, we estimated the value functions that best predicted the animal’s choices on a trial-by-trial basis. We found that immediately before the animal revealed its choice, approximately 18% of LIP neurons changed their activity according to the difference in the value functions for the two targets. In addition, a somewhat higher fraction of LIP neurons displayed signals related to the sum of the value functions, which might correspond to the state value function or an average rate of reward used as a reference point. Similar to neurons in the prefrontal cortex, many LIP neurons also encoded signals related to the animal’s previous choices. Thus, the posterior parietal cortex might be part of the network that provides the substrate for forming appropriate associations between actions and outcomes.
decision; parietal; reward; feedback; control; cognition
Human behaviors can be more powerfully influenced by conditioned reinforcers, such as money, than by primary reinforcers. Moreover, people often change their behaviors to avoid monetary losses. However, the effect of removing conditioned reinforcers on choices has not been explored in animals, and the neural mechanisms mediating the behavioral effects of gains and losses are not well understood. To investigate the behavioral and neural effects of gaining and losing a conditioned reinforcer, we trained rhesus monkeys for a matching pennies task in which the positive and negative values of its payoff matrix were realized by the delivery and removal of a conditioned reinforcer. Consistent with the findings previously obtained with non-negative payoffs and primary rewards, the animal’s choice behavior during this task was nearly optimal. Nevertheless, the gain and loss of a conditioned reinforcer significantly increased and decreased, respectively, the tendency for the animal to choose the same target in subsequent trials. We also found that the neurons in the dorsomedial frontal cortex, dorsal anterior cingulate cortex, and dorsolateral prefrontal cortex often changed their activity according to whether the animal earned or lost a conditioned reinforcer in the current or previous trial. Moreover, many neurons in the dorsomedial frontal cortex also signaled the gain or loss occurring as a result of choosing a particular action as well as changes in the animal’s behaviors resulting from such gains or losses. Thus, primate medial frontal cortex might mediate the behavioral effects of conditioned reinforcers and their losses.
cingulate cortex; decision making; prefrontal cortex; reinforcement learning; reward; punishment; neuroeconomics
Decision-makers often face choices whose consequences unfold over time. To explore the neural basis of such inter-temporal choice behavior, we devised a novel two-alternative choice task with probabilistic reward delivery and contrasted two conditions that differed only in whether the outcome was revealed immediately or after some delay. In the immediate condition, we simply varied the reward probability of each option and the outcome was revealed immediately. In the delay condition, the outcome was revealed after a delay during which the reward probability was governed by a constant hazard rate. Functional imaging revealed a set of brain regions, such as the posterior cingulate cortex, parahippocampal gyri, and frontal pole, that exhibited activity uniquely associated with the temporal aspects of the task. This engagement of the so-called “default network” suggests that during inter-temporal choice, decision-makers simulate the impending delay via a process of prospection.
decision; fMRI; inter-temporal choice; prospection; discounting; temporal resolution of uncertainty
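Under a constant hazard rate, the instantaneous probability that the delayed outcome resolves is the same at every moment of the wait, so the probability that it has not yet been revealed decays exponentially. In standard survival notation (λ is a free parameter, not a value reported in the study):

```latex
P(\text{not yet revealed at } t) = e^{-\lambda t},
\qquad
\text{hazard rate} = \frac{-\frac{d}{dt}\,e^{-\lambda t}}{e^{-\lambda t}} = \lambda
```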
Reward from a particular action is seldom immediate, and the influence of such delayed outcome on choice decreases with delay. It has been postulated that when faced with immediate and delayed rewards, decision makers choose the option with maximum temporally discounted value. We examined the preference of monkeys for delayed reward in a novel inter-temporal choice task and the neural basis for real-time computation of temporally discounted values in the dorsolateral prefrontal cortex. During this task, the locations of the targets associated with small and large rewards and their corresponding delays were randomly varied. We found that prefrontal neurons often encoded the temporally discounted value of reward expected from a particular option. Furthermore, activity tended to increase with discounted values for targets presented in the neuron's preferred direction, suggesting that activity related to temporally discounted values in the prefrontal cortex might determine the animal's behavior during inter-temporal choice.
Humans and animals are more likely to take an action leading to an immediate reward than actions with delayed rewards of similar magnitudes. Although such devaluation of delayed rewards has been almost universally described by hyperbolic discount functions, the rate of this temporal discounting varies substantially among different animal species. This might be in part due to the differences in how the information about reward is presented to decision makers. In previous animal studies, reward delays or magnitudes were gradually adjusted across trials, so the animals learned the properties of future rewards from the rewards they waited for and consumed previously. In contrast, verbal cues have been used commonly in human studies. In the present study, rhesus monkeys were trained in a novel inter-temporal choice task in which the magnitude and delay of reward were indicated symbolically using visual cues and varied randomly across trials. We found that monkeys could extract the information about reward delays from visual symbols regardless of the number of symbols used to indicate the delay. The rate of temporal discounting observed in the present study was comparable to the previous estimates in other mammals, and the animal's choice behavior was largely consistent with hyperbolic discounting. Our results also suggest that the rate of temporal discounting might be influenced by contextual factors, such as the novelty of the task. The flexibility furnished by this new inter-temporal choice task might be useful for future neurobiological investigations on inter-temporal choice in non-human primates.
reward; neuroeconomics; decision making; prefrontal cortex
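A discount rate consistent with hyperbolic discounting, V = A/(1 + kD), can be estimated from binary choices by maximum likelihood. The sketch below (scipy assumed; the data, starting values, and softmax rule are illustrative, not the study's estimation procedure) fits k together with an inverse temperature β:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, small, large, delays, chose_large):
    """Hyperbolic discounting V = A / (1 + k*D) with a softmax choice rule.
    Parameters are fitted on a log scale to keep k and beta positive."""
    k, beta = np.exp(params)
    dv = large / (1.0 + k * delays) - small       # small reward is immediate
    p = 1.0 / (1.0 + np.exp(-beta * dv))          # P(choose large)
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(np.where(chose_large, np.log(p), np.log(1 - p)))

# Illustrative data: 1 vs. 2 units of reward at delays of 0-8 s.
delays = np.array([0., 2., 4., 6., 8.])
chose_large = np.array([1, 1, 1, 0, 0])
fit = minimize(neg_log_likelihood, x0=[-1.0, 0.0],
               args=(1.0, 2.0, delays, chose_large))
print(np.exp(fit.x))   # estimated (k, beta)
```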
To maximize reward and minimize effort, animals must often execute multiple movements in a timely and orderly manner. Such movement sequences must usually be discovered through experience, and during this process, signals related to the animal’s action, its ordinal position in the sequence, and subsequent reward need to be properly integrated. To investigate the role of the primate medial frontal cortex in planning and controlling multiple movements, monkeys were trained to produce a series of hand movements instructed by visual stimuli. We manipulated the number of movements in a sequence across trials, making it possible to dissociate the effects of the ordinal position of a given movement and the number of remaining movements necessary to obtain reward. Neurons in the supplementary and pre-supplementary motor areas modulated their activity according to the number of remaining movements more often than in relation to the ordinal position, suggesting that they might encode signals related to the timing of reward or its temporally discounted value. In both cortical areas, signals related to the number of remaining movements and those related to movement direction were often combined multiplicatively, suggesting that the gain of movement-related signals might be modulated by motivational factors. Finally, compared to the supplementary motor area, neurons in the pre-supplementary motor area were more likely to increase their activity when the number of remaining movements was large. These results suggest that these two areas might play complementary roles in controlling movement sequences.
decision making; directional tuning; gain modulation; ordinal position; reinforcement learning; reward; sequence learning; temporal discounting
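Multiplicative combination of this kind is often summarized as gain modulation: a direction-tuning curve is scaled, rather than shifted, by a motivation-related factor. The sketch below uses cosine tuning and a gain that falls with the number of remaining movements; the functional forms and all parameter values are illustrative assumptions.

```python
import numpy as np

def firing_rate(direction, n_remaining, pref=0.0, base=10.0, depth=8.0):
    """Cosine direction tuning multiplicatively scaled by a motivational
    gain that decreases with the number of movements remaining before
    reward, leaving the preferred direction unchanged."""
    tuning = base + depth * np.cos(direction - pref)
    gain = 1.0 / (1.0 + 0.3 * n_remaining)   # discounted-value-like gain
    return gain * tuning

# Same preferred direction probed early vs. late in the sequence.
print(firing_rate(0.0, n_remaining=4), firing_rate(0.0, n_remaining=0))
```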
Decision making in a social group displays two unique features. First, humans and other animals routinely alter their behaviors in response to changes in their physical and social environment. As a result, the outcomes of decisions that depend on the behaviors of multiple decision makers are difficult to predict, and this requires highly adaptive decision-making strategies. Second, decision makers may have other-regarding preferences and therefore choose their actions to improve or reduce the well-being of others. Recently, many neurobiological studies have exploited game theory to probe the neural basis of decision making, and found that these unique features of social decision making might be reflected in the functions of brain areas involved in reward evaluation and reinforcement learning. Molecular genetic studies have also begun to identify genetic mechanisms for personality traits related to reinforcement learning and complex social decision making, further illuminating the biological basis of social behavior.
The process of decision making in humans and other animals is adaptive and can be tuned through experience so as to optimize the outcomes of their choices in a dynamic environment. Previous studies have demonstrated that the anterior cingulate cortex plays an important role in updating the animal’s behavioral strategies when the action-outcome contingencies change. Moreover, neurons in the anterior cingulate cortex often encode the signals related to expected or actual reward. We investigated whether reward-related activity in the anterior cingulate cortex is affected by the animal’s previous reward history. This was tested in rhesus monkeys trained to make binary choices in a computer-simulated competitive zero-sum game. The animal’s choice behavior was relatively close to the optimal strategy, but also revealed small but systematic biases that are consistent with the use of a reinforcement learning algorithm. In addition, the activity of neurons in the dorsal anterior cingulate cortex that was related to the reward received by the animal in a given trial was often modulated by the rewards in the previous trials. Some of these neurons encoded the rate of rewards in previous trials, whereas others displayed activity modulations more closely related to the reward prediction errors. By contrast, signals related to the animal’s choices were only weakly represented in this cortical area. These results suggest that neurons in the dorsal anterior cingulate cortex might be involved in the subjective evaluation of choice outcomes based on the animal’s reward history.
reinforcement learning; game theory; neuroeconomics; decision making; dopamine
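The two forms of reward-history modulation described above map onto two standard quantities: a recursively updated reward rate, and the reward prediction error computed against it (α is a free learning-rate parameter; this notation is generic, not the study's exact model):

```latex
\bar{r}_t = (1 - \alpha)\,\bar{r}_{t-1} + \alpha\, r_t,
\qquad
\delta_t = r_t - \bar{r}_{t-1}
```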
Economic theories of decision making are based on the principle of utility maximization, and reinforcement learning theory provides computational algorithms that can be used to estimate the overall reward expected from alternative choices. These formal models not only account for a large range of behavioral observations in human and animal decision makers, but also provide useful tools for investigating the neural basis of decision making. Nevertheless, in reality, decision makers must combine different types of information about the costs and benefits associated with each available option, such as the quality and quantity of expected reward and required work. In this article, we put forward a hypothesis that different subdivisions of the primate frontal cortex may be specialized to focus on different aspects of dynamic decision-making processes. In this hypothesis, the lateral prefrontal cortex is primarily involved in maintaining the state representation necessary to identify optimal actions in a given environment. By contrast, the orbitofrontal cortex and the anterior cingulate cortex might be largely involved in encoding and updating the utilities associated with different sensory stimuli and alternative actions, respectively. These cortical areas are also likely to contribute to decision making in a social context.
reinforcement learning; reward; cingulate cortex; prefrontal cortex; orbitofrontal cortex; neuroeconomics
Previous studies have shown that non-human primates can generate highly stochastic choice behavior, especially when this is required during a competitive interaction with another agent. To understand the neural mechanism of such dynamic choice behavior, we propose a biologically plausible model of decision making endowed with synaptic plasticity that follows a reward-dependent stochastic Hebbian learning rule. This model constitutes a biophysical implementation of reinforcement learning, and it reproduces salient features of behavioral data from an experiment with monkeys playing a matching pennies game. Due to interaction with an opponent and learning dynamics, the model generates quasi-random behavior robustly in spite of intrinsic biases. Furthermore, non-random choice behavior can also emerge when the model plays against a non-interactive opponent, as observed in the monkey experiment. Finally, when combined with a meta-learning algorithm, our model accounts for the slow drift in the animal’s strategy based on a process of reward maximization.
Decision making; Reward-dependent stochastic Hebbian learning rule; Reinforcement learning; Meta-learning; Synaptic plasticity; Game theory
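One common way to implement a reward-dependent stochastic Hebbian rule is with binary synapses that are stochastically potentiated between coactive units after reward and stochastically depressed otherwise. The sketch below is a generic version of this idea; the transition probabilities q_plus and q_minus and the eligibility rule are assumptions, not the model's exact specification.

```python
import numpy as np

rng = np.random.default_rng(2)

def hebbian_update(c, pre, post, rewarded, q_plus=0.1, q_minus=0.1):
    """Reward-dependent stochastic Hebbian rule with binary synapses.
    Synapses between coactive pre- and postsynaptic units are
    stochastically potentiated after reward and depressed otherwise."""
    coactive = np.outer(post, pre) > 0          # Hebbian eligibility
    if rewarded:
        flip = coactive & (rng.random(c.shape) < q_plus)
        c[flip] = 1.0                           # potentiate
    else:
        flip = coactive & (rng.random(c.shape) < q_minus)
        c[flip] = 0.0                           # depress
    return c

# Four presynaptic inputs onto two decision units (values illustrative).
c = rng.random((2, 4))
c = hebbian_update(c, pre=np.array([1, 0, 1, 0]),
                   post=np.array([1, 0]), rewarded=True)
```

Because each update is probabilistic and outcome-dependent, the synaptic state, and hence the choice behavior it drives, remains stochastic even after extensive learning, which is the property the model exploits to produce quasi-random play.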