Understanding the interactions of visual and proprioceptive information in tool use is important as it is the basis for learning the tool's kinematic transformation and thus for skilled performance. This study investigated how the CNS combines seen cursor positions and felt hand positions under a visuo-motor rotation paradigm. Young and older adult participants performed aiming movements on a digitizer while looking at rotated visual feedback on a monitor. After each movement, they judged either the proprioceptively sensed hand direction or the visually sensed cursor direction. We identified asymmetric mutual biases with a strong visual dominance. Furthermore, we found a number of differences between explicit and implicit judgments of hand direction. The explicit judgments had considerably larger variability than the implicit judgments. The bias toward the cursor direction was about twice as strong for the explicit judgments as for the implicit judgments. The individual biases of explicit and implicit judgments were uncorrelated. Biases of these judgments exhibited opposite sequential effects. Moreover, age-related changes also differed between these judgments: with increasing age, judgment variability decreased and the bias toward the cursor direction increased, but only for the explicit judgments. These results indicate distinct explicit and implicit neural representations of hand direction, similar to the notion of distinct visual systems.
We compared 24-month-old children’s learning when their exposure to words came either in an interactive (coupled) context or in a nonsocial (decoupled) context. We measured the children’s learning with two different methods: one in which they were asked to point to the referent for the experimenter, and the other a preferential looking task in which they were encouraged to look to the referent. In the pointing test, children chose the correct referents for words encountered in the coupled condition but not in the decoupled condition. In the looking time test, however, they looked to the targets regardless of condition. We explore possible explanations for this dissociation and propose that the two response measures reflect two different kinds of learning.
The study of cognitive development hinges, largely, on the analysis of infant looking. But analyses of eye gaze data require the adoption of linking hypotheses: assumptions about the relationship between observed eye movements and underlying cognitive processes. We develop a general framework for constructing, testing, and comparing these hypotheses, and thus for producing new insights into early cognitive development. We first introduce the general framework – applicable to any infant gaze experiment – and then demonstrate its utility by analyzing data from a set of experiments investigating the role of attentional cues in infant learning. The new analysis uncovers significantly more structure in these data, finding evidence of learning that was not found in standard analyses and showing an unexpected relationship between cue use and learning rate. Finally, we discuss general implications for the construction and testing of quantitative linking hypotheses. MATLAB code for sample linking hypotheses can be found on the first author's website.
Williams and Bargh (2008) reported an experiment in which participants were simply asked to plot a single pair of points on a piece of graph paper, with the coordinates provided by the experimenter specifying a pair of points that lay at one of three different distances (close, intermediate, or far, relative to the range available on the graph paper). The participants who had graphed a more distant pair reported themselves as being significantly less close to members of their own family than did those who had plotted a more closely-situated pair. In another experiment, people's estimates of the caloric content of different foods were reportedly altered by the same type of spatial distance priming. Direct replications of both results were attempted, with precautions to ensure that the experimenter did not know what condition the participant was assigned to. The results showed no hint of the priming effects reported by Williams and Bargh (2008).
Even though it is generally agreed that face stimuli constitute a special class of stimuli, which are treated preferentially by our visual system, it remains unclear whether faces can capture attention in a stimulus-driven manner. Moreover, there is a long-standing debate regarding the mechanism underlying the preferential bias of selecting faces. Some claim that faces constitute a set of special low-level features to which our visual system is tuned; others claim that the visual system is capable of extracting the meaning of faces very rapidly, driving attentional selection. Those debates continue because many studies contain methodological peculiarities and manipulations that prevent a definitive conclusion. Here, we present a new visual search task in which observers had to make a saccade to a uniquely colored circle while completely irrelevant objects were also present in the visual field. The results indicate that faces capture and guide the eyes more than other animated objects and that our visual system is not only tuned to the low-level features that make up a face but also to its meaning.
The perspective that behavior is often driven by unconscious determinants has become widespread in social psychology. Bargh, Chen, and Burrows' (1996) famous study, in which participants unwittingly exposed to the elderly stereotype walked slower when exiting the laboratory, was instrumental in defining this perspective. Here, we present two experiments aimed at replicating the original study. Despite the use of automated timing methods and a larger sample, our first experiment failed to show priming. Our second experiment was aimed at manipulating the beliefs of the experimenters: Half were led to think that participants would walk slower when primed congruently, and the other half were led to expect the opposite. Strikingly, we obtained a walking speed effect, but only when experimenters believed participants would indeed walk slower. This suggests that both priming and experimenters' expectations are instrumental in explaining the walking speed effect. Further, debriefing was suggestive of awareness of the primes. We conclude that unconscious behavioral priming, while real, involves mechanisms different from those typically assumed to cause the effect.
Intentional forgetting refers to the surprising phenomenon that we can forget previously successfully encoded memories if we are instructed to do so. Here, we show that participants can not only intentionally forget episodic memories but can also mirror the “forgetting performance” of an observed model.
In four experiments a participant observed a model who took part in a memory experiment. In Experiments 1 and 2 observers saw a movie of the experiment, whereas in Experiments 3 and 4 the observers and the models took part together in a real laboratory experiment. The observed memory experiment was a directed forgetting experiment in which the models learned two lists of items and were instructed either to forget or to remember the first list. In Experiments 1 and 3 observers were instructed to simply observe the experiment (“simple observation” instruction). In Experiments 2 and 4, observers received instructions aimed at inducing the same learning goal in the observers as in the models (“observation with goal-sharing” instruction). A directed forgetting effect (the reliably lower recall of to-be-forgotten items) emerged only when observers received the “observation with goal-sharing” instruction (P<.001 in Experiment 2, and P<.05 in Experiment 4), and it was absent when observers received the “simple observation” instruction (P>.1 in Experiments 1 and 3).
If people observe another person with the same intention to learn, and see that this person is instructed to forget previously studied information, then they will produce the same intentional forgetting effect as the person they observed. This seems to be an important aspect of human learning: if we can understand the goal of an observed person and this goal is in line with our own behavioural goals, then our learning performance will mirror the learning performance of the model.
Previous reports have described that neural activity in midbrain dopamine areas is sensitive to unexpected reward delivery and omission. These activities are correlated with the reward prediction error of reinforcement learning models: the difference between the predicted reward value and the obtained reward outcome. These findings suggest that the reward prediction error signal in the brain updates reward prediction through stimulus–reward experiences. It remains unknown, however, how sensory processing of reward-predicting stimuli contributes to the computation of reward prediction error. To elucidate this issue, we examined the relation between the discriminability of reward-predicting stimuli and the reward prediction error signal in the brain using functional magnetic resonance imaging (fMRI). Before the main experiments, subjects learned an association between the orientation of a perceptually salient (high-contrast) Gabor patch and a juice reward. The subjects were then presented with lower-contrast Gabor patch stimuli to predict a reward. We calculated the correlation between fMRI signals and the reward prediction error of two reinforcement learning models: one including the modulation of reward prediction by stimulus discriminability and one excluding this modulation. Results showed that fMRI signals in the midbrain were more highly correlated with the reward prediction error of the model that includes stimulus discriminability than with that of the model that excludes it. No regions showed a higher correlation with the model that excludes stimulus discriminability. Moreover, the difference in correlation between the two models was significant from the first session of the experiment, suggesting that reward computation in the midbrain was modulated by stimulus discriminability before learning a new contingency between perceptually ambiguous stimuli and a reward.
These results suggest that the human reward system can incorporate the level of the stimulus discriminability flexibly into reward computations by modulating previously acquired reward values for a typical stimulus.
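The model comparison described above can be sketched as a pair of prediction-error computations that differ only in whether the learned value is scaled by stimulus discriminability. This is a minimal illustration under assumed variable names and a simple multiplicative scaling, not the authors' exact model specification:

```python
def prediction_error(value, reward, discriminability=1.0, modulate=True):
    """Reward prediction error for one trial.

    value: reward value learned for the salient (high-contrast) stimulus.
    discriminability: in (0, 1]; lower-contrast Gabor patches are harder
        to discriminate. In the modulated model, the predicted value is
        scaled by discriminability before the error is computed.
    """
    predicted = value * discriminability if modulate else value
    return reward - predicted

# A low-contrast stimulus (discriminability 0.5) followed by a reward of 1.0:
value = 1.0
pe_mod = prediction_error(value, reward=1.0, discriminability=0.5, modulate=True)
pe_raw = prediction_error(value, reward=1.0, discriminability=0.5, modulate=False)
```

On this sketch, only the modulated model yields a nonzero prediction error for a rewarded low-contrast stimulus (pe_mod = 0.5 vs. pe_raw = 0.0), which is the kind of signature the midbrain fMRI signal tracked.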
Accurate associative learning is often hindered by confirmation bias and success-chasing, which together can conspire to produce or solidify false beliefs in the decision-maker. We performed functional magnetic resonance imaging in 35 experienced physicians while they learned to choose between two treatments in a series of virtual patient encounters. We estimated a learning model for each subject based on their observed behavior, and on this basis the physicians divided clearly into high performers and low performers. The high performers showed small but equal learning rates for both successes (positive outcomes) and failures (no response to the drug). In contrast, low performers showed very large and asymmetric learning rates, learning significantly more from successes than from failures; a tendency that led to sub-optimal treatment choices. Consistent with these behavioral findings, high performers showed larger, more sustained BOLD responses to failed vs. successful outcomes in the dorsolateral prefrontal cortex and inferior parietal lobule, while low performers displayed the opposite response profile. Furthermore, participants' learning asymmetry correlated with anticipatory activation in the nucleus accumbens at trial onset, well before outcome presentation: subjects with anticipatory activation in the nucleus accumbens showed more success-chasing during learning. These results suggest that high performers' brains achieve better outcomes by attending to informative failures during training, rather than chasing the reward value of successes. The differential brain activations between high and low performers could potentially be developed into biomarkers to identify efficient learners on novel decision tasks, in medical or other contexts.
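The asymmetry between learning from successes and from failures can be illustrated with a simple delta-rule learner that uses separate learning rates for positive and negative prediction errors; the parameter values below are illustrative and are not the fitted values from the study:

```python
def update_value(value, outcome, alpha_pos, alpha_neg):
    """Delta-rule update with separate learning rates for positive
    (success) and negative (failure) prediction errors."""
    delta = outcome - value
    alpha = alpha_pos if delta > 0 else alpha_neg
    return value + alpha * delta

# "High performer": small, symmetric learning rates.
v_high = update_value(0.5, outcome=1.0, alpha_pos=0.1, alpha_neg=0.1)  # ≈ 0.55

# "Low performer": large, asymmetric rates (learns far more from success),
# which overweights lucky successes and drives success-chasing.
v_low = update_value(0.5, outcome=1.0, alpha_pos=0.8, alpha_neg=0.1)   # ≈ 0.9
```

Under this sketch, the low performer's value estimate jumps toward any recent success, while failures barely register, reproducing the sub-optimal choice pattern described above.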
Visual attention is captured by physically salient stimuli (termed salience-based attentional capture), and by otherwise task-irrelevant stimuli that contain goal-related features (termed contingent attentional capture). Recently, we reported that physically nonsalient stimuli associated with value through reward learning also capture attention involuntarily (Anderson, Laurent, & Yantis, PNAS, 2011). Although it is known that physical salience and goal-relatedness both influence attentional priority, it is unknown whether or how attentional capture by a salient stimulus is modulated by its associated value. Here we show that a physically salient, task-irrelevant distractor previously associated with a large reward slows visual search more than an equally salient distractor previously associated with a smaller reward. This magnification of salience-based attentional capture by learned value extinguishes over several hundred trials. These findings reveal a broad influence of learned value on involuntary attentional capture.
The time course of attention has often been investigated using a spatial cuing task. However, attention likely consists of multiple components, such as selectivity (resolving competition) and orienting (spatial shifting). Here we sought to investigate the time course of the selective aspect of attention, using a cuing task that did not require spatial shifting. In several experiments, targets were always presented at central fixation, and were preceded by a cue at different cue-target intervals. The selection component of attention was investigated by manipulating the presence of distractors. Regardless of the presence of distractors, an initial rapid performance enhancement was found that reached its maximum at around 100 ms post cue onset. Subsequently, when the target was the only item in the display, performance was sustained, but when the target was accompanied by irrelevant distractor items, performance declined. This temporal pattern matches closely with the transient attention response that has been found in spatial cuing studies, and shows that the selectivity aspect of attention is transient.
Much publicity has been given to the fact that people's economic decisions often deviate from the rational predictions of standard economic models. In the classic ultimatum game, for example, most people turn down financial gains by rejecting unequal monetary splits. The present study points to neglected individual differences in this debate. After participants played the ultimatum game we tested for individual differences in cognitive control capacity of the most and least economic responders. The key finding was that people who were higher in cognitive control, as measured by behavioral (Go/No-Go performance) and neural (No-Go N2 amplitude) markers, did tend to behave more in line with the standard models and showed increased acceptance of unequal splits. Hence, the cognitively highest scoring decision-makers were more likely to maximize their monetary payoffs and adhere to the standard economic predictions. Findings question popular claims with respect to the rejection of standard economic models and the irrationality of human economic decision-making.
Eye movement research has traditionally studied the saccadic and vergence systems in isolation within a laboratory setting. While the neural correlates of saccadic eye movements are well established, few studies have quantified the functional activity of vergence eye movements using fMRI. This study mapped the neural substrates of vergence eye movements and compared them to those of saccades to elucidate the spatial commonality and differentiation between these systems.
The stimulus was presented in a block design where the ‘off’ stimulus was a sustained fixation and the ‘on’ stimulus was random vergence or saccadic eye movements. Data were collected with a 3T scanner. A general linear model (GLM) was used in conjunction with cluster size to determine significantly active regions. A paired t-test of the GLM beta weight coefficients was computed between the saccade and vergence functional activities to test the hypothesis that vergence and saccadic stimulation would have spatial differentiation in addition to shared neural substrates.
Segregated functional activation was observed within the frontal eye fields where a portion of the functional activity from the vergence task was located anterior to the saccadic functional activity (z>2.3; p<0.03). An area within the midbrain was significantly correlated with the experimental design for the vergence but not the saccade data set. Similar functional activation was observed within the following regions of interest: the supplementary eye field, dorsolateral prefrontal cortex, ventral lateral prefrontal cortex, lateral intraparietal area, cuneus, precuneus, anterior and posterior cingulates, and cerebellar vermis. The functional activity from these regions was not different between the vergence and saccade data sets assessed by analyzing the beta weights of the paired t-test (p>0.2).
Functional MRI can elucidate the differences between the vergence and saccade neural substrates within the frontal eye fields and midbrain.
The sophisticated analysis of gestures and vocalizations, including assessment of their emotional valence, helps group-living primates efficiently navigate their social environment. Deficits in social information processing and emotion regulation are important components of many human psychiatric illnesses, such as autism, schizophrenia and social anxiety disorder. Analyzing the neurobiology of social information processing and emotion regulation requires a multidisciplinary approach that benefits from comparative studies of humans and animal models. However, many questions remain regarding the relationship between visual attention and arousal while processing social stimuli. Using noninvasive infrared eye-tracking methods, we measured the visual social attention and physiological arousal (pupil diameter) of adult male rhesus monkeys (Macaca mulatta) as they watched social and nonsocial videos. We found that social videos, as compared to nonsocial videos, captured more visual attention, especially if the social signals depicted in the videos were directed towards the subject. Subject-directed social cues and nonsocial nature documentary footage, compared to videos showing conspecifics engaging in naturalistic social interactions, generated larger pupil diameters (indicating heightened sympathetic arousal). These findings indicate that rhesus monkeys will actively engage in watching videos of various kinds. Moreover, infrared eye tracking technology provides a mechanism for sensitively gauging the social interest of presented stimuli. Adult male rhesus monkeys' visual attention and physiological arousal do not always trend in the same direction, and are likely influenced by the content and novelty of a particular visual stimulus. This experiment creates a strong foundation for future experiments that will examine the neural network responsible for social information processing in nonhuman primates. 
Such studies may provide valuable information relevant to interpreting the neural deficits underlying human psychiatric illnesses such as autism, schizophrenia and social anxiety disorder.
Hypnosis has had a long and controversial history in psychology, psychiatry and neurology, but the basic nature of hypnotic phenomena still remains unclear. Different theoretical approaches disagree as to whether or not hypnosis may involve an altered mental state. So far, a hypnotic state has never been convincingly demonstrated, if the criteria for the state are that it involves some objectively measurable and replicable behavioural or physiological phenomena that cannot be faked or simulated by non-hypnotized control subjects. We present a detailed case study of a highly hypnotizable subject who reliably shows a range of changes in both automatic and volitional eye movements when given a hypnotic induction. These changes correspond well with the phenomenon referred to as the “trance stare” in the hypnosis literature. Our results show that this ‘trance stare’ is associated with large and objective changes in the optokinetic reflex, the pupillary reflex and programming a saccade to a single target. Control subjects could not imitate these changes voluntarily. For the majority of people, hypnotic induction brings about states resembling normal focused attention or mental imagery. Our data nevertheless highlight that in some cases hypnosis may involve a special state, which qualitatively differs from the normal state of consciousness.
Cognitive and neuroscientific evidence has challenged the widespread view that perception, cognition and action constitute independent, discrete stages. For example, in continuous response trajectories toward a target response location, evidence suggests that a decision on which target to reach for (i.e., the cognition stage) is not reached before the movement starts (i.e., the action stage). As a result, instead of a straight trajectory to the correct target response, movement trajectories may curve toward competing responses or away from inhibited responses. In the present study, we examined response trajectories during a number comparison task. Participants had to decide whether a target number was smaller or larger than 5. They had to respond by moving to a left or a right response location. Replicating previous results, response trajectories were more curved toward the incorrect response location when distance to 5 was small (e.g., target number 4) than when distance to 5 was large (e.g., target number 1). Importantly, we manipulated the response mapping, which allowed us to demonstrate that this response trajectory effect results from the relative amount of evidence for the available responses across time. In this way, the present study stresses the tight coupling of number representations (i.e., cognition) and response related processes (i.e., action) and shows that these stages are not separable in time.
Studies in human and non-human primates indicate that basic socio-cognitive operations are inherently linked to the power of gaze in capturing reflexively the attention of an observer. Although monkey studies indicate that the automatic tendency to follow the gaze of a conspecific is modulated by the leader-follower social status, evidence for such effects in humans is meager. Here, we used a gaze following paradigm where the directional gaze of right- or left-wing Italian political characters could influence the oculomotor behavior of ingroup or outgroup voters. We show that the gaze of Berlusconi, the right-wing leader currently dominating the Italian political landscape, potentiates and inhibits gaze following behavior in ingroup and outgroup voters, respectively. Importantly, the higher the perceived similarity in personality traits between voters and Berlusconi, the stronger the gaze interference effect. Thus, higher-order social variables such as political leadership and affiliation prepotently affect reflexive shifts of attention.
Many studies have provided evidence for the existence of universal constraints on color categorization or naming in various languages, but the biological basis of these constraints is unknown. A recent study of the pattern of color categorization across numerous languages has suggested that these patterns tend to avoid straddling a region in color space at or near the border between the English composite categories of “warm” and “cool”. This fault line in color space represents a fundamental constraint on color naming. Here we report that the two-way categorization along the fault line is correlated with the sign of the L- versus M-cone contrast of a stimulus color. Moreover, we found that the sign of the L-M cone contrast also accounted for the two-way clustering of the spatially distributed neural responses in small regions of the macaque primary visual cortex, visualized with optical imaging. These small regions correspond to the hue maps, where our previous study found a spatially organized representation of stimulus hue. Altogether, these results establish a direct link between a universal constraint on color naming and the cone-specific information that is represented in the primate early visual system.
The limited capacity of visual working memory (VWM) requires us to select task-relevant information and efficiently filter out irrelevant information. Previous studies have shown that individual differences in VWM capacity dramatically influence the way we filter out distractors displayed at distinct spatial locations: low-capacity individuals are poorer at filtering them out than high-capacity ones. However, when the target and distracting information pertain to the same object (i.e., a multiple-featured object), whether VWM capacity modulates feature-based filtering remains unknown.
We explored this issue mainly on the basis of one of our recent studies, in which we asked participants to remember three colors of colored shapes or colored Landolt Cs while manipulating two types of task-irrelevant information. We found that irrelevant high-discriminable information could not be filtered out during the extraction of information into VWM, but irrelevant fine-grained information could be. We added 8 extra participants to the original 16 and then split the overall 24 participants into low- and high-VWM-capacity groups. We found that regardless of VWM capacity, the irrelevant high-discriminable information was selected into VWM, whereas the irrelevant fine-grained information was filtered out. The latter finding was further corroborated in a second experiment in which participants were required to remember one colored Landolt C and a stricter control was exerted over the VWM capacity.
We conclude that VWM capacity does not modulate feature-based filtering in VWM.
Performance in most visual discrimination tasks is better along the horizontal than the vertical meridian (Horizontal-Vertical Anisotropy, HVA), and along the lower than the upper vertical meridian (Vertical Meridian Asymmetry, VMA), with intermediate performance at intercardinal locations. As these inhomogeneities are prevalent throughout visual tasks, it is important to understand the perceptual consequences of dissociating spatial reference frames. In all studies of performance fields so far, allocentric environmental references and egocentric observer reference frames were aligned. Here we quantified the effects of manipulating head-centric and retinotopic coordinates on the shape of visual performance fields. When observers viewed briefly presented radial arrays of Gabors and discriminated the tilt of a target relative to homogeneously oriented distractors, performance fields shifted with head tilt (Experiment 1) and fixation (Experiment 2). These results show that performance fields shift in line with egocentric referents, corresponding to the retinal location of the stimulus.
The sparse information captured by the sensory systems is used by the brain to apprehend the environment, for example, to spatially locate the source of audiovisual stimuli. This is an ill-posed inverse problem whose inherent uncertainty can be reduced by jointly processing the available information and by introducing constraints on the way this multisensory information is handled. This process and its result - the percept - depend on the contextual conditions in which perception takes place. To date, perception has been investigated and modeled on the basis of only one of its two dimensions: the percept or the temporal dynamics of the process. Here, we extend our previously proposed audiovisual perception model to predict both of these dimensions and thus capture the phenomenon as a whole. Starting from a behavioral analysis, we use a data-driven approach to elicit a Bayesian network that infers the different percepts and the dynamics of the process. Context-specific independence analyses enable us to use the model's structure to directly explore how different contexts affect the way subjects handle the same available information. Hence, we establish that, while the percepts yielded by a unisensory stimulus or by the non-fusion of multisensory stimuli may be similar, they result from different processes, as shown by their differing temporal dynamics. Moreover, our model predicts the impact of bottom-up (stimulus-driven) factors as well as of top-down factors (induced by instruction manipulation) on both the perception process and the percept itself.
Parent-of-origin effects have been found to influence the mammalian brain and cognition and have been specifically implicated in the development of human social cognition and theory of mind. The experimental design in this study was developed to detect parent-of-origin effects on theory of mind, as measured by the ‘Reading the mind in the eyes’ (Eyes) task. Eyes scores were also entered into a principal components analysis with measures of empathy, social skills and executive function, in order to determine what aspect of theory of mind Eyes is measuring.
Maternal and paternal influences on Eyes scores were compared using correlations between pairs of full (70 pairs), maternal (25 pairs) and paternal siblings (15 pairs). Structural equation modelling supported a maternal influence on Eyes scores over the normal range but not low-scoring outliers, and also a sex-specific influence on males acting to decrease male Eyes scores. It was not possible to differentiate between genetic and environmental influences in this particular sample because maternal siblings tended to be raised together while paternal siblings were raised apart. The principal components analysis found Eyes was associated with measures of executive function, principally behavioural inhibition and attention, rather than empathy or social skills.
In conclusion, the results suggest a maternal influence on Eyes scores in the normal range and a sex-specific influence acting to reduce scores in males. This influence may act via aspects of executive function such as behavioural inhibition and attention. There may be different influences acting to produce the lowest Eyes scores, which implies that the heritability and/or maternal influence on poor theory of mind skills may be qualitatively different from the influence on the normal range.
While the limbic system theory continues to be part of common scientific parlance, its validity has been questioned on multiple grounds. Nonetheless, the issue of whether or not there exists a set of brain areas preferentially dedicated to emotional processing remains central within affective neuroscience. Recently, a widespread neural reference space for emotion, which includes limbic as well as other regions, was characterized in a large meta-analysis. Because such meta-analyses pool methodologically heterogeneous studies, showing within a single study, in which all parameters are kept constant, that overlapping areas are involved across various emotion conditions in keeping with the neural reference space for emotion would serve as valuable confirmatory evidence. Here, using fMRI, 20 young adult men were scanned while viewing validated neutral and effective emotion-eliciting short film excerpts shown to quickly and specifically elicit disgust, amusement, or sexual arousal. Each emotion-specific run included, in random order, multiple neutral and emotion condition blocks. A stringent conjunction analysis revealed a large overlap across emotion conditions that fit remarkably well with the neural reference space for emotion. This overlap included symmetrical bilateral activation of the medial prefrontal cortex, the anterior cingulate, the temporo-occipital junction, the basal ganglia, the brainstem, the amygdala, the hippocampus, the thalamus, the subthalamic nucleus, the posterior hypothalamus, the cerebellum, as well as the frontal operculum extending towards the anterior insula. This study clearly confirms, for the visual modality, that processing emotional stimuli leads to widespread increases in activation that cluster within relatively confined areas, regardless of valence.
Preparatory activity based on a priori probabilities generated in previous trials and on subjective expectancies would produce an attentional bias. However, preparation can be correct (valid) or incorrect (invalid) depending on the actual target stimulus. The alternation effect refers to the subjective expectancy that a target will not be repeated in the same position, causing RTs to increase if the target location is repeated. The present experiment, using Posner's central cue paradigm, tries to demonstrate that not only the credibility of the cue but also the expectancy about the next position of the target is changed on a trial-by-trial basis. Sequences of trials were analyzed.
The results indicated an increase in RT benefits when sequences of two and three valid trials occurred. The analysis of errors indicated an increase in anticipatory behavior that grew as the number of valid trials increased. On the other hand, there was also an RT benefit when a trial was preceded by trials in which the position of the target changed with respect to the current trial (alternation effect). Sequences of two alternations or two repetitions were faster than sequences of trials in which a pattern of repetition or alternation was broken.
Taken together, these results suggest that in Posner's central cue paradigm, with regard to anticipatory activity, the credibility of the external cue and of the endogenously anticipated patterns of target location is constantly updated. The results suggest that Bayesian rules operate in the generation of anticipatory activity as a function of the previous trial's outcome, but also as a function of biases or prior beliefs such as the “gambler's fallacy”.
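The trial-by-trial updating of cue credibility suggested by these results can be sketched as a beta-Bernoulli update, in which the believed validity of the cue is revised after each valid or invalid trial. This is a toy illustration of the Bayesian idea, not a model fitted in the experiment:

```python
def update_cue_belief(valid_count, invalid_count, cue_was_valid):
    """Beta-Bernoulli update of believed cue validity.

    valid_count, invalid_count: beta-distribution counts accumulated
        over previous trials (starting from a uniform prior of 1, 1).
    Returns the updated counts and the new expected cue validity.
    """
    if cue_was_valid:
        valid_count += 1
    else:
        invalid_count += 1
    expected_validity = valid_count / (valid_count + invalid_count)
    return valid_count, invalid_count, expected_validity

# Three valid trials in a row: expected validity climbs from 0.5,
# mirroring the growing RT benefits over sequences of valid trials.
a, b = 1, 1
for _ in range(3):
    a, b, p_valid = update_cue_belief(a, b, cue_was_valid=True)
# p_valid is now 4 / 5 = 0.8
```

A gambler's-fallacy-like prior could be captured in the same framework by initializing the counts asymmetrically after a repetition, biasing the expected next location toward alternation.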
When deciding whether to bet in situations that involve potential monetary loss or gain (mixed gambles), a subjective sense of pressure can influence the evaluation of the expected utility associated with each choice option. Here, we explored how gambling decisions and their psychophysiological and neural counterparts are modulated by an induced sense of urgency to respond. Urgency influenced decision times and evoked heart rate responses, interacting with the expected value of each gamble. Using functional MRI, we observed that this interaction was associated with changes in the activity of the striatum, a critical region for both reward and choice selection, and within the insula, a region implicated as the substrate of affective feelings arising from interoceptive signals that influence motivational behavior. Our findings bridge current psychophysiological and neurobiological models of value representation and action-programming, identifying the striatum and insular cortex as key substrates of decision-making under risk and urgency.