Computational ideas pervade many areas of science and have an integrative explanatory role in neuroscience and cognitive science. However, computational depictions of cognitive function have had surprisingly little impact on the way we assess mental illness because diseases of the mind have not been systematically conceptualized in computational terms. Here, we outline goals and nascent efforts in the new field of computational psychiatry, which seeks to characterize mental dysfunction in terms of aberrant computations over multiple scales. We highlight early efforts in this area that employ reinforcement learning and game theoretic frameworks to elucidate decision-making in health and disease. Looking forwards, we emphasize a need for theory development and large-scale computational phenotyping in human subjects.
Neuroscience has made considerable progress in understanding the neural substrates supporting cognitive performance in a number of domains, including memory, perception and decision-making. In contrast, how the human brain generates metacognitive awareness of task performance remains unclear. Here, we address this question by asking participants to perform perceptual decisions while providing concurrent metacognitive reports, during fMRI scanning. We show that activity in right rostrolateral prefrontal cortex (rlPFC) satisfies three constraints for a role in metacognitive aspects of decision-making. Right rlPFC showed greater activity during self-report compared to a matched control condition; activity in this region correlated with reported confidence; and the strength of the relationship between activity and confidence predicted metacognitive ability across individuals. In addition, functional connectivity between right rlPFC and both contralateral PFC and visual cortex increased during metacognitive reports. We discuss these findings in a theoretical framework where rlPFC re-represents object-level decision uncertainty to facilitate metacognitive report.
Investigations of the underlying mechanisms of choice in humans have focused on learning from prediction errors, leaving the computational structure of value-based planning comparatively underexplored. Using behavioural and neuroimaging analyses of a minimax decision task, we show that the computational processes underlying forward planning are expressed in the anterior caudate nucleus as values of individual branching steps in a decision tree. In contrast, values represented in the putamen pertain solely to values learnt during extensive training. During actual choice, both striatal areas show functional coupling to ventromedial prefrontal cortex, consistent with this region acting as a value comparator. Our findings point towards an architecture of choice in which segregated value systems operate in parallel in the striatum for planning and extensively trained choices, with medial prefrontal cortex integrating their outputs.
Bradykinesia is a cardinal feature of Parkinson’s disease (PD). Despite its disabling impact, the precise cause of this symptom remains elusive. Recent thinking suggests that bradykinesia may be more than simply a manifestation of motor slowness, and may in part reflect a specific deficit in the operation of motivational vigour in the striatum. In this paper we test the hypothesis that movement time in PD can be modulated by the specific nature of the motivational salience of possible action-outcomes.
We developed a novel movement time paradigm involving winnable rewards and avoidable painful electrical stimuli. The faster subjects performed an action, the more likely they were to win money (in appetitive blocks) or to avoid a painful shock (in aversive blocks). We compared PD patients OFF dopaminergic medication with controls. Our key finding is that PD patients OFF dopaminergic medication move faster to avoid aversive outcomes (painful electric shocks) than to reap rewarding outcomes (winning money) and, unlike controls, do not speed up in the current trial after failing to win money in the previous one. We also demonstrate that sensitivity to distracting stimuli is valence specific.
We suggest this pattern of results can be explained in terms of low dopamine levels in the Parkinsonian state leading to an insensitivity to appetitive outcomes, and thus an inability to modulate movement speed in the face of rewards. By comparison, sensitivity to aversive stimuli is relatively spared. Our findings point to a rarely described property of bradykinesia in PD, namely its selective regulation by everyday outcomes.
Recent functional imaging studies link reward-related activation of the midbrain substantia nigra - ventral tegmental area (SN/VTA), the site of origin of ascending dopaminergic projections, with improved long-term episodic memory. Here, we investigated in two behavioural experiments how (i) the contingency between item properties and reward, (ii) the magnitude of reward, (iii) the uncertainty of outcomes, and (iv) the contextual availability of reward affect long-term memory. We show that episodic memory is enhanced only when rewards are specifically predicted by the semantic identity of the stimuli and changes non-linearly with increasing reward magnitude. These effects are specific to reward and do not occur in relation to outcome uncertainty alone. These behavioural specifications are relevant for the functional interpretation of how reward-related activation of the SN/VTA, and more generally dopaminergic neuromodulation, contribute to long-term memory.
Reward; episodic memory
Adaptive success in social animals depends on an ability to infer the likely actions of others. Little is known about the neural computations that underlie this capacity. Here, we show that the brain models the values and choices of others even when these values are currently irrelevant. These modeled choices use the same computations that underlie our own choices, but are resolved in a distinct neighboring medial prefrontal brain region. Crucially, however, when subjects choose on behalf of a partner instead of themselves, these regions exchange their functional roles. Hence, regions that represented values of the subject’s executed choices now represent the values of choices executed on behalf of the partner, and those that previously modeled the partner now model the subject. These data tie together neural computations underlying self-referential and social inference, and in so doing establish a new functional axis characterizing the medial wall of prefrontal cortex.
► Valuation and choice for self and other exhibit parallel computations in PFC ► vmPFC computes choices that will be executed, whether for self or other ► Rostral dmPFC computes choices that are modeled, whether for self or other ► A similar gradient is present in temporoparietal cortex
Nicolle et al. show that valuation and choice for self and other exhibit parallel computations, where gradients exist within both medial prefrontal and temporoparietal cortices. Ventral regions compute choices that will be executed, while dorsal regions compute choices that are merely modeled.
This study assessed the impact of serotonin transporter genotype (5-HTTLPR) on regional responses to emotional faces in the amygdala and subgenual cingulate cortex (sgACC), while subjects performed a gender discrimination task. Although we found no evidence for greater amygdala reactivity or reduced amygdala–sgACC coupling in short variant 5-HTTLPR homozygotes (s/s), we observed an interaction between genotype and emotion in sgACC. Only long variant homozygotes (la/la) exhibited subgenual deactivation to fearful versus neutral faces, whereas the effect in s/s subjects was in the other direction. This absence of subgenual deactivation in s/s subjects parallels a recent finding in depressed subjects [Grimm, S., Boesiger, P., Beck, J., Schuepbach, D., Bermpohl, F., Walter, M., et al. Altered negative BOLD responses in the default-mode network during emotion processing in depressed subjects. Neuropsychopharmacology, 34, 932–943, 2009]. Taken together, the findings suggest that subgenual cingulate activity may play an important role in regulating the impact of aversive stimuli, potentially conferring greater resilience to the effects of aversive stimuli in la/la subjects. Using dynamic causal modeling of functional magnetic resonance imaging data, we explored the effects of genotype on effective connectivity and emotion-specific changes in coupling across a network of regions implicated in social processing. Viewing fearful faces enhanced bidirectional excitatory coupling between the amygdala and the fusiform gyrus, and increased the inhibitory influence of the amygdala over the sgACC, although this modulation of coupling did not differ between the genotype groups. The findings are discussed in relation to the role of sgACC and serotonin in moderating responses to aversive stimuli [Dayan, P., & Huys, Q. J., Serotonin, inhibition, and negative mood. PLoS Comput Biol, 4, e4, 2008; Mayberg, H. S., Liotti, M., Brannan, S. K., McGinnis, S., Mahurin, R. K., Jerabek, P. A., et al. Reciprocal limbic–cortical function and negative mood: Converging PET findings in depression and normal sadness. Am J Psychiatry, 156, 675–682, 1999].
Humans bargaining over money tend to reject unfair offers, whilst chimpanzees bargaining over primary rewards of food do not show this same motivation to reject. Whether such reciprocal fairness represents a predominantly human motivation has generated considerable recent interest. To ask whether humans also reject unfair offers for primary rewards, we induced either moderate or severe thirst in humans using intravenous saline and examined responses to unfairness in an Ultimatum Game played for water. Despite the induction of even severe thirst, our subjects rejected unfair offers. Further, our data provide tentative evidence that this fairness motivation was traded off against the value of the primary reward to the individual, a trade-off determined by the subjective value of water rather than by an objective physiological metric of value. Our data demonstrate that humans care about fairness during bargaining with primary rewards, but that subjective self-interest may limit this fairness motivation.
When predicting financial profits, relationship outcomes, longevity, or professional success, people habitually underestimate the likelihood of future negative events. This well-known bias, termed unrealistic optimism, is observed across age, culture, and species, and has a significant societal impact on domains ranging from financial markets to health and well-being. However, it is unknown how neuromodulatory systems impact on the generation of optimistically biased beliefs. This question assumes great importance in light of evidence that common neuropsychiatric disorders, such as depression, are characterized by pessimism [10, 11]. Here, we show that administration of a drug that enhances dopaminergic function (dihydroxy-L-phenylalanine; L-DOPA) increases an optimism bias. This effect is due to L-DOPA impairing the ability to update beliefs in response to undesirable information about the future. These findings provide the first evidence that the neuromodulator dopamine impacts on belief formation by reducing negative expectations regarding the future.
► L-DOPA impairs ability to update beliefs in response to undesirable information ► Enhancing dopamine function reduces negative expectations regarding the future
Decision making is often considered to arise out of contributions from a model-free habitual system and a model-based goal-directed system. Here, we investigated the effect of a dopamine manipulation on the degree to which either system contributes to instrumental behavior in a two-stage Markov decision task, which has been shown to discriminate model-free from model-based control. We found that increased dopamine levels promote model-based over model-free choice.
► Dopamine increases relative degree of model-based to model-free behavior
Decision making arises out of contributions from model-free habitual and model-based goal-directed systems. Wunderlich et al. investigate how dopamine affects each system to contribute to instrumental behavior, finding that increased dopamine levels preferentially promote model-based goal-directed choice.
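The hybrid model-based/model-free valuation that these two-stage studies rely on can be sketched compactly. The following is an illustrative simulation under assumed parameters (the task structure, transition and reward probabilities, learning rate, and mixture weight are all arbitrary), not the authors' fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(q, beta):
    """Softmax choice probabilities with inverse temperature beta."""
    p = np.exp(beta * (q - q.max()))
    return p / p.sum()

# Illustrative two-stage task: a stage-1 choice leads, via common (0.7)
# or rare (0.3) transitions, to one of two stage-2 states that pay off
# with different probabilities.
T = np.array([[0.7, 0.3],   # P(stage-2 state | action 0)
              [0.3, 0.7]])  # P(stage-2 state | action 1)

alpha, beta, w = 0.3, 3.0, 0.5   # learning rate, inverse temperature, MB weight
q_mf = np.zeros(2)               # model-free stage-1 action values
q_stage2 = np.zeros(2)           # learnt stage-2 state values

for _ in range(200):
    q_mb = T @ q_stage2                  # model-based values: plan over T
    q_net = w * q_mb + (1 - w) * q_mf    # hybrid valuation
    a = rng.choice(2, p=softmax(q_net, beta))
    s2 = rng.choice(2, p=T[a])
    r = float(rng.random() < (0.8 if s2 == 0 else 0.2))
    q_stage2[s2] += alpha * (r - q_stage2[s2])  # stage-2 prediction error
    q_mf[a] += alpha * (r - q_mf[a])            # model-free stage-1 update
```

Fitting the weight w per subject, and comparing it across conditions or drug states, is how the relative contribution of the two systems is typically quantified.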
Decision-making invokes two fundamental axes of control: affect or valence, spanning reward and punishment, and effect or action, spanning invigoration and inhibition. We studied the acquisition of instrumental responding in healthy human volunteers in a task in which we orthogonalized action requirements and outcome valence. Subjects were much more successful in learning active choices in rewarded conditions, and passive choices in punished conditions. Using computational reinforcement-learning models, we teased apart contributions from putatively instrumental and Pavlovian components in the generation of the observed asymmetry during learning. Moreover, using model-based fMRI, we showed that BOLD signals in striatum and substantia nigra/ventral tegmental area (SN/VTA) correlated with instrumentally learnt action values, but with opposite signs for go and no-go choices. Finally, we showed that successful instrumental learning depends on engagement of bilateral inferior frontal gyrus. Our behavioral and computational data showed that instrumental learning is contingent on overcoming inherent and plastic Pavlovian biases, while our neuronal data showed this learning is linked to unique patterns of brain activity in regions implicated in action and inhibition respectively.
► Expectation of valence interferes with action learning in human participants. ► Computational modeling disentangles influences of instrumental and Pavlovian systems. ► Striatum and SN/VTA track action values and bind them to the control of vigor. ► Successful control is associated with activity in the inferior prefrontal cortex.
Action; Learning; Pavlovian; Instrumental; Striatum
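The Pavlovian–instrumental interaction at issue here is commonly modeled as a state-value bias on the propensity to emit an active ("go") response. A minimal sketch with arbitrary parameter values, not the paper's fitted model:

```python
import numpy as np

def p_go(q_go, q_nogo, v_state, b_pav, beta=3.0):
    """Probability of a 'go' response when instrumental action values are
    combined with a Pavlovian bias proportional to the state's value."""
    w_go = beta * q_go + b_pav * v_state   # Pavlovian term boosts or suppresses 'go'
    w_nogo = beta * q_nogo
    return 1.0 / (1.0 + np.exp(-(w_go - w_nogo)))

# With equal instrumental values, the Pavlovian coupling alone inflates
# 'go' in appetitive states (v > 0) and suppresses it in aversive ones (v < 0).
print(p_go(0.0, 0.0, v_state=+1.0, b_pav=2.0))  # > 0.5
print(p_go(0.0, 0.0, v_state=-1.0, b_pav=2.0))  # < 0.5
```

In this scheme, go-to-win and no-go-to-avoid are the easy conditions because the bias and the instrumental demand align, which is exactly the behavioural asymmetry described above.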
Perceptual learning operates on distinct timescales. How different neuromodulatory systems impact on learning across these different timescales is poorly understood.
Here, we test the causal impact of a novel influence on perceptual learning, the androgen hormone testosterone, across distinct timescales.
In a double-blind, placebo-controlled, cross-over study with testosterone, subjects undertook a simple contrast detection task during training sessions on two separate days.
On placebo, there was no learning either within training sessions or between days, except for a fast, rapidly saturating, improvement early on each testing day. However, testosterone caused “off-line” learning, with no learning seen within training sessions, but a marked performance improvement over the days between sessions. This testosterone-induced learning occurred in the absence of changes in subjective confidence or introspective accuracy.
Our findings show that testosterone influences perceptual learning on a timescale consistent with an influence on “off-line” consolidation processes.
Electronic supplementary material
The online version of this article (doi:10.1007/s00213-012-2769-y) contains supplementary material, which is available to authorized users.
Testosterone; Learning; Visual; Timescale
Accumulating evidence suggests a role for the medial temporal lobe (MTL) in working memory (WM). However, little is known concerning its functional interactions with other cortical regions in the distributed neural network subserving WM. To reveal these, we studied subjects with MTL damage and characterized changes in effective connectivity while subjects engaged in a WM task. Specifically, we compared dynamic causal models, extracted from magnetoencephalographic recordings during verbal WM encoding, in temporal lobe epilepsy patients (with left hippocampal sclerosis) and controls. Bayesian model comparison indicated that the best model (across subjects) evidenced bilateral, forward, and backward connections, coupling inferior temporal cortex (ITC), inferior frontal cortex (IFC), and MTL. MTL damage weakened backward connections from left MTL to left ITC, a decrease accompanied by strengthening of (bidirectional) connections between IFC and MTL in the contralesional hemisphere. These findings provide novel evidence concerning functional interactions between nodes of this fundamental cognitive network and shed light on how these interactions are modified as a result of focal damage to MTL. The findings highlight that a reduced (top-down) influence of the MTL on ipsilateral language regions is accompanied by enhanced reciprocal coupling in the undamaged hemisphere, providing a first demonstration of “connectional diaschisis.”
dynamic causal modeling; effective connectivity; magnetoencephalography; temporal lobe epilepsy; working memory
Ability in various cognitive domains is often assessed by measuring task performance, such as the accuracy of a perceptual categorization. A similar analysis can be applied to metacognitive reports about a task to quantify the degree to which an individual is aware of his or her success or failure. Here, we review the psychological and neural underpinnings of metacognitive accuracy, drawing on research in memory and decision-making. These data show that metacognitive accuracy is dissociable from task performance and varies across individuals. Convergent evidence indicates that the function of the rostral and dorsal aspect of the lateral prefrontal cortex (PFC) is important for the accuracy of retrospective judgements of performance. In contrast, prospective judgements of performance may depend upon medial PFC. We close with a discussion of how metacognitive processes relate to concepts of cognitive control, and propose a neural synthesis in which dorsolateral and anterior prefrontal cortical subregions interact with interoceptive cortices (cingulate and insula) to promote accurate judgements of performance.
metacognition; confidence; conflict; prefrontal cortex; functional magnetic resonance imaging; individual differences
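Metacognitive accuracy of the kind reviewed here is often quantified by how well confidence ratings discriminate a person's own correct from incorrect trials, for example as the area under the type-2 ROC curve. A minimal sketch on made-up data (illustrative only, not any reviewed study's analysis code):

```python
import numpy as np

def auroc2(correct, confidence):
    """Area under the type-2 ROC: the probability that a randomly drawn
    correct trial carries higher confidence than a randomly drawn error
    (0.5 = chance, 1.0 = perfect metacognitive sensitivity)."""
    correct = np.asarray(correct, dtype=bool)
    conf = np.asarray(confidence, dtype=float)
    hits, errs = conf[correct], conf[~correct]
    greater = (hits[:, None] > errs[None, :]).mean()   # pairwise comparisons
    ties = (hits[:, None] == errs[None, :]).mean()
    return greater + 0.5 * ties

# A metacognitively sensitive observer reports high confidence mainly
# when correct:
accuracy = [1, 1, 1, 0, 0, 1, 0, 1]
confidence = [4, 3, 4, 1, 2, 3, 1, 2]
print(auroc2(accuracy, confidence))  # ≈ 0.97
```

Because it conditions on the subject's own accuracy, this kind of measure dissociates metacognitive sensitivity from first-order task performance, which is the dissociation the review emphasizes.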
Many complex systems maintain a self-referential check and balance. In animals, such reflective monitoring and control processes have been grouped under the rubric of metacognition. In this introductory article to a Theme Issue on metacognition, we review recent and rapidly progressing developments from neuroscience, cognitive psychology, computer science and philosophy of mind. While each of these areas is represented in detail by individual contributions to the volume, we take this opportunity to draw links between disciplines, and highlight areas where further integration is needed. Specifically, we cover the definition, measurement, neurobiology and possible functions of metacognition, and assess the relationship between metacognition and consciousness. We propose a framework in which level of representation, order of behaviour and access consciousness are orthogonal dimensions of the conceptual landscape.
metacognition; neurobiology; computational modelling; consciousness
Impulse control disorders are common in Parkinson's disease, occurring in 13.6% of patients. Using a pharmacological manipulation and a novel risk-taking task during functional magnetic resonance imaging, we investigated the relationship between dopamine agonists and risk taking in patients with Parkinson's disease with and without impulse control disorders. During functional magnetic resonance imaging, subjects chose between two options of equal expected value: a ‘Sure’ choice and a ‘Gamble’ choice of moderate risk. To commence each trial, in the ‘Gain’ condition individuals started at $0, and in the ‘Loss’ condition individuals started at −$50 below the ‘Sure’ amount. The difference between the maximum and minimum outcomes from each gamble (i.e. range) was used as an index of risk (‘Gamble Risk’). Sixteen healthy volunteers were behaviourally tested. Fourteen impulse control disorder patients (problem gambling or compulsive shopping) and 14 matched Parkinson's disease controls were tested ON and OFF dopamine agonists. Patients with impulse control disorder made more risky choices in the ‘Gain’ relative to the ‘Loss’ condition, along with decreased orbitofrontal cortex and anterior cingulate activity, with the opposite pattern observed in Parkinson's disease controls. In patients with impulse control disorder, dopamine agonists were associated with enhanced sensitivity to risk along with decreased ventral striatal activity, again with the opposite in Parkinson's disease controls. Patients with impulse control disorder appear to have a bias towards risky choices independent of the effect of loss aversion. Dopamine agonists enhance sensitivity to risk in patients with impulse control disorder, possibly by impairing risk evaluation in the striatum. Our results provide a potential explanation of why dopamine agonists may lead to an unconscious bias towards risk in susceptible individuals.
Parkinson's disease; dopamine; gambling; decision making; risk
Unrealistic optimism is a pervasive human trait influencing domains ranging from personal relationships to politics and finance. How people maintain unrealistic optimism, despite frequently encountering information that challenges those biased beliefs, is unknown. Here, we provide an explanation. Specifically, we show a striking asymmetry, whereby people updated their beliefs more in response to information that was better than expected than to information that was worse. This selectivity was mediated by a relative failure to code for errors that should reduce optimism. Distinct regions of the prefrontal cortex tracked estimation errors when these called for a positive update, in both highly and less optimistic individuals. However, highly optimistic individuals exhibited reduced tracking of estimation errors that called for a negative update within right inferior prefrontal gyrus. These findings show that optimism is tied to a selective update failure, and diminished neural coding, of undesirable information regarding the future.
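The updating asymmetry reported here is naturally captured by valence-dependent learning rates. A toy sketch, with arbitrary (not fitted) rate values, framed in terms of estimated risk of an adverse life event:

```python
def update_belief(estimate, evidence, lr_good=0.6, lr_bad=0.2):
    """One belief update with an optimistic asymmetry: the learning rate
    is larger when the evidence implies a lower risk than currently
    believed (good news) than when it implies a higher risk (bad news)."""
    error = evidence - estimate
    lr = lr_good if error < 0 else lr_bad   # lower risk = desirable news
    return estimate + lr * error

# Starting from a 40% risk estimate:
print(update_belief(40, 20))  # good news is absorbed:   28.0
print(update_belief(40, 60))  # bad news is discounted:  44.0
```

Iterated over many events, this asymmetry drags estimates below the evidence, i.e. it maintains unrealistic optimism even in the face of disconfirming information.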
The role dopamine plays in decision-making has important theoretical, empirical and clinical implications. Here, we examined its precise contribution by exploiting the lesion deficit model afforded by Parkinson’s disease. We studied patients in a two-stage reinforcement learning task, while they were ON and OFF dopamine replacement medication. Contrary to expectation, we found that dopaminergic drug state (ON or OFF) did not impact learning. Instead, the critical factor was drug state during the performance phase, with patients ON medication choosing correctly significantly more frequently than those OFF medication. This effect was independent of drug state during initial learning and appears to reflect a facilitation of generalization for learnt information. This inference is bolstered by our observation that neural activity in nucleus accumbens and ventromedial prefrontal cortex, measured during simultaneously acquired functional magnetic resonance imaging, represented learnt stimulus values during performance. This effect was expressed solely in the ON state, with activity in these regions correlating with better performance. Our data indicate that dopamine modulation of nucleus accumbens and ventromedial prefrontal cortex exerts a specific effect on choice behaviour distinct from pure learning. The findings are in keeping with substantial other evidence that certain aspects of learning are unaffected by dopamine lesions or depletion, and that dopamine plays a key role in performance that may be distinct from its role in learning.
Parkinson’s disease; learning; functional MRI; dopamine
The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making.
Cognitive processes such as visual perception and selective attention induce specific patterns of brain oscillations [1–6]. The neurochemical bases of these spectral changes in neural activity are largely unknown, but neuromodulators are thought to regulate processing [7–9]. The cholinergic system is linked to attentional function in vivo [10–13], whereas separate in vitro studies show that cholinergic agonists induce high-frequency oscillations in slice preparations [14–16]. This has led to theoretical proposals [17–19] that cholinergic enhancement of visual attention might operate via gamma oscillations in visual cortex, although low-frequency alpha/beta modulation may also play a key role. Here we used MEG to record cortical oscillations in the context of administration of a cholinergic agonist (physostigmine) during a spatial visual attention task in humans. This cholinergic agonist enhanced spatial attention effects on low-frequency alpha/beta oscillations in visual cortex, an effect correlating with a drug-induced speeding of performance. By contrast, the cholinergic agonist did not alter high-frequency gamma oscillations in visual cortex. Thus, our findings show that cholinergic neuromodulation enhances attentional selection via an impact on oscillatory synchrony in visual cortex, for low rather than high frequencies. We discuss this dissociation between high- and low-frequency oscillations in relation to proposals that lower-frequency oscillations are generated by feedback pathways within visual cortex [20, 21].
► Cholinergic agonist enhances human performance in a visuospatial attention task ► Occipital alpha/beta but not gamma oscillations under cholinergic control ► Alpha/beta attentional enhancement by cholinergic agonist relates to behavior ► Dichotomy of anatomical feedback versus feedforward connections may explain results
Human memory is strikingly susceptible to social influences, yet we know little about the underlying mechanisms. We examined how socially induced memory errors are generated in the brain by studying the memory of individuals exposed to recollections of others. Participants exhibited a strong tendency to conform to erroneous recollections of the group, producing both long-lasting and temporary errors, even when their initial memory was strong and accurate. Functional brain imaging revealed that social influence modified the neuronal representation of memory. Specifically, a particular brain signature of enhanced amygdala activity and enhanced amygdala-hippocampus connectivity predicted long-lasting, but not temporary, memory alterations. Our findings reveal how social manipulation can alter memory and extend the known functions of the amygdala to encompass socially mediated memory distortions.
Some people conform more than others. Across different contexts, this tendency is a fairly stable trait. This stability suggests that the tendency to conform might have an anatomical correlate. Values that one associates with available options, from foods to political candidates, help to guide choices and behaviour. These values can often be updated by the expressed preferences of other people as much as by independent experience. In this correspondence, we report a linear relationship between grey matter volume (GM) in a region of lateral orbitofrontal cortex (lOFCGM) and the tendency to shift reported desire for objects toward values expressed by other people. This effect was found in precisely the same region in each brain hemisphere. lOFCGM also predicted the functional hemodynamic response in the middle frontal gyrus to discovering that someone else's values contrast with one's own. These findings indicate that the tendency to conform one's values to those expressed by other people has an anatomical correlate in the human brain.
Collaboration can provide benefits to the individual and the group across a variety of contexts. Even in simple perceptual tasks, the aggregation of individuals' personal information can enable enhanced group decision-making. However, in certain circumstances such collaboration can worsen performance, or even expose an individual to exploitation in economic tasks, and therefore a balance needs to be struck between a collaborative and a more egocentric disposition. Neurohumoral agents such as oxytocin are known to promote collaborative behaviours in economic tasks, but whether there are opponent agents, and whether these might even affect information aggregation without an economic component, is unknown. Here, we show that an androgen hormone, testosterone, acts as such an agent. Testosterone causally disrupted collaborative decision-making in a perceptual decision task, markedly reducing the performance benefit individuals accrued from collaboration while leaving individual decision-making ability unaffected. This effect emerged because testosterone engendered more egocentric choices, manifest in an overweighting of one's own relative to others' judgements during joint decision-making. Our findings show that the biological control of social behaviour is dynamically regulated not only by modulators promoting, but also by those diminishing, a propensity to collaborate.
collaboration; testosterone; information aggregation; social
The amygdala plays a central role in evaluating the behavioral importance of sensory information. Anatomical subcortical pathways provide direct input to the amygdala from early sensory systems and may support an adaptively valuable rapid appraisal of salient information [1–3]. However, the functional significance of these subcortical inputs remains controversial. We recorded magnetoencephalographic activity evoked by tones in the context of emotionally valent faces and tested two competing biologically motivated dynamic causal models [5, 6] against these data: the dual and cortical models. The dual model comprised two parallel (cortical and subcortical) routes to the amygdala, whereas the cortical model excluded the subcortical path. We found that neuronal responses elicited by salient information were better explained when a subcortical pathway was included. In keeping with its putative functional role of rapid stimulus appraisal, the subcortical pathway was most important early in stimulus processing. However, its action was not limited to the context of fear, as is often assumed, pointing to a more widespread information-processing role. Thus, our data support the idea that an expedited evaluation of sensory input is best explained by an architecture that involves a subcortical path to the amygdala.
► Salient information processing engages cortical and subcortical routes to amygdala ► The subcortical pathway is important in early processing stages ► Expedited subcortical processing is not limited to fear related stimuli
Real-world decision-making often involves social considerations. Consequently, the social value of stimuli can induce preferences in choice behavior. However, it is unknown how financial and social values are integrated in the brain. Here, we investigated how smiling and angry face stimuli interacted with financial reward feedback in a stochastically rewarded decision-making task. Subjects reliably preferred the smiling faces despite equivalent reward feedback, demonstrating a socially driven bias. We fit a Bayesian reinforcement learning model to factor the effects of financial rewards and emotion preferences in individual subjects, and regressed model predictions on the trial-by-trial fMRI signal. Activity in the subcallosal cingulate and the ventral striatum, both involved in reward learning, correlated with financial reward feedback, whereas the differential contribution of social value activated dorsal temporo-parietal junction and dorsal anterior cingulate cortex, previously proposed as components of a mentalizing network. We conclude that the impact of social stimuli on value-based decision processes is mediated by effects in brain regions partially separable from classical reward circuitry.
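The interaction between reward feedback and social value described here can be illustrated in simplified form (the study itself used a Bayesian reinforcement-learning model) as a Rescorla-Wagner learner whose choices carry an additive social bias. All parameter values below are arbitrary assumptions:

```python
import numpy as np

def choice_rate(social_bonus, n_trials=500, alpha=0.2, beta=4.0, seed=1):
    """Fraction of choices of option 0, which is paired with a social
    stimulus (e.g. a smiling face) worth `social_bonus`, when both
    options deliver financial reward with identical probability (0.5)."""
    rng = np.random.default_rng(seed)
    q = np.zeros(2)                        # learnt financial values
    bias = np.array([social_bonus, 0.0])   # fixed social value of option 0
    picks = 0
    for _ in range(n_trials):
        v = beta * q + bias
        p0 = 1.0 / (1.0 + np.exp(v[1] - v[0]))   # softmax over two options
        a = 0 if rng.random() < p0 else 1
        r = float(rng.random() < 0.5)            # equivalent reward feedback
        q[a] += alpha * (r - q[a])               # Rescorla-Wagner update
        picks += int(a == 0)
    return picks / n_trials

print(choice_rate(social_bonus=1.0))  # well above 0.5: a socially driven preference
print(choice_rate(social_bonus=0.0))  # near 0.5: no bias
```

Because the two options are rewarded identically, any sustained preference in the simulation is attributable to the social term alone, mirroring the behavioural finding that subjects preferred smiling faces despite equivalent feedback.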