Fluid intelligence represents the capacity for flexible problem solving and rapid behavioral adaptation. Rewards drive flexible behavioral adaptation, in part via a teaching signal expressed as reward prediction errors in the ventral striatum, which has been associated with phasic dopamine release in animal studies. We examined a sample of 28 healthy male adults using multimodal imaging and biological parametric mapping, with (1) functional magnetic resonance imaging during a reversal learning task and (2), in a subsample of 17 subjects, positron emission tomography using 6-[18F]fluoro-L-DOPA to assess dopamine synthesis capacity. Fluid intelligence was measured using a battery of nine standard neuropsychological tests. Ventral striatal BOLD correlates of reward prediction errors were positively correlated with fluid intelligence and, in the right ventral striatum, also inversely correlated with dopamine synthesis capacity (FDOPA Kinapp). When exploring aspects of fluid intelligence, we observed that prediction error signaling correlates with complex attention and reasoning. These findings indicate that individual differences in the capacity for flexible problem solving may be driven by ventral striatal activation during reward-related learning, which in turn was inversely associated with ventral striatal dopamine synthesis capacity.
prediction error; dopamine synthesis; fluid intelligence; ventral striatum; fMRI; PET
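The reward prediction error signal described above can be sketched with a simple Rescorla–Wagner update, where the prediction error is the difference between the received reward and the current expectation. This is an illustrative toy model of a reversal learning task; the learning rate, initial value, and trial structure are assumptions, not parameters from the study.

```python
def rescorla_wagner(rewards, alpha=0.3, v0=0.5):
    """Track expected value V and reward prediction errors (RPEs).

    alpha : learning rate (illustrative value, not from the study)
    v0    : initial expected value
    """
    v = v0
    values, rpes = [], []
    for r in rewards:
        delta = r - v          # reward prediction error
        v += alpha * delta     # value update driven by the RPE
        rpes.append(delta)
        values.append(v)
    return values, rpes

# Hypothetical reversal: a stimulus is rewarded for 20 trials,
# then the contingency flips and reward is withheld.
rewards = [1] * 20 + [0] * 20
values, rpes = rescorla_wagner(rewards)
```

After the reversal, the omitted reward produces a large negative prediction error, the kind of signal whose BOLD correlate the study relates to fluid intelligence.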
•We improve predictive validity of a general linear convolution method to analyse evoked SCR.•A constrained individual response function provides highest predictive validity.•This IRF is realised by a canonical SCRF together with its time derivative.•A high pass filter of 0.05 Hz cut-off frequency is optimal for analysis.•Non-linear models better reconstruct the observed time-series but have lower predictive validity.
Model-based analysis of psychophysiological signals is more robust to noise – compared to standard approaches – and may furnish better predictors of psychological state, given a physiological signal. We have previously established the improved predictive validity of model-based analysis of evoked skin conductance responses to brief stimuli, relative to standard approaches. Here, we consider some technical aspects of the underlying generative model and demonstrate further improvements. Most importantly, harvesting between-subject variability in response shape can improve predictive validity, but only under constraints on plausible response forms. A further improvement is achieved by conditioning the physiological signal with high pass filtering. A general conclusion is that precise modelling of physiological time series does not markedly increase predictive validity; instead, it appears that a more constrained model and optimised data features provide better results, probably through a suppression of physiological fluctuation that is not caused by the experiment.
Skin conductance responses (SCR); Galvanic skin response (GSR); Electrodermal activity (EDA); General linear convolution model (GLM); Generative model; Model inversion
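The general linear convolution approach described above can be sketched as follows: event onsets are convolved with a canonical skin conductance response function (SCRF), and a single amplitude parameter is fitted by least squares. The biexponential form and its time constants are illustrative assumptions, not the published canonical SCRF.

```python
import math

def canonical_scrf(t, tau1=0.75, tau2=6.0):
    """Biexponential (Bateman-style) response function.

    tau1, tau2 : illustrative time constants (seconds), not the
    published canonical values.
    """
    if t < 0:
        return 0.0
    return math.exp(-t / tau2) - math.exp(-t / tau1)

def glm_amplitude(signal, onsets, dt=0.1):
    """Fit one amplitude: convolve onsets with the SCRF, then solve
    the single-regressor least-squares problem beta = (x.y)/(x.x)."""
    n = len(signal)
    x = [0.0] * n
    for i in range(n):
        t = i * dt
        for onset in onsets:
            x[i] += canonical_scrf(t - onset)
    num = sum(xi * yi for xi, yi in zip(x, signal))
    den = sum(xi * xi for xi in x)
    return num / den

# Simulated data: a noiseless response of true amplitude 2.0 to a
# single stimulus at t = 1 s.
dt, dur = 0.1, 30.0
n = int(dur / dt)
signal = [2.0 * canonical_scrf(i * dt - 1.0) for i in range(n)]
beta = glm_amplitude(signal, onsets=[1.0], dt=dt)
```

The fitted beta recovers the true amplitude here; in practice the model also includes a time derivative of the SCRF to absorb between-subject variability in response shape, as the abstract describes.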
Standard theories of decision-making involving delayed outcomes predict that people should defer a punishment, whilst advancing a reward. In some cases, such as pain, people seem to prefer to expedite punishment, implying that its anticipation carries a cost, often conceptualized as ‘dread’. Despite empirical support for the existence of dread, whether and how it depends on prospective delay is unknown. Furthermore, it is unclear whether dread represents a stable component of value, or is modulated by biases such as framing effects. Here, we examine choices made between different numbers of painful shocks to be delivered faithfully at different time points up to 15 minutes in the future, as well as choices between hypothetical painful dental appointments at time points of up to approximately eight months in the future, to test alternative models for how future pain is disvalued. We show that future pain initially becomes increasingly aversive with increasing delay, but does so at a decreasing rate. This is consistent with a value model in which moment-by-moment dread increases up to the time of expected pain, such that dread becomes equivalent to the discounted expectation of pain. For a minority of individuals pain has maximum negative value at intermediate delay, suggesting that the dread function may itself be prospectively discounted in time. Framing an outcome as relief reduces the overall preference to expedite pain, which can be parameterized by reducing the rate of the dread-discounting function. Our data support an account of disvaluation for primary punishments such as pain, which differs fundamentally from existing models applied to financial punishments, in which dread exerts a powerful but time-dependent influence over choice.
People often prefer to ‘get pain out of the way’, treating pain in the future as more significant than pain now. One explanation, termed ‘dread’, is that anticipating pain is unpleasant or disadvantageous, rather like pain itself. Human brain imaging studies support the existence of dread, though it is unknown whether and how dread depends on the timing of future pain. We address this question by offering people decisions between moderately painful stimuli, and separately between imagined painful dental appointments occurring at different time points in the future, and use their choices to estimate dread. We show that future pain initially becomes more unpleasant when it is delayed, but as pain is moved further into the future, the effect of delay decreases. This is consistent with dread increasing as anticipated pain draws nearer, which is then combined with a general (and opposing) tendency to down-weight the significance of future events. We also show that dread can be attenuated by describing pain in terms of relief from an imagined even more severe pain. These observations reveal important principles about how people estimate the value of anticipated pain – relevant to a diverse range of human emotion and behavior.
Senescence affects the ability to utilize information about the likelihood of rewards for optimal decision-making. In a human functional magnetic resonance imaging (fMRI) study, we show that healthy older adults have an abnormal signature of expected value resulting in an incomplete reward prediction error signal in the nucleus accumbens, a brain region receiving rich input projections from substantia nigra/ventral tegmental area (SN/VTA) dopaminergic neurons. Structural connectivity between SN/VTA and striatum measured with diffusion tensor imaging (DTI) was tightly coupled to inter-individual differences in the expression of this expected reward value signal. The dopamine precursor levodopa (L-DOPA) increased the task-based learning rate and task performance in some older adults to a level shown by young adults. Critically, this drug effect was linked to restoration of a canonical neural reward prediction error. Thus, we identify a neurochemical signature underlying abnormal reward processing in older adults and show this can be modulated by L-DOPA.
Neural encoding of value-based stimuli is suggested to involve representations of summary statistics, including risk and expected value (EV). A more complex, but ecologically more common, context is when multiple risky options are evaluated together. However, it is unknown whether encoding related to option evaluation in these situations involves similar principles. Here we employed fMRI during a task that parametrically manipulated EV and risk in two simultaneously presented lotteries, both of which contained either gains or losses. We found representations of EV in medial prefrontal cortex and anterior insula, an encoding that was dependent on which option was chosen (i.e. chosen and unchosen EV) and whether the choice was over gains or losses. Parietal activity reflected whether the riskier or surer option was selected, whilst activity in a network of regions that also included parietal cortex reflected both combined risk and difference in risk for the two options. Our findings provide support for the idea that summary statistics underpin a representation of value-based stimuli, and further that these summary statistics undergo distinct forms of encoding.
•We examine choice between multiple risky options.•fMRI revealed encoding of value-based stimuli as expected value (EV) and risk.•Risk and EV underwent distinct types of encoding in distinct brain regions.•Neural and RT data were also consistent with model-free influences on choice.
Risk; Loss; fMRI; Approach–avoidance
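The summary statistics manipulated in the task above, expected value and risk, have a standard definition for a discrete lottery: EV is the probability-weighted mean of outcomes and risk is the outcome variance. The sketch below illustrates the combined-risk and risk-difference regressors for two simultaneously presented options; the specific lottery values are made up for illustration.

```python
def lottery_stats(outcomes, probs):
    """Expected value (probability-weighted mean) and risk (variance)
    of a discrete lottery."""
    ev = sum(p * x for p, x in zip(probs, outcomes))
    risk = sum(p * (x - ev) ** 2 for p, x in zip(probs, outcomes))
    return ev, risk

# Two simultaneously presented gain lotteries with matched EV but
# different risk (hypothetical values).
ev_a, risk_a = lottery_stats([10, 0], [0.5, 0.5])  # riskier option
ev_b, risk_b = lottery_stats([6, 4], [0.5, 0.5])   # surer option

# Quantities of the kind the parietal network tracked in the study:
combined_risk = risk_a + risk_b
risk_difference = risk_a - risk_b
```

Matching EV while varying variance, as here, is what allows the fMRI analysis to dissociate encoding of risk from encoding of value.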
Substantia nigra/ventral tegmental area (SN/VTA) subregions, defined by dopaminergic projections to the striatum, are differentially affected by health (e.g. normal aging) and disease (e.g. Parkinson's disease). This may have an impact on reward processing which relies on dopaminergic regions and circuits. We acquired diffusion tensor imaging (DTI) with probabilistic tractography in 30 healthy older adults to determine whether subregions of the SN/VTA could be delineated based on anatomical connectivity to the striatum. We found that a dorsomedial region of the SN/VTA preferentially connected to the ventral striatum whereas a more ventrolateral region connected to the dorsal striatum. These SN/VTA subregions could be characterised by differences in quantitative structural imaging parameters, suggesting different underlying tissue properties. We also observed that these connectivity patterns differentially mapped onto reward dependence personality trait. We show that tractography can be used to parcellate the SN/VTA into anatomically plausible and behaviourally meaningful compartments, an approach that may help future studies to provide a more fine-grained synopsis of pathological changes in the dopaminergic midbrain and their functional impact.
•We use DTI to segment the substantia nigra/ventral tegmental area (SN/VTA).•Dorsomedial and ventrolateral SN/VTA regions were defined by striatal connectivity.•R2* and fractional anisotropy values differed between SN/VTA subregions.•Connectivity patterns differentially mapped onto a reward personality trait.
Connectivity; Diffusion-weighted imaging; Reward; Segmentation; Substantia nigra
The idea that decisions alter preferences has had a considerable influence on the field of psychology and underpins cognitive dissonance theory. Yet it is unknown whether choice-induced changes in preferences are long lasting or are transient manifestations seen in the immediate aftermath of decisions. In the research reported here, we investigated whether these changes in preferences are fleeting or stable. Participants rated vacation destinations immediately after making hypothetical choices between destinations and 2.5 to 3 years later. We found that choices altered preferences both immediately and after the delay. These changes could not be accounted for by participants’ preexisting preferences, and they occurred only when participants made the choices themselves. Our findings provide evidence that making a decision can lead to enduring change in preferences.
decision making; cognitive dissonance; preferences; social cognition
This paper reviews recent developments under the free energy principle that introduce a normative perspective on classical economic (utilitarian) decision-making based on (active) Bayesian inference. It has been suggested that the free energy principle precludes novelty and complexity, because it assumes that biological systems—like ourselves—try to minimize the long-term average of surprise to maintain their homeostasis. However, recent formulations show that minimizing surprise leads naturally to concepts such as exploration and novelty bonuses. In this approach, agents infer a policy that minimizes surprise by minimizing the difference (or relative entropy) between likely and desired outcomes, which involves both pursuing the goal-state that has the highest expected utility (often termed “exploitation”) and visiting a number of different goal-states (“exploration”). Crucially, the opportunity to visit new states increases the value of the current state. Casting decision-making problems within a variational framework, therefore, predicts that our behavior is governed by both the entropy and expected utility of future states. This dissolves any dialectic between minimizing surprise and exploration or novelty seeking.
active inference; exploration; exploitation; novelty; reinforcement learning; free energy
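The policy-selection idea reviewed above, inferring the policy that minimises the relative entropy between likely and desired outcomes, can be sketched in a few lines. The outcome distributions and policy labels below are invented purely for illustration; they are not drawn from any specific formulation in the literature.

```python
import math

def kl_divergence(p, q):
    """Relative entropy D_KL(p || q) in nats, for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Desired (prior preference) distribution over three outcome states,
# and the outcome distributions predicted under two candidate policies.
# All numbers are hypothetical.
desired = [0.6, 0.3, 0.1]
predicted = {
    "exploit": [0.9, 0.05, 0.05],  # concentrates on the high-utility state
    "explore": [0.5, 0.3, 0.2],    # visits a spread of states
}

# Infer the policy whose predicted outcomes best match desired outcomes.
best = min(predicted, key=lambda name: kl_divergence(predicted[name], desired))
```

Because the desired distribution itself has entropy, the divergence penalises a policy that collapses onto a single goal-state, which is how this framework yields exploration and novelty bonuses rather than precluding them.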
Flexible instrumental learning is required to harness the appropriate behaviors to obtain rewards and to avoid punishments. The precise contribution of dopaminergic midbrain regions (substantia nigra/ventral tegmental area [SN/VTA]) to this form of behavioral adaptation remains unclear. Normal aging is associated with a variable loss of dopamine neurons in the SN/VTA. We therefore tested the relationship between flexible instrumental learning and midbrain structural integrity. We compared task performance on a probabilistic monetary go/no-go task, involving trial and error learning of: “go to win,” “no-go to win,” “go to avoid losing,” and “no-go to avoid losing” in 42 healthy older adults to previous behavioral data from 47 younger adults. Quantitative structural magnetization transfer images were obtained to index regional structural integrity. On average, both younger and older participants demonstrated a behavioral asymmetry whereby they were better at learning to act for reward (“go to win” > “no-go to win”), but better at learning not to act to avoid punishment (“no-go to avoid losing” > “go to avoid losing”). Older, but not younger, participants with greater structural integrity of the SN/VTA and the adjacent subthalamic nucleus could overcome this asymmetry. We show that interindividual variability among healthy older adults of the structural integrity within the SN/VTA and subthalamic nucleus relates to effective acquisition of competing instrumental responses.
Aging; Instrumental learning; Magnetization transfer; Novelty seeking; Substantia nigra
Predictions about sensory input exert a dominant effect on what we perceive, and this is particularly true for the experience of pain. However, it remains unclear what component of prediction, from an information-theoretic perspective, controls this effect. We used a vicarious pain observation paradigm to study how the underlying statistics of predictive information modulate experience. Subjects observed judgments that a group of people made to a painful thermal stimulus, before receiving the same stimulus themselves. We show that the mean observed rating exerted a strong assimilative effect on subjective pain. In addition, we show that observed uncertainty had a specific and potent hyperalgesic effect. Using computational functional magnetic resonance imaging, we found that this effect correlated with activity in the periaqueductal grey. Our results provide evidence for a novel form of cognitive hyperalgesia relating to perceptual uncertainty, induced here by vicarious observation, with control mediated by the brainstem pain modulatory system.
Previous studies have shown that appetitive motivation enhances episodic memory formation via a network including the substantia nigra/ventral tegmental area (SN/VTA), striatum and hippocampus. This functional magnetic resonance imaging (fMRI) study now contrasted the impact of aversive and appetitive motivation on episodic long-term memory. Cue pictures predicted monetary reward or punishment in alternating experimental blocks. One day later, episodic memory for the cue pictures was tested. We also investigated how the neural processing of appetitive and aversive motivation and episodic memory were modulated by dopaminergic mechanisms. To that end, participants were selected on the basis of their genotype for a variable number of tandem repeat polymorphism of the dopamine transporter (DAT) gene. The resulting groups were carefully matched for the 5-HTTLPR polymorphism of the serotonin transporter gene. Recognition memory for cues from both motivational categories was enhanced in participants homozygous for the 10-repeat allele of the DAT (whose functional effects are not yet known), but not in heterozygous subjects. In comparison with heterozygous participants, 10-repeat homozygous participants also showed increased striatal activity for anticipation of motivational outcomes compared to neutral outcomes. In a subsequent memory analysis, encoding activity in striatum and hippocampus was found to be higher for later recognized items in 10-repeat homozygotes compared to 9/10-repeat heterozygotes. These findings suggest that processing of appetitive and aversive motivation in the human striatum involves the dopaminergic system and that dopamine plays a role in memory for both types of motivational information. In accordance with animal studies, these data support the idea that encoding of motivational events depends on dopaminergic processes in the hippocampus.
•Effect of reward and punishment on episodic memory is modulated by DAT genotype.•Enhanced recognition memory for motivational cues in DAT 10-repeat homozygotes.•Higher activation at encoding in striatum and hippocampus in DAT 10-repeat homozygotes.
Episodic memory; Reward; Dopamine; Hippocampus; Striatum
Prosody (i.e. speech melody) is an important cue to infer an interlocutor's emotional state, complementing information from face expression and body posture. Inferring fear from face expression is reported as impaired after amygdala lesions. It remains unclear whether this deficit is specific to face expression, or is a more global fear recognition deficit. Here, we report data from two twins with bilateral amygdala lesions due to Urbach-Wiethe syndrome and show they are unimpaired in a multinomial emotional prosody classification task. In a two-alternative forced choice task, they demonstrate increased ability to discriminate fearful and neutral prosody, the opposite of what would be expected under a hypothesis of a global role for the amygdala in fear recognition. Hence, we provide evidence that the amygdala is not required for recognition of fearful prosody.
•Prosody recognition is assessed in two twin sisters with amygdala lesions due to Urbach–Wiethe syndrome.•In a multinomial classification task, there is no impairment.•In a two-alternative forced choice task, patients discriminate fearful and neutral prosody better than a control sample.•This study provides evidence that the amygdala has no general role in fear recognition.
Prosody; Fear; Amygdala; Social cognition; Emotion; Urbach–Wiethe
A dominant focus in studies of learning and decision-making is the neural coding of scalar reward value. This emphasis ignores the fact that choices are strongly shaped by a rich representation of potential rewards. Here, using fMRI adaptation we demonstrate that responses in the human orbitofrontal cortex (OFC) encode a representation of the specific type of food reward predicted by a visual cue. By controlling for value across rewards, and by linking each reward with two distinct stimuli, we could test for representations of reward-identity that were independent of associative information. Our results show reward-identity representations in a medial-caudal region of OFC, independent of the associated predictive stimulus. This contrasts with a more rostro-lateral OFC region encoding reward-identity representations tied to the predictive stimulus. This demonstration of adaptation in OFC to reward specific representations opens an avenue for investigation of more complex decision mechanisms that are not immediately accessible in standard analyses which focus on correlates of average activity.
Activation of the hippocampus is required in order to encode memories for new events (or episodes). Observations from animal studies suggest that for these memories to persist beyond 4 to 6 hours, a release of dopamine generated by strong hippocampal activation is needed. This predicts that dopaminergic enhancement should improve human episodic memory persistence also for events encoded with weak hippocampal activation. Here, using pharmacological fMRI in an elderly population where there is a loss of dopamine neurons as part of normal aging, we show this very effect. The dopamine precursor levodopa led to a dose-dependent (inverted U-shaped) persistent episodic memory benefit for images of scenes when tested after 6 hours, independent of whether encoding-related hippocampal fMRI activity was weak or strong. This lasting improvement even for weakly encoded events supports a role for dopamine in human episodic memory consolidation, albeit operating within a narrow dose range.
Optimal decision making requires that we integrate mnemonic information regarding previous decisions with value signals that entail likely rewards and punishments. The fact that memory and value signals appear to be coded by segregated brain regions, the hippocampus in the case of memory and sectors of prefrontal cortex in the case of value, raises the question as to how they are integrated during human decision-making. Using magnetoencephalography (MEG) to study healthy human participants we show increased theta oscillations over frontal and temporal sensors during non-spatial decisions based on memories from previous trials. Using source reconstruction we found that the medial temporal lobe (MTL), in a location compatible with the anterior hippocampus, and the anterior cingulate cortex in the medial wall of the frontal lobe are the source of this increased theta power. Moreover, we observed a correlation between theta power in the MTL source and behavioral performance in decision making, supporting a role for MTL theta oscillations in decision making performance. These MTL theta oscillations were synchronized with several prefrontal sources including lateral superior frontal gyrus, dorsal anterior cingulate gyrus and medial frontopolar cortex. There was no relationship between the strength of synchronization and the expected value of choices. Our results indicate that mnemonic guidance of human decision-making, beyond anticipation of expected reward, is supported by hippocampal-prefrontal theta synchronization.
In humans, dopamine is implicated in reward and risk-based decision-making. However, the specific effects of dopamine augmentation on risk evaluation are unclear. Here we sought to measure the effect of 100 mg oral levodopa, which enhances synaptic release of dopamine, on choice behaviour in healthy humans. We use a paradigm without feedback or learning, which solely isolates effects on risk evaluation. We present two studies (n = 20; n = 20) employing a randomised, placebo-controlled, within-subjects design. We manipulated different dimensions of risk in a controlled economic paradigm. We test effects on risk-reward tradeoffs, assaying both aversion to variance (the spread of possible outcomes) and preference for relative losses and gains (asymmetry of outcomes, i.e. skewness), dissociating this from potential non-specific effects on choice randomness using behavioural modelling. There were no systematic effects of levodopa on risk attitudes, either for variance or skewness. However, there was a drift towards more risk-averse behaviour over time, indicating that this paradigm was sufficiently sensitive to detect changes in risk preferences. These findings suggest that levodopa administration does not change the evaluation of risk. One possible reason is that dopaminergic influences on decision making operate instead by changing the response to reward feedback.
•Risk and valence influence choices in decision-making tasks.•Adolescents aged 11–16 took part in a gambling task.•Influences of risk and valence on decisions showed different development patterns.•Risk-aversion remained stable while the influence of valence reduced with age.
Recent research on risky decision-making in adults has shown that both the risk in potential outcomes and their valence (i.e., whether those outcomes involve gains or losses) exert dissociable influences on decisions. We hypothesised that the influences of these two crucial decision variables (risk and valence) on decision-making would vary developmentally during adolescence. We adapted a risk-taking paradigm that provides precise metrics for the impacts of risk and valence. Decision-making in 11–16 year old female adolescents was influenced by both risk and valence. However, their influences assumed different developmental patterns: the impact of valence diminished with age, while there was no developmental change in the impact of risk. These different developmental patterns provide further evidence that risk and valence are fundamentally dissociable constructs and have different influences on decisions across adolescence.
Risk-taking; Loss aversion; Valence; Decision-making; Adolescence
Dopaminergic medication-related Impulse Control Disorders (ICDs) such as pathological gambling and compulsive shopping have been reported in Parkinson disease (PD).
We hypothesized that dopamine agonists (DAs) would be associated with greater impulsive choice, or greater discounting of delayed rewards, in PD patients with ICDs (PDI).
Fourteen PDI patients, 14 PD controls without ICDs and 16 medication-free matched normal controls were tested on (i) the Experiential Discounting Task (EDT), a feedback-based intertemporal choice task, (ii) spatial working memory and (iii) attentional set shifting. The EDT was used to assess impulsive choice (hyperbolic K-value), reaction time (RT) and decision conflict RT (the RT difference between high conflict and low conflict choices). PDI patients and PD controls were tested on and off DA.
On the EDT, there was a group by medication interaction effect [F(1,26)=5.62; p=0.03], with pairwise analyses demonstrating that DA status was associated with increased impulsive choice in PDI patients (p=0.02) but not in PD controls (p=0.37). PDI patients also had faster RT compared to PD controls [F(1,26)=7.51; p=0.01]. DA status was associated with shorter RT [F(3,24)=8.39; p=0.001] and decision conflict RT [F(1,26)=6.16; p=0.02] in PDI patients but not in PD controls. There were no correlations between different measures of impulsivity. PDI patients on DA had greater spatial working memory impairments compared to PD controls on DA (t=2.13, df=26, p=0.04).
Greater impulsive choice, faster RT, faster decision conflict RT and executive dysfunction may contribute to ICDs in PD.
dopamine agonist; gambling; impulse control; Parkinson disease; delay discounting
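The hyperbolic K-value used above to quantify impulsive choice comes from the standard hyperbolic discounting model, V = A / (1 + kD): the larger k is, the more steeply a delayed reward loses subjective value. The sketch below illustrates this with made-up amounts and delays; the specific numbers are not from the study.

```python
def hyperbolic_value(amount, delay, k):
    """Subjective value of a reward under hyperbolic discounting:
    V = A / (1 + k * D), where k is the discount rate."""
    return amount / (1.0 + k * delay)

# A steep discounter (high k) prefers a small immediate reward over a
# larger delayed one; a shallow discounter (low k) does not.
# Amounts and delays are hypothetical illustration values.
impulsive_prefers_now = (
    hyperbolic_value(5, 0, k=1.0) > hyperbolic_value(10, 30, k=1.0)
)
patient_prefers_now = (
    hyperbolic_value(5, 0, k=0.01) > hyperbolic_value(10, 30, k=0.01)
)
```

A larger fitted k on the EDT therefore corresponds to the "greater impulsive choice" reported for PDI patients on dopamine agonists.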
Mesial temporal lobe epilepsy (mTLE) is the most prevalent form of focal epilepsy, and hippocampal sclerosis (HS) is considered the most frequent associated pathological finding. Recent connectivity studies have shown that abnormalities, either structural or functional, are not confined to the affected hippocampus, but can be found in other connected structures within the same hemisphere, or even in the contralesional hemisphere. Despite the role of hippocampus in memory functions, most of these studies have explored network properties at resting state, and in some cases compared connectivity values with neuropsychological memory scores. Here, we measured magnetoencephalographic responses during verbal working memory (WM) encoding in left mTLE patients and controls, and compared their effective connectivity within a frontotemporal network using dynamic causal modelling. Bayesian model comparison indicated that the best model included bilateral, forward and backward connections, linking inferior temporal cortex (ITC), inferior frontal cortex (IFC), and the medial temporal lobe (MTL). Tests for differences in effective connectivity revealed that patients exhibited decreased ipsilesional MTL-ITC backward connectivity, and increased bidirectional IFC-MTL connectivity in the contralesional hemisphere. Critically, a negative correlation was observed between these changes in patients, with decreases in ipsilesional coupling among temporal sources associated with increases in contralesional frontotemporal interactions. Furthermore, contralesional frontotemporal interactions were inversely related to task performance and level of education. The results demonstrate that unilateral sclerosis induced local and remote changes in the dynamic organization of a distributed network supporting verbal WM. Crucially, pre-(peri) morbid factors (educational level) were reflected in both cognitive performance and (putative) compensatory changes in physiological coupling.
•Effects of hippocampal sclerosis (HS) on effective connectivity were evaluated.•HS induced a network adjustment in the ipsilesional and the contralesional hemisphere.•Strength in ipsilesional and contralesional couplings was negatively related.•Contralesional couplings were inversely related to performance and level of education.•These findings support the notion that network alterations underlie cognitive decline.
Hippocampal sclerosis; Working memory; Connectivity; Dynamic causal modelling; Temporal lobe epilepsy
Estimating the value of potential actions is crucial for learning and adaptive behaviour. We know little about how the human brain represents action-specific value outside of motor areas. This is due, in part, to the difficulty of detecting neural correlates of value with conventional (region of interest) functional magnetic resonance imaging (fMRI) analyses when value is represented in a distributed fashion. We address this limitation by applying a recently developed multivariate decoding method to high-resolution fMRI data in subjects performing an instrumental learning task. We found evidence for action-specific value signals in circumscribed regions, specifically ventromedial prefrontal cortex (vmPFC), putamen, thalamus and insula cortex. By contrast, action-independent value signals were more widely represented across a large set of brain areas. Using multivariate Bayesian model comparison we formally tested whether value-specific responses are spatially distributed or coherent. We find strong evidence that both action-specific and action-independent value signals are represented in a distributed fashion. Our results suggest that a surprisingly large number of classical reward-related areas contain distributed representations of action-specific values, representations that are likely to mediate between reward and adaptive behaviour.
Value-based choices are influenced both by risk in potential outcomes and by whether outcomes reflect potential gains or losses. These variables are held to be related in a specific fashion, manifest in risk aversion for gains and risk seeking for losses. Instead, we hypothesized that there are independent impacts of risk and loss on choice such that, depending on context, subjects can show either risk aversion for gains and risk seeking for losses or the exact opposite. We demonstrate this independence in a gambling task, by selectively reversing a loss-induced effect (causing more gambling for gains than losses and the reverse) while leaving risk aversion unaffected. Consistent with these dissociable behavioral impacts of risk and loss, fMRI data revealed dissociable neural correlates of these variables, with parietal cortex tracking risk and orbitofrontal cortex and striatum tracking loss. Based on our neural data, we hypothesized that risk and loss influence action selection through approach–avoidance mechanisms, a hypothesis supported in an experiment in which we show valence and risk-dependent reaction time effects in line with this putative mechanism. We suggest that in the choice process risk and loss can independently engage approach–avoidance mechanisms. This can provide a novel explanation for how risk influences action selection and explains both classically described choice behavior as well as behavioral patterns not predicted by existing theory.
We constantly look for patterns in the environment that allow us to learn its key regularities. These regularities are fundamental in enabling us to make predictions about what is likely to happen next. The physiological study of regularity extraction has focused primarily on repetitive sequence-based rules within the sensory environment, or on stimulus-outcome associations in the context of reward-based decision-making. Here we ask whether we implicitly encode non-sequential stochastic regularities, and detect violations therein. We addressed this question using a novel experimental design and both behavioural and magnetoencephalographic (MEG) metrics associated with responses to pure-tone sounds with frequencies sampled from a Gaussian distribution. We observed that sounds in the tail of the distribution evoked a larger response than those that fell at the centre. This response resembled the mismatch negativity (MMN) evoked by surprising or unlikely events in traditional oddball paradigms. Crucially, responses to physically identical outliers were greater when the distribution was narrower. These results show that humans implicitly keep track of the uncertainty induced by apparently random distributions of sensory events. Source reconstruction suggested that the statistical-context-sensitive responses arose in a temporo-parietal network, areas that have been associated with attention orientation to unexpected events. Our results demonstrate a very early neurophysiological marker of the brain's ability to implicitly encode complex statistical structure in the environment. We suggest that this sensitivity provides a computational basis for our ability to make perceptual inferences in noisy environments and to make decisions in an uncertain world.
Survival crucially depends on our ability to extract information from the environment. This ability relies on learning about regularities that enable us to make predictions about what is likely to happen next. Sensitivity to violations of these regularities is necessary for timely reactions and adaptive responses to unexpected, or odd, events. Prior work on speech acquisition and artificial grammar learning has provided important behavioural evidence that humans are able to learn statistical regularities, but it still falls considerably short of providing a biological understanding for how these processes might take place in the brain. The neurophysiological study of regularity extraction has so far been limited, to either sequence-based rules or to simple change-detection paradigms, and thus the neurobiological mechanisms that underpin statistical learning remain unknown. Here we provide both behavioural and neurophysiological evidence to show that humans keep track of the uncertainty in apparently random distributions of events. Our work demonstrates that an early neurophysiological signal underlies the fundamental human ability of learning and making inferences in an uncertain world.
► Metacognition refers to the knowledge we have of our own cognitive processes. ► We investigated the development of metacognition between 11 and 41 years. ► Participants carried out a visual decision task and rated confidence in their decisions. ► While task performance was stable, metacognition improved between 11 and 17. ► Metacognition shows a prolonged developmental trajectory during adolescence.
Introspection, or metacognition, is the capacity to reflect on our own thoughts and behaviours. Here, we investigated how one specific metacognitive ability (the relationship between task performance and confidence) develops in adolescence, a period of life associated with the emergence of self-concept and enhanced self-awareness. We employed a task that dissociates objective performance on a visual task from metacognitive ability in a group of 56 participants aged between 11 and 41 years. Metacognitive ability improved significantly with age during adolescence, was highest in late adolescence and plateaued going into adulthood. Our results suggest that awareness of one’s own perceptual decisions shows a prolonged developmental trajectory during adolescence.
Adolescence; Metacognition; Cognitive development; Self-awareness; Introspection
Computational ideas pervade many areas of science and have an integrative explanatory role in neuroscience and cognitive science. However, computational depictions of cognitive function have had surprisingly little impact on the way we assess mental illness because diseases of the mind have not been systematically conceptualized in computational terms. Here, we outline goals and nascent efforts in the new field of computational psychiatry, which seeks to characterize mental dysfunction in terms of aberrant computations over multiple scales. We highlight early efforts in this area that employ reinforcement learning and game theoretic frameworks to elucidate decision-making in health and disease. Looking forwards, we emphasize a need for theory development and large-scale computational phenotyping in human subjects.