Behavioral changes driven by reinforcement and punishment are referred to as simple or model-free reinforcement learning. Animals can also change their behaviors by observing events that are neither appetitive nor aversive, when these events provide new information about payoffs available from alternative actions. This is an example of model-based reinforcement learning, and can be accomplished by incorporating hypothetical reward signals into the value functions for specific actions. Recent neuroimaging and single-neuron recording studies showed that the prefrontal cortex and the striatum are involved not only in reinforcement and punishment, but also in model-based reinforcement learning. We found evidence for both types of learning, and hence hybrid learning, in monkeys during simulated competitive games. In addition, in both the dorsolateral prefrontal cortex and orbitofrontal cortex, individual neurons heterogeneously encoded signals related to actual and hypothetical outcomes from specific actions, suggesting that both areas might contribute to hybrid learning.
belief learning; decision making; game theory; reinforcement learning; reward
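The hybrid learning described above can be sketched as an action-value update in which hypothetical payoffs from unchosen actions enter alongside experienced rewards. This is an illustrative sketch only, not the paper's actual model; the function names, action labels, and learning rate are assumptions.

```python
def update_values(q, chosen, reward, hypothetical, alpha=0.2):
    """One trial of a hybrid learner: the chosen action is updated from the
    experienced reward (model-free), while unchosen actions are updated from
    hypothetical payoffs the environment reveals (model-based component)."""
    q = dict(q)
    # Model-free component: the experienced outcome drives the chosen action's value.
    q[chosen] += alpha * (reward - q[chosen])
    # Model-based component: hypothetical outcomes drive unchosen actions' values.
    for action, h_reward in hypothetical.items():
        q[action] += alpha * (h_reward - q[action])
    return q

q = {"left": 0.0, "right": 0.0}
q = update_values(q, chosen="left", reward=1.0, hypothetical={"right": 0.5})
# Both the experienced and the hypothetical outcome have now moved the
# corresponding value estimates toward the observed payoffs.
```

The key point is that the unchosen action's value changes even though its payoff was never experienced, which is what distinguishes this hybrid scheme from purely model-free learning.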
The value of an object acquired by a particular action often determines the motivation to produce that action. Previous studies found neural signals related to the values of different objects or goods in the orbitofrontal cortex, while the values of outcomes expected from different actions are broadly represented in multiple brain areas implicated in movement planning. However, how the brain combines the values associated with various objects and the information about their locations is not known. In this study, we tested whether the neurons in the dorsolateral prefrontal cortex (DLPFC) and striatum in rhesus monkeys might contribute to translating the value signals between multiple frames of reference. Monkeys were trained to perform an oculomotor intertemporal choice in which the color of a saccade target and the number of its surrounding dots signaled the magnitude of reward and its delay, respectively. In both DLPFC and striatum, temporally discounted values (DVs) associated with specific target colors and locations were encoded by partially overlapping populations of neurons. In the DLPFC, the information about reward delays and DVs of rewards available from specific target locations emerged earlier than the corresponding signals for target colors. Similar results were reproduced by a simple network model built to compute DVs of rewards in different locations. Therefore, DLPFC might play an important role in estimating the values of different actions by combining the previously learned values of objects and their present locations.
intertemporal choice; prefrontal cortex; reward; temporal discounting; utility
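The temporally discounted values described above are often modeled with a hyperbolic discount function, DV = A / (1 + kD), in work on monkey intertemporal choice. The sketch below assumes that form; the discount rate and reward parameters are illustrative, not values from the study.

```python
def discounted_value(amount, delay, k=0.2):
    """Hyperbolic temporal discounting: DV = A / (1 + k * D),
    where A is reward magnitude, D is delay, and k is the discount rate."""
    return amount / (1.0 + k * delay)

# A large delayed reward can be worth less than a small immediate one:
dv_large_delayed = discounted_value(amount=4.0, delay=8.0)  # 4 / (1 + 1.6)
dv_small_now = discounted_value(amount=2.0, delay=0.0)      # 2.0
choice = "small_now" if dv_small_now > dv_large_delayed else "large_delayed"
```

A decision maker who always takes the option with the larger DV will reverse preference as delays change, which is the behavioral signature such tasks measure.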
Game theory analyses optimal strategies for multiple decision makers interacting in a social group. However, the behaviours of individual humans and animals often deviate systematically from the optimal strategies described by game theory. The behaviours of rhesus monkeys (Macaca mulatta) in simple zero-sum games showed similar patterns, but their departures from the optimal strategies were well accounted for by a simple reinforcement-learning algorithm. During a computer-simulated zero-sum game, neurons in the dorsolateral prefrontal cortex often encoded the previous choices of the animal and its opponent as well as the animal's reward history. By contrast, the neurons in the anterior cingulate cortex predominantly encoded the animal's reward history. Using simple competitive games, therefore, we have demonstrated functional specialization between different areas of the primate frontal cortex involved in outcome monitoring and action selection. Temporally extended signals related to the animal's previous choices might facilitate the association between choices and their delayed outcomes, whereas information about the choices of the opponent might be used to estimate the reward expected from a particular action. Finally, signals related to the reward history might be used to monitor the overall success of the animal's current decision-making strategy.
prefrontal cortex; decision making; reward
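In reinforcement-learning accounts of such competitive games, value estimates are typically mapped to choice probabilities with a softmax rule, which keeps behavior stochastic enough that an opponent cannot fully exploit it. The rule below is a standard sketch under that assumption; the inverse-temperature parameter and action values are illustrative.

```python
import math

def softmax_probs(values, beta=3.0):
    """Softmax action selection: higher-valued actions are chosen more often,
    but choice remains stochastic, which matters against an exploiting opponent."""
    exps = {a: math.exp(beta * v) for a, v in values.items()}
    total = sum(exps.values())
    return {a: e / total for a, e in exps.items()}

p = softmax_probs({"left": 0.6, "right": 0.4})
# "left" is favored, but "right" is still chosen often enough to keep
# the opponent from perfectly predicting the animal's next response.
```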
Monkeys adjust their behavior in response to outcomes that they have observed but not directly experienced, and single neurons within the anterior cingulate cortex respond to these fictive rewards the same way they respond to experienced rewards.
The neural mechanisms supporting the ability to recognize and respond to fictive outcomes, outcomes of actions that one has not taken, remain obscure. We hypothesized that neurons in anterior cingulate cortex (ACC), which monitors the consequences of actions and mediates subsequent changes in behavior, would respond to fictive reward information. We recorded responses of single neurons during performance of a choice task that provided information about the reward values of unchosen options. We found that ACC neurons signal fictive reward information, and use a coding scheme similar to that used to signal experienced outcomes. Thus, individual ACC neurons process both experienced and fictive rewards.
Reward from a particular action is seldom immediate, and the influence of such a delayed outcome on choice decreases with its delay. It has been postulated that when faced with immediate and delayed rewards, decision makers choose the option with the maximum temporally discounted value. We examined the preference of monkeys for delayed reward in a novel inter-temporal choice task and the neural basis for real-time computation of temporally discounted values in the dorsolateral prefrontal cortex. During this task, the locations of the targets associated with small and large rewards and their corresponding delays were randomly varied. We found that prefrontal neurons often encoded the temporally discounted value of reward expected from a particular option. Furthermore, activity tended to increase with discounted values for targets presented in the neuron's preferred direction, suggesting that activity related to temporally discounted values in the prefrontal cortex might determine the animal's behavior during inter-temporal choice.
We investigated how different sub-regions of rodent prefrontal cortex contribute to value-based decision making, by comparing neural signals related to the animal’s choice, its outcome, and action value in orbitofrontal cortex (OFC) and medial prefrontal cortex (mPFC) of rats performing a dynamic two-armed bandit task. Neural signals for upcoming action selection arose in the mPFC, including the anterior cingulate cortex, only immediately before the behavioral manifestation of the animal’s choice, suggesting that rodent prefrontal cortex is not involved in advanced action planning. Both OFC and mPFC conveyed signals related to the animal’s past choices and their outcomes over multiple trials, but neural signals for chosen value and reward prediction error were more prevalent in the OFC. Our results suggest that rodent OFC and mPFC serve distinct roles in value-based decision making, and that the OFC plays a prominent role in updating the values of outcomes expected from chosen actions.
The process of decision making in humans and other animals is adaptive and can be tuned through experience so as to optimize the outcomes of their choices in a dynamic environment. Previous studies have demonstrated that the anterior cingulate cortex plays an important role in updating the animal’s behavioral strategies when the action-outcome contingencies change. Moreover, neurons in the anterior cingulate cortex often encode the signals related to expected or actual reward. We investigated whether reward-related activity in the anterior cingulate cortex is affected by the animal’s previous reward history. This was tested in rhesus monkeys trained to make binary choices in a computer-simulated competitive zero-sum game. The animal’s choice behavior was relatively close to the optimal strategy, but also revealed small but systematic biases that are consistent with the use of a reinforcement learning algorithm. In addition, the activity of neurons in the dorsal anterior cingulate cortex that was related to the reward received by the animal in a given trial was often modulated by the rewards in the previous trials. Some of these neurons encoded the rate of rewards in previous trials, whereas others displayed activity modulations more closely related to the reward prediction errors. By contrast, signals related to the animal’s choices were only weakly represented in this cortical area. These results suggest that neurons in the dorsal anterior cingulate cortex might be involved in the subjective evaluation of choice outcomes based on the animal’s reward history.
reinforcement learning; game theory; neuroeconomics; decision making; dopamine
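The two reward-history signals described above, a running reward rate and a reward prediction error, can be sketched with a simple leaky average. This is an illustrative reconstruction under that common assumption, not the analysis used in the study; the learning rate and reward sequence are arbitrary.

```python
def reward_history_signals(rewards, alpha=0.3):
    """Track two reward-history signals over trials: a leaky running average
    of past rewards (reward rate) and the trial-by-trial reward prediction
    error (actual reward minus the current running estimate)."""
    rate, errors, rates = 0.0, [], []
    for r in rewards:
        delta = r - rate        # reward prediction error on this trial
        rate += alpha * delta   # update the running reward rate
        errors.append(delta)
        rates.append(rate)
    return rates, errors

rates, errors = reward_history_signals([1, 1, 0, 1, 0])
# The same outcome produces a smaller prediction error when recent trials
# were also rewarded -- i.e., the signal is modulated by reward history.
```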
Theories of dorsolateral prefrontal cortex (DLPFC) involvement in cognitive function variously emphasize its involvement in rule implementation, cognitive control, or working and/or spatial memory. These theories predict broad effects of DLPFC lesions on tests of visual learning and memory. We evaluated the effects of DLPFC lesions (including both banks of the principal sulcus) in rhesus monkeys on tests of scene learning and strategy implementation that are severely impaired following crossed unilateral lesions of frontal cortex and inferotemporal cortex. Dorsolateral lesions had no effect on learning of new scene problems postoperatively, or on the implementation of preoperatively acquired strategies. They were also without effect on the ability to adjust choice behaviour in response to a change in reinforcer value, a capacity that requires interaction between the amygdala and frontal lobe. These intact abilities following DLPFC damage support specialization of function within the prefrontal cortex, and suggest that many aspects of memory and strategic and goal-directed behaviour can survive ablation of this structure.
episodic; frontal cortex; macaque; memory; strategy
The primate prefrontal cortex contributes to stimulus-guided behavior, but the functional specializations among its areas remain uncertain. To better understand such specializations, we contrasted neuronal activity in the dorsolateral prefrontal cortex (PFdl) and the orbital prefrontal cortex (PFo). The task required rhesus monkeys to use a visual cue to choose a saccade target. Some cues instructed the monkeys to repeat their most recent response; others instructed them to change it. Responses were followed by feedback: fluid reward if correct, visual feedback if incorrect. Previous studies, using different tasks, have reported that PFo neurons did not encode responses. We found PFo did encode responses in this task, but only near feedback time, after the response had been completed. PFdl differed from PFo in several respects. As reported previously, some PFdl neurons encoded responses from the previous trial and others encoded planned responses. PFo neurons did not have these properties. After feedback, PFdl encoded rewarded responses better than unrewarded ones and thus combined response and outcome information. PFo, in contrast, encoded the responses chosen, rewarded or not. These findings suggest that PFdl and PFo contribute differently to response knowledge, with PFo using an outcome-independent signal to monitor current responses at feedback time.
Decision; feedback; monitoring; evaluation; frontal lobe; prefrontal cortex
Functional impairment of the orbital and medial prefrontal cortex underlies deficits in executive control that characterize addictive disorders, including alcohol addiction. Previous studies indicate that alcohol alters glutamate neurotransmission and one substrate of these effects may be through the reconfiguration of the subunits constituting ionotropic glutamate receptor (iGluR) complexes. Glutamatergic transmission is integral to cortico-cortical and cortico-subcortical communication and alcohol-induced changes in the abundance of the receptor subunits and/or their splice variants may result in critical functional impairments of prefrontal cortex in alcohol dependence. To this end, the effects of chronic ethanol self-administration on glutamate receptor ionotropic AMPA (GRIA) subunit variant and kainate (GRIK) subunit mRNA expression were studied in the orbitofrontal cortex (OFC), dorsolateral prefrontal cortex (DLPFC), and anterior cingulate cortex (ACC) of male cynomolgus monkeys. In DLPFC, total AMPA splice variant expression and total kainate receptor subunit expression were significantly decreased in alcohol drinking monkeys. Expression levels of GRIA3 flip and flop and GRIA4 flop mRNAs in this region were positively correlated with daily ethanol intake and blood ethanol concentrations (BEC) averaged over the 6 months prior to necropsy. In OFC, AMPA subunit splice variant expression was reduced in the alcohol treated group. GRIA2 flop mRNA levels in this region were positively correlated with daily ethanol intake and BEC averaged over the 6 months prior to necropsy. Results from these studies provide further evidence of transcriptional regulation of iGluR subunits in the primate brain following chronic alcohol self-administration. Additional studies examining the cellular localization of such effects in the framework of primate prefrontal cortical circuitry are warranted.
ethanol; AMPA; kainate; messenger RNA; prefrontal cortex; qPCR; primate
Our choices often require appropriate actions in order to obtain a preferred outcome, but the neural underpinnings that link decision making and action selection remain largely undetermined. Recent theories propose that action selection occurs simultaneously with the decision process, i.e., in parallel in time. Specifically, it is thought that action selection in motor regions originates from a competitive process which is gradually biased by evidence signals originating in other regions, such as those specialized in value computations. Biases reflecting the evaluation of choice options should thus emerge in the motor system before the decision process is complete. Using transcranial magnetic stimulation, we sought direct physiological evidence for this prediction by measuring changes in cortico-spinal excitability in human motor cortex during value-based decisions. We found that excitability for chosen versus unchosen actions distinguishes the forthcoming choice before completion of the decision process. Both excitability and reaction times varied as a function of the subjective value-difference between chosen and unchosen actions, consistent with this effect being value-driven. This relationship was not observed in the absence of a decision. Our data provide novel evidence in humans that internally generated value-based decisions influence the competition between action representations in motor cortex before the decision process is complete. This is incompatible with models of serial processing of stimulus, decision, and action.
The mediodorsal thalamus is a major input to the prefrontal cortex and is thought to modulate cognitive functions of the prefrontal cortex. Damage to the medial, magnocellular part of the mediodorsal thalamus (MDmc) impairs cognitive functions dependent on prefrontal cortex, including memory. The contribution of MDmc to other aspects of cognition dependent on prefrontal cortex has not been determined. The ability of monkeys to adjust their choice behavior in response to changes in reinforcer value, a capacity impaired by lesions of orbital prefrontal cortex, can be tested in a reinforcer devaluation paradigm. In the present study, rhesus monkeys with bilateral neurotoxic MDmc lesions were tested in the devaluation procedure. Monkeys learned visual discrimination problems in which each rewarded object is reliably paired with one of two different food rewards, and then were given choices between pairs of rewarded objects, one associated with each food. Selective satiation of one of the food rewards reduces choices of objects associated with that food in normal monkeys. Monkeys with bilateral neurotoxic lesions of MDmc learned concurrently presented visual discrimination problems as quickly as unoperated control monkeys, but showed impaired reinforcer devaluation effects. This finding suggests that the neural circuitry for control of behavioral choice by changes in reinforcer value includes MDmc.
amygdala; choice behavior; decision-making; devaluation; medial thalamus; orbitofrontal cortex; prefrontal cortex; reward
Contingency theories of goal-directed action propose that experienced disjunctions between an action and its specific consequences, as well as conjunctions between these events, contribute to encoding the action-outcome association. Although considerable behavioral research in rats and humans has provided evidence for this proposal, relatively little is known about the neural processes that contribute to the two components of the contingency calculation. Specifically, while recent findings suggest that the influence of action-outcome conjunctions on goal-directed learning is mediated by a circuit involving ventromedial prefrontal, medial orbitofrontal cortex and dorsomedial striatum, the neural processes that mediate the influence of experienced disjunctions between these events are unknown. Here we show differential responses to probabilities of conjunctive and disjunctive reward deliveries in the ventromedial prefrontal cortex, the dorsomedial striatum, and the inferior frontal gyrus. Importantly, activity in the inferior parietal lobule and the left middle frontal gyrus varied with a formal integration of the two reward probabilities, ΔP, as did response rates and explicit judgments of the causal efficacy of the action.
fMRI; Operant; Learning; Reward; Caudate; Cortex
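The formal integration of conjunctive and disjunctive reward probabilities mentioned above, ΔP, is conventionally defined as the probability of the outcome given the action minus its probability in the action's absence. The sketch below assumes that standard definition; the probability values are illustrative, not figures from the study.

```python
def delta_p(p_outcome_given_action, p_outcome_given_no_action):
    """Contingency measure: Delta-P = P(O|A) - P(O|~A).
    High when acting makes the outcome more likely (conjunctions dominate);
    driven down by rewards delivered without the action (disjunctions)."""
    return p_outcome_given_action - p_outcome_given_no_action

# Same conjunctive reward probability, different rates of 'free' reward:
high_contingency = delta_p(0.8, 0.1)  # the action strongly controls the outcome
low_contingency = delta_p(0.8, 0.7)   # the outcome arrives mostly regardless
```

Holding P(O|A) fixed while raising P(O|~A) degrades the action-outcome contingency, which is the manipulation that contingency theories of goal-directed action turn on.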
Both norepinephrine and acetylcholine have been shown to be critically involved in mediating attention but there remains debate about whether they serve similar or unique functions. Much of what is known about the role of these neurochemicals in cognition is based on manipulations done at the level of the cell body but these findings are difficult to reconcile with data regarding the unique contribution of cortical subregions, e.g. the dorsolateral prefrontal cortex, to attention. In the current study, we directly compared the effects of noradrenergic and cholinergic deafferentation of the rat medial prefrontal cortex, the homologue of primate dorsolateral prefrontal cortex, using an intradimensional/extradimensional attentional set shifting task, a task previously shown to be able to dissociate the function of the primate dorsolateral prefrontal cortex from orbitofrontal cortex. We found that noradrenergic, but not cholinergic, deafferentation produces specific impairments in the ability to shift attentional set. We also clarified the nature of the attentional deficits by assessing the ability of rats to disregard irrelevant stimuli. Noradrenergic lesions did not alter the ability of rats to ignore irrelevant stimuli, suggesting that the attentional deficit results from an overly focused attentional state that retards learning that a new stimulus dimension predicts reward.
DBH saporin; 192 IgG saporin; infralimbic/prelimbic cortices; selective attention
Abstract behavior-guiding rules and strategies allow monkeys to avoid errors in rarely encountered situations. In the present study, we contrasted strategy-related neuronal activity in the dorsolateral prefrontal cortex (PFdl) and the orbital prefrontal cortex (PFo) of rhesus monkeys. On each trial of their behavioral task, the monkeys responded to a foveal visual cue by making a saccade to one of two spatial targets. One response required a leftward saccade; the other required a saccade of equal magnitude to the right. The cues instructed the monkeys to follow one of two response strategies: to stay with their most recent successful response or to shift to the alternative response. Neurons in both areas encoded the stay and shift strategies after the cue appeared, but there were three major differences between the PFo and the PFdl: (1) many strategy-encoding cells in PFdl also encoded the response (left or right), but few, if any, PFo cells did so; (2) strategy selectivity appeared earlier in PFo than in PFdl; and (3) on error trials, PFo neurons encoded the correct strategy—the one that had been cued but not implemented—whereas in PFdl the strategy signals were weak or absent on error trials. These findings indicate that PFo and PFdl both contribute to behaviors guided by abstract response strategies, but do so differently, with PFo encoding a strategy and PFdl encoding a response based on a strategy.
orbitofrontal cortex; frontal lobe; cognitive neurophysiology; rules; decisions; abstract cognition
Studies investigating response reversal consistently implicate regions of medial and lateral prefrontal cortex when reinforcement contingencies change. However, it is unclear from these studies how these regions give rise to the individual components of response reversal, such as reinforcement value encoding, response inhibition, and response change. Here we report a novel instrumental learning task designed to determine whether regions implicated in processing reversal errors are uniquely involved in this process, or whether they play a more general role in representing response competition, reinforcement value, or punishment value in the absence of demands for response change. In line with previous findings, reversal errors activated orbitofrontal cortex, dorsomedial prefrontal cortex, ventrolateral prefrontal cortex, caudate, and dorsolateral prefrontal cortex. These regions also showed increased activity to errors in the absence of contingency changes. In addition, ventrolateral PFC, caudate, and dorsolateral PFC each exhibited increased activity following correct reversals. Activity in these regions was not significantly modulated by changes in reinforcement value that were not sufficient to make an alternative response advantageous. These data do not support punishment-processing or prepotent response inhibition accounts of ventrolateral prefrontal cortex function. Instead, they support recent conceptualizations of ventrolateral prefrontal cortex function that implicate this region in resolving response competition by manipulating the representation of either motor response options, or object features. These data also suggest that dorsolateral prefrontal cortex plays a role in reversal learning, probably through top-down attentional control of object or reinforcement features when task demands increase.
Response reversal; affective shift; response competition; ventrolateral prefrontal cortex; decision-making
To compare with our previous findings on relative-duration discrimination, we studied prefrontal cortex activity as monkeys performed a relative-distance discrimination task. We wanted to know whether the same parts of the prefrontal cortex compare durations and distances and, if so, whether they use similar mechanisms. Two stimuli appeared sequentially on a video screen, one above a fixed reference point, the other below it by a different distance. After a delay period, the same two stimuli reappeared (as choice stimuli), and the monkeys’ task was to choose the one that had appeared farther from the reference point during its initial presentation. We recorded from neurons in the dorsolateral prefrontal cortex (area 46) and the caudal prefrontal cortex (area 8). Although some prefrontal neurons encoded the absolute distance of a stimulus from the reference point, many more encoded relative distance. Categorical representations (‘farther’) predominated over parametric ones (‘how much farther’). Relative-distance coding was most often abstract, coding the farther or closer stimulus to the same degree, independent of its position on the screen. During the delay period before the choice stimuli appeared, feature-based coding supplanted order-based coding, and position-based coding—always rare—decreased to chance levels. The present results closely resembled those for a duration-discrimination task in the same cortical areas. We conclude, therefore, that these areas contribute to decisions based on both spatial and temporal information.
frontal lobe; decision-making; spatial perception; temporal perception; working memory
Successful control of affect partly depends on the capacity to modulate negative emotional responses through the use of cognitive strategies (i.e., reappraisal). Recent studies suggest the involvement of frontal cortical regions in the modulation of amygdala reactivity and the mediation of effective emotion regulation. However, within-subject inter-regional connectivity between amygdala and prefrontal cortex in the context of affect regulation is unknown. Here, using psychophysiological interaction analyses of functional magnetic resonance imaging data, we show that activity in specific areas of the frontal cortex (dorsolateral, dorsal medial, anterior cingulate, orbital) covaries with amygdala activity and that this functional connectivity is dependent on the reappraisal task. Moreover, strength of amygdala coupling with orbitofrontal cortex and dorsal medial prefrontal cortex predicts the extent of attenuation of negative affect following reappraisal. These findings highlight the importance of functional connectivity within limbic-frontal circuitry during emotion regulation.
emotion; fMRI; functional connectivity; psychophysiological interaction; amygdala; prefrontal; regulation; reappraisal
The ability to apply behavioral strategies to obtain rewards efficiently and make choices based on changes in the value of rewards is fundamental to the adaptive control of behavior. The extent to which different regions of prefrontal cortex are required for specific kinds of decisions is not well-understood. We tested rhesus monkeys with bilateral ablations of ventrolateral prefrontal cortex on tasks that required the use of behavioral strategies to optimize the rate with which rewards were accumulated, or to modify choice behavior in response to changes in the value of particular rewards. Monkeys with ventrolateral prefrontal lesions were impaired in performing the strategy-based task, but not on value-based decision making. In contrast, orbital prefrontal ablations produced the opposite impairments in the same tasks. These findings support the conclusion that independent neural systems within the prefrontal cortex are necessary for control of choice behavior based on strategies or on stimulus value.
macaque; monkey; prefrontal cortex; reward; reinforcement; rules
Damage to prefrontal cortex (PFC) impairs decision-making, but the underlying value computations that might cause such impairments remain unclear. Here we report that value computations are doubly dissociable within PFC neurons. While many PFC neurons encoded chosen value, they used opponent encoding schemes such that averaging the neuronal population eliminated value coding. However, a special population of neurons in anterior cingulate cortex (ACC) - but not orbitofrontal cortex (OFC) - multiplexed chosen value across decision parameters using a unified encoding scheme, and encoded reward prediction errors. In contrast, neurons in OFC - but not ACC - encoded chosen value relative to the recent history of choice values. Together, these results suggest complementary valuation processes across PFC areas: OFC neurons dynamically evaluate current choices relative to recent choice values, while ACC neurons encode choice predictions and prediction errors using a common valuation currency reflecting the integration of multiple decision parameters.
Studies using brain imaging methods have shown that neuronal activity in the orbitofrontal cortex, a brain area thought to promote the ability to control behavior according to likely outcomes or consequences, is altered in drug addicts. These human imaging findings have led to the hypothesis that core features of addiction like compulsive drug use and drug relapse are mediated in part by drug-induced changes in orbitofrontal function. Here, we discuss results from laboratory studies using rats and monkeys on the effect of drug exposure on orbitofrontal-mediated learning tasks and on neuronal structure and activity in orbitofrontal cortex. We also discuss results from studies on the role of the orbitofrontal cortex in drug self-administration and relapse. Our main conclusion is that while there is clear evidence that drug exposure impairs orbitofrontal-dependent learning tasks and alters neuronal activity in orbitofrontal cortex, the precise role these changes play in compulsive drug use and relapse has not yet been established.
drug cues; orbitofrontal cortex; reinstatement; relapse; reversal learning; stress
We used neuropsychological tasks to investigate integrity of brain circuits linking orbitofrontal cortex and amygdala (orbitofrontal-amygdala), and dorsolateral prefrontal cortex and hippocampus (dorsolateral prefrontal-hippocampus), in 138 individuals aged 7–18 years, with and without autism. We predicted that performance on orbitofrontal-amygdala tasks would be poorer in the Autism group compared to the Non-Autism group regardless of intellectual level (verbal mental age – VMA) and that performance on dorsolateral prefrontal-hippocampus tasks would be associated primarily with intellectual level. Predicted differences between Autism and Non-Autism groups on orbitofrontal-amygdala tasks were present but greater in individuals with higher VMA. On dorsolateral prefrontal-hippocampus tasks, poorer performance by the Autism compared to the Non-Autism group was found at all VMA levels. Group differences suggest both brain circuits are impaired in autism, but performance on all tasks is also associated with intellectual level.
Oestrogen modulates cognitive function and affective behaviours subserved by the prefrontal cortex (PFC). Identifying and localising oestrogen receptor (ER)α in human PFC will contribute to our understanding of the molecular mechanism of oestrogen action in this region. Inferences about the site of action of oestrogen in human brain are derived largely from studies performed in nonhuman mammalian species; however, the congruence of findings across species has not been demonstrated. Furthermore, the laminar, cellular, and subcellular localisation of ERα in the cortex is debated. Therefore, we compared the distribution of ERα in human dorsolateral prefrontal cortex (DLPFC) with that of monkey DLPFC and rat medial PFC. Immunohistochemistry performed on frontal cortex from the three species demonstrated ERα-positive cells throughout all layers of the PFC, in pyramidal and nonpyramidal neurones, with both nuclear and cytoplasmic immunoreactivity. Western blot analyses and preabsorption studies confirmed that the antibody used recognised ERα and not ERβ. A strong ERα immunoreactive band corresponding to the full-length ERα protein (65–67 kDa) in the frontal cortex of all three species matched the size of the predominant immunoreactive band detected in breast cancer cell lines known to express ERα. Additionally, other ERα immunoreactive proteins of varying molecular weight in breast cancer cells, rat ovary and mammalian brain were detected, suggesting that ERα may exist in more than one form in the mammalian frontal cortex. The present study provides evidence that ERα protein exists in neurones in mammalian PFC and that ERα is anatomically well-positioned to directly mediate oestrogen action in these neurones.
oestrogen receptor; brain; human post-mortem; monkey; rat