Trends Neurosci. Author manuscript; available in PMC 2010 October 1.
Published in final edited form as:
PMCID: PMC2755633

Roles for nigrostriatal—not just mesocorticolimbic—dopamine in reward and addiction


Forebrain dopamine circuitry has traditionally been studied by two largely independent specialist groups: students of Parkinson’s disease who study the nigrostriatal dopamine system that originates in the substantia nigra (SN), and students of motivation and addiction who study the role of the mesolimbic and mesocortical dopamine systems that originate in the ventral tegmental area (VTA). The anatomical evidence for independent nigrostriatal and mesolimbic dopamine systems has, however, long been obsolete. There is now compelling evidence that both nominal “systems” participate in reward function and addiction. Electrical stimulation of both SN and VTA is rewarding, blockade of glutamatergic or cholinergic input to either SN or VTA attenuates the habit-forming effects of intravenous cocaine, and dopamine in both nigrostriatal and mesocorticolimbic terminal fields participates in the defining property of rewarding events: the reinforcement of memory consolidation. Thus the similarities between nigrostriatal and mesolimbic dopamine systems may be as important as their differences.


The midbrain dopamine neurons that project to the forebrain were initially identified as a single continuous layer [1], arising from a single embryological cell group [2]. However, perhaps because the lateral and medial portions were largely restricted to established brain regions—the dorsal substantia nigra (SN) and the ventral tegmental area of Tsai (VTA)—these portions were given different labels (A9 and A10, respectively) and eventually became identified with two distinct nominal systems (a nigrostriatal system and a mesolimbic system) [3]. The two nominal systems in turn became identified with different functions: the nigrostriatal system—known to degenerate in Parkinson’s disease—with motor function, and the mesolimbic system—important for the habit-forming effects of cocaine and for approach behaviors—with motivation and reward function. While the simple dichotomy between a nigrostriatal motor system and a mesolimbic reward and motivational system has had long-lasting influence, it is a misleading dichotomy. The nigrostriatal and mesolimbic dopamine “systems” are not simply differentiated anatomically, and significant functional interactions between the two systems have recently been suggested [4, 5]. Here I discuss the anatomical complications and the lines of functional evidence that the substantia nigra dopamine neurons, and not just those of the ventral tegmental area, play a significant role in reward and addiction.

Anatomical complications

As was noted from the start [1], there is no clear boundary between the two nominal midbrain dopamine systems (Figure 1). Anterograde and retrograde tracing studies show that the SN and VTA dopamine cells have overlapping, not distinct, projection fields [6]. The discovery of dopamine terminals in prefrontal cortex—initially thought to arise uniquely from the VTA—prompted postulation of a third (mesocortical) or an expanded (mesocorticolimbic) system, but projections to the prefrontal cortex were subsequently found to arise from the medial SN as well as from the VTA [6]. Further, what is usually referred to as SN pars compacta (SNc) is now known to comprise two layers or “tiers,” a ventral tier projecting mainly to the dorsal striatum, and a dorsal tier projecting to limbic [7] as well as striatal [8] targets. It is only the ventral tier or “densocellular layer” that has the tight packing appropriate for the label “pars compacta” [9]. The most lateral dopaminergic neurons, in SN pars lateralis (SNl, a loosely compacted lateral extension of SNc, and thus more similar to the dorsal than to the ventral tier of SNc), project not to the dorsal striatum but rather to a traditionally limbic target (but see [10]): the amygdala [6]. Thus it is no longer possible to think of the limbic and striatal dopamine systems as arising from anatomically distinct lateral (SN) and medial (VTA) origins.

Figure 1
Coronal sections of the ventral midbrain of the rat. (a) The blue line outlines the layer of dopaminergic cell bodies and dendrites (dark brown) as revealed by tyrosine hydroxylase immunohistochemistry. The green line outlines the GABAergic cell bodies ...

Moreover, the terminal fields also fail to segregate simply. The major terminal field of the mesolimbic dopamine system—the unfortunately named nucleus accumbens septi—is not an unambiguously limbic structure. While some early investigators assumed accumbens to share olfactory (“smell brain”) functions with the septum and olfactory tubercle, others saw accumbens as a ventral extension of the striatum. The major portion of accumbens, the core subregion, is now understood to be an extension of the striatum and is, with the olfactory tubercle, known as the ventral striatum [11]. The medial “shell” portion of accumbens, however, can still be considered “limbic” as it is continuous with, has similar connections to, and is now often considered part of an extended delineation of the amygdala [11]. Part of the problem is that the so-called limbic system has never been consistently defined.

The nigrostriatal and mesolimbic dopamine systems are not anatomically distinct at either the level of their cells of origin or at the level of their terminal fields. Thus it should not be surprising that they are also not distinct functionally. In the following sections I discuss functional overlap between the two systems with respect to brain stimulation reward, cocaine reward, reward prediction, and reinforcement of memory consolidation.

Brain Stimulation Reward

The discovery that rats and humans will learn to work for direct electrical stimulation of certain brain regions (termed “intracranial self-stimulation”), coupled with the observation that humans find such stimulation pleasurable, led to the mapping of reward-related circuitry throughout the brain. Rats learn to work for stimulation of dozens of brain sites, many of which are presumably linked in series to form one or more reward-related circuits or sub-circuits. The most unambiguous self-stimulation involves stimulation sites along the medial forebrain bundle (MFB).

Dopamine antagonists (neuroleptics) attenuate the rewarding effects of MFB stimulation at doses that do not cause significant motoric debilitation [12, 13]. Conversely, the indirect dopamine agonist amphetamine enhances the rewarding effects of the stimulation and antagonizes the effects of neuroleptics in a manner that can be differentiated from simple augmentation of motor function [14]. Thus some subset of midbrain dopamine neurons plays a critical role in the rewarding effects of medial forebrain bundle electrical stimulation. While it was originally thought that rewarding MFB stimulation activated dopamine fibers directly, it is now believed that—at traditional stimulation parameters—it selectively activates afferent input to the dopamine system [15]. Thus the midbrain dopamine neurons are currently seen as a final common path for the rewarding effects of MFB stimulation.

While it is the mesocorticolimbic dopamine system that is most frequently associated with brain stimulation reward, the dopaminergic fibers from the SN and the VTA each enter and project along the MFB [3], and reward sites are found in both SN and VTA [16, 17]. Within-subjects mapping with a moveable electrode confirms that the boundaries of these reward sites correspond to the boundaries of the dopaminergic cell body region of SN and VTA, including SN pars compacta (SNc) and SN pars lateralis [18, 19]. The refractory periods for the directly depolarized substrate of ventral midbrain self-stimulation are similar to those of MFB self-stimulation, suggesting again that what are activated directly in this region are the sensitive terminals of the dopamine afferents rather than the relatively insensitive dopamine neurons themselves.

Moveable-electrode mapping studies also confirm that stimulation in the primary terminal fields of both the nigrostriatal and the mesocorticolimbic systems is rewarding [20, 21]. While there are regional differences in thresholds, response rates, and time to learn, these differences are graded across both dorsal (caudate) and ventral (accumbens, olfactory tubercle) striatum. Threshold and time to learn are not significantly different between nucleus accumbens and the adjacent anterior ventromedial caudate-putamen; indeed the differences between these two subregions of the striatum are no greater than those between any other adjacent pair of striatal subregions [21]. In the region of the dopamine cell bodies, thresholds at the best SNc and VTA placements are comparable. Response rates associated with VTA and SN stimulation differ considerably, but this appears to be largely influenced by motor (turning) artifacts due to spread of current dorsal to SNc [18]. Moreover, differences in response rate associated with positive sites in SNc and VTA are minimal compared to differences associated with sites within SNc, where stimulation at rostral placements tends to be positive and stimulation at caudal placements tends to be ineffective [18]. Thus comparisons of the effects of rewarding electrical stimulation between SN and VTA, like those between the dorsal (caudate-putamen) and ventral (nucleus accumbens) striatum, implicate both the nigrostriatal and the mesolimbic systems in reward function.

Intravenous cocaine reward

Early studies of intravenous cocaine self-administration emphasized the ventral striatum (nucleus accumbens) as a substrate of cocaine reward. First, like the effects of systemic dopamine receptor blockade, dopamine-specific lesions of the ventral striatum were found to disrupt self-administration of intravenous cocaine and amphetamine. This, of course, did not rule out a similar role for the dorsal striatum; in neither case were dorsal striatal lesions tested for comparison. Small non-specific lesions of the dorsal striatum had been shown to affect intravenous morphine reward [22], and recent examination of the effects of small lesions within the dorsal striatum (caudate) reveals decreases in progressive ratio responding for either intravenous cocaine or intravenous heroin [23]. Thus even lesion studies suggest some degree of involvement of the dorsal striatum in drug reward. Early lesion studies also implicated dopaminergic neurons of the VTA in cocaine self-administration, but lesions of the SN have not been tested, perhaps because complete lesions of this structure render animals aphagic, adipsic, and akinetic.

Whereas high systemic doses of neuroleptics cause extinction of intravenous cocaine self-administration—with an initial increase in response rate (“frustration” responding) followed by response cessation—the effect of low and moderate doses of systemic neuroleptics is simply to accelerate responding. The increased intake of cocaine caused by low doses of neuroleptics is interpreted as reflecting decreased rewarding efficacy of the drug; the animals appear to take more cocaine, over a wide range of doses per injection, in compensation for competitive dopamine antagonists. Thus they maintain increased concentrations of cocaine in the blood (and increased concentrations of dopamine in the brain) when the dopamine system is partially blocked. This interpretation of response rate increases—reduced rewarding drug impact—is confirmed by progressive ratio studies in which the animal is required to respond progressively more for each successive injection. Here, the measure of drug reward is how long the animal will continue to respond as the “price” of reward escalates. The highest response requirement that the animal meets before ceasing to respond is termed the “break point,” and neuroleptics reduce such break points. Thus despite their inclination to take more cocaine when it is free, neuroleptic-treated animals are willing to pay a lower price for cocaine when they have to work for it [24]. The decrease in break point confirms that cocaine is less rewarding even when animals show the motivation to take more of it.

The same pattern—increased responding for cocaine under fixed ratio schedules contrasted with decreased responding under progressive ratio schedules—is seen when neuroleptics are infused locally into nucleus accumbens [25, 26]. Dorsal striatal dopamine antagonists also increase response rate on fixed ratio schedules, although they do so less effectively and with greater latency than do ventral striatal antagonists [27]. This was suggested to reflect diffusion of the antagonist to the ventral striatum, but it may merely reflect that greater spread within the dorsal striatum is required because of its greater volume or because of weaker sensitivity. Recent studies show that dorsal [28] but not ventral [29] striatal neuroleptic microinjections attenuate cocaine-seeking in animals tested with a second-order schedule, a sensitive schedule that depends heavily on the contribution of conditioned reinforcement by cocaine-associated cues.

While the evidence for dorsal striatal participation in cocaine reward is relatively weak (compare [28] with [27]), stronger evidence comes from comparisons of VTA and SN manipulations. First, like systemic or ventral striatal treatments, microinjections of a D1-type dopamine antagonist into the VTA increase fixed-ratio cocaine self-administration and decrease progressive ratio responding [30]. The D1-type receptors in the VTA are on the terminals of GABAergic and glutamatergic afferents to this region, and because presynaptic control of glutamate release by D1 agonists is an order of magnitude more sensitive than presynaptic control of GABA release [31], the dominant local mechanism of D1 antagonists appears to involve a decrease in excitatory drive to the dopamine neurons. Glutamate input to the VTA activates VTA dopamine neurons as reflected in release of dopamine from dopaminergic dendrites [32], a correlate of dopaminergic cell firing and of dopamine release in nucleus accumbens [33–35]. Blockade of VTA glutamate receptors, like blockade of VTA D1-type receptors, increases cocaine self-administration on fixed-ratio schedules but attenuates responding under extinction conditions, presumably by blocking a glutamatergic contribution of reward-predictive stimuli that serve as conditioned incentive stimuli or conditioned reinforcers [32].

For the thesis of the present paper, the important point is that similar effects on fixed ratio responding are seen when the D1-type antagonist is microinjected into the SN [36]. The same doses of the D1 antagonist are effective in both VTA [30] and SN [36], and the effects of antagonists at the two sites have similar latencies. In the studies cited, the tips of the angled VTA injection cannulae and the vertical SN cannulae were 1.5 mm apart, and control injections 1 mm dorsal or rostral to effective sites were ineffective; thus it is very unlikely that the antagonist injected into the SN diffused to a site of action in the VTA. Rather, it appears that by attenuating excitatory input to SN dopamine neurons, antagonism of D1 dopamine receptors in the SN attenuates the rewarding impact of intravenous cocaine and does so in a similar manner to antagonism of D1 dopamine receptors in the VTA.

Cholinergic manipulations of SN and VTA further suggest contributions of both regions to the habit-forming—and thus the addictive—actions of cocaine. Cholinergic projections from the laterodorsal and pedunculopontine tegmental nuclei provide a second source of excitatory input to VTA dopamine neurons [37], and acetylcholine release is elevated in both the VTA and the SN during intravenous cocaine self-administration. This elevation is associated with increased dopaminergic activation (as reflected in local dendritic dopamine release) that is attenuated by local cholinergic antagonists [38]. As with D1 dopamine antagonists, cholinergic antagonists microinjected into either SN or VTA increase fixed ratio responding for cocaine; they each also increase local dendritic dopamine release [38]. Cholinergic antagonists injected into either SN or VTA—like glutamate antagonists injected into the same regions [32]—decrease responding for cocaine-predictive conditioned stimuli under extinction conditions [38]. As with the effects of the D1 dopamine antagonists, the effects of the cholinergic antagonists are assumed to reflect decreased excitatory input to the SN and VTA dopamine neurons. Thus, again, manipulations of SN dopamine cells, like those of VTA dopamine cells, modify the rewarding impact of intravenous cocaine and its associated cues [38].

Reward prediction

If the activation of dopaminergic neurons plays a significant role in reward function, these neurons should be responsive to the presentation of rewarding stimuli. That they are has been well established by electrophysiological studies; neurons presumed to be dopaminergic by their electrophysiological characteristics are excited by unexpected food rewards [39]. When, after many repeated trials, an animal learns that some distal environmental stimulus reliably predicts the presentation of reward, dopaminergic neurons come to respond to the distal predictive stimulus and cease responding to the proximal reward signal itself [39]. The distal signal, in effect, becomes the reward signal. The responses of dopamine neurons to reward-predictive stimuli occur prior to any movements that the distal stimuli evoke and thus do not reflect motor commands. Despite the fact that dopaminergic neurons cease to be excited by predicted rewards as they come to be excited by reward-predictors, their activity remains sensitive to reward events: if a reward is expected but not presented, the dopamine cell that responded to the reward predictor goes briefly silent [39]. While such studies only identify correlates, not causes, of reward function, these characteristics of dopaminergic responsiveness fit remarkably well with what is required of a reinforcement signal. Phasic dopamine activation occurs when there is something to learn (the association between a reward and an earlier predictor); dopaminergic pacemaker firing continues normally when there is nothing to be learned (when a reward occurs that was fully predicted); and phasic inhibition of dopaminergic firing occurs when a reward-predictor fails (and should no longer be reinforced).
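These response patterns, a burst to an unpredicted reward, transfer of the burst to a learned predictor, and a brief pause when a predicted reward is omitted, are commonly summarized computationally as a temporal-difference (TD) prediction error. The sketch below is not from the article; it is a minimal illustration, with made-up parameters (GAMMA, ALPHA) and a single learned cue value (v_cue), of how a TD error reproduces all three signatures.

```python
# Minimal temporal-difference (TD) sketch of a dopamine-like prediction error.
# Parameters and trial structure are illustrative, not taken from the article.
GAMMA = 1.0  # no discounting over the short cue-reward interval
ALPHA = 0.2  # learning rate

v_cue = 0.0  # learned reward prediction attached to the cue

def run_trial(reward):
    """One cue -> reward trial; returns the TD errors at cue onset and reward time."""
    global v_cue
    delta_cue = GAMMA * v_cue       # cue arrives unpredicted, so baseline prediction is 0
    delta_reward = reward - v_cue   # outcome compared against the cue's prediction
    v_cue += ALPHA * delta_reward   # learn from the reward-time error
    return delta_cue, delta_reward

first = run_trial(1.0)              # naive animal
for _ in range(200):                # repeated cue-reward pairings
    trained = run_trial(1.0)
omitted = run_trial(0.0)            # expected reward withheld
# first   ~ (0, +1): phasic burst to the unexpected reward
# trained ~ (+1, 0): the burst has transferred to the predictor
# omitted ~ (+1, -1): negative error (the "pause") at the time of the omitted reward
```

Holding the pre-cue baseline at zero reflects the assumption that cue onset itself cannot be predicted; this is what keeps the cue-locked burst from being learned away as training proceeds.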

For the present thesis, the important finding is that such responses are seen in both SN and VTA dopamine neurons. Despite several kinds of heterogeneity both between SN and VTA dopamine neurons [9, 40–42] and also within SN [43] and VTA [44–46] dopamine groupings, subsets of both SN and VTA dopamine neurons show the characteristic responses to reward and reward-predictive stimuli and to reward omission [39]. Indeed, the responses of SN and VTA neurons to rewards and reward-predictors are so indistinguishable that the electrophysiological data from cells in the two regions are traditionally pooled [39]. SN and VTA dopamine neurons share the same qualifying characteristics for whatever role dopamine neurons play in reward function.

The same can be said for the primary target neurons of the SN and VTA dopamine populations. Target neurons in the primary terminal field of the SN—dorsal striatal medium spiny neurons—and target neurons in the primary terminal field of the VTA—ventral striatal medium spiny neurons—respond similarly to rewards [47] and reward-predictors [48]. Thus, at each end of both the nigrostriatal and the mesolimbic dopamine systems, significant populations of dopamine target neurons are implicated in the processing of reward and reward-predictor information.

Reinforcement and memory consolidation

Those interested in the elusive mechanism of the essential feature of reward function—reinforcement—have studied it at both the behavioral and the cellular level. The concept of reinforcement was first and most clearly articulated by Thorndike [49], who initially described it as the “stamping-in” of associations and subsequently elaborated it as “the aftereffects of a connection upon it.” In his view of reinforcement as strengthening the connections between neurons that have been recently active, Thorndike anticipated the “Hebb Synapse” that is central to modern notions of the cellular basis of memory. At the cellular level, reinforcement is the stamping-in of functional connections between synaptic inputs and synaptic outputs. In its most fundamental sense, reinforcement refers to any treatment that enhances the permanence or “consolidation” of a memory trace in the nervous system [50]. While reward stimuli do more than simply reinforce learning (they also attract and motivate [51]), the stamping in of memory traces is their core characteristic [49]. In both the rapid time-frame of cellular models of synaptic potentiation and the more prolonged time-frame of consolidation of behavioral learning and memory, dopamine has an important and frequently necessary role. And at each of these levels of analysis, dopamine in the nigrostriatal system and dopamine in the mesolimbic system have each been implicated.

The most elemental demonstration of the stamping-in of memory consolidation is in behavioral paradigms where the reinforcing event is presented after the learning trial and is irrelevant to the task or association that is learned. Using this approach, Krivanek and McGaugh [52] established, for example, that amphetamine injections given after (post-trial) the training session can enhance the subsequent memory for food-rewarded discrimination learning. In this case the reinforcer of memory consolidation (amphetamine) was not only irrelevant but, as an appetite-suppressant drug, seemingly antithetical to the task. Nonetheless, when given after the task, amphetamine enhanced consolidation of the association between the food and the food-predictive discriminative stimulus. Similarly, post-trial sucrose [53] or post-trial rewarding brain stimulation [54, 55] can enhance memory consolidation in avoidance tasks where the glucose or stimulation is irrelevant to the task itself. If amphetamine, glucose, or rewarding brain stimulation is given in the hour after a significant learning event, consolidation of the relevant memories will be stronger and the evidence of learning will, when subsequently tested, be more evident.

The post-trial enhancement of memory consolidation by rewarding electrical stimulation is more clearly linked to stimulation of the origins of the nigrostriatal than of the mesocorticolimbic dopamine system [56]. Other post-trial manipulations of dopamine systems implicate both mesolimbic and nigrostriatal dopamine actions in the stamping in of memory consolidation. Post-trial administration of either amphetamine or a selective D2-type dopamine agonist enhances retention of a win-stay radial maze habit when injected into the hippocampus and enhances retention of a win-shift habit when injected into the dorsal striatum [57, 58]. Memory consolidation in a Pavlovian autoshaping task is disrupted by post-trial administration of a D1 antagonist into the ventral striatum [59]. While the consolidation of memory in different kinds of task appears to be influenced differentially by dorsal striatal and limbic manipulations—suggesting that the memory for different associations is laid down in different sub-regions of the dopamine terminal fields—memory function itself is clearly affected by manipulations of both nigrostriatal and mesolimbic projection targets.

At the neuronal level, two forms of modification of synaptic input-output relations serve as cellular models of learning and memory: long-term potentiation (LTP) and long-term depression (LTD) of glutamatergic neurotransmission. LTP and LTD occur at synapses in widespread regions of the brain, including striatal limbic and cortical targets of the VTA dopamine neurons, such as the nucleus accumbens [60], hippocampus [61], and medial prefrontal cortex [62], as well as in the primary striatal target of the SN dopamine neurons, the caudate-putamen [63, 64]. In most brain regions studied, mammalian LTP and LTD are each known to be dependent on dopaminergic input. Hippocampal LTP is blocked by dopamine D1 antagonists [65–67] and potentiated by D1 agonists [68]. Hippocampal LTD is potentiated by D1 agonists and D2 antagonists and blocked by D1 antagonists and D2 agonists [69]. Striatal LTD is blocked by either D1- or D2-type antagonists and can be restored in dopamine-depleted slice preparations by a combination of D1 and D2 agonists [70]. Striatal LTP is also dopamine-dependent [71], but in this case D1 and D2 receptor activations are antagonistic [72, 73]. LTP and LTD are also dopamine-dependent in amygdala [74], frontal cortex [62], and even the VTA [75, 76] and SN [77]. Perhaps surprisingly, while LTP and LTD have each been demonstrated at glutamatergic synapses in nucleus accumbens, dopamine appears to play an important role only in LTD in this structure [75, 78]. Thus dopamine appears to play important roles in the reinforcement of synaptic transmission in several but perhaps not all projections of the mesencephalic dopamine system. That the details of the roles played differ somewhat from site to site should perhaps not be surprising in light of the fact that, while the same intracellular events appear to be critical for memory consolidation in different structures, they do not necessarily occur in the same sequence in different structures [79].

A particularly interesting demonstration of striatal LTP involved the use of rewarding electrical stimulation at the level of mesencephalic dopamine neurons. Here rats were trained to lever-press for rewarding brain stimulation (confirming that the electrodes were in the right place), and then, after extinction of the response habit, stimulation with the same parameters was shown to potentiate subsequent corticostriatal glutamatergic neurotransmission. In this case the rewarding electrical stimulation sites were in the SN, and the effectiveness of the stimulation to potentiate corticostriatal transmission was correlated with the effectiveness of the stimulation to serve as a reward in the independent intracranial self-stimulation task [80]. Here a reinforcer involving the nigrostriatal system and established in the behavioral domain was found to influence memory consolidation in the cellular domain, and the memory that was consolidated was the memory of GABAergic medium spiny neurons for glutamatergic cortical input signals.

Summary and conclusions

While the present suggestion is that nigrostriatal and mesolimbic dopamine systems play common roles in reward function, this is not to imply that they are specialized for reward function or that they play a necessary role in reward. Dopamine-deficient mice are still capable of rudimentary learning. Forebrain dopamine release is caused not only by reward stimuli but also by stress and aversive stimuli. Near-total depletions of forebrain dopamine cause aphagia, adipsia, and akinesia; thus basal dopamine levels play an essential role in all behavior and particularly in sensorimotor responsiveness to environmental stimuli. In addition to rewarding effects, phasic increases in dopamine have drive-like motivational effects that enhance the salience of reward-predictive “incentive-motivational” stimuli in the environment, leading to reward-seeking behaviors. Conversely, competitive dopamine antagonists not only attenuate the effectiveness of rewards as reinforcers; they also attenuate an animal’s motivation or apparent willingness to exert effort. Even the importance of dopamine for memory consolidation appears not to depend on a single mechanism. LTP and LTD depend closely on the timing of synaptic inputs and thus probably depend on burst firing in dopamine neurons and actions of dopamine within the synapse [81]; post-trial memory consolidation in behavioral studies occurs over tens of minutes or hours and thus probably depends on extrasynaptic dopamine, perhaps more influenced by pacemaker firing of dopamine neurons and release from extra-synaptic varicosities along terminal axons [82]. Actions of dopamine at D1 and D2 receptors have different roles in LTP and LTD, and these roles can differ from structure to structure [83].

Thus dopamine participates in more than just reward or reinforcement function, and within reward and reinforcement function it contributes in multiple ways. However, it is not just mesocorticolimbic dopamine that is important for reward function; nigrostriatal dopamine is similarly important. Nigrostriatal and mesolimbic dopamine do not play equal or identical roles; they appear to contribute differentially to, for example, habit, spatial, and emotional learning [84]. In part, this must be due to differences in the information received in the different dopaminergic terminal fields; even within the striatum, different sub-regions receive inputs from different parts of the cerebral cortex [4]. While the number and nature of the roles dopamine plays remain to be fully understood, the habit-forming and habit-maintaining effects of intravenous cocaine—the properties that lead the animal from experimental cocaine self-administration to compulsive cocaine self-administration and addiction—appear to depend on both mesolimbic and nigrostriatal dopamine function.

Box 1Reward versus Reinforcement

What is reward and how does it differ from reinforcement? The term reward has many meanings, and they differ from writer to writer [51, 85, 86]. Reward is the more general term, referring as a noun to the stimulus or event that is rewarding, and referring as a verb to the consequence of presenting a rewarding stimulus in the right temporal relation with a response or another stimulus. When present in the environment or when presented prior to a given behavior, rewards also serve as incentives, attracting attention and approach responses. They can also serve a “priming” function, as when the eating of a salted peanut triggers the motivation for more salted peanuts. Priming effects of a reward are short-lived and not stored in long-term memory, but they can influence the probability and vigor of the next response in a series when animals are responding rapidly or when they are responding for slowly decaying rewards like addictive drugs [87, 88]. That rewards have incentive and priming effects as well as reinforcing (see below) effects makes reward an ambiguous term when taken out of context. The term reward is used as a noun when we refer to the object or event. It is also used as a verb, both in common parlance and by some specialists (such as brain stimulation reward specialists) who are sensitive to the potential confounding of priming effects with reinforcing effects under conditions where they cannot specify the relative contributions of the two [13, 51].

The term “reinforcement” is unambiguous. It refers only to the seemingly retroactive “stamping in” of recent associations between stimuli, stimuli and responses, and responses and outcomes that occurs when the reinforcer is given after the stimulus or response in question. Reinforcing actions of a reward depend on this temporal sequence. To reinforce the association between a conditioned and an unconditioned stimulus, the reinforcer (the unconditioned stimulus in this case) is most effective when given a fraction of a second after the conditioned stimulus so that it is stimulus-contingent; to reinforce a stimulus-response or a response-outcome association, the reinforcer (the reward) is also most effective when given very shortly after the response, so that it is response-contingent. Even a very short delay of reinforcement can reduce reinforcing effectiveness dramatically. The term “reinforcement,” then, is much more narrowly defined than the term “reward.” Often, unfortunately, the terms reward and reinforcement are used interchangeably, presumably by authors who are not concerned with, or who are careless with, these distinctions. For the purposes of the present paper, “reward” is used as a verb in situations where the presentation of a stimulus or event might have proactive “priming” effects as well as retroactive “reinforcing” effects. “Reinforcement” is used to refer selectively to the stamping in of learned associations with prior stimuli or responses.

Box 2 Motivation

While it is not the primary topic of the present paper, motivation and reward are interdependent functions, and dopamine is important for each [13]. Motivation is a state variable, an inferred variable that precedes, instigates, and invigorates goal-directed behavior [89]. The prototypical motivational states are thirst, hunger, and hormonal states such as those that orchestrate courtship behaviors. Motivation comes in two forms: drive and incentive motivation. Thirst and hunger are considered drive states, assumed to reflect the state of the organism and easily manipulated by water or food deprivation. Hormonal states are traditionally assumed to be drive states as well, but their status is ambiguous inasmuch as they can be triggered by environmental events [90]. Motivational states that are triggered by environmental stimuli are termed incentive-motivational states, a term usually used in the context of conditioned motivational or incentive stimuli. An incentive stimulus is a stimulus the animal tends to approach or work for; thus the term “incentive stimulus” is equivalent to the term “reward” when “reward” is used as a noun. A conditioned incentive-motivational stimulus is a reward-predictor. Reward-predictive stimuli instigate and invigorate goal-directed behavior, and inasmuch as reward prediction depends on past learning, incentive motivation depends on past reinforcement. To the degree that past reinforcement depends on brain dopamine, incentive motivation is also dopamine-dependent. However, while dopamine is important for establishing incentive motivation (and while dopamine can enhance incentive motivation), incentive motivation (as reflected, for example, in prolonged extinction responding) can survive for long periods without any phasic dopamine contribution [13].

Box 3 Brain dopamine and reward: caveats

It is widely accepted that brain dopamine plays a major role in reward function. However, there continue to be challenges to this widely held view, as discussed in detail elsewhere [13]. For those who do not follow this literature in detail, several caveats should be briefly mentioned. First, brain dopamine is not necessary for reward function; some degree of reward learning can be demonstrated in dopamine-deficient mice [91]. Second, brain dopamine is involved in more than merely the response to rewards. Dopamine systems are activated in response to aversive and arousing stimuli, as well as to stimuli we normally think of as rewarding (positive-valence stimuli) [92]. Dopamine release can precipitate [93] or enhance [94] as well as reinforce responding, serving as a drive or priming stimulus in addition to its stamping-in function. Third, while I once advanced the view that dopamine is important for the pleasure that is frequently associated with reinforcement, this is a side issue with respect to the theme of the present paper. Finally, consistent with the central issue of the present paper, while reward function is importantly dependent on dopamine function, it is not necessarily dependent on dopamine function in nucleus accumbens [13].


Supported by funding from the Intramural Research Program, National Institute on Drug Abuse, National Institutes of Health, Department of Health and Human Services. I thank Yavin Shaham, Stefanie Geisler, and Maria Flavia Barbano for constructive comments on an earlier draft.


1. Dahlström A, Fuxe K. Evidence for the existence of monoamine-containing neurons in the central nervous system. Acta Physiologica Scandinavica. 1964;62:1–55. [PubMed]
2. Seiger A, Olson L. Late prenatal ontogeny of central monoamine neurons in the rat: Fluorescence histochemical observations. Z Anat Entwickl-Gesch. 1973;140:281–318. [PubMed]
3. Ungerstedt U. Stereotaxic mapping of the monoamine pathways in the rat brain. Acta Physiol Scand Suppl. 1971;367:1–48. [PubMed]
4. Haber SN, et al. Striatonigrostriatal pathways in primates form an ascending spiral from the shell to the dorsolateral striatum. J Neurosci. 2000;20:2369–2382. [PubMed]
5. Everitt BJ, Robbins TW. Neural systems of reinforcement for drug addiction: from actions to habits to compulsion. Nat Neurosci. 2005;8:1481–1489. [PubMed]
6. Fallon JH, Loughlin SE. Substantia Nigra. In: Paxinos G, editor. The rat nervous system. 2. Academic Press; 1995. pp. 215–237.
7. Fallon JH. Topographic organization of ascending dopaminergic projections. Ann N Y Acad Sci. 1988;537:1–9. [PubMed]
8. Gerfen CR, et al. The neostriatal mosaic: II. Patch- and matrix-directed mesostriatal dopaminergic and non-dopaminergic systems. J Neurosci. 1987;7:3915–3934. [PubMed]
9. Wang HL, Morales M. The corticotropin releasing factor binding protein (CRF-BP) within the ventral tegmental area is expressed in a subset of dopaminergic neurons. J Comp Neurol. 2008;509:302–318. [PMC free article] [PubMed]
10. Swanson LW, Petrovich GD. What is the amygdala? Trends Neurosci. 1998;21:323–331. [PubMed]
11. Heimer L, et al. The basal ganglia. In: Paxinos G, editor. The rat nervous system. Academic Press; 1985. pp. 37–74.
12. Wise RA, Rompré PP. Brain dopamine and reward. Ann Rev Psychol. 1989;40:191–225. [PubMed]
13. Wise RA. Dopamine, learning and motivation. Nat Rev Neurosci. 2004;5:483–494. [PubMed]
14. Gallistel CR, Karras D. Pimozide and amphetamine have opposing effects on the reward summation function. Pharmacol Biochem Behav. 1984;20:73–77. [PubMed]
15. Yeomans JS. The cells and axons mediating medial forebrain bundle reward. In: Hoebel BG, Novin D, editors. The Neural Basis of Feeding and Reward. Haer Institute; 1982. pp. 405–417.
16. Routtenberg A, Malsbury C. Brainstem pathways of reward. J Comp Physiol Psychol. 1969;68:22–30. [PubMed]
17. Crow TJ. A map of the rat mesencephalon for electrical self-stimulation. Brain Res. 1972;36:265–273. [PubMed]
18. Corbett D, Wise RA. Intracranial self-stimulation in relation to the ascending dopaminergic systems of the midbrain: a moveable electrode mapping study. Brain Res. 1980;185:1–15. [PubMed]
19. Wise RA. Intracranial self-stimulation: mapping against the lateral boundaries of the dopaminergic cells of the substantia nigra. Brain Res. 1981;213:190–194. [PubMed]
20. Prado-Alcala R, et al. Brain stimulation reward and dopamine terminal fields. II. Septal and cortical projections. Brain Res. 1984;301:209–219. [PubMed]
21. Prado-Alcala R, Wise RA. Brain stimulation reward and dopamine terminal fields. I. Caudate- putamen, nucleus accumbens and amygdala. Brain Res. 1984;297:265–273. [PubMed]
22. Glick SD, et al. Changes in morphine self-administration and morphine dependence after lesions of the caudate nucleus in rats. Psychopharmacologia. 1975;41:219–224. [PubMed]
23. Suto N, et al. Electrolytic lesions of the dorsal, central, and ventral striatum differentially affect the maintenance of cocaine and morphine self-administration. Society for Neuroscience Abstracts. 2004;576(7):691–695.
24. Roberts DC, et al. Self-administration of cocaine on a progressive ratio schedule in rats: dose-response relationship and effect of haloperidol pretreatment. Psychopharmacology (Berl) 1989;97:535–538. [PubMed]
25. Maldonado R, et al. D1 dopamine receptors in the nucleus accumbens modulate cocaine self-administration in the rat. Pharmacol Biochem Behav. 1993;45:239–242. [PubMed]
26. McGregor A, Roberts DC. Dopaminergic antagonism within the nucleus accumbens or the amygdala produces differential effects on intravenous cocaine self-administration under fixed and progressive ratio schedules of reinforcement. Brain Res. 1993;624:245–252. [PubMed]
27. Caine SB, et al. Effects of the dopamine D-1 antagonist SCH 23390 microinjected into the accumbens, amygdala or striatum on cocaine self-administration in the rat. Brain Res. 1995;692:47–56. [PubMed]
28. Vanderschuren LJ, et al. Involvement of the dorsal striatum in cue-controlled cocaine seeking. J Neurosci. 2005;25:8665–8670. [PubMed]
29. Di Ciano P, Everitt BJ. Direct interactions between the basolateral amygdala and nucleus accumbens core underlie cocaine-seeking behavior by rats. J Neurosci. 2004;24:7167–7173. [PubMed]
30. Ranaldi R, Wise RA. Blockade of D1 dopamine receptors in the ventral tegmental area decreases cocaine reward: Possible role for dendritically released dopamine. J Neurosci. 2001;21:5841–5846. [PubMed]
31. Kalivas PW, Duffy P. D1 receptors modulate glutamate transmission in the ventral tegmental area. J Neurosci. 1995;15:5379–5388. [PubMed]
32. You ZB, et al. A role for conditioned ventral tegmental glutamate release in cocaine-seeking. J Neurosci. 2007;27:10546–10555. [PubMed]
33. Legault M, Wise RA. Novelty-evoked elevations of nucleus accumbens dopamine: dependence on impulse flow from the ventral subiculum and glutamatergic neurotransmission in the ventral tegmental area. Eur J Neurosci. 2001;13:819–828. [PubMed]
34. Legault M, et al. Chemical stimulation of the ventral hippocampus elevates nucleus accumbens dopamine by activating dopaminergic neurons of the ventral tegmental area. J Neurosci. 2000;20:1635–1642. [PubMed]
35. Legault M, Wise RA. Injections of N-methyl-D-aspartate into the ventral hippocampus increase extracellular dopamine in the ventral tegmental area and nucleus accumbens. Synapse. 1999;31:241–249. [PubMed]
36. Quinlan MG, et al. Blockade of substantia nigra dopamine D1 receptors reduces intravenous cocaine reward in rats. Psychopharmacology. 2004;175:53–59. [PubMed]
37. Omelchenko N, Sesack SR. Cholinergic axons in the rat ventral tegmental area synapse preferentially onto mesoaccumbens dopamine neurons. J Comp Neurol. 2006;494:863–875. [PMC free article] [PubMed]
38. You ZB, et al. Acetylcholine release in the mesocorticolimbic dopamine system during cocaine-seeking: Conditioned and unconditioned contributions to reward and motivation. J Neurosci. 2008;28:9021–9029. [PMC free article] [PubMed]
39. Schultz W. Predictive reward signal of dopamine neurons. J Neurophysiol. 1998;80:1–27. [PubMed]
40. Gerfen CR, et al. The neostriatal mosaic: compartmental distribution of calcium-binding protein and parvalbumin in the basal ganglia of the rat and monkey. Proc Natl Acad Sci U S A. 1985;82:8780–8784. [PubMed]
41. Grace AA, Onn SP. Morphology and electrophysiological properties of immunocytochemically identified rat dopamine neurons recorded in vitro. J Neurosci. 1989;9:3463–3481. [PubMed]
42. Liang CL, et al. Midbrain dopaminergic neurons in the mouse that contain calbindin-D28k exhibit reduced vulnerability to MPTP-induced neurodegeneration. Neurodegeneration. 1996;5:313–318. [PubMed]
43. Brown MT, et al. Activity of neurochemically heterogeneous dopaminergic neurons in the substantia nigra during spontaneous and driven changes in brain state. J Neurosci. 2009;29:2915–2925. [PMC free article] [PubMed]
44. Mantz J, et al. Effect of noxious tail pinch on the discharge rate of mesocortical and mesolimbic dopamine neurons: selective activation of the mesocortical system. Brain Res. 1989;476:377–381. [PubMed]
45. Margolis EB, et al. The ventral tegmental area revisited: is there an electrophysiological marker for dopaminergic neurons? J Physiol. 2006;577:907–924. [PubMed]
46. Lammel S, et al. Unique properties of mesoprefrontal neurons within a dual mesocorticolimbic dopamine system. Neuron. 2008;57:760–773. [PubMed]
47. Apicella P, et al. Responses to reward in monkey dorsal and ventral striatum. Exp Brain Res. 1991;85:491–500. [PubMed]
48. Apicella P, et al. Neuronal activity in monkey striatum related to the expectation of predictable environmental events. J Neurophysiol. 1992;68:945–960. [PubMed]
49. Wise RA. Drive, incentive, and reinforcement: the antecedents and consequences of motivation. Nebraska Symposium on Motivation. 2004;50:159–195. [PubMed]
50. Landauer TK. Reinforcement as consolidation. Psychol Rev. 1969;76:82–96. [PubMed]
51. Wise RA. The brain and reward. In: Liebman JM, Cooper SJ, editors. The Neuropharmacological Basis of Reward. Oxford University Press; 1989. pp. 377–424.
52. Krivanek JA, McGaugh JL. Facilitating effects of pre- and posttrial amphetamine administration on discrimination learning in mice. Agents Actions. 1969;1:36–42. [PubMed]
53. Messier C, White NM. Contingent and non-contingent actions of sucrose and saccharin reinforcers: Effects on taste preference and memory. Physiology & Behavior. 1984;32:195–203. [PubMed]
54. Huston JP, et al. Facilitation of learning by reward of post-trial memory processes. Experientia. 1974;30:1038–1040.
55. Mondadori C, et al. Post-trial reinforcing hypothalamic stimulation can facilitate avoidance learning. Neuroscience Letters. 1976;2:183–187. [PubMed]
56. Major R, White N. Memory facilitation by self-stimulation reinforcement mediated by nigrostriatal bundle. Physiology & Behavior. 1978;20:723–733. [PubMed]
57. Packard MG, White NM. Dissociation of hippocampus and caudate nucleus memory systems by posttraining intracerebral injection of dopamine agonists. Behav Neurosci. 1991;105:295–306. [PubMed]
58. Packard MG, et al. Amygdala modulation of hippocampal-dependent and caudate nucleus-dependent memory processes. Proc Natl Acad Sci U S A. 1994;91:8477–8481. [PubMed]
59. Dalley JW, et al. Time-limited modulation of appetitive Pavlovian memory by D1 and NMDA receptors in the nucleus accumbens. Proc Natl Acad Sci U S A. 2005;102:6189–6194. [PubMed]
60. Kombian SB, Malenka RC. Simultaneous LTP of non-NMDA-and LTD of NMDA-receptor-mediated responses in the nucleus accumbens. Nature. 1994;368:242–246. [PubMed]
61. Bliss TVP, Lomo T. Long-lasting potentiation of synaptic transmission in the dentate area of the anesthetized rabbit following stimulation of the perforant path. J Physiol. 1973;232:331–356. [PubMed]
62. Otani S, et al. Dopaminergic modulation of long-term synaptic plasticity in rat prefrontal neurons. Cereb Cortex. 2003;13:1251–1256. [PubMed]
63. Calabresi P, et al. Long-term potentiation in the striatum is unmasked by removing the voltage-dependent magnesium block of NMDA receptor channels. Eur J Neurosci. 1992;4:929–935. [PubMed]
64. Centonze D, et al. Dopaminergic control of synaptic plasticity in the dorsal striatum. Eur J Neurosci. 2001;13:1071–1077. [PubMed]
65. Frey U, et al. Dopaminergic antagonists prevent long-term maintenance of posttetanic LTP in the CA1 region of rat hippocampal slices. Brain Res. 1990;522:69–75. [PubMed]
66. Frey U, et al. The effect of dopaminergic D1 receptor blockade during tetanization on the expression of long-term potentiation in the rat CA1 region in vitro. Neurosci Lett. 1991;129:111–114. [PubMed]
67. Li S, et al. Dopamine-dependent facilitation of LTP induction in hippocampal CA1 by exposure to spatial novelty. Nat Neurosci. 2003;6:526–531. [PubMed]
68. Otmakhova NA, Lisman JE. D1/D5 dopamine receptors inhibit depotentiation at CA1 synapses via cAMP-dependent mechanism. J Neurosci. 1998;18:1270–1279. [PubMed]
69. Chen Z, et al. Roles of dopamine receptors in long-term depression: enhancement via D1 receptors and inhibition via D2 receptors. Receptors Channels. 1996;4:1–8. [PubMed]
70. Calabresi P, et al. Long-term synaptic depression in the striatum: physiological and pharmacological characterization. J Neurosci. 1992;12:4224–4233. [PubMed]
71. Centonze D, et al. Unilateral dopamine denervation blocks corticostriatal LTP. J Neurophysiol. 1999;82:3575–3579. [PubMed]
72. Calabresi P, et al. Abnormal synaptic plasticity in the striatum of mice lacking dopamine D2 receptors. J Neurosci. 1997;17:4536–4544. [PubMed]
73. Calabresi P, et al. Dopamine and cAMP-regulated phosphoprotein 32 kDa controls both striatal long-term depression and long-term potentiation, opposing forms of synaptic plasticity. J Neurosci. 2000;20:8443–8451. [PubMed]
74. Bissiere S, et al. Dopamine gates LTP induction in lateral amygdala by suppressing feedforward inhibition. Nat Neurosci. 2003;6:587–592. [PubMed]
75. Thomas MJ, et al. Modulation of long-term depression by dopamine in the mesolimbic system. J Neurosci. 2000;20:5581–5586. [PubMed]
76. Bonci A, Malenka RC. Properties and plasticity of excitatory synapses on dopaminergic and GABAergic cells in the ventral tegmental area. J Neurosci. 1999;19:3723–3730. [PubMed]
77. Overton PG, et al. Long-term potentiation at excitatory amino acid synapses on midbrain dopamine neurons. Neuroreport. 1999;10:221–226. [PubMed]
78. Pennartz CM, et al. Synaptic plasticity in an in vitro slice preparation of the rat nucleus accumbens. Eur J Neurosci. 1993;5:107–117. [PubMed]
79. Izquierdo I, et al. Different molecular cascades in different sites of the brain control memory consolidation. Trends in Neurosciences. 2006;29:496–505. [PubMed]
80. Reynolds JN, et al. A cellular mechanism of reward-related learning. Nature. 2001;413:67–70. [PubMed]
81. Wickens JR. Synaptic plasticity in the basal ganglia. Behav Brain Res. 2009;199:119–128. [PubMed]
82. Descarries L, et al. Dual character, asynaptic and synaptic, of the dopamine innervation in adult rat neostriatum: a quantitative autoradiographic and immunocytochemical analysis. J Comp Neurol. 1996;375:167–186. [PubMed]
83. Nicola SM, et al. Dopaminergic modulation of neuronal excitability in the striatum and nucleus accumbens. Annu Rev Neurosci. 2000;23:185–215. [PubMed]
84. McDonald RJ, White NM. A triple dissociation of memory systems: hippocampus, amygdala, and dorsal striatum. Behav Neurosci. 1993;107:3–22. [PubMed]
85. White NM. Reward or reinforcement: what’s the difference? Neurosci Biobehav Rev. 1989;13:181–186. [PubMed]
86. Berridge KC, Robinson TE. Parsing reward. Trends in Neurosciences. 2003;26:507–513. [PubMed]
87. Gallistel CR, et al. Parametric analysis of brain stimulation reward in the rat: I. The transient process and the memory-containing process. J Comp Physiol Psychol. 1974;87:848–859. [PubMed]
88. Pickens R, Harris WC. Self-administration of d-amphetamine by rats. Psychopharmacologia. 1968;12:158–163. [PubMed]
89. Wise RA. Sensorimotor modulation and the variable action pattern (VAP): Toward a noncircular definition of drive and motivation. Psychobiology. 1987;15:7–20.
90. Lehrman DS. The reproductive behavior of ring doves. Scientific American. 1965;211:48–54. [PubMed]
91. Robinson S, et al. Distinguishing whether dopamine regulates liking, wanting, and/or learning about rewards. Behav Neurosci. 2005;119:5–15. [PubMed]
92. Horvitz JC. Mesolimbocortical and nigrostriatal dopamine responses to salient non-rewards. Neuroscience. 2000;96:651–656. [PubMed]
93. Phillips PE, et al. Subsecond dopamine release promotes cocaine seeking. Nature. 2003;422:614–618. [PubMed]
94. Wyvell CL, Berridge KC. Intra-accumbens amphetamine increases the conditioned incentive salience of sucrose reward: enhancement of reward “wanting” without enhanced “liking” or response reinforcement. J Neurosci. 2000;20:8122–8130. [PubMed]