PLoS Comput Biol. 2016 June; 12(6): e1005003.
Published online 2016 June 24. doi:  10.1371/journal.pcbi.1005003
PMCID: PMC4920376

Prospective Coding by Spiking Neurons

Peter E. Latham, Editor

Abstract

Animals learn to make predictions, such as associating the sound of a bell with upcoming feeding or predicting a movement that a motor command is eliciting. How predictions are realized on the neuronal level and what plasticity rule underlies their learning is not well understood. Here we propose a biologically plausible synaptic plasticity rule to learn predictions on a single neuron level on a timescale of seconds. The learning rule allows a spiking two-compartment neuron to match its current firing rate to its own expected future discounted firing rate. For instance, if an originally neutral event is repeatedly followed by an event that elevates the firing rate of a neuron, the originally neutral event will eventually also elevate the neuron’s firing rate. The plasticity rule is a form of spike timing dependent plasticity in which a presynaptic spike followed by a postsynaptic spike leads to potentiation. Even if the plasticity window has a width of 20 milliseconds, associations on the time scale of seconds can be learned. We illustrate prospective coding with three examples: learning to predict a time varying input, learning to predict the next stimulus in a delayed paired-associate task and learning with a recurrent network to reproduce a temporally compressed version of a sequence. We discuss the potential role of the learning mechanism in classical trace conditioning. In the special case that the signal to be predicted encodes reward, the neuron learns to predict the discounted future reward and learning is closely related to the temporal difference learning algorithm TD(λ).

Author Summary

Sensory inputs are often predictable. Lightning is followed by thunder, a falling object causes noise when hitting the ground, our skin gets wet when we jump into the water. Humans learn regularities like these without effort. Learned predictions allow us to cover our ears in anticipation of thunder or to close our eyes just before an object hits the ground and breaks into pieces. What changes in the brain when new predictions are learned? In this article, we present a mathematical model and computer simulations of the idea that the activity of a single neuron represents expected future events. Such a prospective code can be learned by a neuron that receives input from the memory trace of a first event (e.g. lightning) and also input from the second event (e.g. thunder). Synaptic input connections from the memory trace are potentiated such that the spiking activity ramps up towards the onset of the second event. This deviates from classical Hebbian learning, which merely associates two events that are coincident in time. Learning in our model associates a current event with future events.

Introduction

Animals can learn to predict upcoming stimuli. In delayed paired-associate tasks, animals learn to respond to pairs of stimuli (e.g. images A1-B1 and A2-B2) separated by a delay. These tasks can be solved by either keeping a memory of the first stimulus (A1 or A2) during the delay period (retrospective coding) or anticipating the second stimulus (B1 or B2) during the delay period (prospective coding). Monkeys seem to use both coding schemes [1]. Recordings in the prefrontal cortex of monkeys performing a delayed paired-associate task revealed single neurons with decreasing firing rate in response to a specific first stimulus (A1 or A2) and other neurons with ramping activity in trials where a specific second stimulus (B1 or B2) is anticipated [1, 2]. Thus, the firing rate of a neuron may encode not only past and current events, but also prospective events.

Learning to anticipate a future stimulus can also be observed in classical trace conditioning, where a conditioned stimulus (CS, e.g. sound of a bell) is followed after a delay by an unconditioned stimulus US (e.g. a sausage) that causes a response R (e.g. salivation) [3, 4]. After several repetitions of this protocol, the conditioned stimulus CS can elicit response R already before the onset of the unconditioned stimulus US.

A common experimental finding in these examples is the slowly ramping neuronal activity prior to the predicted event. In an experiment where mice choose to lick left or right in response to a tactile cue, the neural activity in the anterior lateral motor cortex ramps up in the waiting period before the response [5]. This activity pattern implements prospective coding as it indicates whether the animal will lick left or right. Serotonergic neurons in the dorsal raphe nucleus of mice show an activity ramp in a delay period between a predictive odor cue and the availability of a sucrose reward [6]. In rats that navigate a maze towards the learned position of a chocolate milk reward, the activity of striatal neurons increases while the rat approaches the reward position [7, 8]. In visual delayed paired associate tasks in which monkeys are trained to select a specific choice object that is associated with a previously shown cue object, increasing activity in the delay period was measured for neurons in the prefrontal cortex [1, 9, 10] and in the inferior temporal cortex [2, 11].

It is unclear how prospective coding emerges. The cue and the associated predictable event are typically separated by an interval of some seconds. Synaptic plasticity, which is presumably involved in learning new associations, typically requires presynaptic and postsynaptic activity to coincide within a much shorter interval. Some tens of milliseconds is, for example, the size of the ‘plasticity window’ in spike-timing dependent plasticity; no synaptic change occurs if the presynaptic and postsynaptic spikes are separated by more than the size of this plasticity window [12, 13]. This mismatch between the behavioral and the neuronal timescales raises the question of how a neuronal system can learn to make predictions more than a second ahead. There are also plasticity mechanisms that can correlate pre- and postsynaptic spiking events that are separated by seconds [14, 15]. Yet, assuming many simultaneously active afferents, it remains unclear how the behaviourally relevant pair of pre- and postsynaptic spikes can be selected out of hundreds of behaviourally irrelevant pairs.

In normative models of synaptic plasticity, the shape of the causal part of the plasticity window matches the shape of the postsynaptic potential (PSP) if the objective is to reproduce precise spike timings [16–18]. However, if the objective is to reproduce future activity, this specific learning rule is insufficient. Yet, as we demonstrate in this article, the same plasticity rule with an only slightly wider window also allows for learning a prospective code. With this mechanism, it is possible to learn an activity ramp towards a specific event in time, or to learn to predict a time-varying signal or a sequence of activities well ahead in time. In a 2-compartment neuron model, this mechanism leads to the dendritic prediction of future somatic spiking. The mechanism stands in contrast to the work of Urbanczik & Senn, where the current somatic spiking is predicted [18]. Despite this fundamental difference, the plasticity rules differ only in the width of the potentiation part of the plasticity window.

Results

Schematic description of the learning mechanism

Before defining the learning rule in detail, we provide an intuitive description. In a neuron with both static synapses (green connection in Fig 1A and 1B) and plastic synapses (blue in Fig 1A and 1B), we propose a learning mechanism for the plastic synapses that relies on two basic ingredients: spike-timing dependent synaptic potentiation and balancing synaptic depression. The synaptic connections are strengthened if a presynaptic spike is followed by a postsynaptic spike within a ‘plasticity window of potentiation’ (red in Fig 1A and 1B). The size of this plasticity window turns out to have a strong influence on the timing of spikes that are caused by strengthened dendritic synapses. If the plasticity window has the same shape as a postsynaptic potential (PSP), learned spikes are fired at roughly the same time as target spikes [16–18]. But if the plasticity window is slightly longer than the postsynaptic potential, learned spikes tend to be fired earlier than target spikes. More precisely, because of the slightly wider plasticity window of potentiation, presynaptic spikes may elicit postsynaptic spikes through newly strengthened connections (thick blue arrow in Fig 1B) even before the onset of the input through static synapses. These earlier postsynaptic spikes in turn allow strengthening the inputs of presynaptic neurons that spike even earlier. We refer to this as the bootstrapping effect of predicting one's own predictions. As a result, postsynaptic activity induced by the input through static synapses will be preceded by an activity ramp produced by appropriately tuned dendritic input. The neuron learns a prospective code that predicts an upcoming event.

Fig 1
A learning mechanism that leads to prospective coding.

The 2-compartment neuron model

We consider a 2-compartment neuron model that captures important functional details of spiking neurons and is well suited for analytical treatment [18]. In this model (Fig 1C), a dendritic compartment receives input through plastic synapses with strengths w. The voltage U of the somatic compartment is coupled to the dendritic voltage Vw and receives additional input I through static synapses,

C dU(t)/dt = −gL U(t) + gD (Vw(t) − U(t)) + I(t),
where gL is the leak conductance, gD is the coupling conductance between soma and dendrite and C, the somatic capacitance. The dendritic potential Vw is given by a weighted sum of presynaptic inputs, i.e.

Vw(t) = Σ_i wi PSPi(t),  with  PSPi(t) = Σ_{s∈𝒯i} κ(t − s),
with plastic synaptic weights wi, postsynaptic potentials PSPi that model the depolarization of the postsynaptic membrane due to the arrival of presynaptic spikes at synapse i, the set 𝒯i of spike arrival times at synapse i, and the spike response kernel κ. Spiking of the postsynaptic neuron is modeled as an inhomogeneous Poisson process with rate φ(U).
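These dynamics can be sketched in a few lines of code. The following is a minimal illustration, not the paper's simulation: the parameter values, the rectified-linear transfer function φ, and the forward-Euler step are all our own assumptions.

```python
import numpy as np

def phi(u):
    """Transfer function: a rectified-linear stand-in for the rate phi(U)
    (the paper's exact phi is not specified in this excerpt)."""
    return np.maximum(u, 0.0)

def simulate(psp, w, gE, gI, dt=0.1, gL=0.1, gD=2.0, C=1.0,
             EE=5.0, EI=-1.0, seed=0):
    """Euler integration of C dU/dt = -gL*U + gD*(Vw - U) + I(t), with
    dendritic potential Vw = sum_i w_i PSP_i and somatic input
    I = gE*(EE - U) + gI*(EI - U).  psp: (T, n) array of PSPs;
    gE, gI: (T,) conductance traces.  Output spikes are drawn from an
    inhomogeneous Poisson process with rate phi(U)."""
    rng = np.random.default_rng(seed)
    T = psp.shape[0]
    U = np.zeros(T)
    spikes = np.zeros(T, dtype=bool)
    for t in range(1, T):
        Vw = psp[t] @ w                                    # dendritic potential
        I = gE[t] * (EE - U[t-1]) + gI[t] * (EI - U[t-1])  # somatic input current
        U[t] = U[t-1] + dt * (-gL * U[t-1] + gD * (Vw - U[t-1]) + I) / C
        spikes[t] = rng.random() < phi(U[t]) * dt          # Poisson spiking
    return U, spikes
```

With constant excitatory conductance and no dendritic input, U relaxes to the steady state gE·EE/(gL + gD + gE), which is a convenient sanity check of the integration.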

We model the input with time varying excitatory and inhibitory conductances gE and gI proximal to the soma such that

I(t) = gE(t)(EE − U) + gI(t)(EI − U)

as proposed by Urbanczik & Senn [18].

For large total conductance and slowly varying input, the somatic membrane potential U(t) is well approximated (see Methods) by its steady state solution

U(t) ≈ λ(t) Vw*(t) + U*(t),
where we introduced the attenuated dendritic potential

Vw*(t) = gD/(gL + gD) Vw(t),
the attenuated somatic input

U*(t) = (gE(t) EE + gI(t) EI) / gtot(t),
and the ‘nudging’ factor

λ(t) = (gL + gD) / gtot(t),
with gtot(t) = gL + gD + gE(t) + gI(t), to be in accordance with Urbanczik & Senn [18]. The nudging factor λ(t) ∈ (0, 1] is close to 1 for small somatic input and equal to 1 if gE(t) + gI(t) = 0.
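The decomposition can be checked numerically. The snippet below assumes the standard Urbanczik–Senn steady state (gD·Vw + gE·EE + gI·EI)/gtot and illustrative conductance values, and verifies that it equals λ·Vw* + U*:

```python
# Sanity check of the steady-state decomposition U_ss = lam * Vstar + Ustar
# (all conductance and potential values are illustrative).
gL, gD, gE, gI = 0.1, 2.0, 0.3, 0.2
EE, EI, Vw = 5.0, -1.0, 0.8

gtot = gL + gD + gE + gI
lam = (gL + gD) / gtot                  # nudging factor, in (0, 1]
Vstar = gD / (gL + gD) * Vw             # attenuated dendritic potential
Ustar = (gE * EE + gI * EI) / gtot      # attenuated somatic input

U_ss = (gD * Vw + gE * EE + gI * EI) / gtot   # assumed steady state
assert abs(U_ss - (lam * Vstar + Ustar)) < 1e-12
```

The identity holds exactly because λ·Vw* = gD·Vw/gtot, i.e. the attenuation factors were chosen to absorb the conductance ratios.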

Learning as dendritic prediction of the neuron’s future discounted firing rate

The plasticity rule we consider for the dendritic synapses can be seen as differential Hebbian in the sense that both the potentiation and the depression term are products of a postsynaptic and a presynaptic factor. The strength of synapse i is assumed to change continuously according to the dynamics

dwi/dt = η ( α φ(U(t)) PSP˜i(t) − φ(Vw*(t)) PSPi(t) ),

where

PSP˜i(t) = (1/τ) ∫_0^∞ e^(−s/τ) PSPi(t − s) ds
is the low-pass filtered postsynaptic potential at synapse i, φ(U) and φ(Vw*) are the instantaneous firing rates based on the somatic potential and the attenuated dendritic potential, respectively, and η is the learning rate. The factor of potentiation α that scales the potentiation term is positive but smaller than the inverse of the largest nudging factor, 1/maxt λ(t), to prevent unbounded growth of the synaptic strengths.

Under the assumption of a periodic environment, rich dendritic input dynamics, a constant nudging factor λ and linear φ (Methods), the weight dynamics in Eq 8 leads to prospective coding by making the dendritic rate φ(Vw*(t)) approach the expected future discounted somatic input rate, i.e.

φ(Vw*(t)) = (α/τ) ∫_0^∞ e^(−s/τeff) φ(U*(t + s)) ds,
where the effective discount time constant τeff is given by

τeff = τ / (1 − α λ).
Depending on the factor of potentiation α and the nudging factor λ, the effective time constant τeff can be much larger than the biophysical time constant τ of low-pass filtering and match behavioral timescales of seconds. In particular, if the somatic input is strong and hence λ is close to 0 (close to ‘clamping’), the effective discount time constant is short, τeff ≈ τ. But when nudging is weak (λ close to 1), the synapses on the dendrite learn to predict their self-generated somatic firing rate and the effective discount time constant is extended up to τeff ≈ τ/(1 − α). The case of weak nudging is also the case in which the neuron’s somatic firing rate is roughly determined by the dendritic input, φ(U(t)) ≈ φ(Vw*(t)), see Eq 4. In particular, if after learning the somatic input is transiently silenced, the neuron’s firing rate φ(U(t)), according to Eq 10, represents the discounted future rate of the somatic input U*(t) applied during the previous learning period, even if this input only slightly nudged the somatic potential U(t) itself.
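Plugging in numbers shows how strongly the bootstrapping stretches the timescale. The values below are illustrative, with τ chosen on the order of a plasticity window:

```python
def tau_eff(tau, alpha, lam):
    """Effective discount time constant tau / (1 - alpha*lam), alpha*lam < 1."""
    assert alpha * lam < 1.0
    return tau / (1.0 - alpha * lam)

print(tau_eff(20.0, 0.99, 1.0))   # weak nudging: ~2000 ms, a behavioral timescale
print(tau_eff(20.0, 0.99, 0.1))   # strong nudging: ~22 ms, close to tau
```

A 20 ms biophysical filter is thus stretched by two orders of magnitude when α·λ approaches 1.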

Periodic inputs are unrealistic in a natural setting. But a similar result also holds in more general settings, where a neuron is occasionally exposed to correlated dendritic and somatic inputs. In this more general stochastic setting we derive the main result under the assumption that dendritic and somatic inputs depend on the state of a stationary latent Markov chain X0, X1, …, which ensures that the neuron is occasionally exposed to correlated dendritic and somatic inputs. The main result in this setting is (cf. Eq 48)

φ(Vw*(Xt)) = 𝔼[ Σ_{s=0}^∞ γeff^s φ(U*(Xt+s)) | Xt ],
where γeff = e^(−δ/τeff) is a large discount factor that leads to a similar discount behavior as in the time-continuous case, with δ the duration of a discrete time step.
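A quick numerical check (with an assumed time step δ = 1 ms, an illustrative value) confirms that this discretization reproduces the continuous discounting:

```python
import math

tau_eff, delta = 2000.0, 1.0               # illustrative values, in ms
gamma_eff = math.exp(-delta / tau_eff)     # discrete discount factor

# delta * sum_s gamma_eff**s is the discrete analog of the integral of
# exp(-s/tau_eff) over s >= 0, whose value is tau_eff.
discrete_sum = delta / (1.0 - gamma_eff)
assert abs(discrete_sum - tau_eff) < 1.0   # matches to within half a time step
```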

It is important to note that in the stochastic case the dendritic rate is only informative about expected future somatic inputs. Metaphorically speaking, a neuron can learn to predict the expected win in a lottery, but obviously it cannot learn to predict single lottery draws.

The bootstrapping effect of predicting the own predictions

In the limit τ → 0 we find that PSP˜ = PSP, and with α = 1 we recover the learning rule of Urbanczik & Senn [18]. This rule adapts the dendritic synapses such that the dendritic input matches the somatic input (Fig 2B). The learning rule with a slightly larger potentiation window, in contrast, leads to dendritic input that ramps up long before the onset of the somatic input (Fig 2C).

Fig 2
Learning activity ramps on a behavioral timescale with a biologically plausible plasticity window.

By looking at Eqs 4 and 8 we can now obtain a better intuition for the bootstrapping effect of predicting one's own predictions. If at the beginning of learning all synaptic weights wi are zero, the dendritic potential Vw remains at rest (= 0) and the somatic membrane potential U(t) follows the somatic input U*(t) (see Eq 4). In this case, the learning rule in Eq 8 contains only the potentiation term

dwi/dt = η α φ(U*(t)) PSP˜i(t).
In the example in Fig 2C, the somatic input U* and consequently φ(U*) is non-zero only after 1800 ms. Therefore, synapse i is potentiated only if a presynaptic spike arrives shortly before the onset of the somatic input. The next time a presynaptic spike arrives at synapse i, the somatic membrane potential is depolarized by the dendritic input already before the onset of the somatic input, and the learning rule contains at this moment (e.g. at 1780 ms in Fig 2C) the terms

η α φ(λ Vw*(t)) PSP˜i(t)  −  η φ(Vw*(t)) PSPi(t).
These terms would cancel each other in the case of Urbanczik & Senn [18], where α = λ = 1 and PSP˜i = PSPi. But if PSP˜i is the low-pass filtered version of the postsynaptic potential (as in Fig 2C), they do not cancel. Instead, a synapse is potentiated if its presynaptic spike arrives shortly before the somatic potential is depolarized by dendritic input through already potentiated synapses. The consequence of this bootstrapping effect appears in the gray curves of Fig 2C. After 100 training sessions, the dendritic input starts to rise around 1200 ms, but synapses with earlier presynaptic spikes are not yet strengthened. With each further training session the dendritic input rises earlier.
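This bootstrapping argument can be reproduced in a toy simulation. The sketch below is a heavily simplified, discrete-time rate version of the rule as we read it (potentiation α·φ(U) times a low-pass-filtered presynaptic trace, balanced by depression φ(Vw*) times the instantaneous trace), with linear φ(x) = x, a delay-line input (one presynaptic neuron per time bin) instead of the paper's rich spiking input, and arbitrary parameter values:

```python
import numpy as np

def train_ramp(n_bins=50, onset=45, trials=500, eta=0.05,
               alpha=0.9, lam=0.9, gamma=0.8):
    """Toy discrete-time plasticity with linear phi.  The somatic target
    input ustar is on only during the last bins of each trial.  Returns
    the learned dendritic response v(t) = w_t (one-hot inputs)."""
    w = np.zeros(n_bins)
    for _ in range(trials):
        ptilde = np.zeros(n_bins)                     # low-pass filtered PSPs
        for t in range(n_bins):
            p = np.zeros(n_bins)
            p[t] = 1.0                                # PSP_i(t): one-hot delay line
            ptilde = gamma * ptilde + (1 - gamma) * p # wider potentiation window
            ustar = 1.0 if t >= onset else 0.0        # somatic target input
            v = w @ p                                 # dendritic prediction
            U = lam * v + ustar                       # somatic potential (cf. Eq 4)
            w += eta * (alpha * U * ptilde - v * p)   # plasticity (cf. Eq 8)
    return w

v = train_ramp()   # v ramps up toward the onset bin of the target input
```

With the potentiation trace `ptilde` wider than the instantaneous input `p`, progressively earlier inputs get strengthened over trials; setting `gamma = 0` (potentiation window equal to the PSP) removes the ramp entirely.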

In the example in Fig 2C, the dendritic and the somatic inputs are deterministic periodic functions. Therefore we can directly compare the simulation to the theoretical results of the previous section. For the interval without somatic input (0–1800 ms), where φ(U) = φ(Vw*), we find good agreement (dashed red and thick black line in Fig 2C). Small differences are to be expected, because the theoretical derivations assume a constant nudging factor λ and use the steady-state solution of the somatic membrane potential dynamics (see Eq 4). In the interval with somatic input (1800–2000 ms), the dendritic rate φ(Vw*) is only slightly below the somatic rate φ(U), because the somatic input is small.

Dependence on the dendritic input structure

The input pattern in Fig 2A is a particularly simple example of a deterministic, periodic pattern with rich enough structure. Enough structure to learn a prospective code also exists in sufficiently many randomly generated (frozen) spike trains that are deterministically repeated, if there is always at least one presynaptic spike within the duration of a PSP and the probability of repeating a nearly identical presynaptic spike pattern is low (see Fig 3A). We did not systematically search for the minimal number of required dendritic synapses, but for the example in Fig 3A we found empirically that a few hundred synapses are necessary. If the presynaptic firing frequency is only 2 Hz, we found that 1000 presynaptic neurons are enough to learn the ramp in 100 trials, provided the learning rate is larger than in the 20 Hz case. At the end of learning, the time course of the somatic potential matches that of the previous example (black lines in Figs 2C and 3A). During learning, however, the time course of the somatic potential differs between the two examples (gray lines in Figs 2C and 3A). This is a consequence of correlations in the dendritic input: for the frozen spike trains, the presynaptic auto-correlation 𝔼[PSPi(t)PSPi(t − s)] is non-vanishing for all s and i. This causes the average firing rate to increase early during learning (Fig 3A; gray lines in the interval 0–1500 ms, in contrast to the gray lines in the same interval in Fig 2C).

Fig 3
Dependence of learning on the type of input.

In the examples given so far, the dendritic and the somatic inputs are deterministic, but deterministic repetitions of the exact same spike trains are unrealistic. In Fig 3B we consider the more realistic case of random spiking. In each trial, the spikes are sampled from an inhomogeneous Poisson process, with periodically repeating rates. The resulting activity ramp is noisier but in good agreement with the theoretical result. It is important that the rates of the Poisson process have sufficiently rich structure. In Fig 3D the firing rate of the Poisson process is kept constant for one third of the trial. In this case, the temporal structure is not sufficiently rich to learn a smooth ramp and a stepwise activity ramp is learned instead.

In Fig 3C, the target event occurs only with a 50% chance, i.e. the somatic input is given only in half the trials. This results in an activity ramp with smaller amplitude, which is consistent with the theoretical finding that the dendritic rate depends linearly on the average somatic input rate (see Eq 12).

Delayed paired-associate task

Prospective coding in neurons of the prefrontal cortex was observed in an experiment with monkeys performing a delayed paired-associate task [1]. In this experiment, monkeys learned to associate a visual sample to a visual target presented one second later. Our learning rule allows for learning a prospective code in such a task.

During training, sample A1 is always followed by target B1 after a delay of 1 s, and sample A2 is followed by target B2 (Fig 4A). In the simulation we assume that the sample (first stimulus) leaves a memory trace in the form of a spatio-temporal activity pattern that projects through dendritic synapses, while the target (second stimulus) drives somatic synapses (Fig 4B). In order to have sufficiently rich presynaptic activity (cf. Fig 3B), the memory trace of the sample is modeled by an inhomogeneous Poisson process with sample-dependent rate trajectories (Fig 4C), i.e. during the presentation of the first stimulus the rate trajectory of each neuron approaches a previously chosen template trajectory that depends on the sample (see Methods). These memory traces are inspired by liquid state machines (see Discussion). If a neuron receives strong somatic input only in the presence of a specific target (neurons 1 and 2 in Fig 4B), its firing rate ramps up exclusively in anticipation of this target (neurons 1 and 2 in Fig 4D). In contrast to such a ‘grandmother-cell coding’ (one neuron for one target), a set of neurons could encode the target in a distributed manner, where the target is identified by the overall activity pattern and single neurons respond differently to different target stimuli. Such a distributed code can be learned with neurons that receive somatic input of target-specific strengths (neuron 3 in Fig 4B; B1 stronger than B2). After learning, the amplitude of the activity ramp reflects this target specificity (neuron 3 in Fig 4D).
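Memory traces of this kind can be mimicked with a simple stochastic process. The sketch below is our own illustration, not the paper's Methods: each neuron's rate performs a noisy walk that is pulled toward a sample-specific template (held constant per neuron, for simplicity), which also mimics the quenching of trial-to-trial variability at stimulus onset.

```python
import numpy as np

def memory_trace_rates(template, n_steps=200, pull=0.05, noise=0.3, seed=0):
    """Sample-dependent memory trace: rates drift from arbitrary
    pre-stimulus values toward a sample-specific template, plus noise.
    template: (n,) target rates; returns (n_steps, n) rate trajectories."""
    rng = np.random.default_rng(seed)
    n = len(template)
    r = np.zeros((n_steps, n))
    r[0] = rng.normal(5.0, 1.0, n)         # arbitrary pre-stimulus rates
    for t in range(1, n_steps):
        drift = pull * (template - r[t-1])  # pull toward the template
        r[t] = r[t-1] + drift + noise * rng.normal(size=n)
        r[t] = np.maximum(r[t], 0.0)        # rates stay non-negative
    return r

rates = memory_trace_rates(template=np.linspace(2.0, 20.0, 10))
```

Sampling spikes from an inhomogeneous Poisson process with these rates then yields sample-specific, temporally rich dendritic input of the kind the task requires.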

Fig 4
Prospective coding in a delayed paired-associate task.

Prospective coding of times series

In Figs 2 to 4 the somatic target input was silent most of the time and active only during a short interval. This simple time course of the somatic input is, however, not a requirement, and learning also converges for more complex trajectories of somatic input. In general, a time-varying input through (static) somatic synapses induces plasticity that advances the postsynaptic firing rate φ(U(t)) relative to the firing rate φ(U*(t)) determined by the somatic input alone. Fig 5A shows an example with an advancement of roughly 50 ms, achieved with a shorter time window (~20 ms) for synaptic potentiation. As in Fig 3A, the dendritic input was a periodically repeated random spike train, which could also be replaced by stochastic spiking with time-dependent firing rates as in Fig 3B.

Fig 5
Prospective coding of time-varying input.

Since the learning rule converges to a point where the dendritic input is proportional to the future discounted somatic input (Eq 10), the advanced sequence (black in Fig 5A) is not simply a forward-shifted version of the somatic input (green in Fig 5A). This becomes clearly apparent at the center of the figure: the somatic input is symmetric around 1000 ms, but the advanced sequence is decaying there, because the somatic input has a strong dip around 1100 ms. The advancement can nevertheless be characterized by the peak time of the correlation function between φ(U(t)) and φ(U*(t)), which, like the effective discount time constant τeff, diverges with increasing potentiation factor α (Fig 5B–5D).
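The distinction between a discounted look-ahead and a time shift is easy to verify numerically. The sketch below applies a discrete version of the discounted-future filter of Eq 10 to a toy input (all values illustrative):

```python
import numpy as np

def discounted_future(u, tau_eff, alpha=1.0, tau=1.0, dt=1.0):
    """Discrete version of v(t) = (alpha/tau) * integral_0^inf
    exp(-s/tau_eff) u(t+s) ds: a discounted look-ahead over the
    future of u, not a pure time shift of u."""
    T = len(u)
    kernel = np.exp(-np.arange(T) * dt / tau_eff)   # exponential discount
    v = np.array([np.sum(kernel[: T - t] * u[t:]) for t in range(T)])
    return (alpha / tau) * dt * v

t = np.arange(200.0)
u = np.exp(-((t - 150.0) ** 2) / 200.0)   # toy somatic input peaking at t = 150
v = discounted_future(u, tau_eff=40.0)    # prediction peaks before t = 150
```

The prediction peaks before the input does, but its shape is an asymmetrically smoothed, not merely shifted, version of the input.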

Time series prediction is a fundamental operation of the brain that is, for instance, involved in motor planning. In our context, the activity time course that has to be reproduced may be provided by proprioceptive feedback from muscles as somatic input U* to neurons in the primary motor cortex [19]. This feedback can be weak, delayed and sparse. The dendritic input V*, in turn, may be conveyed by a higher visual area or a premotor planning area. This dendritic input learns to predict the discounted future firing rate caused by the somatic input, and hence learns to produce the muscle activity that feeds back again as a delayed proprioceptive signal.

Prospective coding in a recurrent neural network

Lastly, we consider a recurrently connected network of 200 neurons that receive external input only at the soma and no external input at the dendrites. The input at the dendrites is given by the output spikes of the network neurons, where we consider all-to-all connectivity (Fig 6A). In contrast to the examples in Figs 2 to 5, there is no external control to assure the richness of the dendritic input and there is no guarantee that learning converges in the sense of Eq 10. Still, we observe the interesting result that learning changes synaptic strengths to allow fast replay of slow experienced sequences.

Fig 6
Prospective coding in a recurrent neural network.

For sequentially and periodically repeated stimulations on a slow timescale (green shading in Fig 6), the recurrent dendritic connections between subsequently stimulated groups of neurons are strengthened. After 300 repetitions of the same sequence, a brief initial stimulation is sufficient to evoke an activity sequence that has the same ordering as the original sequence (Fig 6B after 2400 ms). However, the replay dynamics can be much faster than the dynamics of the stimulation. Replay depends on the internal dynamics of the network, notably the time constants of the PSP and the membrane time constant. Due to prospective coding, the sequence becomes advanced in time while repeatedly presenting the stimuli, and due to the recurrent connectivity the advanced sequence can be recalled with a brief stimulation of the first group of neurons (Fig 6B). Note that there is no need to explicitly distinguish between a training and recall session. Recall differs from training only in the somatic input, which consists of a brief activation of the first group of neurons during recall and slow, sequential activation during training. The learning rule is active all the time.

Relation to TD(λ)

The proposed learning mechanism of prospective coding is related to a well studied version of temporal difference (TD) learning. Using our notation for a stochastic and time discrete setting, the goal in TD learning is to estimate a value function

𝒱(x) = 𝔼[ Σ_{t=0}^∞ γTD^t φ(U*(Xt)) | X0 = x ],
where 0 < γTD < 1 is a discount factor and the expectation is taken over the Markov chain X0,X1, …. We assume that this value function can be approximated by a linear function of the form

𝒱(x) ≈ φ(Vw*(x)),  with  Vw*(x) = gD/(gL + gD) Σ_i wi PSPi(x),
where φ is linear. In TD(λ) with linear function approximation, the weights w evolve according to the learning rule [20–22]

Δwt,i = η δt PSP^t,i,
with learning rate η, eligibility trace PSP^t,i = Σ_{s=0}^∞ (λTD γTD)^s PSPi(Xt−s), 0 ≤ λTD ≤ 1, and delta error

δt = φ(U*(Xt)) + γTD φ(Vwt*(Xt+1)) − φ(Vwt*(Xt)).
This delta error is zero on average if the approximation φ(Vwt*(x)) is equal to the value function 𝒱(x) in Eq 15. Furthermore, φ(Vwt*(x)) converges to 𝒱(x) under the learning rule of TD(λ) in Eq 17 [20]. The discrete-time version of our learning rule (Eq 42), implemented in the 2-compartment model, converges to Eq 12, which is identical to the value function in Eq 15 if γTD = γeff. Therefore, this form of TD(λ) and our learning mechanism converge to the same value. It is also interesting to see that both methods use eligibility traces, PSP^ and PSP˜, which are the same if λTD γTD = γ, i.e. λTD = γ/γeff. But despite the convergence to the same point and the use of the same eligibility trace, learning generally moves along different trajectories under this form of TD(λ) and under the learning mechanism we propose.
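For readers who want to check the correspondence concretely, here is a minimal TD(λ) with linear function approximation on a toy Markov chain. One-hot features stand in for PSPi(x) and the per-state reward stands in for φ(U*(x)); the chain and all parameter values are our own illustrative choices:

```python
import numpy as np

def td_lambda(transitions, rewards, gamma, lam, eta=0.1, steps=20000):
    """TD(lambda) with linear function approximation on a Markov chain.
    transitions[x] is the successor of state x (deterministic here),
    rewards[x] plays the role of phi(U*(x)), one-hot features that of
    PSP_i(x).  Returns weights w with V(x) ~ w[x]."""
    n = len(rewards)
    w = np.zeros(n)                       # linear weights
    z = np.zeros(n)                       # eligibility trace (PSP-hat analog)
    x = 0
    for _ in range(steps):
        x_next = transitions[x]
        z = lam * gamma * z               # decay the trace ...
        z[x] += 1.0                       # ... and accumulate the feature of x
        delta = rewards[x] + gamma * w[x_next] - w[x]   # TD error
        w += eta * delta * z
        x = x_next
    return w

# 3-state cycle 0 -> 1 -> 2 -> 0, "somatic input" only in state 2
gamma = 0.9
w = td_lambda(transitions=[1, 2, 0], rewards=[0.0, 0.0, 1.0],
              gamma=gamma, lam=0.6)
V = np.array([gamma**2, gamma, 1.0]) / (1 - gamma**3)   # exact values
```

On this deterministic cycle the learned weights converge to the exact discounted value function, the quantity that Eq 12 states the dendritic rate itself approaches.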

So far we compared the learning mechanism of prospective coding to plain TD(λ), which has access to PSP and U*. If only U = λV* + U* is accessible, it is also possible to combine TD(λ) with the bootstrapping effect of predicting one's own predictions by implementing a variant of TD(λ) in the dendritic compartment of the 2-compartment model. If the delta error is defined as

δt = α φ(U(Xt)) − φ(Vwt*(Xt)),
one can show that the learning rule in Eq 8 is almost identical to the TD learning rule in Eq 17 with λTD = 1 (Methods). In this case, the weights move along similar trajectories during learning, irrespective of whether this form of TD(1) or our learning rule is used. If this form of TD(1) were not implemented in the 2-compartment model, i.e. if the first term in the delta error in Eq 19 were replaced by φ(U*(Xt)), the discount factor would be γ instead of γeff. But since the first term in the delta error in Eq 19 depends on the full somatic potential U = λV* + U*, the bootstrapping effect of predicting one's own predictions applies and the large discount factor γeff arises.

Discussion

As a simple and biologically plausible explanation for how animals can learn to predict future events, we have proposed a local plasticity mechanism that leads to prospective coding in spiking neurons, i.e. the plastic synapses change such that the neuron’s current firing rate reflects its expected future discounted firing rate.

Our model proposes a partial solution to the problem of learning associations on a behavioral timescale without using slow intrinsic processes. Even with a plasticity window that is only slightly larger than the duration of a postsynaptic potential, the effective time constant of discounting the expected future firing rate can be on the order of seconds, thanks to the bootstrapping effect of predicting one's own predictions. This effect arises because inputs that are already predictive influence the activity of the neuron. This is captured by the 2-compartment model of Urbanczik & Senn [18], where the output depends on both the dendritic (student) and the somatic (target) input.

For clarity, we presented the model with target input through static (i.e. non-plastic) somatic synapses, and in the examples of ramping activity in Figs 2 and 3 the somatic input was non-zero only during a short period. This simple form of the target input is not a requirement. First, the learning mechanism also applies to arbitrary time courses of the somatic input, as we show in the example of time series prediction in Fig 5, where an advanced and smoothed version of a complex somatic input is learned. Second, the somatic synapses do not need to be static; they should, however, change more slowly than the dendritic synapses in order to obtain a separation of plasticity timescales. And third, the target input could also arrive at another dendritic branch instead of the soma (see generality of the results in Methods).

We focused solely on learning temporal associations and neglected important aspects of learning in animals. However, the proposed learning mechanism can easily be extended to include, for example, a weighting based on behavioral relevance. In the delayed paired-associate task, our model learns the associations between sample and target irrespective of the behavioral relevance of this association. In animal training, however, reward or punishment is crucial; for example, the monkeys in the study of Rainer et al. [1] received juice rewards. The learning rate in our learning mechanism is a free parameter that could incorporate a weighting by behavioral relevance. Biophysically, a neuromodulator like dopamine could implement this modulation of the learning rate. It is also possible to postpone the weight update in Eq 1 and use reward-modulated eligibility traces instead (see e.g. [23–25] for theory and [15, 26] for experiments).

The proposed learning mechanism could also be involved in classical trace conditioning, where the first stimulus (CS) is separated from the second stimulus (US) and the response (R) by a delay period, similar to the situation in the delayed paired-associate task. Let us assume that neuron 1 in Fig 4 is involved in initiating response R (e.g. salivation). If the unconditioned stimulus causes somatic input to this neuron and a memory trace of the conditioned stimulus arrives at the dendritic synapses, our learning mechanism would lead to ramping activity and salivation prior to the onset of the unconditioned stimulus that originally triggered the salivation. To our knowledge, there is no conclusive experimental data to support or discard the hypothesis that prospective coding is involved in classical trace conditioning. In the cited studies on ramping activity [1, 2, 6–11], the animals were actively engaged in a task (operant conditioning). It is unlikely, however, that the ramping activity is merely a side-effect of movement preparation, since Rainer et al. [1] found it to be stimulus-specific but not action-specific.

In our model of delayed paired-associate tasks, activity ramps rely on temporally structured input from short-term memory neurons. The use of these short-term memory neurons is motivated by the observation that hippocampal activity is needed to overcome the temporal discontiguity in trace conditioning [4, 27]. We modeled the dynamics of the recurrent short-term memory network with a stochastic process. The parameter choice of this stochastic process is inspired by the widespread experimental observation that stimulus onset quenches neural variability [28, 29]. It should also be possible to model the memory traces with “dynamical attractors” in recurrent networks of rate neurons [30] or with long and stable transient dynamics in balanced networks [31]. Since these memory traces are not the main focus of this study, we generated them in a simpler way with the stochastic process, which we consider more natural than the delay-line-like traces used in a study on trace conditioning [32].

In recurrent neural networks the learning rule of prospective coding allows fast replay of slow input sequences (Fig 6). Fast replay could be valuable for planning, where it is important to quickly assess the likely successors of a given state. A similar fast replay of a previously induced slower activity sequence was observed in rat primary visual cortex [33], and it is reminiscent of the well-studied compressed hippocampal replay of spatial sequences [34]. In rats these replay events can be observed minutes or hours after the spatial experience. In contrast, the simple form of the plasticity rule in Eq 8 does not have any consolidation properties, and ongoing pre- and postsynaptic activity would quickly change the learned weight patterns and thus overwrite the memories. It is, however, straightforward to extend the plasticity model by a consolidation mechanism. In the three-state consolidation model of Ziegler et al. [35, 36], early long-term potentiation (LTP) is induced by a triplet rule [37]. Replacing the triplet rule by the plasticity rule in Eq 8 would endow the learning rule of prospective coding with a consolidation mechanism and would allow sequences to be replayed long after the training session.

Aiming at a better understanding of biological implementations of prediction learning, our model allows us to speculate about physiological realizations of the model variables. Similar to previously proposed plasticity rules [16, 18], our learning mechanism depends on the postsynaptic firing rate φ(U), a function φ(Vw*) of the dendritic potential, the postsynaptic potential PSP and, as a new ingredient compared to previous propositions [16, 18], a low-pass filtered version PSP˜ of the postsynaptic potential. A plasticity window that is slightly larger than the duration of a postsynaptic potential is in agreement with experimentally measured plasticity window sizes [13, 38]. In particular, an increased level of dopamine was observed to expand the effective time window of potentiation to at least ~45 ms [38]. Importantly, even with a plasticity window on this timescale, predictions can be learned on a timescale of seconds due to the bootstrapping effect of predicting one's own predictions.

We have shown that the proposed learning mechanism is closely related to temporal difference learning with eligibility traces, TD(λ). As discussed in the previous paragraph, a local biological implementation of our learning rule seems straightforward. In contrast, it seems more challenging to locally implement the delta error of TD learning. Kolodziejski et al. [39] and Potjans et al. [40] propose local implementations that depend either on differential Hebbian plasticity [39] or on two postsynaptic activity traces with different time constants that approximate the difference in the delta error [40]. Both methods require a gating mechanism that allows plasticity only shortly after the onset of a new state, and they require transition intervals between states of fixed duration. Furthermore, “state neurons” are only highly active when the agent is in a certain state, which requires the segmentation of the sensory input stream into discrete states. The learning rule we propose does not require these strong assumptions.

Frémaux et al. [41] speculate about a non-local implementation of TD learning with spiking neurons, where the TD error is represented by the firing rate of dopaminergic neurons that receive input from three groups of neurons that encode reward, value function and derivative of the value function. In the simulations, however, Frémaux et al. did not use the proposed network implementation of the TD error, and they mention that it remains to be seen whether such a circuit can effectively be used to compute a useful TD error. A non-local implementation of the TD error appears compelling in an actor-critic setting, since the actor and the critic can be learned with the same TD signal. However, if the task is to predict more than a scalar quantity like reward, it seems inefficient to use a non-local implementation of the TD error for each quantity to be predicted. Already in our simple example of prospective coding in a recurrent neural network, four TD error networks would be needed in such a non-local implementation.

Generally, associating temporally separated events requires some memory of the first event until the second event is present. Possible neural implementations of this memory rely on long spiking activity traces or on long synaptic eligibility traces. Our model of the delayed paired-associate task relies on long spiking activity traces. The short-term memory network can be seen as a liquid state machine [42] or echo state network [43], and the ramping activity is learned as a readout from these activity traces. Alternatively, the activity trace could be represented by slowly, exponentially decaying spiking activity after strong stimulation of a cell [44]. This proposition, however, fails to explain the experimentally observed activity ramps prior to predictable events [1, 2, 6–11].

The origin of the ramping activity observed in experiments is not yet fully understood. An alternative to our proposition can be found in recurrent neural network dynamics, where slowly ramping or decaying activity arises with appropriately tuned synaptic weights [2, 25]. In a reinforcement learning setting the time constant of the ramp can be learned by adjusting the recurrent weights with reward modulated Hebbian plasticity [45]. Data analysis of recordings in the macaque lateral intraparietal area revealed yet another candidate explanation: single neuron activity profiles could follow a step-like time course, while the averaged activity is a ramp, if the steps occur at different points in time [46].

Despite the formal link of our prospective coding algorithm to TD learning, the learning we consider is purely supervised on the level of the neuron. Yet, the same learning rule can also be used to explain conditioning experiments. Instead of the multiplicative modulation by a global reward signal, the reward signal could directly nudge the somatic compartment of the neurons and act as a teaching signal. But the learning rule would also allow for combining the somatic nudging signal with an additional modulatory factor, and nudging and modulatory signals could even be sparse and interleaved. For instance, the rule may explain the simultaneous shaping of predictive motor circuitries by sensory feedback and reward [5]. Fluctuating somatic inputs may cause behavioral variations and feedback signals may gate dendritic plasticity such that only rewarded fluctuations act as a target signal for prospective coding. It is also possible to adapt the somatic input connections directly with reinforcement learning, and a ramping activity could arise from learning a prospective code with stimulus-dependent dendritic input.

Since reward is an intrinsic component of animal training, we have acquired considerable knowledge about the neuronal basis of reward prediction. But predictions are not restricted to reward, and predicting the identity of stimuli yields more versatile information. We speculate that prospective coding is more abundant than previously thought and, as we showed, it could easily be implemented on the level of an individual neuron. This view is also consistent with the recently observed future-predicting encoding in the retina [47]. A potentiation window slightly larger than a PSP, together with the bootstrapping effect of predicting one's own predictions, is a parsimonious mechanism for learning prospective codes with neurons. A characteristic of these neurons is that their current firing rate matches their own expected future discounted rate.


Parameters of the neuron model

The spike response kernel κ in Eq 2 is given by

κ(t) = c H(t)(e^(−t/τm) − e^(−t/τs)),

with the Heaviside function H(t) = 0 if t < 0 and H(t) = 1 otherwise, τm = 10 ms, τs = 10/3 ms and c^(−1) = ∫ dt H(t)(e^(−t/τm) − e^(−t/τs)).
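As a numerical sanity check, the kernel and its normalization can be sketched in a few lines (a Python illustration, not the authors' C code; the closed form c = 1/(τm − τs) follows from integrating the two exponentials over [0, ∞)):

```python
import numpy as np

# Double-exponential spike response kernel kappa(t) = c * H(t) * (exp(-t/tau_m) - exp(-t/tau_s)),
# normalized so that it integrates to one, with tau_m = 10 ms and tau_s = 10/3 ms as in the text.
TAU_M = 10.0        # membrane time constant, ms
TAU_S = 10.0 / 3.0  # synaptic time constant, ms
C = 1.0 / (TAU_M - TAU_S)  # closed form of the normalization integral

def kappa(t):
    """Normalized PSP kernel; zero for t < 0 (Heaviside factor)."""
    t = np.asarray(t, dtype=float)
    return np.where(t >= 0, C * (np.exp(-t / TAU_M) - np.exp(-t / TAU_S)), 0.0)

# Riemann-sum check that the kernel integrates to ~1
dt = 0.01  # ms
ts = np.arange(0.0, 200.0, dt)
print(kappa(ts).sum() * dt)  # close to 1.0
```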

We set the somatic capacitance C = 1 nF, the leak conductance gL = 100 nS, the coupling conductance gD = 1.8 μS, and the excitatory and inhibitory reversal potentials EE = 14/3 and EI = −1/3, respectively. The description of the excitatory conductance gE(t) is given in the figure captions. The inhibitory nudging conductance gI(t) was equal to 0 except for simulations with PSP˜ = PSP in Fig 2, where gI(t) = 4gE(t). The resting potential is 0 for both the dendritic potential Vw and the somatic potential U. If one takes our unitless resting potential of 0 to correspond to -70 mV, and a potential of 1 to correspond to -55 mV, the above choices for EE and EI correspond to reversal potentials of 0 mV (excitation) and -75 mV (inhibition).

The instantaneous firing rate of the neuron is assumed to depend on the somatic membrane potential through the function φ(U), which in the simulations has the form


with φmax = 0.06 kHz. In simulations with spiking, the firing rate multiplied by the simulation time step serves as the instantaneous rate of an inhomogeneous Bernoulli process.
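The Bernoulli spiking scheme can be illustrated with a short sketch. The sigmoidal shape of φ below is an assumption for illustration (the text fixes only φmax = 0.06 kHz); only the sampling step reflects the described procedure:

```python
import numpy as np

# Spiking as an inhomogeneous Bernoulli process: in each time step dt the neuron
# fires with probability phi(U) * dt. The sigmoid below is a hypothetical choice
# of transfer function; phi_max = 0.06 kHz (spikes per ms) is taken from the text.
PHI_MAX = 0.06  # kHz

def phi(u):
    """Hypothetical sigmoidal transfer function from somatic potential to rate."""
    return PHI_MAX / (1.0 + np.exp(-(u - 0.5) / 0.1))

rng = np.random.default_rng(0)
dt = 0.1                      # ms, simulation time step
u = np.full(100_000, 1.0)     # constant, depolarized somatic potential
spikes = rng.random(u.size) < phi(u) * dt

print(spikes.mean() / dt)     # empirical rate, close to phi(1.0)
```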

Steady state solution of the somatic potential dynamics

For slowly enough changing Itot(t) and gtot(t), U(t) is well approximated by Itot(t)/gtot(t). To see this, we use the ansatz U(t) = Itot(t)/gtot(t) + ϵ(t) in Eq 1 and find


which leads to the conclusion that the error ϵ is small if |İtot/gtot − ġtotItot/gtot²| ≪ 1 during at least an interval of approximate duration C/gtot.

Under these assumptions we write

U(t) ≈ λ(t)Vw*(t) + U*(t),

where we introduce the ‘nudging’ factor λ(t) = (gL + gD)/gtot(t), the attenuated dendritic potential Vw*(t) = gD/(gL + gD) Vw(t), and the attenuated somatic input U*(t) = (gE(t)EE + gI(t)EI)/gtot(t).
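With the conductance values from this section, the decomposition can be verified numerically. The expression Itot = gD Vw + gE EE + gI EI (leak reversal at the resting potential 0) is our reading of the somatic input current and should be treated as an assumption of this sketch:

```python
# Steady-state somatic potential U = I_tot / g_tot and its decomposition into
# nudging factor, attenuated dendritic potential and attenuated somatic input.
# Assumes I_tot = g_D*V_w + g_E*E_E + g_I*E_I with the leak reversal at 0.
G_L = 100e-9   # leak conductance, S
G_D = 1.8e-6   # dendritic coupling conductance, S
E_E, E_I = 14.0 / 3.0, -1.0 / 3.0  # unitless reversal potentials

def steady_state(v_w, g_e, g_i):
    g_tot = G_L + G_D + g_e + g_i
    u = (G_D * v_w + g_e * E_E + g_i * E_I) / g_tot  # U = I_tot / g_tot
    lam = (G_L + G_D) / g_tot                        # nudging factor lambda(t)
    v_w_star = G_D / (G_L + G_D) * v_w               # attenuated dendritic potential
    u_star = (g_e * E_E + g_i * E_I) / g_tot         # attenuated somatic input
    return u, lam, v_w_star, u_star

u, lam, v_star, u_star = steady_state(v_w=0.8, g_e=50e-9, g_i=0.0)
print(abs(u - (lam * v_star + u_star)))  # ~0: U = lambda * V_w* + U*
```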

Generality of the results

Our main results are robust to variations of the model. For example, the target input I could be given by the input through static synapses on another dendritic branch instead of synapses at the soma, i.e. I(t) = gD(Vs(t) − U). In this case, the nudging factor becomes λ = (gL + gD)/(gL + 2gD) and is constant in time.

Modifying the depression term of the learning rule has an effect on the effective time scale τeff, but large effective time constants are achievable in any case. If the depression term in Eq 8 were replaced by −φ(λVw*)PSPi, the effective time constant would be τeff ≈ τ/(1 − α), i.e. τeff would be independent of λ but would still diverge as α → 1. Similarly, for a depression term given by −φ(Vw)PSPi, the effective time constant would be τeff ≈ τ/(1 − λ2α), with λ2 = gD/gtot.

As written in Eq 8, the postsynaptic term of the learning rule is the instantaneous firing rate φ(U). This rate could also be replaced by a postsynaptic sample spike train S(t) that averages out to the same rate, ⟨S(t)⟩ = φ(U(t)). Since learning becomes slower with this sampling, we ran our simulations in the form of Eq 8.

Dynamics of short-term memory neurons in the delayed paired associate task

For each STM neuron i we first choose template rate trajectories ri1(t) for stimulus A1 and ri2(t) for stimulus A2 by sampling from a mean-zero Ornstein-Uhlenbeck process

dris(t) = −θ1 ris(t)dt + σ1 dW(t),

where W is a Wiener process, 1/θ1 = 1000 ms, σ1 = 1 and s ∈ {1, 2}. Actual rate trajectories ri(t) were sampled from a process with trial-dependent mean and time-dependent variance, i.e.

dri(t) = −θ2(ri(t) − μs(t))dt + σ(t)dW(t),

where 1/θ2 = 100 ms,


and σ²(t) = 1 if t < 0 s or t > 3 s, and σ²(t) = 0.1 otherwise. This ensures that in each trial the rate trajectories approach the template trajectories during the presentation of the sample, while between trials the rate trajectories are independent of the template trajectories. Spike times are determined by sampling from an inhomogeneous Bernoulli process with rate φ(ri(t))Δt, where Δt is the simulation time step.
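The two nested stochastic processes can be sketched with an Euler-Maruyama integration. This minimal version simulates a single STM neuron and a single stimulus, so the trial-dependent mean μs(t) reduces to one template trajectory (an illustrative simplification, not the authors' simulation code):

```python
import numpy as np

# Euler-Maruyama simulation of the Ornstein-Uhlenbeck rate trajectories of the
# short-term memory neurons: a template trajectory with 1/theta_1 = 1000 ms and
# sigma_1 = 1, and a trial trajectory that relaxes toward the template with
# 1/theta_2 = 100 ms and reduced noise (sigma^2 = 0.1) during the stimulus
# window 0 s < t < 3 s.
rng = np.random.default_rng(1)
dt = 0.1                               # ms
t = np.arange(-1000.0, 4000.0, dt)     # one trial plus a pre-trial period
theta1, theta2 = 1.0 / 1000.0, 1.0 / 100.0

# Template trajectory: mean-zero OU process
template = np.zeros(t.size)
for k in range(1, t.size):
    template[k] = template[k-1] - theta1 * template[k-1] * dt \
        + np.sqrt(dt) * rng.standard_normal()

# Trial trajectory: mean follows the template only during stimulus presentation
r = np.zeros(t.size)
for k in range(1, t.size):
    in_trial = 0.0 < t[k] < 3000.0
    mu = template[k] if in_trial else 0.0
    sigma = np.sqrt(0.1) if in_trial else 1.0
    r[k] = r[k-1] - theta2 * (r[k-1] - mu) * dt \
        + sigma * np.sqrt(dt) * rng.standard_normal()

# During the stimulus the trial trajectory tracks the template closely
in_trial_mask = (t > 1000.0) & (t < 3000.0)
pre_trial_mask = t < -100.0
```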

Simulation details

The differential equations were integrated with the forward Euler method with step size 0.1 ms. We chose the learning rate η = 0.5 in all simulations except for the simulation in Fig 2B and 2C, where η = 50 since the presynaptic firing rate is low. All simulations are written in C; the plots are generated with Mathematica. The source code is publicly available at

Stationary point of learning for periodic environments

We assume a stationary environment and rich dendritic input dynamics, such that the dendritic inputs can potentially be predictive of the somatic input. There are different ways to model stationarity of the environment. One way is to restrict the inputs to depend on a stationary latent Markov chain. We consider this case in detail in the next section. Here, to present the main ideas in a mathematically simple form, we look at the artificial case, where stationarity enters through deterministic and periodic functions PSPi(t) and U*(t) with period T. Under this assumption, learning is at a stationary point when the changes of the weights in Eq 8 integrated over one period vanish, i.e.


Using the definition of PSP˜ in Eq 9 we find



where Eq 29 is obtained by changing the order of integration, changing the integration variable t to t + s and using ∫_{−s}^{T−s} dt f(t) = ∫_0^T dt f(t), which holds for any T-periodic function f(t). The puzzling transition from an integral that depends on the past values of PSPi in Eq 28 to an integral that depends on the future values of U in Eq 29 is a result of the assumed stationarity of the environment, which here is expressed in the periodicity of the functions PSPi(t) and U*(t). Eq 29 holds for all synapses i, if


Strictly, Eq 30 follows from Eq 29 only if the inputs PSPi(t) span the space of square integrable, T-periodic functions. In actual implementations the number of synapses is limited, but we find empirically that Eq 30 holds approximately at the stationary point if, loosely speaking, the inputs PSPi(t) at individual synapses are sufficiently rich and different from each other.

The right-hand side of Eq 30 also depends on the dendritic potential Vw*, since the membrane potential U depends both on the dendritic input Vw* and on the somatic input U* (see Eq 4). Assuming a linear transfer function φ, Eq 30 becomes


With a Fourier transform and assuming a constant nudging factor λ we can solve this equation for φ(Vw*(t)).

The Fourier coefficients ĥk, k ∈ ℤ, of the T-periodic function h(t) = ∫_0^∞ ds e^(−s/τ) f(t + s) are given by




where, in the first line, we changed the order of integration, changed the variable t to t − s and used the periodicity of the integrand to obtain ∫_{−s}^{T−s} dt e^(−i2πkt/T) f(t) = ∫_0^T dt e^(−i2πkt/T) f(t). In the second line we introduced the Fourier coefficients f̂k.

With f(t) = φ(Vw*(t)) and g(t) = φ(U*(t)) we rewrite Eq 31


and Fourier transform both sides to obtain


Solving for f^k leads to



This equation has the same structure as Eq 34. With the inverse Fourier transform and assuming αλ < 1 we find Eq 10, i.e.


where τeff = τ/(1 − αλ).

Convergence of learning in stationary stochastic environments

We formalize the notion of a stationary environment by introducing a stationary latent Markov chain and restricting the dendritic input PSPi(t) = PSPi(Xt) and the somatic input U*(t) = U*(Xt) to depend on the state Xt of the Markov chain. An alternative way to formalize the notion of stationarity would be to define stationary dynamics of the dendritic inputs and define the correlation between dendritic and somatic input. As it is always possible to reformulate the stationary dendritic input dynamics and the correlation between dendritic and somatic input in terms of a stationary latent Markov chain—with potentially large state space—we stick to the description with a latent Markov chain.

Formally, for time t ∈ ℤ, states Xt in a finite set 𝒳 = {s1, s2, …, sN} evolve according to a stationary, irreducible Markov chain with transition probabilities T(si, sj) = Pr(Xt+1 = sj | Xt = si) and stationary distribution π(si) = Pr(Xt = si).

Note that the case of deterministic periodic input is readily formulated in terms of a stationary latent Markov chain that cycles deterministically through the state space, e.g. T(si, sj) = 1 if j = i + 1 or j = 1 and i = N and T(si, sj) = 0 otherwise. Functions that depend only on the state of the Markov chain are thus cyclic with period N, e.g. PSPi(Xt) = PSPi(Xt+N).

In order to switch to matrix notation in the rest of this section, we introduce the following terms:

  • Discounting operator A = α Σ_{t=0}^∞ γ^t T^t, with transition matrix T and discount factor γ ∈ [0, 1).
  • Postsynaptic potentials bi = (PSPi(s1), …, PSPi(sN))′ for each synapse i.
  • Matrix of postsynaptic potentials B = [b1 b2 … bS], where S is the number of synapses.
  • Postsynaptic firing rates rU = (φ(U(s1)), …, φ(U(sN)))′.
  • Dendritic rates rV = (φ(Vw*(s1)), …, φ(Vw*(sN)))′.
  • Somatic input rates rI = (φ(U*(s1)), …, φ(U*(sN)))′.
  • Expected future discounted firing rate F(x) = (A rU)(x) = α Σ_{t=0}^∞ γ^t E[φ(U(Xt)) | X0 = x].
  • Expected low-pass filtered postsynaptic potential PSP˜i(x) = Σ_{t=0}^∞ γ^t E[PSPi(X−t) | X0 = x].
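The geometric series defining the discounting operator can be summed in closed form, A = α(I − γT)^(−1), which is convenient for numerical checks. In the sketch below the 3-state chain and the values of α and γ are arbitrary illustrations:

```python
import numpy as np

# Discounting operator A = alpha * sum_t gamma^t T^t for a small example chain,
# computed in closed form as alpha * (I - gamma*T)^(-1).
alpha, gamma = 0.5, 0.9

T = np.array([[0.1, 0.6, 0.3],   # 3-state stochastic transition matrix
              [0.2, 0.2, 0.6],
              [0.5, 0.3, 0.2]])

A = alpha * np.linalg.inv(np.eye(3) - gamma * T)

# Cross-check against the truncated series definition
A_series = alpha * sum(gamma**k * np.linalg.matrix_power(T, k) for k in range(500))
print(np.allclose(A, A_series))

# Each row of A sums to alpha / (1 - gamma), as used in the contraction argument
print(A.sum(axis=1))
```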

In the following we sketch the proof for the equivalents of Eqs 30, 12 and 11 in the Markov chain setting. We will make use of the following basic facts about conditional expectations:



where t > 0, Tt denotes the matrix power of T, Π = diag(π(s1), π(s2), …, π(sN)) is the diagonal “stationary distribution matrix”, f = (f(s1), f(s2), …, f(sN))′ is a column vector and f′ the transpose of f.

1. At the fixed point of learning the dendritic rate is proportional to the expected future discounted firing rate (cf. Eq 30)

In discrete time the learning rule in Eq 8 becomes


with PSP˜t,i = Σ_{s=0}^∞ γ^s PSPi(X_{t−s}). While Δwt,i is in general a stochastic variable, in the following we discuss only the corresponding ordinary differential equation (ODE) of the mean


where PSP˜i(x) = Σ_{t=0}^∞ γ^t E[PSPi(X−t) | X0 = x] is the expected low-pass filtered postsynaptic potential. This ODE has the same fixed point and convergence behavior as the dynamics in Eq 42 under mild assumptions [48].
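The low-pass filtered postsynaptic potential PSP˜t,i = Σ_s γ^s PSPi(X_{t−s}) can be maintained online with a one-step recursion, which is how such eligibility-like traces are typically implemented in practice (the input trace below is an arbitrary illustration):

```python
import numpy as np

# The low-pass filtered postsynaptic potential
# PSP~_t = sum_{s>=0} gamma^s PSP_{t-s} computed online with the recursion
# PSP~_t = PSP_t + gamma * PSP~_{t-1}, and checked against the defining sum.
gamma = 0.8
psp = np.array([0.0, 1.0, 0.5, 0.0, 0.0, 2.0])  # arbitrary presynaptic trace

trace = 0.0
online = []
for p in psp:
    trace = p + gamma * trace   # one-step recursion
    online.append(trace)

# Direct evaluation of the defining sum for the last time step
direct = sum(gamma**s * psp[len(psp) - 1 - s] for s in range(len(psp)))
print(online[-1], direct)  # identical up to floating point
```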

As in Eqs 28 and 29, we now show that the dynamics of the mean synaptic weight can be rewritten in terms of the future discounted firing rate F(x) instead of the expected low-pass filtered postsynaptic potential PSP˜i(x), i.e.


This result is a consequence of the assumed stationarity of the Markov chain. It can be found by focusing on the potentiation term in the learning rule in Eq 43 and using Eq 41 and the notation introduced in the last paragraph, in particular,


which leads to


Using this equality in Eq 43 leads to Eq 44.

Assuming a trivial kernel for BΠ, i.e. BΠx = 0 ⇔ x = 0, we find by inspecting Eq 44 that


which is analogous to the statement in Eq 30. The assumption of a trivial kernel of BΠ implies that the map PSP(si) from the state space of the latent Markov chain to dendritic inputs is one-to-one. This assumption is analogous to the statement that the dendritic inputs PSPi(t) at individual synapses are sufficiently rich and different from each other (see Results after Eq 30).

2. At the fixed point of learning the dendritic rate is proportional to the expected future discounted somatic input rate, but with a longer discount time constant (cf. Eq 12)

Since the future discounted firing rate F(x) depends on φ(Vw*(x)), it is not trivial to solve Eq 47 for φ(Vw*(x)). As in the Results section (Eqs 31 and 10), however, we can show for a linear φ and constant λ(si) = λ that


with γeff = γ/(1 − λα). Indeed, assuming linear φ we can rewrite Eq 47 in vector notation and solve for rV to obtain



where we have assumed that λα < 1 − γ such that the series converges. Powers of A evaluate to


and thus we can rewrite Eq 50 to get,


which proves the claim in Eq 48. The effective discount factor γeff = γ/(1 − λα) can be much larger than γ. In fact, for α → (1 − γ)/λ we find γeff → 1.
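This claim can also be checked numerically. For linear φ the fixed point reads rV = (1 − λA)^(−1)A rI, and the sketch below verifies that this operator equals an effective discounting operator with γeff = γ/(1 − λα); the prefactor α/(1 − λα) is our own reading of the omitted intermediate equations and should be treated as an assumption:

```python
import numpy as np

# Check that (I - lam*A)^(-1) A equals alpha/(1 - lam*alpha) * sum_t gamma_eff^t T^t
# with gamma_eff = gamma / (1 - lam*alpha), on an arbitrary 2-state chain.
alpha, gamma, lam = 0.3, 0.6, 0.9
assert lam * alpha < 1.0 - gamma   # convergence condition from the text

T = np.array([[0.2, 0.8],
              [0.7, 0.3]])
I = np.eye(2)

A = alpha * np.linalg.inv(I - gamma * T)   # A = alpha * sum_t gamma^t T^t
lhs = np.linalg.inv(I - lam * A) @ A       # (1 - lam*A)^(-1) A

gamma_eff = gamma / (1.0 - lam * alpha)
rhs = alpha / (1.0 - lam * alpha) * np.linalg.inv(I - gamma_eff * T)

print(np.allclose(lhs, rhs))
```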


  1. For affine φ(u) = a·u + c the equivalent of Eq 50 is
    rV = (1 − λA)^(−1) A(rI − λc).
    To first order, the stationary rV for a non-linear φ is thus a translated version of the stationary solution for a linear φ.
  2. For input rI = 0 and linear φ we find rV = 0 at the stationary point of learning. We thus expect learned weights to decay again once the input rI is removed.

3. Convergence of learning

For linear φ(u) = ϕ·u with ϕ > 0, we have rV = ϕBw and thus the learning rule in Eq 43 can be written in vector notation as


where we introduced the diagonal ‘nudging matrix’ Λ = diag(λ(s1),…,λ(sN)).

With w* = −X⁺c + (1 − X⁺X)w the orthogonal projection of w onto W* = {w | Xw = −c}, where X⁺ denotes the Moore-Penrose pseudoinverse of X, we show that L(w) = ½‖w − w*‖² is a Lyapunov function of the dynamics in Eq 54. With y = w − w* and z = ηϕBy, the temporal evolution of L is given by


where we defined the scalar product ⟨x, y⟩ = x′Πy; the inequality follows since both A and Λ are contracting maps, i.e. ⟨Az, Az⟩ = ‖Az‖² ≤ ‖z‖², and therefore

⟨z, AΛz⟩ ≤ ‖z‖ ‖AΛz‖ ≤ ‖z‖ ‖Λz‖ ≤ ‖z‖ ‖z‖ = ⟨z, z⟩.

Λ is contracting because it is diagonal with entries between 0 and 1 and A is contracting because


where we used the facts that 0 ≤ Aij < 1 and that the row sums of A equal α/(1 − γ), and therefore Σj Aij² ≤ Σj Aij < 1.

Relation between the learning rule in Eq 42 and TD(1)

For λTD = 1, and therefore PSP^ = PSP˜, we can rewrite Eq 17 by expanding the delta error in Eq 19 and using the identity γPSP˜t,i = PSP˜t+1,i − PSPi(Xt+1) to find



With small parameter updates in each time step, the terms in Eq 58 approximately cancel each other when summing over subsequent terms: δtPSP˜t,i contributes +φ(Vwt*(Xt+1))PSP˜t+1,i and δt+1PSP˜t+1,i contributes -φ(Vwt+1*(Xt+1))PSP˜t+1,i. What remains are the terms in Eq 57, which resemble the terms in the learning rule in Eq 42.


We thank Christian Pozzorini, Laureline Logiaco and Kristin Völk for valuable comments on the manuscript.

Funding Statement

The work has been supported by the Swiss National Science Foundation (personal grant no. 310030L-156863 of WS). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Data Availability


All relevant data are within the paper. Additionally the source code of the simulations is publicly available at


1. Rainer G, Rao SC, Miller EK (1999) Prospective Coding for Objects in Primate Prefrontal Cortex. The Journal of Neuroscience 19: 5493–5505. [PubMed]
2. Reutimann J, Yakovlev V, Fusi S, Senn W (2004) Climbing Neuronal Activity as an Event-Based Cortical Representation of Time. Journal of Neuroscience 24: 3295–3303. doi: 10.1523/JNEUROSCI.4098-03.2004 [PubMed]
3. Pavlov IP (1927). Conditioned reflexes: an investigation of the physiological activity of the cerebral cortex. Oxford Univ. Press
4. Bangasser DA, Waxler DE, Santollo J, Shors TJ (2006) Trace conditioning and the hippocampus: the importance of contiguity. The Journal of Neuroscience 26: 8702–6. doi: 10.1523/JNEUROSCI.1742-06.2006 [PMC free article] [PubMed]
5. Li N, Chen TW, Guo ZV, Gerfen CR, Svoboda K (2015) A motor cortex circuit for motor planning and movement. Nature 519: 51–56. doi: 10.1038/nature14178 [PubMed]
6. Liu Z, Zhou J, Li Y, Hu F, Lu Y, et al. (2014) Dorsal raphe neurons signal reward through 5-HT and glutamate. Neuron 81: 1360–74. doi: 10.1016/j.neuron.2014.02.010 [PMC free article] [PubMed]
7. Howe MW, Tierney PL, Sandberg SG, Phillips PEM, Graybiel AM (2013) Prolonged dopamine signalling in striatum signals proximity and value of distant rewards. Nature 500: 575–9. doi: 10.1038/nature12475 [PMC free article] [PubMed]
8. van der Meer MAA, Redish AD (2011) Theta phase precession in rat ventral striatum links place and reward information. The Journal of Neuroscience 31: 2843–2854. doi: 10.1523/JNEUROSCI.4869-10.2011 [PMC free article] [PubMed]
9. Quintana J, Fuster JM (1999) From perception to action: Temporal integrative functions of prefrontal and parietal neurons. Cerebral Cortex 9: 213–221. doi: 10.1093/cercor/9.3.213 [PubMed]
10. Miller EK, Erickson Ca, Desimone R (1996) Neural mechanisms of visual working memory in prefrontal cortex of the macaque. The Journal of Neuroscience 16: 5154–5167. [PubMed]
11. Sakai K, Miyashita Y (1991) Neural organization for the long-term memory of paired associates. Nature 354: 152–155. doi: 10.1038/354152a0 [PubMed]
12. Markram H, Lübke J, Frotscher M, Sakmann B (1997) Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science 275: 213–5. doi: 10.1126/science.275.5297.213 [PubMed]
13. Bi GQ, Poo M (2001) Synaptic modification by correlated activity: Hebb’s postulate revisited. Annu Rev Neurosci 24: 139–66. doi: 10.1146/annurev.neuro.24.1.139 [PubMed]
14. Cichon J, Gan W (2015) Branch-specific dendritic Ca2+ spikes cause persistent synaptic plasticity. Nature 520: 180–5 doi: 10.1038/nature14251 [PMC free article] [PubMed]
15. Yagishita S, Hayashi-Takagi A, Ellis-Davies GC, Urakubo H, Ishii S, et al. (2014) A critical time window for dopamine actions on the structural plasticity of dendritic spines. Science 345: 1616–1620. doi: 10.1126/science.1255514 [PMC free article] [PubMed]
16. Pfister JP, Toyoizumi T, Barber D, Gerstner W (2006) Optimal spike-timing-dependent plasticity for precise action potential firing in supervised learning. Neural Computation 18: 1318–1348. doi: 10.1162/neco.2006.18.6.1318 [PubMed]
17. Brea J, Senn W, Pfister JP (2013) Matching recall and storage in sequence learning with spiking neural networks. The Journal of Neuroscience 33: 9565–75. doi: 10.1523/JNEUROSCI.4098-12.2013 [PubMed]
18. Urbanczik R, Senn W (2014) Learning by the Dendritic Prediction of Somatic Spiking. Neuron 81: 521–528. doi: 10.1016/j.neuron.2013.11.030 [PubMed]
19. Wise S, Boussaoud D, Johnson P, Caminiti R (1997) Premotor and parietal cortex: corticocortical connectivity and combinatorial computations. Annu Rev Neurosci 20: 25–42. doi: 10.1146/annurev.neuro.20.1.25 [PubMed]
20. Jaakkola T, Jordan MI, Singh S (1994) On the Convergence of Stochastic Iterative Dynamic Programming Algorithms. Neural Computation 6: 1185–1201. doi: 10.1162/neco.1994.6.6.1185
21. Dayan P (1992) The convergence of TD(λ) for general λ. Machine Learning 8: 341–362. doi: 10.1007/BF00992701
22. Sutton RS (1988) Learning to Predict by the Methods of Temporal Differences. Machine Learning 3: 9–44. doi: 10.1023/A:1022633531479
23. Urbanczik R, Senn W (2009) Reinforcement learning in populations of spiking neurons. Nature Neuroscience 12: 250–2 doi: 10.1038/nn.2264 [PubMed]
24. Friedrich J, Urbanczik R, Senn W (2011) Spatio-temporal credit assignment in neuronal population learning. PLoS Computational Biology 7: e1002092 doi: 10.1371/journal.pcbi.1002092 [PMC free article] [PubMed]
25. Gavornik J, Shuler M, Loewenstein Y, Bear M, Shouval HZ (2009) Learning reward timing in cortex through reward dependent expression of synaptic plasticity. Supporting Material. Proceedings of the National Academy of Sciences 106: 6826 doi: 10.1073/pnas.0901835106 [PubMed]
26. He K, Huertas M, Hong S, Tie X, Hell J, et al. (2015) Distinct Eligibility Traces for LTP and LTD in Cortical Synapses. Neuron. doi: 10.1016/j.neuron.2015.09.037 [PMC free article] [PubMed]
27. Solomon PR, Vander Schaaf ER, Thompson RF, Weisz DJ (1986) Hippocampus and trace conditioning of the rabbit’s classically conditioned nictitating membrane response. Behavioral Neuroscience 100: 729–744. doi: 10.1037/0735-7044.100.5.729 [PubMed]
28. Churchland MM, Yu BM, Cunningham JP, Sugrue LP, Cohen MR, et al. (2010) Stimulus onset quenches neural variability: a widespread cortical phenomenon. Nature neuroscience 13: 369–78. doi: 10.1038/nn.2501 [PMC free article] [PubMed]
29. Churchland MM, Abbott LF (2012) Two layers of neural variability. Nature Neuroscience 15: 1472–1474. doi: 10.1038/nn.3247 [PubMed]
30. Laje R, Buonomano DV (2013) Robust timing and motor patterns by taming chaos in recurrent neural networks. Nature Neuroscience 16: 925–933. doi: 10.1038/nn.3405 [PMC free article] [PubMed]
31. Hennequin G, Vogels T, Gerstner W (2014) Optimal Control of Transient Dynamics in Balanced Networks Supports Generation of Complex Movements. Neuron 82: 1394–1406. doi: 10.1016/j.neuron.2014.04.045 [PubMed]
32. Ludvig EA, Sutton RS, Verbeek E, Kehoe EJ (2009) A computational model of hippocampal function in trace conditioning. In: Advances in Neural Information Processing Systems 21, Curran Associates, Inc. pp. 993–1000.
33. Xu S, Jiang W, Poo MM, Dan Y (2012) Activity recall in a visual cortical ensemble. Nature Neuroscience 15: 449–455. doi: 10.1038/nn.3036 [PMC free article] [PubMed]
34. Davidson TJ, Kloosterman F, Wilson Ma (2009) Hippocampal Replay of Extended Experience. Neuron 63: 497–507. doi: 10.1016/j.neuron.2009.07.027 [PMC free article] [PubMed]
35. Ziegler L, Zenke F, Kastner DB, Gerstner W (2015) Synaptic Consolidation: From Synapses to Behavioral Modeling. Journal of Neuroscience 35: 1319–1334. doi: 10.1523/JNEUROSCI.3989-14.2015 [PubMed]
36. Clopath C, Ziegler L, Vasilaki E, Büsing L, Gerstner W (2008) Tag-Trigger-Consolidation: A Model of Early and Late Long-Term-Potentiation and Depression. PLoS Computational Biology 4: e1000248 doi: 10.1371/journal.pcbi.1000248 [PMC free article] [PubMed]
37. Pfister JP, Gerstner W (2006) Triplets of spikes in a model of spike timing-dependent plasticity. The Journal of Neuroscience 26: 9673–82. doi: 10.1523/JNEUROSCI.1425-06.2006 [PubMed]
38. Zhang JC, Lau PM, Bi GQ (2009) Gain in sensitivity and loss in temporal contrast of STDP by dopaminergic modulation at hippocampal synapses. Proceedings of the National Academy of Sciences of the United States of America 106: 13028–13033. doi: 10.1073/pnas.0900546106 [PubMed]
39. Kolodziejski C, Porr B, Wörgötter F (2009) On the asymptotic equivalence between differential Hebbian and temporal difference learning. Neural Computation 21: 1173–1202. doi: 10.1162/neco.2008.04-08-750 [PubMed]
40. Potjans W, Morrison A, Diesmann M (2009) A Spiking Neural Network Model of an Actor-Critic Agent. Neural Computation 21: 301–339. doi: 10.1162/neco.2008.08-07-593 [PubMed]
41. Frémaux N, Sprekeler H, Gerstner W (2013) Reinforcement learning using a continuous time actor-critic framework with spiking neurons. PLoS computational biology 9: e1003024 doi: 10.1371/journal.pcbi.1003024 [PMC free article] [PubMed]
42. Maass W, Natschläger T, Markram H (2002) Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Computation 14: 2531–60. doi: 10.1162/089976602760407955 [PubMed]
43. Jaeger H, Haas H (2004) Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication. Science 304: 78–80. doi: 10.1126/science.1091277 [PubMed]
44. Drew PJ, Abbott LF (2006) Extending the effects of spike-timing-dependent plasticity to behavioral timescales. Proceedings of the National Academy of Sciences of the United States of America 103: 8876–81. doi: 10.1073/pnas.0600676103 [PubMed]
45. Huertas MA, Shuler MGH, Shouval HZ (2015) A Simple Network Architecture Accounts for Diverse Reward Time Responses in Primary Visual Cortex. The Journal of Neuroscience 35: 12659–12672. [PMC free article] [PubMed]
46. Latimer KW, Yates JL, Meister MLR, Huk AC, Pillow JW (2015) Single-trial spike trains in parietal cortex reveal discrete steps during decision-making. Science 349: 184–187. doi: 10.1126/science.aaa4056 [PMC free article] [PubMed]
47. Palmer SE, Marre O, Berry MJ, Bialek W (2015) Predictive information in a sensory population. PNAS 112:6908–13. doi: 10.1073/pnas.1506855112 [PubMed]
48. Kushner HJ, Yin GG (2003) Stochastic Approximation and Recursive Algorithms and Applications, volume 35 Springer Science & Business Media.
