

Front Comput Neurosci. 2010; 4: 23.

PMCID: PMC2947928

Edited by: Wulfram Gerstner, Ecole Polytechnique Fédérale de Lausanne, Switzerland

Reviewed by: Christian Leibold, Ludwig Maximilians University, Germany; Guillaume Hennequin, Brain-Mind Institute, Switzerland

*Correspondence: Matthieu Gilson, Lab for Neural Circuit Theory, Riken Brain Science Institute, Hirosawa 2-1, Wako-shi, Saitama 351-0198, Japan. e-mail: gilsonm@unimelb.edu.au

Received 2010 March 1; Accepted 2010 June 28.

Copyright © 2010 Gilson, Burkitt and van Hemmen.

This is an open-access article subject to an exclusive license agreement between the authors and the Frontiers Research Foundation, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are credited.


Recent results about spike-timing-dependent plasticity (STDP) in recurrently connected neurons are reviewed, with a focus on the relationship between the weight dynamics and the emergence of network structure. In particular, the evolution of synaptic weights in the two cases of incoming connections for a single neuron and recurrent connections are compared and contrasted. A theoretical framework is used that is based upon Poisson neurons with a temporally inhomogeneous firing rate and the asymptotic distribution of weights generated by the learning dynamics. Different network configurations examined in recent studies are discussed and an overview of the current understanding of STDP in recurrently connected neuronal networks is presented.

Ten years after spike-timing-dependent plasticity (STDP) appeared (Gerstner et al., 1996; Markram et al., 1997), a profusion of publications has investigated its physiological basis and functional implications, on both experimental and theoretical grounds (for reviews, see Dan and Poo, 2006; Caporale and Dan, 2008; Morrison et al., 2008). STDP has led the research community to re-evaluate Hebbian learning (Hebb, 1949), in the sense of focusing on causality between input and output spike trains, as an underlying mechanism for memory. Following preliminary studies that suggested the concept of STDP (Levy and Steward, 1983; Gerstner et al., 1993), the model based on a pair of pre- and postsynaptic spikes, initially proposed by Gerstner et al. (1996) and first observed by Markram et al. (1997), has been extended to incorporate additional physiological mechanisms and account for more recent experimental data. This includes, for example, biophysical models based on calcium channels (Hartley et al., 2006; Graupner and Brunel, 2007; Zou and Destexhe, 2007) and more elaborate experimental stimulation protocols such as triplets of spikes (Sjöström et al., 2001; Froemke and Dan, 2002; Froemke et al., 2006; Pfister and Gerstner, 2006; Appleby and Elliott, 2007). In order to investigate the functional implications of STDP, previous mathematical studies (Kempter et al., 1999; van Rossum et al., 2000; Gütig et al., 2003) have used simpler phenomenological models to relate the learning dynamics to the learning parameters and input stimulation. However, a lack of theoretical results even for the original pairwise STDP with *recurrently* connected neurons persisted until recently, mainly because of the difficulty of incorporating the effect of feedback loops in the learning dynamics. The present paper reviews recent results about the weight dynamics induced by STDP in recurrent network architectures with a focus on the emergence of network structure.
Note that we constrain STDP to excitatory glutamatergic synapses, although there is some experimental evidence for a similar property for inhibitory GABAergic connections (Woodin et al., 2003; Tzounopoulos et al., 2004).

Due to its temporal resolution, STDP can lead to input selectivity based on spiking information at the scale of milliseconds, namely spike-time correlations (Gerstner et al., 1996; Kempter et al., 1999). To achieve this, STDP can regulate the output firing rate in a regime that is neither quiescent nor saturated by means of enforcing stability upon the mean incoming synaptic weight and in this way establishing a homeostatic equilibrium (Kempter et al., 2001). In addition, a proper weight specialization requires STDP to generate competition between individual weights (Kempter et al., 1999; van Rossum et al., 2000; Gütig et al., 2003). In the case of several similar input pathways, a desirable outcome is that the weight selection corresponds to a splitting between (but not within) the functional input pools, hence performing symmetry breaking of an initially homogeneous weight distribution that reflects the synaptic input structure (Kempter et al., 1999; Song and Abbott, 2001; Gütig et al., 2003; Meffin et al., 2006).

In this review, we examine how the concepts described above extend from a single neuron (or feed-forward architecture) to recurrent networks, focusing on how the corresponding weight dynamics differs in the two cases. The learning dynamics causes synaptic weights to be either potentiated or depressed. Accordingly, STDP can lead to the evolution of different network structures that depend on both the correlation structure of the external inputs and the activity of the network. In particular, we relate our recent body of analytical work (Gilson et al., 2009a–c, 2010) to other studies with a view to illustrating how theory applies to the corresponding network configurations.

In the present paper, we use a model of STDP that contains two key features: the dependence upon relative timing for pairs of spikes via a temporally asymmetric learning window *W* (Gerstner et al., 1996; Markram et al., 1997; Kempter et al., 1999), as illustrated in Figure 1, and upon the current strength of the synaptic weight (Bi and Poo, 1998; van Rossum et al., 2000; Gütig et al., 2003; Morrison et al., 2007; Gilson et al., 2010). In this review we will therefore consider the STDP learning window as a function of two variables, *W*(*J_{ij}*, *u*), where *J_{ij}* is the current weight of the synapse from neuron *j* to neuron *i* and *u* is the time difference between pre- and postsynaptic spikes.

In addition to the learning window function *W*, a number of studies (Kempter et al., 1999; Gilson et al., 2009a) have included rate-based terms *w*^{in} and *w*^{out}, namely modifications of the weights for each pre- and postsynaptic spike (Sejnowski, 1977; Bienenstock et al., 1982). This choice leads to a general form of synaptic plasticity (van Hemmen, 2001; Gerstner and Kistler, 2002) that incorporates changes for both single spikes and pairs of spikes. The choice of the Poisson neuron model with temporally inhomogeneous firing rate and with a linear input–output function for the firing rates makes it possible to incorporate such rate-based terms in order to obtain homeostasis (Turrigiano, 2008).

In order to study the evolution of plastic weights in a given network configuration, it is necessary to define the stimulating inputs. For pairwise STDP, spiking information is conveyed in the firing rates and cross-correlograms, and Poisson-like spiking is often used to reproduce the variability observed in experiments (Gerstner et al., 1996; Kempter et al., 1999; Song et al., 2000; Gütig et al., 2003). Spike coordination or rate covariation can be combined to generate correlated spike trains (Staude et al., 2008). The present review will focus on narrowly correlated inputs (almost synchronous) that are partitioned in pools; in this configuration, correlated inputs belong to a common pathway (e.g., monocular visual processing). We will also discuss more elaborate input correlation structures that use narrow spike-time correlations (Krumin and Shoham, 2009; Macke et al., 2009), oscillatory inputs (Marinaro et al., 2007), and spike patterns (Masquelier et al., 2008). Our series of papers has also made minimal assumptions about the network topology, mainly by considering homogeneous recurrent connectivity. The starting situation consists of unorganized (input and/or recurrent) weights that are randomly distributed around a given value.

Once the input and network configuration is fixed, it is necessary to evaluate the spiking activity in the network in order to predict the evolution of the weights. The Poisson neuron model has proven to be quite a valuable tool (Kempter et al., 1999; Gütig et al., 2003; Burkitt et al., 2007; Gilson et al., 2009a,d), although recent progress has been made toward a similar framework for integrate-and-fire neurons (Moreno-Bote et al., 2008). In the Poisson neuron model, the output firing mechanism for a given neuron *i* is approximated by an *in*homogeneous Poisson process with rate function or intensity λ_{i}(*t*) given by

$${\lambda}_{i}\left(t\right)={\lambda}_{0}+{\displaystyle \sum _{j,n}{J}_{ij}\left({t}_{j}^{n}\right)\epsilon}\left[t-{t}_{j}^{n}-\left({d}_{ij}^{\text{den}}+{d}_{ij}^{\text{ax}}\right)\right],$$

(1)

where ${t}_{j}^{n}$, *n* ≥ 1, are the spike times of neuron *j*, and λ_{0} describes background excitation/inhibition from synapses that are not considered in detail. The kernel function ε describes the time course of the postsynaptic response (chosen identical for all synapses here), such as an alpha function. We also discriminate between axonal and dendritic components of the conduction delay, cf. Figure 1A. Although only coarsely approximating real neuronal firing mechanisms, this model transmits input spike-time correlations and leads to a tractable mathematical formulation of the input–output correlogram, which allows the analytical description of the evolution of the plastic weights.
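
To make Eq. 1 concrete, the following minimal Python sketch evaluates λ_{i}(*t*) for one Poisson neuron. The alpha-function kernel, the delay values, and the toy spike times and weights are illustrative assumptions, not parameters from the original studies.

```python
import numpy as np

def alpha_kernel(t, tau=0.005):
    """Alpha-function postsynaptic response: eps(t) = (t / tau^2) exp(-t / tau) for t >= 0."""
    return np.where(t >= 0.0, (t / tau**2) * np.exp(-t / tau), 0.0)

def poisson_rate(t, spike_times, weights, d_den, d_ax, lam0=5.0):
    """Instantaneous rate lambda_i(t) of Eq. 1 for one neuron.

    spike_times[j] : array of presynaptic spike times t_j^n (seconds)
    weights[j]     : synaptic weight J_ij (taken constant here)
    d_den, d_ax    : dendritic and axonal conduction delays (seconds)
    lam0           : background rate (spikes per second)
    """
    lam = lam0
    for times, J in zip(spike_times, weights):
        # Each past spike contributes J_ij * eps(t - t_j^n - (d_den + d_ax)).
        lam += J * np.sum(alpha_kernel(t - times - (d_den + d_ax)))
    return lam

# Toy example: two presynaptic neurons with arbitrary spike times and weights.
spikes = [np.array([0.010, 0.030]), np.array([0.020])]
J = [0.5, 1.0]
rate = poisson_rate(0.040, spikes, J, d_den=0.001, d_ax=0.002)
```

Before any delayed spike arrives, the rate reduces to the background λ_{0}; afterwards it is transiently raised by each postsynaptic response.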

Before starting we quickly explain what STDP means and how Poisson neurons function in the present context. Then we focus on the asymptotics of an analytic description of the development of synaptic strengths by means of a set of coupled differential equations and introduce the notion of “almost-additive” STDP. In so doing we will also see how a population of recurrently connected neurons influences the synaptic development in the population as a whole, so that one gets a “grouping” of the synapses on the neurons. Input by itself and input in conjunction with, or, more interestingly, versus recurrence play an important role in this game. Finally, we will see how the *form* of the learning window influences the neuron-to-input and neuron-to-neuron spike-time correlations. This prepares the next section, where we will analyze emerging network structures and their functional implications.

The barn owl (*Tyto alba*) is able to determine the prey direction in the dark by measuring interaural time differences (ITDs) with an azimuthal accuracy of 1–2° corresponding to a temporal precision of a few microseconds, a process of binaural sound localization. The first place in the brain where binaural signals are combined to ITDs is the laminar nucleus. A temporal precision as low as a few microseconds was hailed by Konishi (1993) as a paradox, and rightly so, since at first sight it contradicts the slowness of the neuronal “hardware,” viz., membrane time constants of the order of 200 μs. In addition, transmission delays from the ears to the laminar nucleus scatter between 2 and 3 ms (Carr and Konishi, 1990) and are thus in an interval that greatly exceeds the period of the relevant oscillations (100–500 μs). The key to the solution (Gerstner et al., 1996) is a Hebbian learning process that tunes the hardware so that only synapses and, hence, axonal connections with the right timing survive. Genetic coding is implausible because 3 weeks after hatching, when the head is full-grown, the young barn owl cannot perform azimuthal sound localization. Three weeks later it can. So what happens in between? The solution to the paradox involves a careful study of how synapses develop during ontogeny (Gerstner et al., 1996; Kempter et al., 1999). The inputs provided by many synapses decide what a neuron does but, once it has fired, the neuron determines whether each of the synaptic efficacies will increase or decrease, a process governed by the synaptic learning window, a notion that will be introduced shortly. Each of the terms below in Eq. 2 has a neurobiological origin. The process they describe is what we call *infinitesimal learning* in that synaptic increments and decrements are small. Consequently it takes quite a while before the organism has built up a “noticeable” effect.
Though processes that happen in the long term are not fully understood yet, their effect is well described by Eq. 2.

For the sake of definiteness we are going to study the waxing and waning of synaptic strengths associated with a *single* neuron *i*; cf. Figure 2A. Here we ignore the weight dependence to focus on the temporal aspect and thus use *W*(·,*u*) as the STDP learning window function. The 1 ≤ *j* ≤ *N* synapses provide their input at times ${t}_{j}^{n}$, where *n* is a label denoting the sequential spikes. The firing times of the neuron are denoted by ${t}_{i}^{m}$, it being understood that *m* is a label like *n*. Given the firing times, the change Δ*J_{ij}*(*t*) of the synaptic weight during the time interval [*t* − *T*, *t*] is given by

$$\Delta {J}_{ij}(t)=\eta \left[{\displaystyle \sum _{t-T\le {t}_{j}^{n}\le t}{w}^{\text{in}}}+{\displaystyle \sum _{t-T\le {t}_{i}^{m}\le t}{w}^{\text{out}}}+{\displaystyle \sum _{t-T\le {t}_{j}^{n},{t}_{i}^{m}\le t}W\left(\cdot ,{t}_{j}^{n}-{t}_{i}^{m}\right)}\right].$$

(2)

Here the firing times ${t}_{i}^{m}$ of the postsynaptic neuron may, and in general will, depend on *J_{ij}*. We now focus on the individual terms. The prefactor 0 < η ≪ 1 reminds us explicitly that learning is slow on a neuronal time scale. This condition is usually referred to as the “adiabatic hypothesis.” It holds in numerous biological situations and has been a mainstay of computational neuroscience ever since. It may also play a beneficial role in an applied context. If it does not hold, a numerical implementation of the learning rule (Eq. 2) is straightforward, but an analytical treatment is not.

Each incoming spike and each action potential of the postsynaptic neuron change the synaptic efficacy by η*w*^{in} and η*w*^{out}, respectively. The last term in Eq. 2 represents the *learning window* *W*(·,*u*), which indicates the synaptic change in dependence upon the time difference $u={t}_{j}^{n}-{t}_{i}^{m}$ between an incoming spike ${t}_{j}^{n}$ and an outgoing spike ${t}_{i}^{m}$. When the former precedes the latter, we have $u<0\iff {t}_{j}^{n}<{t}_{i}^{m}$, and the result is *W*(·,*u*) > 0, implying potentiation. This seems reasonable since NMDA receptors, which are important for long-term potentiation (LTP), need a strongly positive membrane voltage to become “accessible” by losing the Mg^{2+} ions that block their “gate.” A postsynaptic action potential induces a fast retrograde “spike” doing exactly this (Stuart et al., 1997). Because the presynaptic spike arrived slightly earlier, neurotransmitter is waiting to obtain access, which is allowed after the Mg^{2+} ions are gone. The result is Ca^{2+} influx. On the other hand, if the incoming spike comes “too late,” then *u* > 0 and *W*(·,*u*) < 0, implying depression – in agreement with a general rule in politics, discovered two decades ago: “Those who come too late shall be punished.” In neurobiological terms, there is no neurotransmitter waiting to be admitted.
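
The sign convention just described can be illustrated by a small sketch of Eq. 2 in the additive case (*W* independent of the weight). The exponential window shape and the amplitude and time-constant values below are illustrative assumptions, not fitted parameters.

```python
import numpy as np

def stdp_window(u, A_plus=1.0, A_minus=0.5, tau_plus=0.017, tau_minus=0.034):
    """Exponential STDP window: potentiation for u < 0 (pre before post),
    depression for u > 0. All parameters are illustrative."""
    u = np.asarray(u, dtype=float)
    return np.where(u < 0.0,
                    A_plus * np.exp(u / tau_plus),      # LTP branch
                    -A_minus * np.exp(-u / tau_minus))  # LTD branch

def delta_weight(pre_spikes, post_spikes, eta=1e-3, w_in=0.0, w_out=0.0):
    """Weight change of Eq. 2 over one interval; all pre/post pairs contribute."""
    pairs = sum(stdp_window(tn - tm) for tn in pre_spikes for tm in post_spikes)
    return eta * (len(pre_spikes) * w_in + len(post_spikes) * w_out + pairs)

# A causal pair (pre at 10 ms, post at 15 ms) is potentiated ...
dw_causal = delta_weight([0.010], [0.015])
# ... while the reversed order is depressed.
dw_acausal = delta_weight([0.015], [0.010])
```

The prefactor η keeps each individual change small, in line with the “infinitesimal learning” discussed above.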

Since Poisson neurons (Kempter et al., 1999; van Hemmen, 2001) are cardinal to obtaining analytically exact solutions and at the same time effortlessly reflect uncertainty in response to input stimuli, which we then interpret as “stochastic,” we first quickly discuss what “inhomogeneous Poisson” is all about.

A general Poisson process with intensity λ_{i}(*t*) is characterized by three defining properties:

- (i) the probability of finding a spike between *t* and *t* + Δ*t* is λ_{i}(*t*)Δ*t*,
- (ii) the probability of finding two or more spikes there is *o*(Δ*t*),
- (iii) the process has independent increments, i.e., events in disjoint intervals are independent.

In a neuronal context it is fair to call property (ii) a mathematical realization of a neuron's refractory behavior. Property (iii) makes it all exactly soluble (cf. van Hemmen, 2001, App. B). When the “membrane potential” λ_{i}(*t*) in Eq. 1 may become negative, the clipped intensity

$${\lambda}_{i}^{\text{clipped}}(t)={\lambda}_{i}(t)\,\Theta \left[{\lambda}_{i}(t)-{\vartheta}_{1}\right],$$

(3)

where ϑ_{1} is a given threshold and Θ is the Heaviside step function [Θ(*t*) = 0 for *t* < 0 and Θ(*t*) = 1 for *t* ≥ 0], is a suitable substitute that also allows an exact disentanglement (Kistler and van Hemmen, 2000). For λ_{i}(*t*) ≥ ϑ_{1} the original intensity is recovered.
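
Eq. 3 amounts to a one-line clipping operation; the threshold value in this short sketch is an arbitrary illustration.

```python
import numpy as np

def clipped_rate(lam, theta1=0.0):
    """Eq. 3: lambda_i^clipped(t) = lambda_i(t) * Theta(lambda_i(t) - theta1)."""
    lam = np.asarray(lam, dtype=float)
    # The boolean mask plays the role of the Heaviside function Theta.
    return lam * (lam >= theta1)

# Negative "membrane potential" values are clipped to zero; values at or
# above the threshold pass through unchanged.
rates = clipped_rate(np.array([-2.0, 0.5, 3.0]))
```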

We now turn to general pre- and postsynaptic spike trains, with no reference to a neuronal model or a specific input structure. The only assumption here is that the learning is sufficiently slow so that averaging over the spike trains can be performed (van Hemmen, 2001); note that significant weight evolution over tens of minutes still satisfies this requirement. For pairwise (possibly weight-dependent) STDP, the evolution of the mean weight averaged over all trajectories (drift of the stochastic process) results in a learning-dynamics equation of the general form

$${\dot{J}}_{ij}=f\left({J}_{ij};{\nu}_{j},{\nu}_{i}\right)+g\left({J}_{ij};{C}_{ij},{d}_{ij}^{\text{ax}}-{d}_{ij}^{\text{b}}\right),$$

(4)

where the dependence of the variables upon time has been omitted. In Eq. 4 the spiking information conveyed by the spike trains *S_{i}*(*t*) is summarized by the time-averaged firing rate

$${\nu}_{i}(t):=\frac{1}{T}{\displaystyle {\int}_{t-T}^{t}\langle {S}_{i}(t\prime )\rangle \text{d}t\prime}$$

(5)

and the spike-time covariance coefficient

$${C}_{ij}(t,u):=\frac{1}{T}{\displaystyle {\int}_{t-T}^{t}\langle {S}_{i}(t\prime ){S}_{j}(t\prime +u)\rangle \text{d}t\prime}-\frac{1}{T}{\displaystyle {\int}_{t-T}^{t}\langle {S}_{i}(t\prime )\rangle \text{d}t\prime}\,\frac{1}{T}{\displaystyle {\int}_{t-T}^{t}\langle {S}_{j}(t\prime\prime +u)\rangle \text{d}t\prime\prime}.$$

(6)

This separation of time scales (involving the averaging duration *T*) is dictated by the STDP learning window *W* (cf. Figure 1): typically, phenomena “faster” than 10 Hz (i.e., 100 ms) will be captured by *W* as spike effects through *C_{ij}*, such as oscillatory activity (Marinaro et al., 2007) and spike patterns (Masquelier et al., 2008). The formulation in Eq. 6 is slightly more general than that used by Gerstner and Kistler (2002) and Gilson et al. (2009a); it can account for covariation of underlying rate functions (Sprekeler et al., 2007) as well as (stochastic) spike coordination (Kempter et al., 1999; Gütig et al., 2003; see also Staude et al., 2008). Finally, we note that the averaging ⟨·⟩ in Eqs 5 and 6 comes for free as the “learning time” is much longer than the averaging duration *T*.
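
As a sketch of how the quantities in Eqs 5 and 6 can be estimated in practice, the following fragment computes single-trial empirical counterparts from binned spike trains, with time averaging standing in for the ensemble average ⟨·⟩. Bin width, rates, and trains are arbitrary toy values.

```python
import numpy as np

def rate_and_covariance(s_i, s_j, dt, lag_bins):
    """Empirical counterparts of Eqs 5 and 6 from binned spike trains.

    s_i, s_j : 0/1 arrays of spike counts in bins of width dt (seconds)
    lag_bins : lag u in bins for the covariance coefficient C_ij(u)
    """
    T = len(s_i) * dt
    nu_i = s_i.sum() / T                       # Eq. 5 (spikes per second)
    # Align s_i(t') with s_j(t' + u):
    if lag_bins >= 0:
        x, y = s_i[:len(s_i) - lag_bins], s_j[lag_bins:]
    else:
        x, y = s_i[-lag_bins:], s_j[:len(s_j) + lag_bins]
    raw = np.mean((x / dt) * (y / dt))         # first term of Eq. 6
    return nu_i, raw - nu_i * (s_j.sum() / T)  # subtract the rate product

# A train correlated with itself: the covariance peaks at zero lag and
# drops for non-zero lags (independent bins).
rng = np.random.default_rng(0)
s = (rng.random(10000) < 0.01).astype(float)   # ~10 Hz at dt = 1 ms
nu, c0 = rate_and_covariance(s, s, dt=0.001, lag_bins=0)
_, c5 = rate_and_covariance(s, s, dt=0.001, lag_bins=5)
```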

For a given network configuration, predicting the evolution of the weight distribution requires the evaluation of the neuronal variables involved in Eq. 4 as functions of the parameters for the stimulating inputs. For all network neurons, the output spike trains are constrained by the neuron model and input spike trains: the key to the analysis is the derivation of self-consistency equations that describe this relationship for the firing rates and spike-time correlations. In particular, recurrent connectivity implies non-linearity in the network input–output function for the neuronal firing rates. The interplay between the spiking activity and network connectivity, where the latter is modified by plasticity on a slower time scale, is crucial to understanding the effect of STDP. A network of Poisson neurons with input weights *K* and recurrent weights *J* as in Figure 2B leads to the following system of matrix equations (Gilson et al., 2009a,d):

$$\nu ={[1-J]}^{-1}[{\lambda}_{0}e+K\stackrel{\wedge}{\nu}],$$

(7)

$$F(\cdot ,u)={[1-J]}^{-1}K{\stackrel{\wedge}{C}}^{\psi}(u),$$

$$C(\cdot ,u)={[1-J]}^{-1}K{\stackrel{\wedge}{C}}^{\zeta}(u){K}^{\text{T}}{[1-J]}^{-1\text{T}}.$$

The vector of neuronal firing rates ν is expressed in terms of the input firing rates $\widehat{\nu}$ and the weight matrices *K* and *J*. Vector **e** has all elements equal to one and the superscript **T** denotes matrix transposition. Matrices *C* and *F* respectively contain the neuron-to-neuron and neuron-to-input spike-time covariances, cf. Eq. 6. Only their dependence upon *u* is considered here and they are expressed in terms of the input-to-input covariance matrix *Ĉ* convolved with the following functions (indicated by the superscript):

$$\psi (u)\simeq \epsilon \left(-u+{d}_{ij}^{\text{ax}}+{d}_{ij}^{\text{den}}\right),$$

(8)

$$\zeta \left(u\right)\simeq {\displaystyle \int \epsilon \left(u+r\right)\epsilon \left(r\right)\,\text{d}r},$$

where ε is the postsynaptic response kernel for the Poisson neuron; cf. Eq. 1. The dependence upon time *t* has been omitted, as the time-averaged firing rates and spike-time covariances practically only vary at the same pace as the plastic weights. In this framework, the effect of the recurrence is taken into account in the inverse of the matrix 1 − *J* in Eq. 7. For the rates ν and the neuron-to-input covariances *F*, this leads to a linear feedback, whereas the dependence is quadratic-like for the neuron-to-neuron covariances *C*. These equations allow the analysis of the learning equation (Eq. 4) for recurrent weights *J*, as well as an equivalent expression for input weights *K* with the neuron-to-input spike-time covariance *F* in place of *C*.
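
The firing-rate part of Eq. 7 can be checked numerically. The following sketch solves ν = (1 − *J*)^{−1}(λ_{0}**e** + *K* $\widehat{\nu}$) for a toy network whose weight values are arbitrary illustrative choices (the spectral radius of *J* must stay below one for the inverse to describe a stable network).

```python
import numpy as np

def network_rates(J, K, nu_hat, lam0=1.0):
    """Firing-rate part of Eq. 7: nu = (1 - J)^{-1} (lam0 * e + K nu_hat).

    J : (N, N) recurrent weight matrix; K : (N, M) input weight matrix;
    nu_hat : (M,) input firing rates. Requires spectral radius of J < 1.
    """
    N = J.shape[0]
    b = lam0 * np.ones(N) + K @ nu_hat        # feed-forward drive
    return np.linalg.solve(np.eye(N) - J, b)  # recurrent amplification

# Toy network: 3 neurons, 2 input pathways, weak homogeneous recurrence.
J = np.full((3, 3), 0.1) - 0.1 * np.eye(3)    # no self-connections
K = np.array([[0.2, 0.0], [0.2, 0.0], [0.0, 0.2]])
nu = network_rates(J, K, nu_hat=np.array([20.0, 20.0]))
```

For this symmetric toy network the feed-forward drive is 5 spikes/s per neuron and all rates converge to 6.25 spikes/s, illustrating the linear amplification by the recurrent feedback through (1 − *J*)^{−1}.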

For both a single neuron and recurrently connected network, the learning equation (Eq. 4) can lead to a double dynamics that operates upon the incoming synaptic weights for each neuron (Kempter et al., 1999; Gilson et al., 2009a,d, 2010):

- a partial (homeostatic) equilibrium that stabilizes the mean incoming weight, and hence the output firing rate, through the constraint $f({J}_{ij};{\nu}_{j},{\nu}_{i})\simeq 0$ for each neuron *i*;
- competition between individual weights based on the spike-time covariances embedded in $g({J}_{ij};{C}_{ij},{d}_{ij}^{\text{ax}}-{d}_{ij}^{\text{b}})$, which can result in splitting the weight distribution.

Note that this discrimination between rate and spike effects is valid irrespective of the neuron model. The following analysis, which is based on the Poisson neuron model, can be extended to more elaborate models, such as the integrate-and-fire neuron when it is in a (roughly) linear input-output regime.

Several features have been used to ensure a stable and realizable homeostatic equilibrium, that is, to ensure that the mean incoming weight *J*_{av} for each neuron has a stable fixed point between the bounds [*J*_{min}, *J*_{max}]. This is important for proper weight specialization: if the fixed point is not realizable (outside the bounds) or unstable, all weights tend to cluster at one of the bounds and no effective selection is possible, as illustrated by Figure 3A. On the other hand, in Figure 3B splitting of the weights occurs on each side of the stable value for the mean weight (thick black line). Rate-based terms can be added to obtain a polynomial form of *f* (Kempter et al., 1999; Gilson et al., 2009a,d), and the weight dependence can also be chosen to bring stability (Gütig et al., 2003; Gilson et al., 2010), as illustrated in Figure 3. Such features preserve the local character of the plasticity rule and homeostasis is then a consequence of local plasticity (Kempter et al., 2001). In contrast, additional mechanisms such as synaptic scaling (or normalization) can be enforced to constrain the mean incoming weight (van Rossum et al., 2000). In any case, only if $f({J}_{ij};{\nu}_{j},{\nu}_{i})\simeq 0$ for all synapses *j*→*i* is the weight specialization determined by the spike-time covariance. Otherwise, firing rates are likely to take part in the weight competition and the dichotomy between rate and spike effects may not be effective.

Lack of proper homeostatic stability can lead to dramatic changes in the spiking activity when some parameters are slightly modified in simulations, such as an “explosive” behavior where the neuron saturates at a very high firing rate (Song et al., 2000). Non-linear activation mechanisms (e.g., sigmoidal rate function, integrate-and-fire) may play a role in the weight dynamics and possibly affect the correlograms. In particular, a stable homeostatic equilibrium can be obtained without the rate-based terms *w*^{in} and *w*^{out} for additive STDP using integrate-and-fire neurons (this is not possible with the Poisson neuron), but the range of adequate learning parameters, in particular the value of $\tilde{W}$, was found to be smaller (Song et al., 2000) than when using rate-based plasticity terms (Kempter et al., 1999; Gilson et al., 2009a).

After the homeostatic equilibrium has been reached, spike-time correlations become the dominating term in the learning-dynamics equation (Eq. 4) and hence determine the subsequent weight specialization. The rule of thumb for the weight splitting after reaching the homeostatic equilibrium is that synapses *j*→*i* with larger coefficients $g({J}_{ij}\simeq {J}_{\text{av}}^{*};{C}_{ij},{d}_{ij}^{\text{ax}}-{d}_{ij}^{\text{b}})$ will be potentiated at the expense of the others. The function *g* involves the convolution of the correlogram *C_{ij}*, such as those in Figures 4B,C, with the STDP window function *W*:

$$g\left({J}_{ij};{C}_{ij},\Delta {d}_{ij}\right)={\displaystyle {\int}_{-\infty}^{+\infty}W\left({J}_{ij},u+\Delta {d}_{ij}\right){C}_{ij}(t,u)\text{d}u.}$$

(9)

This implies that the STDP learning function *W* is shifted by the difference between the axonal and the dendritic back-propagation delays Δ*d _{ij}* in Eq. 4,

$$\Delta {d}_{ij}:={d}_{ij}^{\text{ax}}-{d}_{ij}^{\text{b}}.$$

(10)

Hence a purely dendritic delay implies Δ*d_{ij}* < 0, which is equivalent to shifting the STDP learning window function (solid line) in Figure 4D to the right (dashed line), i.e., toward more potentiation. Conversely, a purely axonal delay shifts the curve to the left (dashed-dotted line) and thus toward more depression, since Δ*d_{ij}* > 0.
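
This shift argument can be verified numerically: convolving a symmetric correlogram (as for a recurrent connection in a homogeneous network) with the shifted window per Eqs 9 and 10 yields potentiation for a purely dendritic delay and depression for a purely axonal one. The Gaussian correlogram and the window parameters below are illustrative assumptions, not values from the original studies.

```python
import numpy as np

def stdp_window(u, A_plus=1.0, A_minus=0.5, tau_plus=0.017, tau_minus=0.034):
    """Exponential STDP window; parameters are illustrative."""
    u = np.asarray(u, dtype=float)
    return np.where(u < 0.0, A_plus * np.exp(u / tau_plus),
                    -A_minus * np.exp(-u / tau_minus))

def drift(delta_d, sigma=0.005):
    """Eq. 9 with a symmetric (Gaussian) correlogram, as for a recurrent
    connection; delta_d = d_ax - d_b is the delay difference of Eq. 10."""
    u = np.linspace(-0.2, 0.2, 8001)
    du = u[1] - u[0]
    C = np.exp(-u**2 / (2.0 * sigma**2))       # symmetric correlogram
    return np.sum(stdp_window(u + delta_d) * C) * du  # Riemann sum of Eq. 9

g_dendritic = drift(-0.004)  # purely dendritic delay: Delta d < 0
g_axonal = drift(+0.004)     # purely axonal delay: Delta d > 0
```

The overall amplitude of the correlogram is irrelevant here; only the sign of the convolution matters for whether the recurrent weight is potentiated or depressed.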

The weight dependence ensuing from STDP modulates the weight specialization. This can lead to either a unimodal or bimodal distribution at the end of the learning epoch (van Rossum et al., 2000; Gütig et al., 2003; Gilson et al., 2010); see Figures 3C,B, respectively. We will refer to *almost-additive STDP* in the case where the weight dependence is small, i.e., small values of the parameter μ > 0 of Gütig et al. (2003). Almost-additive STDP can generate effective weight competition including partial stability whenever the weight dependence leads to more depression and/or less potentiation for higher values of the current weight around the homeostatic equilibrium (Gilson et al., 2010). In general, stronger weight dependence implies more stability for both the mean incoming weight and individual weights, whereas competition is more effective for almost-additive STDP.
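
The weight dependence of Gütig et al. (2003) can be sketched as follows, with weights normalized to [0, 1]: potentiation scales as (1 − *J*)^{μ} and depression as *J*^{μ}. The exponent μ is the parameter of that study; the specific values used below are illustrative.

```python
import numpy as np

def scale_ltp(J, mu):
    """Weight-dependent scaling of potentiation: (1 - J)^mu."""
    return (1.0 - J)**mu

def scale_ltd(J, mu, alpha=1.0):
    """Weight-dependent scaling of depression: alpha * J^mu."""
    return alpha * J**mu

J = np.linspace(0.05, 0.95, 19)
# mu -> 0 recovers additive STDP (both scaling factors close to 1 for all
# weights, so the net bias is weak); larger mu pulls every weight toward a
# stable intermediate value, reducing competition.
almost_additive = scale_ltp(J, 0.02) - scale_ltd(J, 0.02)
strongly_dependent = scale_ltp(J, 1.0) - scale_ltd(J, 1.0)
```

For μ = 1 the net bias vanishes at *J* = 0.5 and changes sign around it, which is the stabilizing mechanism; for μ = 0.02 the bias stays small across the whole range, leaving competition between individual weights effective.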

An interesting example illustrates the difference in the weight dynamics when stepping from a single neuron to a recurrently connected network. We consider neurons that are excited by external synaptic inputs with narrow spike-time correlations (i.e., almost-synchronous spiking), as illustrated in Figure 4A. We also assume homogeneous input and recurrent connectivity, which can be partial. We compare the effect of STDP for an input connection from an external input to a given network neuron on the one hand, and a recurrent connection between two neurons on the other hand. In so doing we recall that a positive (resp. negative) value for the convolution in Eq. 9 implies potentiation (depression) of the more correlated input pathways (Gilson et al., 2009a) and of the outgoing recurrent connections for more correlated neuronal groups (Gilson et al., 2009d). Typical correlograms are illustrated in Figures 4B,C for input and recurrent connections, respectively. For the input connection the distribution is clearly non-symmetrical, whereas it is roughly symmetrical for recurrently connected neurons (in a homogeneous network). The shifted STDP curve *W* in Figure 4D depends on the delays ${d}_{ij}^{\text{ax}}$ and ${d}_{ij}^{\text{b}}$, which we assumed to be of the same order as ${d}_{ij}^{\text{den}}$. The correlograms in Figures 4B,C can be evaluated to first order by the functions ψ and ζ in Eq. 8, respectively. In Figure 4E, the theoretical correlogram ψ that corresponds to Figure 4B is shifted to the left by the sum of the delays ${d}_{ij}^{\text{ax}}+{d}_{ij}^{\text{den}}$; it follows that the curve always overlaps with the potentiation side of *W*, irrespective of the axonal/dendritic ratio (assuming ${d}_{ij}^{\text{den}}$ and ${d}_{ij}^{\text{b}}$ to be of the same order).
However, for the recurrent connection in Figure 4F, the delays affect only higher-order approximations of the correlogram ζ in Eq. 8, namely by increasing the spread of the distribution (thin lines); the distribution remains symmetrical and similar to the simulated distribution in Figure 4C, while the convolution with *W* can give a positive or negative value, depending upon the shift of *W*.

The effect of the delay difference Δ*d_{ij}* upon the dynamical evolution of the input and recurrent weights induced by STDP becomes clearer in Figure 5, where the dashed and dashed-dotted curves represent the convolutions in Eq. 9 for the two corresponding correlograms, plotted as a function of Δ*d_{ij}*.

The shape of *W* around 0 is also important in determining the sign of the convolution in Eq. 9: stronger potentiation than depression strengthens self-feedback for correlated groups of neurons and thus favors synchrony, as illustrated in Figures 5A,B. In general this effect is more pronounced when using a discontinuous *W* function, since the discrepancies around *u* = 0 are larger; see Figures 5C,D. In conclusion, a suitable typical choice for *W* involves a longer time constant for depression but higher amplitude for potentiation, which is in agreement with previous experimental results (Bi and Poo, 1998).
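
As a quick arithmetic check of this choice of window shape, the integral $\tilde{W}$ = *A*₊τ₊ − *A*₋τ₋ of an exponential window can still be net-depressing even though the potentiation amplitude is larger, because the depression branch extends over a longer time scale. The numbers below are illustrative assumptions, not fitted values.

```python
# Illustrative exponential-window parameters (assumptions, not fits):
A_plus, tau_plus = 1.0, 0.017    # sharp, strong potentiation
A_minus, tau_minus = 0.6, 0.034  # weaker but longer-lasting depression

# Integral of W over all u for an exponential window:
W_tilde = A_plus * tau_plus - A_minus * tau_minus

# Net depression (W_tilde < 0) despite A_plus > A_minus, because
# tau_minus > tau_plus: the depression lobe has the larger area.
```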

Another difference between plasticity for input and recurrent connections lies in stability issues for the spiking activity: (“soft” or “hard”) bounds on the weights must be chosen such that recurrent feedback does not become too strong (in particular, at the homeostatic equilibrium). Otherwise, potentiation of synapses may lead to an “explosive” spiking behavior, where all neurons saturate at a high firing rate (Morrison et al., 2007).

The analysis presented above does not depend on precise quantitative values, but rather the conclusions depend on qualitative properties, namely the signs of functions within some range. The methods are also valid for non-strictly pairwise STDP. For restrictions on the spike interactions that contribute to STDP (Sjöström et al., 2001; Izhikevich and Desai, 2003; Burkitt et al., 2004), an effective correlogram can be evaluated to be convolved with the STDP window function *W*. When the interaction restrictions do not modify the global shape of the correlograms, the predicted trends for the weight specialization should still hold and the effect of parameters such as dendritic/axonal delays should be similar for more elaborate STDP rules.

In other words, only non-linearities that would significantly change the qualitative properties of the correlograms in Figure 4 are important; for example, those that alter their (non-)symmetrical character. It has been shown that the rate-based contribution for STDP can be significantly affected by the restriction scheme (Izhikevich and Desai, 2003; Burkitt et al., 2004); STDP can then exhibit a BCM-like behavior (Bienenstock et al., 1982) with respect to the input firing rate and lead to depression below a given threshold and potentiation above that threshold.
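This rate dependence can be caricatured by a minimal quadratic drift function (a generic BCM-style sketch; the actual expression derived in the cited papers depends on the interaction scheme):

```python
def bcm_like_drift(rate, theta=10.0):
    """Caricature of BCM-like rate dependence: the weight drift is
    negative (depression) below the threshold theta and positive
    (potentiation) above it (cf. Bienenstock et al., 1982).
    The value theta = 10 Hz is an arbitrary illustration."""
    return rate * (rate - theta)
```

Here `theta` plays the role of the modification threshold; in the full BCM rule the threshold slides with the average postsynaptic activity, which stabilizes the output firing rate.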

Further work is necessary to understand the implications of interaction restrictions for an arbitrary correlation structure. More biophysically accurate plasticity rules that exhibit an STDP profile for spike pairs (Graupner and Brunel, 2007; Zou and Destexhe, 2007) are also expected to exhibit qualitatively similar behavior when stimulated by spike trains with pairwise spike-time correlations. The rule proposed by Appleby and Elliott (2006) is an exception, for which higher-order correlations are necessary to obtain competition. Other synaptic plasticity rules involving a temporal learning window, such as “burst-time-dependent plasticity,” which operates on the longer time scale of a second (Butts et al., 2007), can be analyzed using the same framework, and similar dynamical ingredients are expected to participate in the weight evolution (stabilization and competition).

Finally, we illustrate how the interplay between STDP, connectivity topology, and input correlation *structure* can lead to the emergence of synaptic structure in a recurrently connected network. First we describe how the weight dynamics presented above can shape synaptic pathways using the simple example of narrowly (or delta) correlated inputs. Second we extend the analysis to more elaborate input structures, such as oscillatory spiking activity. We also discuss the link between these results and the processing of spiking information performed by the trained neurons and networks.

We start by focusing on a specific configuration of the external stimulating inputs, viz., two pools of external inputs that can have within-pool, but no between-pool spike-time correlations; see Figure 6, where filled bottom circles indicate input pools with narrowly distributed spike-time correlations, in a similar fashion to Figure 3. Each pool represents a functional pathway and the spiking information is mainly contained in the spike-time correlations between pairs of neurons. This scheme has been used in many studies to examine input selectivity, such as how a neuron can become sensitive to only a portion of its stimulating inputs, hence specializing to a given pathway (Kempter et al., 1999; Song and Abbott, 2001; Gütig et al., 2003).

For a single neuron, input pathways with a narrow spike-time correlation distribution are potentiated by Hebbian STDP (Kempter et al., 1999; Song et al., 2000), as explained in Section “Input Versus Recurrent Connections.” This conclusion still holds when we train input connections for recurrently connected neurons, as illustrated in Figure 6A. When the spike-time correlations have a broader distribution, their widths matter and the more sharply peaked pool is selected (Kistler and van Hemmen, 2000).
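A common way to generate such input pools in simulations (an illustrative construction under assumed parameters, not necessarily the procedure of the cited studies) is to thin a shared “mother” Poisson process, which yields delta-correlated spike trains within each pool:

```python
import numpy as np

def correlated_pool(n_neurons, rate, corr, duration, rng):
    """Poisson spike trains (rate in Hz, duration in s) with pairwise
    within-pool spike-count correlation `corr`: each spike of a common
    'mother' process is copied independently into each train with
    probability corr, so each train remains Poisson with rate `rate`."""
    mother_rate = rate / corr
    n_mother = rng.poisson(mother_rate * duration)
    mother = np.sort(rng.uniform(0.0, duration, n_mother))
    return [mother[rng.random(n_mother) < corr] for _ in range(n_neurons)]

rng = np.random.default_rng(1)
# Two pools: spike-time correlations within each pool, none between pools.
pool_a = correlated_pool(50, rate=10.0, corr=0.2, duration=20.0, rng=rng)
pool_b = correlated_pool(50, rate=10.0, corr=0.2, duration=20.0, rng=rng)
```

Because the copies share exact spike times, the within-pool correlogram is a narrow (delta-like) peak at zero lag, matching the configuration sketched above.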

When two input pathways have similar correlation strengths, additive-like STDP induces sufficient competition to lead to a winner-take-all situation where only one pool is selected (Song and Abbott, 2001; Gütig et al., 2003). When using STDP without the rate-based terms *w*^{in} and *w*^{out}, a stricter condition on the weight dependence has been found to ensure a similar behavior (Meffin et al., 2006). Two STDP modes can then be distinguished depending on the strength of weight dependence: either the competition is sufficiently strong to induce a splitting of the weight distribution (additive-like STDP) or the asymptotic weight distribution remains unimodal (Gütig et al., 2003).
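The two regimes can be written compactly with the weight-dependent scaling of Gütig et al. (2003); the parameter values below are assumptions for illustration:

```python
import numpy as np

def stdp_update(w, delta_t, mu, eta=0.01, a_plus=1.0, a_minus=1.05, tau=20.0):
    """One pair-based update for a weight w in [0, 1] (Gütig et al., 2003).
    delta_t = t_post - t_pre in ms. mu = 0 gives additive STDP (update size
    independent of w: strong competition, bimodal weights); mu = 1 gives
    multiplicative STDP (unimodal weights); intermediate mu interpolates."""
    if delta_t >= 0:   # causal pairing: potentiation, scaled by (1 - w)^mu
        dw = eta * a_plus * (1.0 - w) ** mu * np.exp(-delta_t / tau)
    else:              # anti-causal pairing: depression, scaled by w^mu
        dw = -eta * a_minus * w ** mu * np.exp(delta_t / tau)
    return float(np.clip(w + dw, 0.0, 1.0))
```

For mu = 0 the bounds at 0 and 1 must be enforced explicitly (hard bounds), whereas for mu &gt; 0 the scaling factors vanish at the bounds and act as soft bounds.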

The boundary between the above two classes of behavior also depends on the strength of the input correlation, so there exists a parameter range for which proper weight specialization only occurs when there is spiking information, in the sense of spike-time correlations, for two or more input pools (Meffin et al., 2006; Gilson et al., 2010). During this symmetry breaking of initially homogeneous input connections, recurrent connections may play a role (irrespective of their plasticity), so that recurrently connected neurons with excitatory synapses tend to preferentially specialize to the same input pathway (Gilson et al., 2009b); this effect is more pronounced for stronger recurrent connections. In Figure 6B, only one of the two correlated input pools is selected (with 50% probability for each in the case of two pools). This group specialization is important to obtain consistent input selectivity within areas with strong local feedback, rather than a “salt-and-pepper” organization where neurons would become selective independently of each other.

Specialization within a network with recurrent connections requires that neurons receive different inputs in terms of firing rates and correlations (Gilson et al., 2009d), which can be obtained after the emergence of input selectivity. As mentioned above, different learning parameters can lead to a strengthening or weakening of feedback within neuronal groups when they receive correlated input. This phenomenon is illustrated in Figure 6C by the right and left arrows, which correspond to Figures 5A,B, respectively. In other words, for recurrent delays, a prominent dendritic component favors the emergence of strongly connected neuronal groups, whereas a prominent axonal component leads to the converse evolution. Likewise, parameters that strengthen feedback lead the group receiving the stronger correlated input to dominate the other neurons, which results in the emergence of a feed-forward pathway in an initially homogeneous recurrent network, as illustrated in Figure 6D.

The above conclusions describe conditions on the parameters for which the results presented by Song and Abbott (2001) are valid: the rewiring of recurrent connections corresponds to favoring groups that receive more correlated inputs; cf. Figures 6C,D. In a more realistic network with different populations of neurons, such as one with excitatory and inhibitory connections (Morrison et al., 2007) but with different delay components for distinct sets of connections (e.g., dendritic for short-range connections and axonal for medium-range ones), a combination of synchronization and decorrelation between neurons, depending on their spatial location, may well be obtained.

When input and recurrent connections are both plastic, it is possible to arrive at input selectivity as well as specialization of recurrent connections as described above (Gilson et al., 2010). This requires weight-dependent STDP in order to stabilize the mean weights for both input and recurrent connections, and relates to the fact that all incoming plastic weights compete with each other, irrespective of their input or recurrent nature; firing rates may play a role here too. For additive STDP the learning dynamics causes the sets of input and recurrent weights to diverge from each other (Gilson et al., 2010). Splitting of the weight distributions, however, is impaired for medium weight dependence (Gütig et al., 2003; Gilson et al., 2010). There is thus a trade-off between stability and competition to obtain proper weight specialization in the sense of separating input weights into distinct groups. In Figure 6E, the input structure consisting of the two pools leads the network neurons to organize into two groups in a similar manner to Figure 6B, each specialized to only one pool; in addition, the recurrent connections within each neuronal group are strengthened at the expense of the between-group connections, cf. Figure 6C. An interesting point here is that such self-organization does not require a pre-existing network topology, since STDP alone can cause neurons to separate into two groups and preserve the consistency of the two input pathways. By this we mean that the information of both input pathways is represented by the synaptic structure after learning and processed by the two resulting neuronal groups.

All typical weight evolution scenarios described in this section hold when applying STDP to homogeneous initial weights. The initial weight distribution can also be of importance, at least for additive-like STDP (Gilson et al., 2009a,c); it was observed in numerical simulations that even weak weight dependence can lead to palimpsest behavior, where previous weight specialization is forgotten after some duration of stimulation using uncorrelated inputs (Gilson et al., 2010).

We now consider two types of stimulating input that have been widely used in conjunction with STDP in recurrent networks, viz., pacemaker-like and oscillatory activity. One reason for their success is that these global periodic phenomena (applied to a whole network, not locally) constrain the spike-time correlograms, so a global trend for the weight evolution can be sketched. For two neurons chosen arbitrarily in a network with homogeneous and partial connectivity, such stimulating signals induce strong neuron-to-neuron correlation with peaks at intervals corresponding to the stimulation frequency, as illustrated in Figure 7. We can thus predict the synaptic-weight evolution using the convolution of the STDP learning window *W* and an idealization of such correlograms, in a similar fashion to Figure 5, since the periodicity overpowers other correlation effects in the recurrent network, even for medium coupling between the neurons.

For a global “pacemaker” activity with a low frequency (below the time scale of STDP, say, 10 Hz), the effect of delays is similar to the exposition in Section “Input Versus Recurrent Connections” in that purely dendritic delays lead to an increase of within-group connections (Morrison et al., 2007), whereas purely axonal delays can cause STDP to decouple neurons during population bursts (Lubenov and Siapas, 2008). Comparison between the situations where input/output spike trains are uncorrelated and time-locked (i.e., highly correlated) shows that STDP can behave as a BCM rule (Bienenstock et al., 1982) for a single neuron (Standage et al., 2007). Frequency may play a similar role for oscillations, although this does not appear to have been studied in detail. When the intrinsic properties of neurons subject to oscillations determine a phase response that tends to desynchronize these neurons (e.g., a positive phase-response curve), a network with axonal delays can become partitioned into groups that have no self-feedback but connections between some of them (Câteau et al., 2008); cf. Figure 6C.
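A toy calculation illustrates this delay dependence (the parameter values and the sign convention, with the axonal delay retarding the presynaptic spike and the dendritic delay retarding the postsynaptic spike at the synapse, are assumptions for this sketch):

```python
import numpy as np

# Antisymmetric Hebbian STDP window, u = t_post - t_pre at the synapse (ms).
def stdp_window(u, tau=20.0):
    return np.where(u >= 0, np.exp(-u / tau), -np.exp(u / tau))

u = np.linspace(-100.0, 100.0, 4001)
du = u[1] - u[0]
# Near-synchronous correlogram of the somatic spike times (sigma = 2 ms),
# as during low-frequency pacemaker-like population activity.
sync = np.exp(-u**2 / (2 * 2.0**2))

def drift(d_den, d_ax):
    """Weight drift when the presynaptic spike reaches the synapse after
    the axonal delay d_ax and the postsynaptic spike after the dendritic
    delay d_den, shifting the effective lag to u + d_den - d_ax."""
    return np.sum(stdp_window(u + d_den - d_ax) * sync) * du

dendritic = drift(d_den=3.0, d_ax=0.0)  # synchronous firing falls on the
                                        # potentiation side: coupling grows
axonal = drift(d_den=0.0, d_ax=3.0)     # depression side: neurons decouple
```

Under these assumptions the same near-synchronous activity strengthens the coupling when dendritic delays dominate and weakens it when axonal delays dominate, consistent with the contrast between Morrison et al. (2007) and Lubenov and Siapas (2008).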

In an all-to-all connected network of heterogeneous oscillators, STDP tends to break the coupling between neurons, which can result in asymmetry in the sense of the emergence of feed-forward pathways (Karbowski and Ermentrout, 2002; Masuda and Kori, 2007), in a similar fashion to Figure 6D. When this happens, the neuron with the highest frequency may end up driving the rest of the population of oscillators at its own frequency (Takahashi et al., 2009). The propensity of STDP for such time locking is supported by a study by Nowotny et al. (2003b) using a real neuron and a simulated plastic synapse, which showed that STDP can compensate for intrinsic neuronal mechanisms to enable synchronization with a stimulating pacemaker. UP and DOWN states of network spiking activity consist of depolarization and hyperpolarization, respectively, for a large portion of the neurons; they can be related to two levels of correlation at the scale of the network. A recurrently connected network with spontaneous UP and DOWN states can organize into a more feed-forward structure (Kang et al., 2008). Interestingly, the synaptic structure that emerged preserved the transitions between the two states.

Synchronous firing activity has been discussed as a basis of neuronal information, although a comprehensive understanding of such a mechanism has yet to be achieved. Since STDP is in essence sensitive to temporal coordination in spike trains, its study is an important aspect of such correlation-based neuronal coding. As evidence of its synchronizing properties, STDP has been demonstrated to shape spike-time correlograms both for a single neuron (Song et al., 2000) and within a recurrent network (Morrison et al., 2007), in contrast to other (non-temporally asymmetric) versions of Hebbian learning (Amit and Brunel, 1997). In a population of synapses with varying properties, STDP can perform delay selection (Gerstner et al., 1996; Kempter et al., 2001; Leibold et al., 2002; Senn, 2002). Here we review recent results on the implications at different topological and temporal scales.

At the mesoscopic scale in a recurrent network, the weight specialization described in Figure 6C can be related to the increase or decrease of synchronization when the recurrent delays $d_{ij}^{\text{ax}} + d_{ij}^{\text{den}}$ are small. Depending on the learning parameters, neurons that receive synchronously correlated input tend to reinforce or eliminate their coupling, which then determines the probability of firing at almost coincident times. In this way a number of studies (cited below) have aimed at understanding how the spiking activity of recurrently connected neurons can be constrained by synaptic plasticity. An increase in synchrony arises from the strengthening of within-group recurrent connections when receiving correlated input (Gilson et al., 2009d), cf. Figure 6C. Typical connectivity matrices before and after a learning epoch are represented in Figures 8A,B, respectively. The corresponding spike-time correlograms in the absence of external stimulation (i.e., intrinsic to the recurrent connectivity) show stronger “coincident” firing within a range of several times the postsynaptic response (here tens of milliseconds) for two neurons that do not have direct connections but belong to the same group, as illustrated in Figure 8C. Likewise, when the network receives external stimulation from two correlated input pools as in Figure 6C, the neuronal spike-time correlation is higher for stronger recurrent connections; see Figure 8D. This can also be related to a reduction in the variability of the neural response for a single neuron due to STDP, as analyzed using information theory techniques (Bohte and Mozer, 2007). Global synchrony has been obtained by repeatedly stimulating recurrently connected neurons with given spike trains, which resulted in the network behaving as a pacemaker.
This evolution of network structure is also related to the concept of synfire chains, where neuronal groups successively activate one another within a feed-forward architecture (Hosaka et al., 2008); see Figures 8E,F for an illustrative example with three groups. Similarly, the repeated stimulation of a group of neurons can lead to a synfire chain structure in an initially homogeneous recurrent network, provided the divergent growth of outgoing connections due to potentiation by STDP is prevented from taking over the whole network (Jun and Jin, 2007). In contrast, a population of neurons can become decorrelated through synchronous stimulation, as happens when the neurons involved in the burst are not the same for each burst. Then no causal (feed-forward) network structure emerges, since synchronization does not involve a growing population of neurons (Lubenov and Siapas, 2008); cf. Figure 6C for different neuronal subgroups of the network at each burst time. In agreement with the theory in Section “Input Versus Recurrent Connections,” the time constants of the STDP learning window can also determine whether the network is constrained to synchronized (e.g., successive firing of several groups) or asynchronous activity (Kitano et al., 2002).

In the case of inhomogeneous delays in a recurrent network, STDP can lead to distributed synchronization over time for neuronal groups at the microscopic scale (Izhikevich, 2006). This concept, termed polychronization, refers to neuronal groups whose firing is time locked in accordance with the synaptic delays of the connections between these neurons. In a network, such functional groups of, say, tens of neurons then fire sequences of spikes; note that a neuron can take part in several groups and the synaptic connections for a given group may or may not be cyclic. A group is tagged as active when a large portion (e.g., 50%) of its members fires with the corresponding timing. Such self-organization can occur even without external stimulation, but then one crucial feature in obtaining stable functional groups that persist over time is the degree of coupling between individual neurons. Such spike-time precision can also be obtained in parallel with oscillatory spiking activity at the scale of a population of neurons, hence providing two different levels of synchrony (Shen et al., 2008).

The same dynamical ingredients highlighted above have been used to train single neurons and networks for classification and/or detection tasks, many of which involved more elaborate input spike trains than the pools of narrowly correlated spikes considered above (although most of these studies were done using numerical simulations). We briefly review some of these studies as an illustration of applications of the theoretical framework presented above.

Note that learning based on the covariance between firing rates has been used to extract the most significant features (in the sense of a principal component analysis) within input stimuli (Sejnowski, 1977; Oja, 1982). In the case of STDP, such features mainly relate to spike-time correlations (van Rossum and Turrigiano, 2001; Gilson et al., 2009a,d). When input spike trains have reliable spike times down to the scale of a millisecond, they convey temporal information that can be picked up by a suitable temporal plasticity rule (Delorme et al., 2001). It has been demonstrated that STDP can train a single neuron to detect a given spike pattern with no specific structure once the pattern is repeatedly presented among noisy spike trains that have similar firing rates (Masquelier et al., 2008). This propensity of STDP to capture spiking information and generate proper input selectivity can explain the storage of sequences of spikes and their retrieval using cues (namely the start of the sequences) in a network with (all-to-all) plastic recurrent connections (Nowotny et al., 2003a). Similarly, patterns relying on oscillatory spiking activity have also been successfully learnt in a recurrent network of oscillatory neurons (Marinaro et al., 2007). STDP can also be used for phase coding in networks with oscillatory activity (Lengyel et al., 2005; Masquelier et al., 2009). Such theoretical studies are further steps toward a better understanding of recent experimental findings on neurons in the auditory pathway known to experience STDP: these neurons can change their spectral responses after receiving stimulation using combinations of their preferred/non-preferred frequencies (Dahmen et al., 2008).

Self-organizing neural maps provide an interesting example of how networks can build a representation of many input stimuli, though they need not always rely on neuronal characteristics (Kohonen, 1982). STDP has been shown capable of generating such a topological unsupervised organization in a recurrent neuronal network with spatial extension; for example, to detect interaural time differences (ITDs) (Leibold et al., 2002) and to reproduce an orientation map similar to that observed in the visual cortex (Wenisch et al., 2005). Training lateral (internal recurrent) connections crucially determines the shape of orientation fields in such maps (Bartsch and van Hemmen, 2001). When several sensory neuronal maps have been established, STDP can further learn mappings between these maps (Davison and Frégnac, 2006; Friedel and van Hemmen, 2008), in this way performing multimodal integration of sensory stimuli.

Another abstract approach to learning and detecting general spiking signals has appeared recently, one that does not rely on an emerging topological organization. For instance, in the liquid-state machine a recurrently connected network behaves as a reservoir that performs many arbitrary operations on the inputs, which allows simple supervised training to discriminate between different classes of input (Maass et al., 2002). Recent studies have shown that STDP applied to the recurrent network can boost the detection performance of such a system by tuning the operations performed by the reservoir, which can be seen as a projection of the input signals onto a high-dimensional space (Henry et al., 2007; Carnell, 2009; Lazar et al., 2009). The resulting information encoding is then distributed, but hidden, in the learned synaptic structure, which can be analyzed in the spiking activity at a fine time scale, e.g., by polychronized groups (Paugam-Moisy et al., 2008). Altogether, STDP is capable of organizing a recurrent neuronal network to exhibit specific spiking activity depending on the presentation of input stimuli, as illustrated in Figure 9.

Finally, a number of studies have focused on linking STDP to more abstract schemes for processing neuronal (spiking) information. A plasticity rule with probabilistic changes to the weights has been found to modulate the speed of learning/forgetting (Fusi, 2002). A similar concept of non-deterministic modification of the weight strength for STDP proved fruitful in capturing multispike correlations between input and output spike trains (Appleby and Elliott, 2007). STDP has also been demonstrated to be capable of training a single neuron to perform a broad range of input–output mapping operations on spike trains (Legenstein et al., 2005). Recently, STDP has been used to perform an independent component analysis on input signals that mimic retinal influx (Clopath et al., 2010). Using information theory, STDP has been related to optimality in supervised and unsupervised learning (Toyoizumi et al., 2005, 2007; Pfister et al., 2006; Bohte and Mozer, 2007). These contributions are important steps toward a global picture of the functional implications of STDP at a higher level of abstraction.

Spike-timing-dependent plasticity has led to a re-evaluation of our understanding of Hebbian learning, in particular by discriminating between rate-based and spike-based contributions to synaptic plasticity, for which temporal causality plays a crucial role. The resulting learning dynamics appears richer than what can be obtained with rate-based plasticity rules, in the sense that STDP alone can generate a mixture of stability and competition on different time scales. Since neurons communicate through spikes and not rates, a mechanism such as STDP is quite natural, whereas rates are an afterthought.

In a recurrently connected neuronal network, the weight evolution is determined by an interplay between the STDP parameters, neuronal properties, input correlation structure and network topology. The functional implications of the resulting organization, which can be unsupervised or supervised, have been the subject of intense research recently. For both single neurons and recurrent networks, it has been demonstrated how STDP can generate a network structure that accurately reflects the synaptic input representation for a broad range of stimuli, which can lead to neuronal sensory maps or implicit representation in networks. In particular, the study of the emerging (pairwise or higher-order) correlation structure has started to uncover some interesting properties of trained networks that are hypothesized to play an important role in information encoding schemes.

It is not yet clear, however, what underlying algorithm is performed on the stimulus signals through the weight dynamics, and how STDP encodes the input structure into the synaptic weights. This research may establish links between physiological learning mechanisms and the more abstract domain of machine learning, hence expanding our understanding of the functional role of synaptic plasticity in the brain.

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Matthieu Gilson was funded by scholarships from the University of Melbourne and NICTA. J. Leo van Hemmen is partially supported by the BCCN–Munich. Funding is acknowledged from the Australian Research Council (ARC Discovery Projects #DP0771815 and #DP1096699). The Bionic Ear Institute acknowledges the support it receives from the Victorian Government through its Operational Infrastructure Support Program.

- Amit D. J., Brunel N. (1997). Dynamics of a recurrent network of spiking neurons before and following learning. Netw. Comput. Neural Syst. 8, 373–404. doi: 10.1088/0954-898X/8/4/003
- Appleby P. A., Elliott T. (2006). Stable competitive dynamics emerge from multispike interactions in a stochastic model of spike-timing-dependent plasticity. Neural Comput. 18, 2414–2464. doi: 10.1162/neco.2006.18.10.2414
- Appleby P. A., Elliott T. (2007). Multispike interactions in a stochastic model of spike-timing-dependent plasticity. Neural Comput. 19, 1362–1399. doi: 10.1162/neco.2007.19.5.1362
- Bartsch A. P., van Hemmen J. L. (2001). Combined Hebbian development of geniculocortical and lateral connectivity in a model of primary visual cortex. Biol. Cybern. 84, 41–55. doi: 10.1007/s004220170003
- Bi G. Q., Poo M. M. (1998). Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci. 18, 10464–10472
- Bienenstock E. L., Cooper L. N., Munro P. W. (1982). Theory for the development of neuron selectivity – orientation specificity and binocular interaction in visual-cortex. J. Neurosci. 2, 32–48
- Bohte S. M., Mozer M. C. (2007). Reducing the variability of neural responses: a computational theory of spike-timing-dependent plasticity. Neural Comput. 19, 371–403. doi: 10.1162/neco.2007.19.2.371
- Burkitt A. N., Gilson M., van Hemmen J. L. (2007). Spike-timing-dependent plasticity for neurons with recurrent connections. Biol. Cybern. 96, 533–546. doi: 10.1007/s00422-007-0148-2
- Burkitt A. N., Meffin H., Grayden D. B. (2004). Spike-timing-dependent plasticity: the relationship to rate-based learning for models with weight dynamics determined by a stable fixed point. Neural Comput. 16, 885–940. doi: 10.1162/089976604773135041
- Butts D. A., Kanold P. O., Shatz C. J. (2007). A burst-based “Hebbian” learning rule at retinogeniculate synapses links retinal waves to activity-dependent refinement. PLoS Biol. 5, e61. doi: 10.1371/journal.pbio.0050061
- Caporale N., Dan Y. (2008). Spike timing-dependent plasticity: a Hebbian learning rule. Annu. Rev. Neurosci. 31, 25–46. doi: 10.1146/annurev.neuro.31.060407.125639
- Carnell A. (2009). An analysis of the use of Hebbian and anti-Hebbian spike time dependent plasticity learning functions within the context of recurrent spiking neural networks. Neurocomputing 72, 685–692. doi: 10.1016/j.neucom.2008.07.012
- Carr C. E., Konishi M. (1990). A circuit for detection of interaural time differences in the brain-stem of the barn owl. J. Neurosci. 10, 3227–3246
- Câteau H., Kitano K., Fukai T. (2008). Interplay between a phase response curve and spike-timing-dependent plasticity leading to wireless clustering. Phys. Rev. E 77, 051909. doi: 10.1103/PhysRevE.77.051909
- Clopath C., Büsing L., Vasilaki E., Gerstner W. (2010). Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nat. Neurosci. 13, 344–352. doi: 10.1038/nn.2479
- Dahmen J. C., Hartley D. E., King A. J. (2008). Stimulus-timing-dependent plasticity of cortical frequency representation. J. Neurosci. 28, 13629–13639. doi: 10.1523/JNEUROSCI.4429-08.2008
- Dan Y., Poo M. M. (2006). Spike timing-dependent plasticity: from synapse to perception. Physiol. Rev. 86, 1033–1048. doi: 10.1152/physrev.00030.2005
- Davison A. P., Frégnac Y. (2006). Learning cross-modal spatial transformations through spike timing-dependent plasticity. J. Neurosci. 26, 5604–5615. doi: 10.1523/JNEUROSCI.5263-05.2006
- Delorme A., Perrinet L., Thorpe S. J. (2001). Networks of integrate-and-fire neurons using rank order coding B: spike timing dependent plasticity and emergence of orientation selectivity. Neurocomputing 38, 539–545. doi: 10.1016/S0925-2312(01)00403-9
- Friedel P., van Hemmen J. L. (2008). Inhibition, not excitation, is the key to multimodal sensory integration. Biol. Cybern. 98, 597–618. doi: 10.1007/s00422-008-0236-y
- Froemke R. C., Dan Y. (2002). Spike-timing-dependent synaptic modification induced by natural spike trains. Nature 416, 433–438. doi: 10.1038/416433a
- Froemke R. C., Poo M. M., Dan Y. (2005). Spike-timing-dependent synaptic plasticity depends on dendritic location. Nature 434, 221–225. doi: 10.1038/nature03366
- Froemke R. C., Tsay I. A., Raad M., Long J. D., Dan Y. (2006). Contribution of individual spikes in burst-induced long-term synaptic modification. J. Neurophysiol. 95, 1620–1629. doi: 10.1152/jn.00910.2005
- Fusi S. (2002). Hebbian spike-driven synaptic plasticity for learning patterns of mean firing rates. Biol. Cybern. 87, 459–470. doi: 10.1007/s00422-002-0356-8
- Gerstner W., Kempter R., van Hemmen J. L., Wagner H. (1996). A neuronal learning rule for sub-millisecond temporal coding. Nature 383, 76–78. doi: 10.1038/383076a0
- Gerstner W., Kistler W. M. (2002). Mathematical formulations of Hebbian learning. Biol. Cybern. 87, 404–415. doi: 10.1007/s00422-002-0353-y
- Gerstner W., Ritz R., van Hemmen J. L. (1993). Why spikes? Hebbian learning and retrieval of time-resolved excitation patterns. Biol. Cybern. 69, 503–515
- Gilson M., Burkitt A. N., Grayden D. B., Thomas D. A., van Hemmen J. L. (2009a). Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks. I: input selectivity – strengthening correlated input pathways. Biol. Cybern. 101, 81–102. doi: 10.1007/s00422-009-0319-4
- Gilson M., Burkitt A. N., Grayden D. B., Thomas D. A., van Hemmen J. L. (2009b). Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks. II: input selectivity – symmetry breaking. Biol. Cybern. 101, 103–114. doi: 10.1007/s00422-009-0320-y
- Gilson M., Burkitt A. N., Grayden D. B., Thomas D. A., van Hemmen J. L. (2009c). Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks. III: partially connected neurons driven by spontaneous activity. Biol. Cybern. 101, 411–426. doi: 10.1007/s00422-009-0343-4
- Gilson M., Burkitt A. N., Grayden D. B., Thomas D. A., van Hemmen J. L. (2009d). Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks. IV: structuring synaptic pathways among recurrent connections. Biol. Cybern. 101, 427–444. doi: 10.1007/s00422-009-0346-1
- Gilson M., Burkitt A. N., Grayden D. B., Thomas D. A., van Hemmen J. L. (2010). Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks. V: self-organization schemes and weight dependence. Biol. Cybern. (accepted).
- Graupner M., Brunel N. (2007). STDP in a bistable synapse model based on CaMKII and associated signaling pathways. PLoS Comput. Biol. 3, 2299–2323. doi: 10.1371/journal.pcbi.0030221
- Gütig R., Aharonov R., Rotter S., Sompolinsky H. (2003). Learning input correlations through nonlinear temporally asymmetric Hebbian plasticity. J. Neurosci. 23, 3697–3714
- Hartley M., Taylor N., Taylor J. (2006). Understanding spike-time-dependent plasticity: a biologically motivated computational model. Neurocomputing 69, 2005–2016. doi: 10.1016/j.neucom.2005.11.021
- Hebb D. O. (1949). The Organization of Behavior: A Neuropsychological Theory. New York, NY: Wiley
- Henry F., Dauce E., Soula H. (2007). Temporal pattern identification using spike-timing dependent plasticity. Neurocomputing 70, 2009–2016. doi: 10.1016/j.neucom.2006.10.082
- Hosaka R., Araki O., Ikeguchi T. (2008). STDP provides the substrate for igniting synfire chains by spatiotemporal input patterns. Neural Comput. 20, 415–435. doi: 10.1162/neco.2007.11-05-043
- Izhikevich E. M. (2006). Polychronization: computation with spikes. Neural Comput. 18, 245–282. doi: 10.1162/089976606775093882
- Izhikevich E. M., Desai N. S. (2003). Relating STDP to BCM. Neural Comput. 15, 1511–1523. doi: 10.1162/089976603321891783
- Jun J. K., Jin D. Z. (2007). Development of neural circuitry for precise temporal sequences through spontaneous activity, axon remodeling, and synaptic plasticity. PLoS ONE 2, e723. doi: 10.1371/journal.pone.0000723
- Kang S., Isomura Y., Takekawa T., Câteau H., Fukai T. (2008). Multi-neuronal dynamics in rat neocortex and hippocampus during natural sleep. Neurosci. Res. 61, S189
- Karbowski J., Ermentrout G. B. (2002). Synchrony arising from a balanced synaptic plasticity in a network of heterogeneous neural oscillators. Phys. Rev. E 65, 031902. doi: 10.1103/PhysRevE.65.031902
- Kempter R., Gerstner W., van Hemmen J. L. (1999). Hebbian learning and spiking neurons. Phys. Rev. E 59, 4498–451410.1103/PhysRevE.59.4498 [Cross Ref]
- Kempter R., Gerstner W., van Hemmen J. L. (2001). Intrinsic stabilization of output rates by spike-based Hebbian learning. Neural Comput. 13, 2709–274110.1162/089976601317098501 [PubMed] [Cross Ref]
- Kistler W. M., van Hemmen J. L. (2000). Modeling synaptic plasticity in conjunction with the timing of pre- and postsynaptic action potentials. Neural Comput. 12, 385–40510.1162/089976600300015844 [PubMed] [Cross Ref]
- Kitano K., Câteau H., Fukai T. (2002). Self-organization of memory activity through spike-timing-dependent plasticity. Neuroreport 13, 795–79810.1097/00001756-200205070-00012 [PubMed] [Cross Ref]
- Kohonen T. (1982). Self-organized formation of topologically correct feature maps. Biol. Cybern. 43, 59–6910.1007/BF00337288 [Cross Ref]
- Konishi M. (1993). Listening with two ears. Sci. Am. 268, 66–7310.1038/scientificamerican0493-66 [PubMed] [Cross Ref]
- Krumin M., Shoham S. (2009). Generation of spike trains with controlled auto- and cross-correlation functions. Neural Comput. 21, 1642–166410.1162/neco.2009.08-08-847 [PubMed] [Cross Ref]
- Lazar A., Pipa G., Triesch J. (2009). SORN: a self-organizing recurrent neural network. Front. Comput. Neurosci. 3:23.10.3389/neuro.10.023.2009 [PMC free article] [PubMed] [Cross Ref]
- Legenstein R., Naeger C., Maass W. (2005). What can a neuron learn with spike-timing-dependent plasticity? Neural Comput. 17, 2337–238210.1162/0899766054796888 [PubMed] [Cross Ref]
- Leibold C., Kempter R., van Hemmen J. L. (2002). How spiking neurons give rise to a temporal-feature map: from synaptic plasticity to axonal selection. Phys. Rev. E 65, 051915.10.1103/PhysRevE.65.051915 [PubMed] [Cross Ref]
- Lengyel M., Kwag J., Paulsen O., Dayan P. (2005). Matching storage and recall: hippocampal spike timing-dependent plasticity and phase response curves. Nat. Neurosci. 8, 1677–168310.1038/nn1561 [PubMed] [Cross Ref]
- Levy W. B., Steward O. (1983). Temporal contiguity requirements for long-term associative potentiation depression in the hippocampus. Neuroscience 8, 791–79710.1016/0306-4522(83)90010-6 [PubMed] [Cross Ref]
- Lubenov E. V., Siapas A. G. (2008). Decoupling through synchrony in neuronal circuits with propagation delays. Neuron 58, 118–13110.1016/j.neuron.2008.01.036 [PubMed] [Cross Ref]
- Maass W., Natschlager T., Markram H. (2002). Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Comput. 14, 2531–256010.1162/089976602760407955 [PubMed] [Cross Ref]
- Macke J. H., Berens P., Ecker A. S., Tolias A. S., Bethge M. (2009). Generating spike trains with specified correlation coefficients. Neural Comput. 21, 397–42310.1162/neco.2008.02-08-713 [PubMed] [Cross Ref]
- Marinaro M., Scarpetta S., Yoshioka M. (2007). Learning of oscillatory correlated patterns in a cortical network by a STDP-based learning rule. Math. Biosci. 207, 322–33510.1016/j.mbs.2006.10.001 [PubMed] [Cross Ref]
- Markram H., Lübke J., Frotscher M., Sakmann B. (1997). Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science 275, 213–21510.1126/science.275.5297.213 [PubMed] [Cross Ref]
- Masquelier T., Guyonneau R., Thorpe S. J. (2008). Spike timing dependent plasticity finds the start of repeating patterns in continuous spike trains. PLoS ONE 3, e1377.10.1371/journal.pone.0001377 [PMC free article] [PubMed] [Cross Ref]
- Masquelier T., Hugues E., Deco G., Thorpe S. J. (2009). Oscillations, phase-of-firing coding, and spike-timing-dependent plasticity: an efficient learning scheme. J. Neurosci. 29, 13484–1349310.1523/JNEUROSCI.2207-09.2009 [PubMed] [Cross Ref]
- Masuda N., Kori H. (2007). Formation of feedforward networks and frequency synchrony by spike-timing-dependent plasticity. J. Comput. Neurosci. 22, 327–34510.1007/s10827-007-0022-1 [PubMed] [Cross Ref]
- Meffin H., Besson J., Burkitt A. N., Grayden D. B. (2006). Learning the structure of correlated synaptic subgroups using stable and competitive spike-timing-dependent plasticity. Phys. Rev. E 73, 041911.10.1103/PhysRevE.73.041911 [PubMed] [Cross Ref]
- Moreno-Bote R., Renart A., Parga N. (2008). Theory of input spike auto-and cross-correlations and their effect on the response of spiking neurons. Neural Comput. 20, 1651–170510.1162/neco.2008.03-07-497 [PubMed] [Cross Ref]
- Morrison A., Aertsen A., Diesmann M. (2007). Spike-timing-dependent plasticity in balanced random networks. Neural Comput. 19, 1437–146710.1162/neco.2007.19.6.1437 [PubMed] [Cross Ref]
- Morrison A., Diesmann M., Gerstner W. (2008). Phenomenological models of synaptic plasticity based on spike timing. Biol. Cybern. 98, 459–47810.1007/s00422-008-0233-1 [PMC free article] [PubMed] [Cross Ref]
- Nowotny T., Rabinovich M. I., Abarbanel H. D. I. (2003a). Spatial representation of temporal information through spike-timing-dependent plasticity. Phys. Rev. E 68, 011908.10.1103/PhysRevE.68.011908 [PubMed] [Cross Ref]
- Nowotny T., Zhigulin V. P., Selverston A. I., Abarbanel H. D. I., Rabinovich M. I. (2003b). Enhancement of synchronization in a hybrid neural circuit by spike-timing dependent plasticity. J. Neurosci. 23, 9776–9785 [PubMed]
- Oja E. (1982). A simplified neuron model as a principal component analyzer. J. Math. Biol. 15, 267–27310.1007/BF00275687 [PubMed] [Cross Ref]
- Paugam-Moisy H., Martinez R., Bengio S. (2008). Delay learning and polychronization for reservoir computing. Neurocomputing 71, 1143–115810.1016/j.neucom.2007.12.027 [Cross Ref]
- Pfister J.-P., Gerstner W. (2006). Triplets of spikes in a model of spike timing-dependent plasticity. J. Neurosci. 26, 9673–968210.1523/JNEUROSCI.1425-06.2006 [PubMed] [Cross Ref]
- Pfister J.-P., Toyoizumi T., Barber D., Gerstner W. (2006). Optimal spike-timing-dependent plasticity for precise action potential firing in supervised learning. Neural Comput. 18, 1318–134810.1162/neco.2006.18.6.1318 [PubMed] [Cross Ref]
- Sejnowski T. J. (1977). Storing covariance with nonlinearly interacting neurons. J. Math. Biol. 4, 303–32110.1007/BF00275079 [PubMed] [Cross Ref]
- Senn W. (2002). Beyond spike timing: the role of nonlinear plasticity and unreliable synapses. Biol. Cybern. 87, 344–35510.1007/s00422-002-0350-1 [PubMed] [Cross Ref]
- Shen X., Lin X. B., De Wilde P. (2008). Oscillations and spiking pairs: behavior of a neuronal model with STDP learning. Neural Comput. 20, 2037–206910.1162/neco.2008.08-06-317 [PubMed] [Cross Ref]
- Sjöström P. J., Turrigiano G. G., Nelson S. B. (2001). Rate, timing, and cooperativity jointly determine cortical synaptic plasticity. Neuron 32, 1149–116410.1016/S0896-6273(01)00542-6 [PubMed] [Cross Ref]
- Song S., Abbott L. F. (2001). Cortical development and remapping through spike timing-dependent plasticity. Neuron 32, 339–35010.1016/S0896-6273(01)00451-2 [PubMed] [Cross Ref]
- Song S., Miller K. D., Abbott L. F. (2000). Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat. Neurosci. 3, 919–92610.1038/78829 [PubMed] [Cross Ref]
- Sprekeler H., Michaelis C., Wiskott L. (2007). Slowness: an objective for spike-timing-dependent plasticity? PLoS Comput. Biol. 3, 1136–114810.1371/journal.pcbi.0030112 [PMC free article] [PubMed] [Cross Ref]
- Standage D., Jalil S., Trappenberg T. (2007). Computational consequences of experimentally derived spike-time and weight dependent plasticity rules. Biol. Cybern. 96, 615–62310.1007/s00422-007-0152-6 [PubMed] [Cross Ref]
- Staude B., Rotter S., Grun S. (2008). Can spike coordination be differentiated from rate covariation? Neural Comput. 20, 1973–199910.1162/neco.2008.06-07-550 [PubMed] [Cross Ref]
- Stuart G., Spruston N., Sakmann B., Häusser M. (1997). Action potential initiation and backpropagation in neurons of the mammalian CNS. Trends Neurosci. 20, 125–13110.1016/S0166-2236(96)10075-8 [PubMed] [Cross Ref]
- Takahashi Y. K., Kori H., Masuda N. (2009). Self-organization of feed-forward structure and entrainment in excitatory neural networks with spike-timing-dependent plasticity. Phys. Rev. E 79, 051904.10.1103/PhysRevE.79.051904 [PubMed] [Cross Ref]
- Toyoizumi T., Pfister J.-P., Aihara K., Gerstner W. (2005). Generalized Bienenstock–Cooper–Munro rule for spiking neurons that maximizes information transmission. Proc. Natl. Acad. Sci. U.S.A. 102, 5239–524410.1073/pnas.0500495102 [PubMed] [Cross Ref]
- Toyoizumi T., Pfister J.-P., Aihara K., Gerstner W. (2007). Optimality model of unsupervised spike-timing-dependent plasticity: synaptic memory and weight distribution. Neural Comput. 19, 639–67110.1162/neco.2007.19.3.639 [PubMed] [Cross Ref]
- Turrigiano G. G. (2008). The self-tuning neuron: synaptic scaling of excitatory synapses. Cell 135, 422–435 [PMC free article] [PubMed]
- Tzounopoulos T., Kim Y., Oertel D., Trussell L. O. (2004). Cell-specific, spike timing-dependent plasticities in the dorsal cochlear nucleus. Nat. Neurosci. 7, 719–72510.1038/nn1272 [PubMed] [Cross Ref]
- van Hemmen J. L. (2001). “Theory of synaptic plasticity (Chapter 18),” in Handbook of Biological Physics, Vol. 4,
*Neuro-informatics and Neural Modelling*, eds Moss F., Gielen S., editors. (Amsterdam: Elsevier; ), 771–823 - van Rossum M. C. W., Bi G. Q., Turrigiano G. G. (2000). Stable Hebbian learning from spike timing-dependent plasticity. J. Neurosci. 20, 8812–8821 [PubMed]
- van Rossum M. C. W., Turrigiano G. G. (2001). Correlation based learning from spike timing dependent plasticity. Neurocomputing 38, 409–41510.1016/S0925-2312(01)00360-5 [Cross Ref]
- Wenisch O. G., Noll J., van Hemmen J. L. (2005). Spontaneously emerging direction selectivity maps in visual cortex through STDP. Biol. Cybern. 93, 239–24710.1007/s00422-005-0006-z [PubMed] [Cross Ref]
- Woodin M. A., Ganguly K., Poo M. M. (2003). Coincident pre- and postsynaptic activity modifies GABAergic synapses by postsynaptic changes in Cl
^{−}transporter activity. Neuron 39, 807–82010.1016/S0896-6273(03)00507-5 [PubMed] [Cross Ref] - Zou Q., Destexhe A. (2007). Kinetic models of spike-timing dependent plasticity and their functional consequences in detecting correlations. Biol. Cybern. 97, 81–9710.1007/s00422-007-0155-3 [PubMed] [Cross Ref]
