


Front Comput Neurosci. 2010; 4: 134.

PMCID: PMC2998049

Christoph Kolodziejski,^{1,}^{2,}^{3,}^{*} Christian Tetzlaff,^{1,}^{2,}^{3} and Florentin Wörgötter^{1,}^{2}

Edited by: Wulfram Gerstner, Ecole Polytechnique Fédérale de Lausanne, Switzerland

Reviewed by: Anthony Burkitt, University of Melbourne, Australia; Jean-Pascal Pfister, University of Cambridge, UK

*Correspondence: Christoph Kolodziejski, Network Dynamics Group, Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany. e-mail: kolo@bccn-goettingen.de

Received 2010 February 23; Accepted 2010 August 23.

Copyright © 2010 Kolodziejski, Tetzlaff and Wörgötter.

This is an open-access article subject to an exclusive license agreement between the authors and the Frontiers Research Foundation, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are credited.


Network activity and network connectivity mutually influence each other. Especially for fast processes like spike-timing-dependent plasticity (STDP), which depends on the interaction of only a few (two) signals, the question arises how these interactions continuously alter the behavior and structure of the network. To address this question a time-continuous treatment of plasticity is required. However, this is currently not possible, even in simple recurrent network structures. Thus, here we develop a method for a linear differential Hebbian learning system by which we can analytically investigate the dynamics and stability of the connections in recurrent networks. We use noisy periodic external input signals, which through the recurrent connections lead to complex actual ongoing inputs, and observe that large stable ranges emerge in these networks without boundaries or weight-normalization. Somewhat counter-intuitively, we find that about 40% of these cases are obtained with a long-term potentiation-dominated STDP curve. Noise can reduce stability in some cases, but generally this does not occur; instead, stable domains are often enlarged. This study is a first step toward a better understanding of the ongoing interactions between activity and plasticity in recurrent networks using STDP. The results suggest that stability of (sub-)networks should generically also be present in larger structures.

At the network level, recent theoretical advances have made it possible to analytically calculate the activity patterns for certain types of simplified neuron models as long as one keeps the synaptic connectivity fixed (Memmesheimer and Timme, 2006a,b). Concerning plasticity, the situation is reversed. When fixing the activity pattern, or rather, when considering long-term activity averages, it is possible for many plasticity rules to calculate the overall development of synaptic weights. Older results exist which provide specific solutions for estimating synaptic strengths in certain types of networks and learning rules, all of which, however, need to constrain the structure or dynamics of the system in different ways (Hopfield, 1982; Miller and MacKay, 1994; Roberts, 2000; van Rossum et al., 2000; Kempter et al., 2001; Burkitt et al., 2007; Kolodziejski et al., 2008), and many of them do not resemble spike-timing-dependent plasticity (STDP; Magee and Johnston, 1997; Markram et al., 1997). Concerning STDP, in large networks one finds that over longer times the distribution of synaptic efficacies usually develops into either a unimodal (van Rossum et al., 2000; Gütig et al., 2003) or a bimodal (Song et al., 2000; Izhikevich et al., 2004) shape. These results rely mainly on simulations (Song et al., 2000; van Rossum et al., 2000) or mean-field approximations (van Rossum et al., 2000; Gütig et al., 2003). Additionally, in order to keep the weights within a given range, soft (van Rossum et al., 2000; Morrison et al., 2007) or hard (Song et al., 2000; Izhikevich et al., 2004; Burkitt et al., 2007; Lubenov and Siapas, 2008; Gilson et al., 2009; Clopath et al., 2010) boundaries have to be introduced.

All these observations pose a problem, because STDP can generically lead to the situation that *individual* pre- and post-synaptic spike pairs influence the synaptic strength of their connection and thereby, in a recurrent way, immediately also the activity in the network. Thus the question arises whether such fast synaptic changes carry any functional significance or whether they are averaged out by the ongoing network activity. At the moment this issue remains open, because resolving it would require measuring or calculating the step-by-step changes of synaptic connectivity together with the activity for a large number of neurons, ideally for every spike pair.

Experimentally this is currently impossible, and so far no theoretical tools (apart from simulations) are available to address the issue of ongoing mutual interactions between activity and plasticity in networks. Analytical methods with which the dynamics of synaptic connections under STDP could be predicted would be particularly helpful here.

As discussed above, this is typically investigated with mean-field methods (van Rossum et al., 2000; Gütig et al., 2003) that focus on average values. Here, however, we would like to examine the fine temporal dynamics of each weight as, fundamentally, the concept of STDP needs to rely on temporally local (pulse–pulse) interactions. Thus, in the current study we start addressing the issue of temporal locality by providing a general analytical framework for calculating plasticity in a time-continuous way in networks with an arbitrary number of synapses using Hebbian plasticity rules. These are, for example, plain Hebbian or differential Hebbian learning, where the latter is known to have properties similar to STDP (Roberts, 1999; Wörgötter and Porr, 2004).

Naturally, the complexity of the solution is high and, while it could be used to investigate the interplay of activity and plasticity in any network topology, we will here first look at three still rather simple cases of recurrent networks, chosen because they can be considered fundamental network building blocks. This allows us to arrive at several interesting observations which may be the starting point for the investigation of more complex topologies, of which we will give a brief example at the end.

For example, we find that in the investigated simple recurrent networks many fixed points exist where synapses stabilize in spite of the recurrently arriving inputs, and that boundaries (soft or hard) are not required. More intriguingly, this can lead to the situation that such a network stabilizes better when there is an asymmetric STDP curve in which the long-term potentiation (LTP) part dominates.

We will address these topics by organizing the paper in the following way. In Section “Materials and Methods” we define the system we are going to solve analytically, depict the three different network structures the analytical solution is applied to, and describe the specifics of the recurrent connections. In Section “Results” we provide the analytical solution of our general Hebbian system and verify several useful approximations. Next we use our solution to calculate all configurations in the investigated network structures that lead to non-divergent weights. Here, we describe the results under different conditions, i.e., with and without noise and with an asymmetric STDP rule. Finally, in Section “Discussion”, we put the results into a broader context. Detailed calculations can be found in Section “Appendix”.

The general one-neuron system used as the basis of this study consists of *N* synapses with strengths ω_{i} that receive input from neurons *i* with continuous values *x*_{i}. Each input produces an excitatory post-synaptic potential (EPSP), which is modeled by filter functions *h*_{i} (see Figure 1A for an example). The output of the neuron is, thus:

$$v(t)={\displaystyle \sum _{i=1}^{N}\left({x}_{i}\ast {h}_{i}\right)(t)\cdot {\omega}_{i}(t)}$$

(1)

where $(\xi \ast \eta )(t)={\int}_{0}^{\infty}\xi (\tau )\eta (t-\tau )d\tau $ describes a convolution.
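As an illustration, Eq. 1 together with the convolution above can be sketched in discrete time (a minimal sketch; the function names, the sampling grid, and the use of NumPy are our own choices, and the double-exponential kernel anticipates the filter introduced later in Eq. 8):

```python
import numpy as np

def epsp_filter(t, alpha=0.009, beta=0.0099, sigma=0.029):
    """Double-exponential EPSP kernel, h(t) = (e^{-alpha t} - e^{-beta t})/sigma
    for t >= 0 and 0 otherwise (the form of Eq. 8)."""
    return np.where(t >= 0, (np.exp(-alpha * t) - np.exp(-beta * t)) / sigma, 0.0)

def neuron_output(spike_trains, weights, h):
    """Discrete-time version of Eq. 1: v(t) = sum_i (x_i * h)(t) * w_i.

    spike_trains: (N, T) array of delta-like inputs x_i,
    weights:      length-N array of synaptic strengths w_i (held fixed here),
    h:            length-T array with samples of the filter.
    """
    T = spike_trains.shape[1]
    v = np.zeros(T)
    for x_i, w_i in zip(spike_trains, weights):
        # np.convolve realizes (x_i * h)(t); truncate to the simulated window
        v += w_i * np.convolve(x_i, h)[:T]
    return v
```

Because the model is linear, scaling all weights scales the output accordingly, a property the analysis relies on repeatedly.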

Synapses change according to a generally formalized Hebbian plasticity rule

$${\dot{\omega}}_{i}(t):=\frac{d{\omega}_{i}(t)}{dt}=\mu F\left[{x}_{i}\ast {h}_{i}\right](t)G[v](t)$$

(2)

where μ (set to 0.01 throughout the article) is the plasticity rate and *F*[ξ] and *G*[ξ] are linear functionals. Although the analytical solution that we will present later on holds for the generally formalized plasticity rule (Eq. 2), we will only consider *F*=1 (where 1 is the identity) and *G*= *d*/*dt*. This is called differential Hebbian learning (Kosco, 1986; Klopf, 1988) and allows for the learning of temporal sequences of input events (Porr and Wörgötter, 2003). It resembles STDP (Roberts, 1999; Wörgötter and Porr, 2004). Another important setting for *F* and *G* is conventional Hebbian learning with *F*=*G*=1.
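The differential Hebbian rule (*F*=1, *G*=*d*/*dt*) can likewise be sketched with a naive Euler scheme (a sketch only; the analytical treatment below does not integrate numerically, and the function name, finite-difference derivative, and step size are our own assumptions):

```python
import numpy as np

def simulate_diff_hebb(u, w0, mu=0.01, dt=1.0):
    """Euler integration of Eq. 2 with F = identity and G = d/dt:
        v(t)      = sum_i u_i(t) * w_i(t)          (Eq. 1)
        dw_i/dt   = mu * u_i(t) * dv/dt            (Eq. 2)
    where u_i = (x_i * h_i) are precomputed filtered inputs.

    u:  (N, T) array of filtered inputs; w0: initial weights.
    Returns the (T, N) weight trajectory.
    """
    w = np.array(w0, dtype=float)
    v_prev = u[:, 0] @ w
    history = [w.copy()]
    for t in range(1, u.shape[1]):
        v = u[:, t] @ w
        dv_dt = (v - v_prev) / dt      # G = d/dt as a finite difference
        w = w + mu * dt * u[:, t] * dv_dt
        v_prev = v
        history.append(w.copy())
    return np.array(history)
```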

To avoid weight changes following spurious random correlations, one generally assumes that learning is a slow process in which inputs change much faster than weights, i.e., $d{\omega}_{i}/{\omega}_{i}\ll d({x}_{i}\ast {h}_{i})/({x}_{i}\ast {h}_{i})$, μ→0. Despite this separation of time scales, it is still possible to investigate the dynamics of each weight separately. The separation simplifies Eq. 2: we neglect all temporal derivatives of ω_{i} on the right-hand side, which leads to ${\dot{\omega}}_{i}(t)=\mu F[{x}_{i}\ast {h}_{i}](t){\Sigma}_{j=1}^{N}{\omega}_{j}(t)G[{x}_{j}\ast {h}_{j}](t)$ where we used $G[{\Sigma}_{j}{\xi}_{j}]={\Sigma}_{j}G[{\xi}_{j}]$ as *G*[ξ] is linear.

If we take ω_{i} as the *i*-th component of a vector **ω**, we can write this in its most general form

$$\dot{\mathbf{\omega}}(t)=\mu A(t)\mathbf{\omega}(t),$$

(3)

with *A*_{ij}(*t*) =*F*[*x*_{i}**h*_{i}](*t*)*G*[*x*_{j}**h*_{j}](*t*) or in matrix form $A(t)=F[x\ast h](t)\cdot G[\overline{x\ast h}](t)$ where $\overline{\mathbf{\xi}}$ denotes the transposition of matrix **ξ**. In the results section we will show that the following closed-form solution is possible

$$\mathbf{\omega}(t)=\mathfrak{B}(t)\mathbf{\omega}({t}_{0}),$$

(4)

however, $\mathfrak{B}(t)$ as such is quite complex. Nevertheless, we will also show that for our quasi-static assumption μ→0 the approximation $\mathfrak{B}(t)=I+\mu {\int}_{{t}_{0}}^{t}A(\tau )d\tau ,$ where *I* is the identity matrix, is justified.

In order to show that *F*=1 and *G*=*d*/*dt* resembles STDP we use a simplified version of the system, namely a system with *N*=2 (see Figure 1C), where only one of the synapses is plastic (ω_{1}); the other stays fixed (ω_{2}=1). In order to obtain the correct STDP protocol, we now present two pulses to this system: one at *t*=0 to the input connected with the plastic synapse, hence a pre-synaptic spike, and another at *t*=*T* to the other input. As we are using a linear model and as the corresponding weight ω_{2} is set to a fixed value of 1, a pulse at the input is also a pulse at the output, hence creating the counterpart of a post-synaptic spike. The equivalence between differential Hebbian plasticity and STDP arises essentially from this; its implications are discussed in detail in Saudargiene et al. (2004) and Tamosiunaite et al. (2007). Recently, Clopath et al. (2010) also related voltage-based Hebbian learning to STDP.

For such a network structure, it is possible to analytically calculate the shape of the STDP curve. We start with Eq. 3 where we focus on the first entry of ω(*t*): ω_{1}(*t*). The second synapse stays fixed: ω_{2}(*t*)=ω_{2}.

$$\begin{array}{c}{\dot{\omega}}_{1}(t)=\mu \cdot \left({x}_{1}\ast {h}_{1}\right)(t)\frac{d}{dt}\left({x}_{1}\ast {h}_{1}\right)(t)\cdot {\omega}_{1}(t)\\ \text{\hspace{0.17em}\hspace{0.17em}\hspace{0.05em}}+\mu \cdot \left({x}_{1}\ast {h}_{1}\right)(t)\frac{d}{dt}\left({x}_{2}\ast {h}_{2}\right)(t)\cdot {\omega}_{2}\end{array}$$

(5)

First, we calculate the influence of a single pulse on the weight ω_{1}. To this end we set *x*_{2}(*t*)=0 for all *t* and *x*_{1}(*t*)=δ(*t*), which is a spike at *t*=0. This simplifies the convolution to a temporal shift in the filter function $h\text{\hspace{0.05em}}:h(t-{t}_{i})={\int}_{0}^{\infty}\delta (t-{t}_{i}-\tau )h(\tau )d\tau .$ This leaves us with a first-order differential equation with the following solution:

$$\begin{array}{c}{\omega}_{1}^{s}(t)={\omega}_{1}^{s}\left({t}_{0}\right)\text{exp}\left(\frac{1}{2}\mu {h}_{1}^{2}(t)\right)\\ \Rightarrow \Delta {\omega}_{1}^{s}(t)={\omega}_{1}^{s}(t)-{\omega}_{1}^{s}\left({t}_{0}\right)={\omega}_{1}^{s}\left({t}_{0}\right)\left(\text{exp}\left(\frac{1}{2}\mu {h}_{1}^{2}(t)\right)-1\right)\end{array}$$

(6)

Filters usually have the property that they decay to 0 after a while, which turns the exponential factor into 1 and results in no net weight change. Thus, a single pulse or a rate produced by a single input does not have any influence on the weight when not combined with another pulse. For these pulse–pulse correlations we have to set the other input to have a pulse at *t*=*T*: *x*_{2}(*t*)=δ(*t*−*T*). Using a simplification (all details in Kolodziejski et al., 2008), the result of the differential Eq. 5 for very long times reads

$$\begin{array}{c}\Delta \omega =\underset{t\to \infty}{\mathrm{lim}}\Delta {\omega}_{1}(t)\approx \underset{=0}{\underbrace{\underset{t\to \infty}{\mathrm{lim}}\Delta {\omega}_{1}^{s}(t)}}+\mu {\omega}_{2}{\displaystyle {\int}_{0}^{\infty}h(\tau )\dot{h}(\tau -T)d\tau}\\ ={\omega}_{2}\text{sign}(T)\frac{\beta -\alpha}{2(\alpha +\beta )\sigma}h\left(\left|T\right|\right)\end{array}$$

(7)

where we used here and throughout the manuscript

$$h(t)=\frac{1}{\sigma}\left({\text{e}}^{-\alpha t}-{\text{e}}^{-\beta t}\right)\Theta (t),$$

(8)

where Θ(*t*) is the Heaviside step function. Hence, different values of the pulse timing *T* lead to different Δω values, which, when plotted against *T* in Figure 1B, resemble a fully symmetrical STDP curve (for more details see Porr and Wörgötter, 2003; Saudargiene et al., 2004; Tamosiunaite et al., 2007; Kolodziejski et al., 2008). The shape and symmetry of the STDP curve depend purely on the filters’ shapes, thus on the shapes of the post-synaptic potentials (PSPs) (see Figure 1A). Thus, with different filters *h* for *x*_{1} and for *x*_{2} one could obtain either LTP- or long-term depression (LTD)-dominated curves (see Figure 2c in Porr and Wörgötter, 2003).

We used *h*(*t*) with parameters α=0.009, β=0.0099, and σ=0.029. The maximum of *h*(*t*) is at *t*_{max}=(log(β)−log(α))/(β−α). The width of the STDP curve measured at biological synapses is usually about *T*=±50 ms (Bi and Poo, 1998; Caporale and Dan, 2008). Thus, as our filter has decayed to about 10^{−4} at around *t*≈1000, we define 1000 time steps as 50 ms (which corresponds to α=0.18 ms^{−1} and β=0.198 ms^{−1}), to which we will refer throughout the text.
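The filter of Eq. 8, its maximum *t*_{max}, and the pulse-pair weight change of Eq. 7 can be checked numerically as follows (a sketch; the function names are ours, while the parameter values are those given above):

```python
import numpy as np

ALPHA, BETA, SIGMA = 0.009, 0.0099, 0.029   # parameters used in the text

def h(t):
    """EPSP filter of Eq. 8: h(t) = (e^{-alpha t} - e^{-beta t})/sigma, t >= 0."""
    t = np.asarray(t, dtype=float)
    return np.where(t >= 0, (np.exp(-ALPHA * t) - np.exp(-BETA * t)) / SIGMA, 0.0)

# location of the filter maximum, t_max = (log(beta) - log(alpha))/(beta - alpha)
t_max = (np.log(BETA) - np.log(ALPHA)) / (BETA - ALPHA)

def delta_omega(T, omega_2=1.0):
    """Closed-form weight change of Eq. 7 for a pulse pair separated by T."""
    return (omega_2 * np.sign(T)
            * (BETA - ALPHA) / (2 * (ALPHA + BETA) * SIGMA) * h(abs(T)))
```

The antisymmetry of `delta_omega` around *T*=0 reproduces the STDP-like curve of Figure 1B.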

The filter function (Eq. 8) determines not only the shape of the STDP curve but also the shape of the pulses. As our model is linear, each pulse can be thought of as a weighted delta function (pseudo spike) convolved with the filter function.

Although our method would allow investigating general network structures, we will concentrate on only three very simple, but fundamental ones.

The first structure consists of a single neuron with two plastic synapses (see Figure 1D). Although quite simple, the analytical solution of this feed-forward structure demonstrates the power of the method developed in this paper. The next structure is a small network with two neurons and only one plastic synapse (see Figure 2A). This network structure is equivalent to just one neuron with a plastic recurrent connection of total delay *d*, owing to our assumption of linearity. The last structure we consider is a network with three neurons and two plastic synapses (see Figure 2C). Here the equivalence is to a single neuron with two plastic recurrent connections. Note that these two structures could be considered as part of a larger recurrent network in which, after some intermittent stages, activity arrives back at its origin (with unchanging synapses in between). Hence, recurrent structures like these can be seen as simple network building blocks, and at the end we briefly discuss larger network structures (i.e., a ring of neurons) that have similar stability properties to those building blocks.

For the input we will use spike trains whose inter-spike interval distribution has first moment *P* (expectation value), in three different conditions: firing without any noise, and thus periodic; firing with Gaussian noise with a mean value of *P* and a standard deviation of σ=5 ms; and, additionally, purely Poisson distributed firing with rate *P* (hence no periodic input).

As our model is linear, it cannot as such produce spikes on its own. However, we could at any point introduce a threshold so that each time a pulse exceeds this threshold, a spike would travel along the axon to the next neuron via its synapses. In our first-order approximation we do not need this threshold; however, we could apply it afterward.

The external input to the system is a sequence of δ-function spikes, which are transformed into PSPs by convolving the normalized filter function *h* with the input sequence. As the model is linear, outputs are graded pulses, which result from the weighted linear summation of all inputs at the soma of the neuron (Eq. 1). As we do not use spike thresholds, these outputs directly provide the recurrent part of the input to the system.

We need to calculate the complete input time function, namely the timing and amplitude of all pulses that enter a neuron. We investigate recurrent systems with an external input of periodicity *P* and with delays *d*_{i}, *i*=1,…,*R* (in this study we only consider *R*=1 and 2, but the analysis holds for any *R*). To get a better intuition for recurrences in networks like those described above (Figures 2A,C), we can compare each recurrence to a modulo operation. We find all pulse timings *pt* by iterating the map *pt*_{c+1}= mod(*pt*_{c}+*d*_{i}, *P*) until $p{t}_{{N}_{s}}=p{t}_{0}.$ The total number of relevant pulse times *N*_{s}= *P*/gcd(*P*,*d*_{1},…,*d*_{R}) depends on the greatest common divisor (gcd) of *P* and all *R* delays *d*_{i}. For instance, all relevant pulse times for *P*=75 and *d*=60 are *pt*_{c}= {0, 60, 45, 30, 15}. This gives us the timings of the pulses. In order to calculate the amplitude **Γ** of each pulse in a system with *R* recurrent connections, we additionally need to solve a linear system of equations for **Γ** (Eq. A.2 in the Appendix).
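The modulo map for the pulse timings can be sketched as follows (the helper names are ours; the single-delay case *R*=1 is shown, with *N*_{s} computed from the gcd formula above):

```python
from math import gcd
from functools import reduce

def pulse_timings(P, d):
    """All pulse times generated by a single recurrent delay d under external
    periodicity P: iterate pt_{c+1} = (pt_c + d) mod P until pt returns to 0."""
    timings, pt = [0], d % P
    while pt != 0:
        timings.append(pt)
        pt = (pt + d) % P
    return timings

def n_pulse_times(P, delays):
    """N_s = P / gcd(P, d_1, ..., d_R): the number of distinct pulse times."""
    return P // reduce(gcd, delays, P)
```

For the example in the text, `pulse_timings(75, 60)` yields the five times {0, 60, 45, 30, 15}.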

Here we are going to develop a closed form for the matrix 𝕭(*t*) which governs the temporal development (Eq. 4) of the weights **ω**(*t*) according to Eq. 3. This solution is not trivial, as the matrix **A**(*t*) in general does not commute with itself at different points in time. Nevertheless, the weights can be written as

$$\mathbf{\omega}(t)=\mathrm{exp}\left(\mathbf{\Omega}(t)\right)\cdot \mathbf{\omega}\left({t}_{0}\right)$$

(9)

where **ω**(*t*_{0}) are the synaptic strengths at time *t*_{0}, hence before plasticity, and **Ω**(*t*) is the solution of the following equation

$$\dot{\mathbf{\Omega}}(t)=\left\{\mu A(t),\frac{\mathbf{\Omega}(t)}{1-\mathrm{exp}(-\mathbf{\Omega}(t))}\right\}={\displaystyle \sum _{n=0}^{\infty}{\beta}_{n}\left\{A,{\mathbf{\Omega}}^{n}\right\}}.$$

(10)

Here the braces $\left\{\eta ,{\xi}^{n}\right\}=[\cdots [[\eta ,\xi ],\xi ]\cdots \xi ]$ are nested commutators $\left[\eta ,\xi \right]=\eta \xi -\xi \eta $ and β_{n} are the coefficients of the Taylor expansion of **Ω**/(1− exp(−**Ω**)) around **Ω**=0. Equation 10 is solved by iterated integration, which yields the Magnus series:

$$\begin{array}{c}\mathbf{\Omega}(t)=\mu \mathfrak{A}(t)+\frac{{\mu}^{2}}{2}{\displaystyle {\int}_{0}^{t}[A(\tau ),\mathfrak{A}(\tau )]\,d\tau}\\ +\frac{{\mu}^{3}}{4}{\displaystyle {\int}_{0}^{t}\left[A(\tau ),{\displaystyle {\int}_{0}^{\tau}[A(\sigma ),\mathfrak{A}(\sigma )]d\sigma}\right]d\tau}\\ +\frac{{\mu}^{3}}{12}{\displaystyle {\int}_{0}^{t}\left[[A(\tau ),\mathfrak{A}(\tau )],\mathfrak{A}(\tau )\right]d\tau +o({\mu}^{4})}\end{array}$$

(11)

with $\mathfrak{A}(t)={\int}_{0}^{t}A(\tau )d\tau .$ Thus, Eq. 9 combined with Eq. 11 gives us analytically the time development of all weights connected to a neuron under Hebbian plasticity in the limit of small plasticity rates μ. With this we are, in principle, able to calculate the synaptic strengths of *N* synapses directly, without simulations, given *N* different pulse trains. This property will be useful for analytically calculating the fixed-point values of the weights in our small network structures.

Before we apply the solution to our network structures, we transform it into a computable form and provide error estimates. As the commutators in the infinite series in Eq. 11 are generally non-zero, we truncate the series and neglect iterations above degree (*k*). We write the truncated solution as ${\mathfrak{B}}_{E,(k)}(t)=\mathrm{exp}{\mathbf{\Omega}}_{(k)}(t).$ For two synapses this will be solved directly later on; most often, however, it needs to be calculated by expanding the exponential function. We denote this approximation, where we neglect terms of both the exponential and the Magnus series greater than order (*k*), with an *S* instead of an *E*, i.e., *S*,(*k*), thus ${\mathfrak{B}}_{S,(k)}(t)=I+{\sum}_{p=2,q=1}^{p\cdot q\le k}\frac{1}{q!}{\left({\mathbf{\Omega}}_{(p)}(t)\right)}^{q}.$ Notice that in the limit *k*→∞ the approximation transforms into the general solution (Eq. 9). This solution is computable for arbitrary input patterns.
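The difference between the truncation approximation *E*,(*k*=2) and its additionally expanded variant *S*,(*k*=2) can be illustrated numerically (a sketch; the antisymmetric form of Ω_{(2)} anticipates the two-synapse case treated later, and the numerical value 0.05 is an arbitrary stand-in for the integrated correlation):

```python
import numpy as np

def expm_series(M, terms=40):
    """Matrix exponential exp(M) = sum_q M^q / q!, summed directly; adequate
    here because ||M|| is small for small plasticity rates mu."""
    out = np.eye(M.shape[0])
    acc = np.eye(M.shape[0])
    for q in range(1, terms):
        acc = acc @ M / q
        out = out + acc
    return out

# Lowest-order Magnus term Omega_(2) for two synapses; antisymmetric as in
# the feed-forward case, with 0.05 standing in for mu * nu_T (illustrative).
Omega = np.array([[0.0, 0.05],
                  [-0.05, 0.0]])

B_E = expm_series(Omega)     # B_{E,(2)}: exponential of the truncated series
B_S = np.eye(2) + Omega      # B_{S,(2)}: exponential additionally linearized
```

The two approximations differ only by terms of order ||Ω||², consistent with the error ordering reported in Figure 1E.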

Now, as we know the complete analytical solution of Eq. 3, we investigate approximations and their errors in order to judge their usefulness for further considerations. To estimate the approximation errors we use δ functions as inputs to the system. Furthermore, we assume that all *h*_{i} =*h* are equal. The pulses are again modeled as δ functions δ(*t* −*t*_{i}) for times *t*_{i}, which simplifies the convolution to a temporal shift in the filter function $h\text{\hspace{0.05em}}:h(t-{t}_{i})={\int}_{0}^{\infty}\delta (t-{t}_{i}-\tau )h(\tau )d\tau .$ This leads, for the elements of **A**(*t*), to products of temporally shifted filter functions and their derivatives.

The different approximation errors are exemplified in Figure 1E. For this we use a single pulse pair at two synapses (see Figure 1C), for which we calculate the final synaptic strength $\tilde{\mathbf{\omega}}=\underset{\tau \to \infty}{\mathrm{lim}}\mathbf{\omega}(\tau )$ (Eq. 9). This has been performed for differential Hebbian learning, but we remark that the error is identical for Hebbian learning. For this setup, weight changes are computed in three ways: without any approximations, yielding $\tilde{\mathbf{\omega}}$ (Eqs. 9 and 11); using the truncated solution only, yielding ${\tilde{\mathbf{\omega}}}_{E,(k)};$ and using the truncated solution while also expanding the exponential function, yielding ${\tilde{\mathbf{\omega}}}_{S,(k)}.$ Thus, we use $\tilde{\mathbf{\omega}}$ and compare it to the approximations ${\tilde{\mathbf{\omega}}}_{\cdot ,(k)},$ calculating the error as ${\Delta}_{\cdot ,(k)}=\left|{\tilde{\mathbf{\omega}}}_{\cdot ,(k)}-\tilde{\mathbf{\omega}}\right|.$ This is plotted in Figure 1E for different approximations against the plasticity rate μ on a log-log scale, where we set the maximal value of the input filter *h* to 1. As the approximations *E*,(*k*) and *S*,(*k*) become very similar for *k*>2, only four curves are shown. We observe that the behavior of the difference-error Δ_{·,(k)} follows the order of the approximation used. The error for the linear expansion approximation *S*,(*k*=2) is slightly higher than that of its corresponding truncation approximation *E*,(*k*=2). However, a plasticity rate of μ=0.001 already results in a difference-error of 10^{−8}, compared to 10^{−2} when using the same value for μ and the maximum of *h*. Therefore, one can in most applications use even the simplest possible linear approximation *S*,(*k*=2) to calculate the change in synaptic strength.

As this calculation has been based on two pulses at two synapses only, we need to ask how the error develops when using *N* synapses and complex pulse trains. For this we first consider pulse trains which are grouped “vertically” such that within each group every input fires at most once. Filters of pulses within a group will overlap, but we assume that grouping is possible such that adjacent groups are spaced with a temporal distance sufficient to prevent overlap between filter responses of temporally adjacent groups. Thus we calculate **ω** in the same way as above, leading to ${\tilde{\mathfrak{B}}}_{\cdot ,(k)}=\underset{\tau \to \infty}{\mathrm{lim}}{\mathfrak{B}}_{\cdot ,(k)}(\tau ).$ When using such a temporal tiling, ${\tilde{\mathfrak{B}}}_{E,(k)}$ depends only on the pulse timing matrix **T**, whose elements are determined by the pulse times within a group.

This solution is easy to compute. Because a product of matrices results in a summation of matrix elements, the error does not increase exponentially but only linearly with the number of groups *M*. It follows that even after 10000 pulses the error is still on the order of only 10^{−4} in the example above (see Figure 1E).

Finally, we estimate how the error behaves when filters overlap. This mainly happens during bursts of pulses with temporarily high spiking frequencies, which are, in general, rare events. Using the solution that assumes independent temporal intervals instead of the time-continuous calculation (Eq. 4) introduces only an additional error of order (*k*=2), due to the linearity of the filter functions *h*. The error after matrix multiplication results in the square of the lowest term of the Magnus series (Eq. 11).

Thus, the easily computable group decomposition ${\tilde{\mathfrak{B}}}_{\cdot ,(k)}$ will yield accurate enough results even for long, non-bursting pulse trains.

Now we are able to calculate the temporal development of the weights in our networks. Most interesting are those developments that eventually lead to stable, thus non-diverging, weight dynamics. Here, we develop the equations with which configurations of parameters (i.e., periodicity and delays of the recurrences) can be identified that lead to stable weights.

As mentioned, some of those (*P*, *d*_{i}) configurations converge to stable weight configurations, others do not. In order to reveal whether a configuration is stable, hence whether the plastic weights do not diverge, the nullclines ($\dot{\mathbf{\omega}}=0$) need to have stable crossing points in the open region ]−1, +1[^{R} with *R* being the number of recurrent connections (±1 would lead to diverging activity since a constant input is added periodically for a theoretically infinite number of times). A crossing point outside this interval is never stable because of the recurrent connections involved [ω(*n*+1)=ξω(*n*) with ξ>1 always diverges].

For the calculations we place all input pulses on a discrete grid, but calculate correlations by continuous analytical integration. Hence, the weight change in the discretized version of Eq. 4 is **ω**(*n*)=𝕭(*n*,*m*)·**ω**(*m*) where 𝕭(*n*,*m*) denotes 𝕭(*t*) with lower and upper integration boundaries *m* and *n*, respectively. Weights are (periodically or asymptotically) stable if they take the same value after a certain number of periods *P*: **ω**(*n*)= **ω**(*n* + ξ·*P*) for ξ≥1 and *n* large enough. In this study we concentrate on the analytical solution for ξ=1.

Therefore, in order to calculate those fixed-point weights, the following system of equations needs to be solved:

$$\left(\widehat{\mathfrak{B}}-I\right)\cdot \mathbf{\omega}(n)=0$$

(12)

with *n* large enough so that the weights have already converged. $\widehat{\mathfrak{B}}\text{\hspace{0.17em}}=\text{\hspace{0.17em}}\mathfrak{B}(n+P,n)$ is defined to include all relevant pulse correlations; it is, thus, similar to $\tilde{\mathfrak{B}}$. The roots of this system represent the fixed points, which are stable if the real parts of all eigenvalues of $\widehat{\mathfrak{B}}-I$ are negative.
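Numerically, solving Eq. 12 amounts to finding the nullspace of $\widehat{\mathfrak{B}}-I$, for example via a singular value decomposition (a sketch; the function names and tolerances are our own choices, and the stability check follows the damped-oscillation criterion used for the eigenvalues later in Eq. 18):

```python
import numpy as np

def fixed_point_weights(B_hat, tol=1e-9):
    """Solve (B_hat - I) w = 0 (Eq. 12): the right-singular vectors of
    B_hat - I with near-zero singular value span the fixed-point weights."""
    M = B_hat - np.eye(B_hat.shape[0])
    _, s, vt = np.linalg.svd(M)
    return vt[s < tol].T          # columns form a basis of the nullspace

def is_stable(B_hat, tol=1e-12):
    """Stability check: the eigenvalues of B_hat - I should have negative
    (here: non-positive, up to tol) real parts."""
    lam = np.linalg.eigvals(B_hat - np.eye(B_hat.shape[0]))
    return bool(np.all(lam.real < tol))
```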

In the next step we will use the *E*,(*k*=2) simplification for $\widehat{\mathfrak{B}}$, which is $I+\mu {\int}_{U}A(\tau )d\tau .$ Since μ>0, Eq. 12 then reduces to

$${\int}_{U}A(\tau )d\tau \cdot \mathbf{\omega}(n)=0.$$

(13)

Now we have to write down the matrix **A** to compute its integral:

$$\underset{U}{\int}A(\tau )\,d\tau ={\displaystyle \underset{U}{\int}\left(\begin{array}{cccc}\ddots & \vdots & ⋰ & \vdots \\ \cdots & \left({x}_{i}\ast h\right)(\tau )\frac{d}{d\tau}\left({x}_{j}\ast h\right)(\tau )& \cdots & \left({x}_{i}\ast h\right)(\tau )\frac{d}{d\tau}(K\ast h)(\tau )\\ ⋰ & \vdots & \ddots & \vdots \\ \cdots & (K\ast h)(\tau )\frac{d}{d\tau}\left({x}_{j}\ast h\right)(\tau )& \cdots & (K\ast h)(\tau )\frac{d}{d\tau}(K\ast h)(\tau )\end{array}\right)d\tau}={\displaystyle \underset{U}{\int}\left(\begin{array}{cccc}\ddots & \vdots & ⋰ & \vdots \\ \cdots & {a}_{i,j}(\tau )& \cdots & {q}_{i}(\tau )\\ ⋰ & \vdots & \ddots & \vdots \\ \cdots & {p}_{j}(\tau )& \cdots & b(\tau )\end{array}\right)d\tau}$$

(14)

We skip the detailed calculations (including the definition of $\overline{s}$), which are given in Section “Fixed-Point Analysis” in the Appendix, and directly write the result, where only correlations in the interval *U* have been considered and the pulse amplitudes Γ (Eq. A.2) have been applied:

$$\begin{array}{l}{\displaystyle {\int}_{U}{q}_{i}(\tau )d\tau}={\displaystyle {\sum}_{m=-\overline{s}}^{\overline{s}}{\displaystyle {\sum}_{k=1}^{P}{\Gamma}_{k}\cdot \Delta \mathbf{\omega}\left(-k-{d}_{i}+mP\right)}}\\ {\displaystyle {\int}_{U}{p}_{j}(\tau )d\tau}={\displaystyle {\sum}_{m=-\overline{s}}^{\overline{s}}{\displaystyle {\sum}_{k=1}^{P}{\Gamma}_{k}\cdot \Delta \mathbf{\omega}\left(k+{d}_{j}+mP\right)}}\\ {\displaystyle {\int}_{U}b(\tau )d\tau}={\displaystyle {\sum}_{m=-\overline{s}}^{\overline{s}}\Delta \mathbf{\omega}\left(mP\right)}.\end{array}$$

(15)

Obviously, the number of fixed points is not restricted to one, and there exist many configurations with more than one fixed point. However, we will show below that if a configuration is stable, then it does not matter how many fixed points exist in the interval ]−1,1[.

Here we would like to stress again that we are not interested in average quantities (this is not a mean-field approach) but in the fine temporal dynamics of each single weight. Additionally, a non-diverging weight is also a necessary condition for non-diverging overall activity, as we do not use any kind of boundaries.

Now, we have all tools ready to analyze the network structures (Figures 1C and 2A,C), starting without additional noise, thereafter with noise, and finally also considering asymmetrical STDP rules.

First, we will investigate the structures introduced at the beginning of this study in the absence of noise. The feed-forward structure always displays oscillations of weights and activity, whereas the recurrent structures show more complex behavior. Depending on the network parameters, the weights either diverge, converge to a fixed point, or oscillate.

Here we look in more detail at symmetrical differential Hebbian plasticity with two plastic synapses (see Figure 1D), where the simplification *E*,(*k*=2) is analytically fully solvable. We have also based the error analysis of Figure 1E on these calculations. For the approximation *E*,(*k*=2) the matrix results in

$${\mathfrak{B}}_{E,(2)}(t)=\mathrm{exp}\left[\left(\begin{array}{cc}0& \mu {\nu}_{T,-1}(t)\\ \mu {\nu}_{T,+1}(t)& 0\end{array}\right)\right]$$

(16)

where ${\nu}_{T,-1}(t)={\int}_{0}^{t}h(\tau )\,\frac{d}{d\tau}h(\tau -T)\,d\tau $ and ${\nu}_{T,+1}(t)={\int}_{0}^{t}h(\tau -T)\,\frac{d}{d\tau}h(\tau )\,d\tau $. Here we use two input neurons (*N*=2), which receive inputs at *t*=0 and *t*=*T*, respectively. Overall, this results in the following solution for the weight after a pulse pair $({\tilde{\mathfrak{B}}}_{E,(2)}=\underset{\tau \to \infty}{\mathrm{lim}}{\mathfrak{B}}_{E,(2)}(\tau ))$, with the calculations provided in Section “Feed-Forward Structure” in the Appendix (special solution for ρ=1):

$${\tilde{\mathfrak{B}}}_{E,(2)}=\left(\begin{array}{cc}\mathrm{cos}\left(\mu {\tilde{\nu}}_{T}\right)& \mathrm{sin}\left(\mu {\tilde{\nu}}_{T}\right)\\ -\mathrm{sin}\left(\mu {\tilde{\nu}}_{T}\right)& \mathrm{cos}\left(\mu {\tilde{\nu}}_{T}\right)\end{array}\right).$$

(17)
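The rotation form of Eq. 17 relies on the limit values obeying ν̃_{T,+1} = −ν̃_{T,−1}, which follows from integration by parts for any causal filter with h(0) = 0 that decays at infinity. This can be checked numerically; a minimal sketch, assuming an illustrative alpha-function filter h(t) = (t/τ)e^{−t/τ} (the paper's actual filter may differ):

```python
import numpy as np

def h(t, tau=10.0):
    """Illustrative causal alpha-function PSP filter with h(0) = 0
    (an assumption; any decaying causal filter behaves alike here)."""
    return np.where(t > 0, (t / tau) * np.exp(-t / tau), 0.0)

def dh(t, eps=1e-5):
    """Numerical derivative of the filter."""
    return (h(t + eps) - h(t - eps)) / (2 * eps)

def trapezoid(y, t):
    dt = t[1] - t[0]
    return dt * (y.sum() - 0.5 * (y[0] + y[-1]))

def nu(T, sign, t_max=500.0, n=400001):
    """nu_{T,-1} (sign=-1) and nu_{T,+1} (sign=+1), integrated up to
    t_max as an approximation of the t -> infinity limit."""
    t = np.linspace(0.0, t_max, n)
    if sign == -1:
        return trapezoid(h(t) * dh(t - T), t)   # h(tau) * h'(tau - T)
    return trapezoid(h(t - T) * dh(t), t)       # h(tau - T) * h'(tau)

T = 5.0
print(nu(T, -1), nu(T, +1))  # equal magnitude, opposite sign
```

The two integrals cancel in the limit because their sum telescopes to the boundary term h(t)h(t−T), which vanishes at both ends; this antisymmetry is exactly what turns the matrix exponential of Eq. 16 into the rotation of Eq. 17.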

Having found the analytical solution, we now determine the dynamics of the temporal development by calculating the eigenvalues λ_{i} of ${\tilde{\mathfrak{B}}}_{E,(2)}-I$, which are:

$${\lambda}_{1}=\text{exp}\left(\text{i}\mu {\tilde{\nu}}_{T}\right)-1\text{\hspace{1em}\hspace{1em}}{\lambda}_{2}=\text{exp}\left(\text{-i}\mu {\tilde{\nu}}_{T}\right)-1.$$

(18)

This shows clearly that the weights oscillate around 0 and that the oscillations of the weights are damped if the real parts of the eigenvalues λ_{i} are smaller than 0. This constraint holds for almost all *T* values, as $\Re \left({\lambda}_{i}\right)=\mathrm{cos}\left(\mu {\tilde{\nu}}_{T}\right)-1=0$ is only true for multiples of 2π. However, $0\le \mu {\tilde{\nu}}_{T}<\pi $ holds since we assume μ to be small. Furthermore, $\mu {\tilde{\nu}}_{T}=0$ is trivial and does not lead to any weight change at all. Hence, the weights will continuously shrink to 0.
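A quick numeric confirmation of this eigenvalue argument (the angle value is an arbitrary illustration of μν̃_T in ]0, π[):

```python
import numpy as np

angle = 0.3  # illustrative mu * nu_T; mu is assumed small, so 0 < angle < pi
c, s = np.cos(angle), np.sin(angle)
B = np.array([[c, s], [-s, c]])          # per-pulse-pair matrix, Eq. 17

lam = np.linalg.eigvals(B - np.eye(2))   # Eq. 18: exp(+-i*angle) - 1
print(lam.real)  # both entries equal cos(angle) - 1 < 0: damped oscillation
```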

Next, we investigate the stability of the two neuron model with a single recurrence (see Figure 2A), for which we need the previously provided solution of Eq. 13. In the small panels to the right of Figure 2B we show three different temporal developments of the plastic weight. The top panel depicts a divergent weight (*P*=100ms and *d*=85ms), the middle panel a convergent weight development (*P*=100ms and *d*=60ms), and the bottom panel an oscillating weight (*P*=59ms and *d*=94ms). In the main panel of Figure 2B the weight values of all configurations for 0<*P*<100ms and 0<*d*<100ms (which is 0–2.0 times the characteristic STDP length of 50ms) are plotted. In case of convergence the final weight is depicted; if the weight diverges, no point is plotted at all (white regions). For oscillating weights we plot the mean value.

Figure 2B reveals that if (*P*, *d*) is stable, then (*nP*, *nd*) with *n*≥1 is also stable (lines through the origin). However, the weights will not be the same for (*P*, *d*) and (*nP*, *nd*). The stability statement only holds for configurations in which the weight has not reached an absolute value ≥1. Such a value immediately leads to divergence, which is the reason why the lines through the origin do not always start at the origin.

Additionally, Figure 2B shows that excitatory and inhibitory regions always alternate and that the two regions for *d*<*P* (below the diagonal) are almost of the same size as the many regions for *d*>*P* (above the diagonal). Furthermore, if the first delayed pulse is closer to its input pulse (i.e., *d*<*P*/2), then the final weight is inhibitory. By contrast, if the first delayed pulse is closer to the consecutive input pulse (i.e., *P*/2<*d*<*P*), then the final weight is excitatory. In the excitatory case, the pulse at the plastic synapse is followed by another pulse. This resembles a causal relationship, which should lead to weight growth. The opposite is true for the inhibitory case.

We also conducted a perturbation analysis in which we perturbed the weight for each configuration with increasing amplitude until the weight could not converge back to its fixed point. It turns out that the crucial measure for stability is whether the absolute value of the sum of the fixed-point value and the perturbation amplitude exceeds 1. This is true for almost all configurations with stable weights which is why we do not need to plot the results of the perturbation analysis.

Another property of the system is that the final weights are identical for (*P*, *d*+*nP*) configurations for all *n*≥1 (see Figure 2B, in particular the black dashed line at *P*=40ms). Thus, if the weight for, e.g., *P*=40ms and *d*=25ms converges to ω, so does the weight for *P*=40ms and *d*=65ms. We use this property to simplify the convergence plots of the network structure with three neurons or two quasi recurrences (Figures 2D–F) and only depict fixed points for delay values *up to* the periodicity *P*. We show the fixed points for three different values of *P*: 0.5, 1.0, and 1.5 times the characteristic STDP time window (we leave out 2.0 times, as according to Figure 2B nothing novel appears at higher periodicity values).

Starting with *P*=25ms (Figure 2D), it is apparent that only a few inhibitory connections evolved (only about 10%). However, when moving to higher *P* values (Figures 2E,F), more and more inhibitory connections show up. If we plotted the ratio of excitatory connections (not shown), we would find that it reaches 0.5 at about *P*=100ms and is always >0.5 for all smaller values; thus more excitatory connections develop. We note that all properties mentioned in this paragraph remain essentially the same when adding noise to the input.

Figures 2E,F reveal that the main regions (upper-right, middle) move to higher (*d*_{1}, *d*_{2}) configurations and enlarge at the same time. In addition, new regions originate at smaller (*d*_{1}, *d*_{2}) configurations.

The total number of fixed points *N*_{0}, scaled by the number of possible configurations, of which there are *P*^{2}, also grows with increasing periodicity (see Figure 3, black dots). In the next section we will compare the number of fixed points to two different noisy input protocols.

Above we used deterministic timing for the input to our system. As this is not realistic, we now use two different types of noise. The first is Gaussian jitter around the given periodicity value *P* with a standard deviation of 5ms. The second is Poisson firing with rate *P*. Due to the stochastic nature of the signals, analytical results cannot be obtained anymore and we have to rely on simulations.
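The two input protocols can be sketched as follows. The 5 ms jitter is from the text; reading “Poisson firing with rate *P*” as events with mean inter-spike interval *P* is our assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def jittered_periodic(P, n, sigma=5.0):
    """Periodic input times with Gaussian jitter (std 5 ms) on each pulse."""
    return np.arange(n) * P + rng.normal(0.0, sigma, n)

def poisson_train(P, n):
    """Poisson pulse train whose mean inter-spike interval is P
    (our reading of 'Poisson firing with rate P')."""
    return np.cumsum(rng.exponential(P, n))

P = 100.0  # ms
jit, poi = jittered_periodic(P, 1000), poisson_train(P, 1000)
print(np.diff(jit).mean(), np.diff(poi).mean())  # both near P = 100 ms
```

Both protocols share the mean interval, but the Poisson train has a far broader inter-spike-interval distribution, which is the relevant difference for the stability comparisons below.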

The general behavior of the feed-forward structure does not change in the presence of either type of noise. Although the oscillations are no longer “perfect”, the weights still oscillate and eventually decay to 0.

For the recurrent structures the situation is different, as noise is able to turn an initially unstable configuration into one with a convergent weight development and vice versa. In Figure 4 we show the stability of the weights for different configurations. Here, red stands for highly stable weights (i.e., all trials lead to convergence) and blue for rarely stable ones (i.e., only a few trials lead to convergence). Intermediate stability, where half of the trials lead to convergence, is very rare, so we counted those configurations among the rarely stable ones. In order to compare the noise results with those without noise, we include the no-noise stable regions in gray in Figure 4. Additionally, the color of each configuration that is stable both with and without noise is lighter (i.e., light red and light blue). We do not plot the actual final weight values, as they do not differ largely from the no-noise ones.

For the two neuron structure, a few of the branches in Figures 4A,B completely disappeared, others moved slightly. The two main branches for *d*<*P* (below the diagonal) remain highly stable, however, only in the “core”. The more diffuse, non-compact regions, i.e., regions with detached stable configurations, become either completely unstable or at least rarely stable. Noise affects the input periodicity, thus configurations that are stable for a large range of different *P* values survive when noise is added. By contrast, configurations within non-compact regions are always deflected toward unstable configurations, thus compromising or even losing their stability. However, in a few trials they still happen to survive, which leads to the regions with many rarely stable configurations.

It is clearly visible that the number of stable configurations for the structure with a single recurrence substantially decreases. This is, however, not the case for the double recurrence structure. There, the number of stable configurations increases (see red border in Figures 4G,H) because of the compact structure. However, this effect only starts above *P*=30ms, as revealed by Figure 3. In this figure we plot the number of stable (non-divergent) configurations *N*_{0}, normalized by the maximum possible number of configurations (*P*^{2}), against the periodicity *P*. Only between about 30 and 60ms does noise increase the number of stable configurations, by about 20%. Above 60ms the smaller non-compact areas that lose stability dominate over the border areas that gained stability. Above a certain periodicity the smaller areas become large enough so that noise becomes advantageous again and border regions become stable. Therefore, the effect that noise slightly increases stability reoccurs periodically. Figure 3 also shows clearly that Poisson statistics leads to slightly more stable configurations than adding Gaussian noise. Most important, however, is the fact that noise does not deteriorate the system, i.e., the number of stable configurations stays almost unchanged.

Similar to the feed-forward structure, weights sometimes change from positive to negative values (or vice versa); hence they switch from being excitatory to being inhibitory, which is not known from biology. However, this could easily be solved by exchanging our linear synapse for two non-linear synapses (one that is linear in the positive regime and 0 elsewhere, and the other vice versa), so that such a push–pull mechanism covers the full linear range. This is the conventional way of addressing this issue (Pollen and Ronner, 1982; Ferster, 1988; Palmer et al., 1991; Heeger, 1993; Wörgötter et al., 1998). Note, however, that in general we found switching to be rather rare in our recurrent structures without noise. Even with additional noise, weights tend to switch only if the final weight is close to 0.

In biological systems, the STDP curve is not symmetrical (Bi and Poo, 1998; Caporale and Dan, 2008). In order to achieve an asymmetric differential Hebbian plasticity curve, we leave the positive part, which describes the strengthening of the weight (LTP), as it is, and shrink or stretch the negative part, which describes the weakening (LTD). This is done by multiplying negative weight changes by a factor ρ.
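As a sketch, with a generic exponential STDP window standing in for the actual differential Hebbian curve (the exponential shape is an illustrative assumption; only the role of ρ follows the text):

```python
import numpy as np

def stdp(dt, rho=1.0, A=1.0, tau=20.0):
    """Generic STDP window: LTP for dt > 0, LTD for dt < 0. Only the LTD
    side is scaled by the asymmetry factor rho, as in the text; the
    exponential shape itself is an illustrative assumption."""
    dt = np.asarray(dt, dtype=float)
    ltp = A * np.exp(-dt / tau) * (dt > 0)
    ltd = -rho * A * np.exp(dt / tau) * (dt < 0)
    return ltp + ltd

# rho = 2**a as in the text: a < 0 gives an LTP-dominated curve,
# a > 0 an LTD-dominated one, and a = 0 the symmetric curve
for a in (-1, 0, 1):
    w = stdp(np.linspace(-50, 50, 101), rho=2.0**a)
    print(a, w.sum())  # net curve area: positive, ~0, negative
```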

For the feed-forward structure we calculated the temporal development analytically (see Section “Feed-Forward Structure” in the Appendix) and found that the general behavior is not changed by the asymmetry. This becomes visible from the eigenvalues of the system:

$${\lambda}_{1}=\text{exp}\left(\text{i}\sqrt{\rho}\mu {\tilde{\nu}}_{T}\right)-1\text{\hspace{1em}\hspace{1em}}{\lambda}_{2}=\text{exp}\left(-\text{i}\sqrt{\rho}\mu {\tilde{\nu}}_{T}\right)-1.$$

(19)

Both oscillation and damping still exist; however, the damping constant and the frequency of the oscillation are scaled by the asymmetry ρ.

Now we come to the recurrent structures. Here, similar to the introduction of noise, the situation is different: with an additional asymmetry the number of stable configurations changes. In the following, we denote the number of stable configurations achieved with a certain asymmetry ρ=2^{a} (logarithmic representation) by *N*_{a}. Thus, the number of stable configurations for the symmetric STDP curve is still *N*_{0}. Then, to better compare the different results, we normalize *N*_{a} by the symmetric number *N*_{0}, plotting *N*_{a}/*N*_{0} in Figure 5. It is worthwhile to note that the total (not the relative) number of stable configurations without noise is reduced for any value of *a* for the single recurrence structure and stays almost unchanged for the double recurrence structure (see Figure 3).

First, we look in more detail at the single recurrence structure (Figure 5A). We plot the normalized number *N*_{a}/*N*_{0} of stable configurations up to a periodicity of *P*=50ms; the respective *N*_{0} values can be found in the caption of Figure 5. The no-noise results have a peak for positive and for negative asymmetries, thus an asymmetric STDP curve yields more stable configurations than a symmetric one. Here, a dominant LTP part is advantageous over a dominant LTD part. This situation changes when we introduce noise: an STDP curve with a dominant LTD part now results in more stable configurations than an LTP-dominant STDP curve. However, almost all asymmetric STDP curves lead to many more stable configurations than a symmetric STDP curve. Here we note again that making the input spiking obey Poisson statistics is advantageous over adding Gaussian noise (when comparing absolute numbers).

Next, we consider the double recurrence structure (Figure 5B), where the number of stable configurations was also counted up to *P*=50ms. Here, the no-noise result looks more similar to the results with noise; two peaks, one for an LTP-dominant and one for an LTD-dominant STDP curve, are visible. However, adding noise to the system again changes the relative heights of the peaks: the STDP curve with more LTD becomes advantageous over the STDP curve with more LTP. Similar to the single recurrence structure, there are more stable configurations when using Poisson noise; however, with Gaussian noise the boost in stability of the asymmetrical as compared to the symmetrical STDP curve is larger. If we compare the numbers of stable configurations between LTP-dominant and LTD-dominant STDP curves, we find that without noise LTP-dominant configurations account for about 55.0% of the total number of configurations. With Gaussian noise this ratio decreases to 41.7% and with Poisson statistics to 44.0%.

The dynamics of the structures examined here (and the reasoning should extend to all possible structures) would only allow diverging weights if we set *a*→±∞. As this would correspond to an STDP curve with pure LTP or pure LTD, respectively, the dynamics pushes the weight in one direction only (either positive or negative), which leaves no fixed point (see Eqs 14 and 15). This is also indicated in Figure 5, where the number of configurations leading to convergence is severely reduced even for *a*=±4.

Real neurons often display rich, non-stationary firing patterns by which all synaptic weights will be affected. The existing solutions describing Hebbian learning, on the other hand, constrain the temporal dynamics of the system or limit plasticity to a subset of synapses. With the solution presented here we can, for the first time, calculate weight changes without these restrictions. This is a valuable step forward in our understanding of the synaptic dynamics of general Hebbian plasticity as well as of STDP (which is a special case of the general Hebbian plasticity rule) in different networks. Specifically, we have presented the time-continuous solution for the synaptic change of general Hebbian plasticity (Eqs 9 and 11), its approximation for general spiking or continuous inputs, as well as a specific solution for non-bursting pulse trains. Of practical importance is the fact that the error of the computable approximations remains small even for long pulse trains.

We think that there is in general no intuitive access that would allow us to understand the temporal development of activity and plasticity in networks with recurrent connections. Even small structures driven by periodic activity, such as the ones investigated here, display unexpected and counterintuitive behavior. The systems investigated are linear, and summation takes place at the level of PSPs, which are modeled here by way of the filter function *h*. This way, closed-form solutions can be found. Spiking could be introduced by way of a threshold. Introducing such a threshold essentially leads to a reduction of activity (eliminating all small linearly summed signals), but the general recurrence is not altered.

The first network investigated was a simple feed-forward structure (see Figure 1C), which displays oscillating weights that decay to 0. This observation is interesting and unexpected, as this network and its inputs represent a fully symmetrical situation; hence one would expect oscillations, but without damping. Asymmetrical STDP curves do not alter this observation, because the asymmetry of the STDP curve only affects the frequency of the weight oscillation.

The focus of the current study, however, was to investigate recurrent network structures without averaging the activity. The two small recurrent networks studied here can be seen as fundamental network building blocks, because the direct recurrent connections used here could also be replaced by a network whose output provides the recurrent signals. This seems similar to Roberts (2004), Burkitt et al. (2007), and Gilson et al. (2009); however, in those studies only average activities are considered. Roberts (2004) does not treat plasticity, Burkitt et al. (2007) and Gilson et al. (2009) do not have self-connections, and Roberts (2004) and Burkitt et al. (2007) do not include delays. Without delays, self-connections would be cumbersome to implement in a general way (i.e., for different delays a different number of neurons needs to be interposed). However, Burkitt et al. (2007) and also Gilson et al. (2009) could easily extend their models to include self-connections.

As a side remark, almost all of the above-mentioned studies consider only pair-wise STDP correlations, although a method for three-pair couplings exists (Pfister and Gerstner, 2006). By contrast, differential Hebbian plasticity treats multi-correlations naturally.

In the much-cited study of Bi and Poo (1998) a clear asymmetry is visible in the STDP curve (Figure 7 of Bi and Poo, 1998): the part which describes the strengthening of synaptic efficacy, LTP, is more pronounced than the LTD part. In general, asymmetrical STDP curves are found in many studies (see Caporale and Dan, 2008 for a review), where sometimes the situation is reversed (LTD more pronounced than LTP). Many theoretical studies of large-scale network development use STDP curves with a stronger LTD part, because in many cases it has been found that otherwise weights will diverge (Song et al., 2000; Izhikevich et al., 2004). The current study, on the other hand, suggests that collective divergent behavior could also be prevented by more specific network designs. Here we were able to show that stability without boundaries is possible. We observe that – at least in those small linear networks – many fixed points exist for different types of STDP curves. Notably, there are almost always more fixed points for asymmetrical as compared to symmetrical STDP curves. Furthermore, the notion that an LTD dominance would be beneficial for stability does not seem to be unequivocally correct. Even in the presence of noise, large numbers of fixed points are found when using LTP-dominated STDP curves.

The studies of Izhikevich et al. (2004), Burkitt et al. (2007), Morrison et al. (2007), Lubenov and Siapas (2008), Gilson et al. (2009), and Clopath et al. (2010) have considered fully as well as randomly connected networks with external drive, where they need to apply soft or even hard boundaries to stabilize weights. By contrast, here we have looked at small recurrent networks without averaging and without boundaries. Thus, these approaches cannot be directly compared. Our results, however, appear promising. It may well be that networks with specific topologies can be constructed (or generated by self-organization), where stability exists using STDP for a large number of input patterns without boundaries or weight-normalization.

Additionally, there exists a Hebbian rule that does not use boundaries either. It has been shown that the so-called BCM rule (Bienenstock et al., 1982), a rate description-based rule, is analogous to the STDP rule (Izhikevich and Desai, 2003; Pfister and Gerstner, 2006). However, to stabilize the weights, BCM uses an additional variable that pushes the weights such that the output activity stays close to a set value. This rule, besides relying on a rate model, uses the activity to limit the weights in an indirect way, similar to other homeostasis mechanisms (Turrigiano and Nelson, 2004). Thus, it is possible that the BCM rule, among others, makes the weights diverge or rather diffuse, which nevertheless would lead to a state with a finite and stable mean activity. Such a behavior does not emerge in our model, as a single diverging weight will eventually reach infinity, resulting in an infinite activity. Besides, our focus here is on the fine temporal dynamics of the weights.

Furthermore, we observe that noise has an influence on the recurrent structures (see Figures 2A,C). For some input periodicities *P* the number of stable configurations decreases, for other settings it grows. Growth is found especially for the double-recurrent structure. A reduced number of stable configurations is mainly visible in areas with many initially stable *isolated* points, from where such a point can easily be “nudged out” to an unstable domain by noise. Such a nudging effect, however, is also the reason why the number of stable configurations can increase. Our results show that stable regions enlarge either at their boundaries or within dense but not fully covered areas, where initially unstable data points are “nudged in” to adjacent stable regions. The difference between Gaussian and Poisson statistics is small; however, we found that Poisson statistics, the statistics most often used to describe neuronal spiking (Bair et al., 1994; Shadlen and Newsome, 1998; Dayan and Abbott, 2001), yields more stability. More importantly, noise does not deteriorate but even improves the system. The benefits of noise for neuronal signal processing have been discussed in several other studies (Fellous et al., 2003; Deco and Romo, 2008). The fact that synaptic stability benefits from noise, especially in the more complex double-recurrent structures, adds to this discussion. In our networks noise helps to reach out to nearby stable regions and exploit their stability.

The temporal development of multi-synapse systems and the conditions of stability are still not well understood. Some convergence conditions have been found (see for example Hopfield, 1982; Miller and MacKay, 1994; Roberts, 2000; van Rossum et al., 2000; Kempter et al., 2001; Burkitt et al., 2007; Kolodziejski et al., 2008); in general, however, the synaptic strengths of such networks will diverge or oscillate. This is undesired, because network stability is important for the formation of, e.g., stable memories or receptive fields. The time-continuous solution for linear Hebbian plasticity described here therefore serves as a starting point to better understand the mechanisms, structures, and conditions under which stable network configurations will emerge. The rich dynamics governing many closed-loop adaptive, network-based physical systems can thus now be better understood and predicted, which might have substantial future influence on the guided design of network-controlled systems.

An important next step to improve our understanding of the interaction of plasticity and activity is to increase the complexity of the investigated network structures. Raising the number of recurrences is straightforward, as we have already presented a general method to solve such networks. It will be more challenging to include plastic synapses at more than one neuron. As a small outlook, Figure 6A shows such a possible network structure in which we used *N*=5 “building blocks” (see Figure 2C), each receiving constant periodic input (i.e., pulses), activity from itself (self-connection), and activity from the previous neuron (ring-connection). In this example we set all delays to the same value (*d*_{i}=*d* and *d*_{is}=*d*) and the input has periodicity *P* (see Figures 6C,D). Each neuron *i* receives its first input spike at time *t*=(*i*−1)·*P*/*N*. Both the self-connection and the ring-connection are eligible to change, and Figures 6C,D show that, similar to our fundamental building blocks (see Figures 1D and 2A,C), configurations exist which lead to converging weights. The temporal development depicted in Figure 6B exemplifies the typical behavior found in those networks: due to the symmetry, all self-connections and all ring-connections each converge to practically the same weight value. Many possible network topologies could be envisioned using those or similar building blocks. Hence, investigating those structures in more detail goes beyond the scope of this paper.

Also, more complex input protocols must be investigated. The current study suggests that dynamically stable recurrent sub-structures can exist within larger networks. Similar results exist for the generation of stable activity patterns in (non-plastic) networks (Memmesheimer and Timme, 2006a,b). These studies may allow us to gradually approach a better understanding of dynamically changing modular networks, going beyond the currently still dominating unifying approaches, which mostly consider only feed-forward structures or randomly connected networks with average activities. This will require intensive studies, but we hope that the current contribution can provide useful analytical tools and first insights into these very difficult problems.

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

We wish to thank M. Timme and R.-M. Memmesheimer for fruitful discussions and the German Ministry for Education and Research (BMBF) via the Bernstein Center for Computational Neuroscience (BCCN) Göttingen under Grant No. 01GQ1005A and 01GQ1005B as well as the Max Planck Society for financial support.

In the main text we only calculated the timings of the pulses. In order to determine the amplitude of each pulse of a system with *R* recurrent connections, we need to go further and solve a linear system of equations. In doing so, we need to assume that the weights eventually converge and are therefore constant.

In more detail, in each period of length *P* there are at most *P* pulses, each with a different amplitude. Within each period each pulse gets multiplied by certain weights (which we assume to be constant) and is then delayed. Additionally, the input provides the neuron with another pulse of amplitude 1. Hence, if we write the pulse amplitudes in a vector **Γ** and put the multiplicative factors (i.e., the weights) into a matrix **Λ**, always at the delayed position, we find the pulse amplitudes **Γ** of the next period according to

$${\mathbf{\Gamma}}^{(k+P)}=\mathbf{\Lambda}{\mathbf{\Gamma}}^{(k)}+\mathbf{\lambda}$$

(A.1)

with **λ**=(1,0,…,0)^{T}. Additionally, we set ${\mathbf{\Lambda}}_{n,m={m}_{i}}={\mathbf{\omega}}_{i}$ and ${\mathbf{\Lambda}}_{n,m\ne {m}_{i}}=0$, where *m*_{i}=mod(*n*−*d*_{i},*P*). Since Eq. A.1 is linear (and provided the spectral radius of **Λ** is smaller than 1), the amplitudes of the pulses converge to a certain amplitude $\mathbf{\Gamma}=\underset{k\to \infty}{\mathrm{lim}}{\mathbf{\Gamma}}^{(k)}$ and we get

$$\mathbf{\Gamma}={(I-\mathbf{\Lambda})}^{-1}\mathbf{\lambda}.$$

(A.2)

The solution results in equations that involve high-order polynomial terms and are thus complicated to simplify. An example with *P*=10, *d*_{1}=4 and *d*_{2}=6 (here in arbitrary units) can be found below:

$$\mathbf{\Lambda}=\left(\begin{array}{cccccccccc}0& 0& 0& 0& {\mathbf{\omega}}_{2}& 0& {\mathbf{\omega}}_{1}& 0& 0& 0\\ 0& 0& 0& 0& 0& {\mathbf{\omega}}_{2}& 0& {\mathbf{\omega}}_{1}& 0& 0\\ 0& 0& 0& 0& 0& 0& {\mathbf{\omega}}_{2}& 0& {\mathbf{\omega}}_{1}& 0\\ 0& 0& 0& 0& 0& 0& 0& {\mathbf{\omega}}_{2}& 0& {\mathbf{\omega}}_{1}\\ {\mathbf{\omega}}_{1}& 0& 0& 0& 0& 0& 0& 0& {\mathbf{\omega}}_{2}& 0\\ 0& {\mathbf{\omega}}_{1}& 0& 0& 0& 0& 0& 0& 0& {\mathbf{\omega}}_{2}\\ {\mathbf{\omega}}_{2}& 0& {\mathbf{\omega}}_{1}& 0& 0& 0& 0& 0& 0& 0\\ 0& {\mathbf{\omega}}_{2}& 0& {\mathbf{\omega}}_{1}& 0& 0& 0& 0& 0& 0\\ 0& 0& {\mathbf{\omega}}_{2}& 0& {\mathbf{\omega}}_{1}& 0& 0& 0& 0& 0\\ 0& 0& 0& {\mathbf{\omega}}_{2}& 0& {\mathbf{\omega}}_{1}& 0& 0& 0& 0\end{array}\right);\text{\hspace{0.17em}\hspace{0.17em}}\Gamma =\left(\begin{array}{c}-1+{\mathbf{\omega}}_{1}{\mathbf{\omega}}_{2}(3-{\mathbf{\omega}}_{1}{\mathbf{\omega}}_{2})\\ 0\\ -{\mathbf{\omega}}_{1}^{3}-{\mathbf{\omega}}_{1}^{2}+{\mathbf{\omega}}_{1}{\mathbf{\omega}}_{2}^{3}\\ 0\\ -{\mathbf{\omega}}_{1}+2{\mathbf{\omega}}_{1}^{2}{\mathbf{\omega}}_{2}-{\mathbf{\omega}}_{2}^{4}\\ 0\\ -{\mathbf{\omega}}_{1}^{4}-{\mathbf{\omega}}_{2}+2{\mathbf{\omega}}_{1}{\mathbf{\omega}}_{2}^{2}\\ 0\\ -{\mathbf{\omega}}_{1}^{2}+{\mathbf{\omega}}_{1}^{3}{\mathbf{\omega}}_{2}-{\mathbf{\omega}}_{2}^{3}\\ 0\end{array}\right)$$

(A.3)
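The construction of **Λ** and the fixed point of Eq. A.1 can be verified numerically for this example. A minimal sketch; the weight values ω are illustrative assumptions, chosen small enough that the iteration converges:

```python
import numpy as np

P, delays, omega = 10, (4, 6), (0.3, -0.2)  # omega values are assumptions

# Lambda[n, m_i] = omega_i with m_i = (n - d_i) mod P (0-based indices)
Lam = np.zeros((P, P))
for d, w in zip(delays, omega):
    for n in range(P):
        Lam[n, (n - d) % P] = w

lam_vec = np.zeros(P)
lam_vec[0] = 1.0  # external input pulse of amplitude 1 per period

# Closed-form fixed point, Eq. A.2
Gamma = np.linalg.solve(np.eye(P) - Lam, lam_vec)

# Iterating the recursion of Eq. A.1 converges to the same amplitudes
G = np.zeros(P)
for _ in range(200):
    G = Lam @ G + lam_vec
print(np.abs(G - Gamma).max())  # ~0
```

The agreement between the iterated recursion and the matrix inverse confirms that, for weights within the convergent regime, the pulse amplitudes settle to the unique fixed point of Eq. A.2.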

In order to get the fixed points, we start with Eq. 14

$$\int_U A(\tau)\,d\tau = \int_U \left(\begin{array}{cccc}\ddots & \vdots & ⋰ & \vdots \\ \cdots & \left({x}_{i}\ast h\right)(\tau)\,\frac{d}{d\tau}\left({x}_{j}\ast h\right)(\tau) & \cdots & \left({x}_{i}\ast h\right)(\tau)\,\frac{d}{d\tau}(K\ast h)(\tau) \\ ⋰ & \vdots & \ddots & \vdots \\ \cdots & (K\ast h)(\tau)\,\frac{d}{d\tau}\left({x}_{j}\ast h\right)(\tau) & \cdots & (K\ast h)(\tau)\,\frac{d}{d\tau}(K\ast h)(\tau)\end{array}\right) d\tau = \int_U \left(\begin{array}{cccc}\ddots & \vdots & ⋰ & \vdots \\ \cdots & {a}_{i,j}(\tau) & \cdots & {q}_{i}(\tau) \\ ⋰ & \vdots & \ddots & \vdots \\ \cdots & {p}_{j}(\tau) & \cdots & b(\tau)\end{array}\right) d\tau$$

(B.1)

where *U* is defined to be a meaningful range. It is, however, simpler to restrict the pulse timings to a meaningful range *S* (defined later) to arrive at a comprehensive description of the equations. In this case, the integral needs to range from 0 to ∞ and we have to start with all possible pulses for the input in this range.

The input $K(t)={\Sigma}_{n=s}^{\infty}\delta (t-nP)$ (concerning *s*, see below) to the constant synapse is a sum of δ functions, one at the beginning of each period. As already mentioned in the main part, the δ function δ(*t*−*t*_{i}) for pulse time *t*_{i} simplifies the convolution to a temporal shift in the filter function *h*: ${\int}_{0}^{\infty}\delta (t-{t}_{i}-\tau )h(\tau )d\tau =h(t-{t}_{i})$. Furthermore, the inputs *x*_{i} to the plastic synapses are given by the pulse amplitudes Γ, however, delayed by the different delay times *d*_{i}. Thus, we have to exchange the inputs *x*_{i} with the full representation of all pulses of the whole temporal development from time *s*·*P* (at which the pulse amplitudes Γ have already converged) to infinity: ${x}_{i}={\Sigma}_{n=s}^{\infty}{\Sigma}_{k=1}^{P}{\Gamma}_{k}\cdot \delta (\tau -nP-{d}_{i}-k)$. At the same time we simplify all the δ functions of the input.

$$\begin{array}{rl} a_{i,j}(\tau) &= \sum_{n=s}^{\infty}\sum_{k=1}^{P}\Gamma_k\, h(\tau-nP-d_i-k) \times \sum_{m=s}^{\infty}\sum_{l=1}^{P}\Gamma_l\, \frac{d}{d\tau}h(\tau-mP-d_j-l)\\ q_i(\tau) &= \sum_{n=s}^{\infty}\sum_{k=1}^{P}\Gamma_k\, h(\tau-nP-d_i-k)\cdot \sum_{m=s}^{\infty}\frac{d}{d\tau}h(\tau-mP)\\ p_j(\tau) &= \sum_{n=s}^{\infty}h(\tau-nP)\cdot \sum_{m=s}^{\infty}\sum_{k=1}^{P}\Gamma_k\, \frac{d}{d\tau}h(\tau-mP-d_j-k)\\ b(\tau) &= \sum_{n=s}^{\infty}h(\tau-nP)\cdot \sum_{m=s}^{\infty}\frac{d}{d\tau}h(\tau-mP)\end{array}$$
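The convolution-as-shift property invoked above is easy to confirm numerically. In the following sketch the double-exponential filter shape and all parameter values are assumptions made for this illustration, not values taken from the paper:

```python
import numpy as np

# Sketch: convolving a delta pulse with a filter h only shifts h in time.
# The double-exponential filter and all parameter values are assumptions
# made for this illustration.
alpha, beta, sigma = 0.1, 0.2, 1.0
dt = 0.01
t = np.arange(0.0, 100.0, dt)
h = (np.exp(-alpha * t) - np.exp(-beta * t)) / sigma

t_i = 500                        # pulse index (pulse time t_i * dt)
x = np.zeros_like(t)
x[t_i] = 1.0 / dt                # discretized delta(t - t_i)

y = np.convolve(x, h)[:len(t)] * dt   # (x * h)(t)

# (x * h)(t) equals h(t - t_i): the filter shifted to the pulse time
shift_error = np.max(np.abs(y[t_i:] - h[:len(t) - t_i]))
print("max deviation from shifted filter:", shift_error)
```

The deviation is at floating-point level, illustrating why each δ pulse in the sums above simply contributes a shifted copy of *h*.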

Next, we neglect all terms but one on the pre-synaptic side (index *n*), as spiking is repetitive anyway. Additionally, we reduce the range of the sum from infinity to a meaningful range *S* that is determined by the time window *W* of the STDP curve under consideration (e.g., 50 ms in the main part): $S=[-\overline{s},\overline{s}]$ with $\overline{s}=\lceil W/P\rceil$, where $\lceil\cdot\rceil$ is the ceiling function,

$$\begin{array}{rl} a_{i,j}(\tau) &= \sum_{m=-\overline{s}}^{\overline{s}}\sum_{k=1}^{P}\sum_{l=1}^{P}\Gamma_k\,\Gamma_l\, h(\tau-k-d_i)\cdot\frac{d}{d\tau}h(\tau-mP-l-d_j)\\ q_i(\tau) &= \sum_{m=-\overline{s}}^{\overline{s}}\sum_{k=1}^{P}\Gamma_k\, h(\tau-k-d_i)\cdot\frac{d}{d\tau}h(\tau-mP)\\ p_j(\tau) &= \sum_{m=-\overline{s}}^{\overline{s}}\sum_{k=1}^{P}\Gamma_k\, h(\tau)\cdot\frac{d}{d\tau}h(\tau-mP-k-d_j)\\ b(\tau) &= \sum_{m=-\overline{s}}^{\overline{s}}h(\tau)\cdot\frac{d}{d\tau}h(\tau-mP)\end{array}$$

and solve the integrals

$$\begin{array}{rl}\int_{0}^{\infty}a_{i,j}(\tau)\,d\tau &= \sum_{m=-\overline{s}}^{\overline{s}}\sum_{k=1}^{P}\sum_{l=1}^{P}\Gamma_k\,\Gamma_l\,\Delta\omega(l-k+d_j-d_i+mP)\\ \int_{0}^{\infty}q_i(\tau)\,d\tau &= \sum_{m=-\overline{s}}^{\overline{s}}\sum_{k=1}^{P}\Gamma_k\,\Delta\omega(-k-d_i+mP)\\ \int_{0}^{\infty}p_j(\tau)\,d\tau &= \sum_{m=-\overline{s}}^{\overline{s}}\sum_{k=1}^{P}\Gamma_k\,\Delta\omega(k+d_j+mP)\\ \int_{0}^{\infty}b(\tau)\,d\tau &= \sum_{m=-\overline{s}}^{\overline{s}}\Delta\omega(mP)\end{array}$$

(B.2)

where we replaced ${\int}_{0}^{\infty}h(\tau +T)\,\frac{dh(\tau)}{d\tau}\,d\tau$ by the weight change function Δω(*T*) representing STDP. The final step is to find all roots of the *R* top-most rows of the matrix (see Eq. 14) which lead to a negative Jacobian matrix at this point. These are the stable fixed points.

As we use the $E_{\cdot,(k=2)}$ approximation, we truncate the Magnus series after the first term and additionally assume that the negative changes (since we can set *T*>0 without loss of generality, only $h(\tau -T)\cdot dh(\tau )/d\tau $ yields negative changes) are scaled by ρ, which resembles the asymmetric STDP curve. The first term then reads

$$\begin{array}{rl}\mathfrak{A}(t) &= \int_{0}^{t}\begin{pmatrix} h(\tau)\cdot\frac{d}{d\tau}h(\tau) & h(\tau)\cdot\frac{d}{d\tau}h(\tau-T)\\ \rho\cdot h(\tau-T)\cdot\frac{d}{d\tau}h(\tau) & h(\tau-T)\cdot\frac{d}{d\tau}h(\tau-T)\end{pmatrix}d\tau\\ &= \begin{pmatrix}\frac{1}{2}h^{2}(t) & \nu_{T,-1}(t)\\ \rho\cdot\nu_{T}(t) & \frac{1}{2}h^{2}(t-T)\end{pmatrix}\end{array}$$

(C.1)

where the two inputs receive a pulse at *t*=0 and at *t*= *T*, respectively. The functions ${\nu}_{T,\pm 1}(t)$ have the analytical solution

$$\nu_{T,\eta}(t)=\frac{\Theta(t-T)\,\Theta(t)}{2(\alpha+\beta)\sigma^{2}}\left(\eta\,\mathrm{sign}(T)\,\sigma(\alpha-\beta)\,h(|T|)-2\mathrm{e}^{-t(\alpha+\beta)}\left(\alpha\mathrm{e}^{\alpha T}+\beta\mathrm{e}^{\beta T}\right)+(\alpha+\beta)\left(\mathrm{e}^{-\alpha(2t-T)}+\mathrm{e}^{-\beta(2t-T)}\right)\right)$$

(C.2)

In the limit of *t* to infinity the matrix $\mathfrak{A}(t)$ changes into $\tilde{\mathfrak{A}}$ and so do the secondary diagonal elements

$${\tilde{\nu}}_{T,\eta}=\underset{t\to \infty}{\mathrm{lim}}\,{\nu}_{T,\eta}(t)=\eta\,\mathrm{sign}(T)\,\frac{\alpha -\beta}{2(\alpha +\beta )\sigma}h(|T|),$$

(C.3)

and the diagonal elements vanish since $\underset{t\to \infty}{\mathrm{lim}}h(t)=0$ and *h*(0)=0. Furthermore, we find that ${\tilde{\nu}}_{T}={\tilde{\nu}}_{T,+1}=-{\tilde{\nu}}_{T,-1}$; for the filter function considered here, ${\tilde{\nu}}_{T}$ is positive, as α is smaller than β. Therefore, $\tilde{\mathfrak{A}}$ results in

$$\tilde{\mathfrak{A}}=\underset{t\to \infty}{\mathrm{lim}}\,\mathfrak{A}(t)=\begin{pmatrix}0 & {\tilde{\nu}}_{T}\\ -\rho\cdot{\tilde{\nu}}_{T} & 0\end{pmatrix}={\tilde{\nu}}_{T}\begin{pmatrix}0 & 1\\ -\rho & 0\end{pmatrix}.$$

(C.4)

As the square is ${\tilde{\mathfrak{A}}}^{2}=-{\tilde{\nu}}_{T}^{2}\,\rho\, I$, we get for an error of order $E_{\cdot,(k=2)}$:

$$\begin{array}{rl}{\tilde{\mathfrak{B}}}_{E,(2)} &= \mathrm{exp}\left(\mu \tilde{\mathfrak{A}}\right)=\sum _{n=0}^{\infty}\frac{1}{n!}{\left(\mu \tilde{\mathfrak{A}}\right)}^{n}\\ &= \sum _{n=0}^{\infty}\frac{{(-1)}^{n}}{(2n)!}{\left(\mu {\tilde{\nu}}_{T}\sqrt{\rho}\right)}^{2n}I+\sum _{n=0}^{\infty}\frac{{(-1)}^{n}}{(2n+1)!}{\left(\mu {\tilde{\nu}}_{T}\sqrt{\rho}\right)}^{2n+1}J\\ &= \mathrm{cos}\left(\mu {\tilde{\nu}}_{T}\sqrt{\rho}\right)I+\mathrm{sin}\left(\mu {\tilde{\nu}}_{T}\sqrt{\rho}\right)J\\ &= \begin{pmatrix}\mathrm{cos}\left(\mu {\tilde{\nu}}_{T}\sqrt{\rho}\right) & \frac{1}{\sqrt{\rho}}\mathrm{sin}\left(\mu {\tilde{\nu}}_{T}\sqrt{\rho}\right)\\ -\sqrt{\rho}\,\mathrm{sin}\left(\mu {\tilde{\nu}}_{T}\sqrt{\rho}\right) & \mathrm{cos}\left(\mu {\tilde{\nu}}_{T}\sqrt{\rho}\right)\end{pmatrix}\end{array}$$

(C.5)

where

$$J=\begin{pmatrix}0 & \frac{1}{\sqrt{\rho}}\\ -\sqrt{\rho} & 0\end{pmatrix}.$$

This solution was used to calculate the difference Δ_{·,(k)} for different values of *T* in Figure 1E.
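The closed form (C.5) can be verified against a brute-force matrix exponential. The sketch below uses a plain Taylor series (adequate for the small arguments involved) with assumed example values for μ, ${\tilde{\nu}}_{T}$ and ρ:

```python
import numpy as np

# Sketch verifying (C.5): exp(mu * A_tilde), with A_tilde = nu * [[0,1],[-rho,0]],
# equals the cos/sin matrix in the text. mu, nu (standing in for nu_tilde_T)
# and rho are assumed example values.
mu, nu, rho = 1.0, 0.3, 2.0
A = mu * nu * np.array([[0.0, 1.0], [-rho, 0.0]])

# matrix exponential via its Taylor series (converges fast for these values)
expA = np.eye(2)
term = np.eye(2)
for n in range(1, 30):
    term = term @ A / n
    expA = expA + term

x = mu * nu * np.sqrt(rho)
closed = np.array([[np.cos(x), np.sin(x) / np.sqrt(rho)],
                   [-np.sqrt(rho) * np.sin(x), np.cos(x)]])
err = float(np.max(np.abs(expA - closed)))
print(err)
```

The agreement is at machine precision, reflecting that ${\tilde{\mathfrak{A}}}^{2}=-{\tilde{\nu}}_{T}^{2}\rho I$ splits the exponential series exactly into the cosine and sine parts.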

- Bair W., Koch C., Newsome W., Britten K. (1994). Power spectrum analysis of bursting cells in area MT in the behaving monkey. J. Neurosci. 14, 2870–2892.
- Bi G., Poo M. (1998). Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci. 18, 10464–10472.
- Bienenstock E., Cooper L. N., Munro P. (1982). Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J. Neurosci. 2, 23–48.
- Burkitt A. N., Gilson M., van Hemmen J. L. (2007). Spike-timing-dependent plasticity for neurons with recurrent connections. Biol. Cybern. 96, 533–546. doi: 10.1007/s00422-007-0148-2
- Caporale N., Dan Y. (2008). Spike-timing-dependent plasticity: a Hebbian learning rule. Annu. Rev. Neurosci. 31, 25–46. doi: 10.1146/annurev.neuro.31.060407.125639
- Clopath C., Büsing L., Vasilaki E., Gerstner W. (2010). Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nat. Neurosci. 13, 344–352. doi: 10.1038/nn.2479
- Dayan P., Abbott L. F. (2001). Theoretical Neuroscience. Cambridge, MA: MIT Press.
- Deco G., Romo R. (2008). The role of fluctuations in perception. Trends Neurosci. 31, 591–598. doi: 10.1016/j.tins.2008.08.007
- Fellous J., Rudolph M., Destexhe A., Sejnowski T. (2003). Synaptic background noise controls the input/output characteristics of single cells in an in vitro model of in vivo activity. Neuroscience 122, 811–829. doi: 10.1016/j.neuroscience.2003.08.027
- Ferster D. (1988). Spatially opponent excitation and inhibition in simple cells of the cat visual cortex. J. Neurosci. 8, 1172–1180.
- Gilson M., Burkitt A. N., Grayden D. B., Thomas D. A., van Hemmen J. L. (2009). Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks IV. Biol. Cybern. 101, 427–444. doi: 10.1007/s00422-009-0346-1
- Gütig R., Aharonov R., Rotter S., Sompolinsky H. (2003). Learning input correlations through nonlinear temporally asymmetric Hebbian plasticity. J. Neurosci. 23, 3697–3714.
- Heeger D. J. (1993). Modeling simple-cell direction selectivity with normalized, half-squared, linear operators. J. Neurophysiol. 70, 1885–1898.
- Hopfield J. J. (1982). Neural networks and physical systems with emergent collective computational properties. Proc. Natl. Acad. Sci. U.S.A. 79, 2554–2558. doi: 10.1073/pnas.79.8.2554
- Izhikevich E. M., Desai N. S. (2003). Relating STDP to BCM. Neural Comput. 15, 1511–1523. doi: 10.1162/089976603321891783
- Izhikevich E. M., Gally J. A., Edelman G. M. (2004). Spike-timing dynamics of neuronal groups. Cereb. Cortex 14, 933–944. doi: 10.1093/cercor/bhh053
- Kempter R., Gerstner W., van Hemmen J. L. (2001). Intrinsic stabilization of output rates by spike-based Hebbian learning. Neural Comput. 13, 2709–2741. doi: 10.1162/089976601317098501
- Klopf A. H. (1988). A neuronal model of classical conditioning. Psychobiology 16, 85–123.
- Kolodziejski C., Porr B., Wörgötter F. (2008). Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison. Biol. Cybern. 98, 259–272. doi: 10.1007/s00422-007-0209-6
- Kosco B. (1986). "Differential Hebbian learning," in Neural Networks for Computing: AIP Conference Proceedings, Vol. 151, ed. J. S. Denker (New York: American Institute of Physics), 277–282. doi: 10.1063/1.36225
- Lubenov E. V., Siapas A. G. (2008). Decoupling through synchrony in neuronal circuits with propagation delays. Neuron 58, 118–131. doi: 10.1016/j.neuron.2008.01.036
- Magee J. C., Johnston D. (1997). A synaptically controlled, associative signal for Hebbian plasticity in hippocampal neurons. Science 275, 209–213. doi: 10.1126/science.275.5297.209
- Magnus W. (1954). On the exponential solution of differential equations for a linear operator. Commun. Pure Appl. Math. VII, 649–673. doi: 10.1002/cpa.3160070404
- Markram H., Lübke J., Frotscher M., Sakmann B. (1997). Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science 275, 213–215. doi: 10.1126/science.275.5297.213
- Memmesheimer R.-M., Timme M. (2006a). Designing complex networks. Physica D 224, 182–201. doi: 10.1016/j.physd.2006.09.037
- Memmesheimer R.-M., Timme M. (2006b). Designing the dynamics of spiking neural networks. Phys. Rev. Lett. 97, 188101. doi: 10.1103/PhysRevLett.97.188101
- Miller K. D., MacKay D. J. C. (1994). The role of constraints in Hebbian learning. Neural Comput. 6, 100–126. doi: 10.1162/neco.1994.6.1.100
- Morrison A., Aertsen A., Diesmann M. (2007). Spike-timing-dependent plasticity in balanced random networks. Neural Comput. 19, 1437–1467. doi: 10.1162/neco.2007.19.6.1437
- Palmer L., Jones J., Stepnowski R. (1991). "Striate receptive fields as linear filters: characterization in two dimensions of space," in The Neural Basis of Visual Function. Vision and Visual Dysfunction, Vol. 4, ed. A. G. Leventhal (Boca Raton: CRC Press), 246–265.
- Pfister J.-P., Gerstner W. (2006). Triplets of spikes in a model of spike timing-dependent plasticity. J. Neurosci. 26, 9673–9682. doi: 10.1523/JNEUROSCI.1425-06.2006
- Pollen D. A., Ronner S. F. (1982). Spatial computation performed by simple and complex cells in the visual cortex of the cat. Vision Res. 22, 101–118. doi: 10.1016/0042-6989(82)90172-9
- Porr B., Wörgötter F. (2003). Isotropic sequence order learning. Neural Comput. 15, 831–864. doi: 10.1162/08997660360581921
- Roberts P. D. (1999). Computational consequences of temporally asymmetric learning rules: I. Differential Hebbian learning. J. Comput. Neurosci. 7, 235–246. doi: 10.1023/A:1008910918445
- Roberts P. D. (2000). Dynamics of temporal learning rules. Phys. Rev. E 62, 4077–4082. doi: 10.1103/PhysRevE.62.4077
- Roberts P. D. (2004). Recurrent biological neuronal networks: the weak and noisy limit. Phys. Rev. E 69, 031910. doi: 10.1103/PhysRevE.69.031910
- Saudargiene A., Porr B., Wörgötter F. (2004). How the shape of pre- and postsynaptic signals can influence STDP: a biophysical model. Neural Comput. 16, 595–626. doi: 10.1162/089976604772744929
- Shadlen M. N., Newsome W. T. (1998). The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. J. Neurosci. 18, 3870–3896.
- Song S., Miller K., Abbott L. (2000). Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat. Neurosci. 3, 919–926. doi: 10.1038/78829
- Tamosiunaite M., Porr B., Wörgötter F. (2007). Self-influencing synaptic plasticity: recurrent changes of synaptic weights can lead to specific functional properties. J. Comput. Neurosci. 23, 113–127. doi: 10.1007/s10827-007-0021-2
- Turrigiano G. G., Nelson S. B. (2004). Homeostatic plasticity in the developing nervous system. Nat. Rev. Neurosci. 5, 97–107. doi: 10.1038/nrn1327
- van Rossum M. C. W., Bi G. Q., Turrigiano G. G. (2000). Stable Hebbian learning from spike timing-dependent plasticity. J. Neurosci. 20, 8812–8821.
- Wörgötter F., Nelle E., Li B., Wang L., Diao Y. (1998). A possible basic cortical microcircuit called "cascaded inhibition": results from cortical network models and recording experiments from striate simple cells. Exp. Brain Res. 122, 318–332.
- Wörgötter F., Porr B. (2004). Temporal sequence learning, prediction, and control – a review of different models and their relation to biological mechanisms. Neural Comput. 17, 245–319. doi: 10.1162/0899766053011555

Articles from Frontiers in Computational Neuroscience are provided here courtesy of **Frontiers Media SA**
