


Front Comput Neurosci. 2010; 4: 22.

PMCID: PMC2928668

Edited by: Wulfram Gerstner, Ecole Polytechnique Fédérale de Lausanne, Switzerland

Reviewed by: Markus Diesmann, RIKEN Brain Science Institute, Japan; Wulfram Gerstner, Ecole Polytechnique Fédérale de Lausanne, Switzerland

*Correspondence: Jean-Pascal Pfister, Computational and Biological Learning Lab, Engineering Department, University of Cambridge, Trumpington Street, CB2 1PZ Cambridge, UK. e-mail: jean-pascal.pfister@eng.cam.ac.uk

Received 2010 February 2; Accepted 2010 June 18.

Copyright © 2010 Pfister and Tass.

This is an open-access article subject to an exclusive license agreement between the authors and the Frontiers Research Foundation, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are credited.


Highly synchronized neural networks can be the source of various pathologies such as Parkinson's disease or essential tremor. Therefore, it is crucial to better understand the dynamics of such networks and the conditions under which a high level of synchronization can be observed. One of the key factors that influences the level of synchronization is the type of learning rule that governs synaptic plasticity. Most of the existing work on synchronization in recurrent networks with synaptic plasticity is based on numerical simulations, and there is a clear lack of a theoretical framework for studying the effects of various synaptic plasticity rules. In this paper we derive analytically the conditions for spike-timing dependent plasticity (STDP) to lead a network into a synchronized or a desynchronized state. We also show that under appropriate conditions bistability occurs in recurrent networks governed by STDP. Indeed, a pathological regime with strong connections and therefore strongly synchronized activity is found to coexist with a physiological regime with weaker connections and lower levels of synchronization. Furthermore, we show that with appropriate stimulation, the network dynamics can be pushed to the stable state with low synchronization. This type of therapeutic stimulation is very different from the existing high-frequency deep brain stimulation, since once the stimulation is stopped the network stays in the low synchronization regime.

A high level of synchrony in neural tissue can be the cause of several diseases. For example, Parkinson's disease is characterized by a high level of neuronal synchrony in the thalamus and the basal ganglia. In contrast, the same neural tissues have been shown to fire asynchronously under healthy conditions (Nini et al., 1995). The Parkinsonian resting tremor (3–6 Hz) is not only related to the subcortical oscillations (Pare et al., 1990), but has recently been shown to be driven by those oscillations (Smirnov et al., 2008; Tass et al., 2010). In addition, the extent of akinesia and rigidity is closely related to synchronized neuronal activity in the beta band (8–30 Hz) (Kuhn et al., 2006).

A standard treatment for patients with Parkinson's disease is to chronically implant an electrode typically in the sub-thalamic nucleus (STN) and perform a high-frequency (>100Hz) deep brain stimulation (HF-DBS) (Benabid et al., 1991, 2009). Although HF-DBS delivered to the STN is able to reduce tremor, akinesia, and rigidity, this type of stimulation, which was found empirically, does not cure the cause of the tremor. It merely silences the targeted neurons during the stimulation but as soon as the stimulation is turned off, the tremor restarts instantaneously, whereas akinesia and rigidity revert back within minutes to half an hour (Temperli et al., 2003).

Recently, other types of stimulation (Tass, 1999, 2003; Tass and Majtanik, 2006; Hauptmann and Tass, 2007, 2009; Tass and Hauptmann, 2007) have been proposed which are not designed to ‘silence’ the neurons during the stimulation but rather to reshape the network architecture such that long-lasting desynchronizing effects outlast the offset of stimulation. This approach relies on two properties or requirements of the neural tissue. First, the network dynamics must elicit bistability such that both the synchronized and the desynchronized state are stable (see Figure 1A). Second, there must exist a stimulation that (momentarily) destroys the stability of the synchronous state such that the network is driven into the desynchronized stable state (see Figure 1B).

Unfortunately, most of the work that follows this idea of therapeutic stimulation is based on numerical simulations (Tass and Majtanik, 2006; Hauptmann and Tass, 2007, 2009; Tass and Hauptmann, 2007) and only very little has been done analytically (Maistrenko et al., 2007) to understand which parameters are crucial for meeting the two requirements above. Here we consider a recurrent network model that is sufficiently simple to be treated analytically but still captures the desired properties.

Concretely, we consider in this paper a network dynamics that acts on two different time scales. On a short time scale, neurons integrate the inputs from neighboring neurons and fire periodically. On a longer time scale, synapses can change their strength and hence influence the overall level of synchrony. In principle, the above-mentioned requirement of bistability can be present either in the neuronal dynamics (short time scale) or in the synaptic dynamics (longer time scale). Since we are aiming for long-lasting effects, we focus on bistability in the synaptic dynamics. More precisely, we consider a synaptic plasticity learning rule that depends on the precise timing of the action potentials of the pre- and postsynaptic neurons. This form of plasticity, called spike-timing dependent plasticity (STDP) (Gerstner et al., 1996b; Markram et al., 1997; Bi and Poo, 1998), enhances synapses for which the presynaptic spike precedes the postsynaptic one and decreases the synaptic strength when the timing is reversed (see Figure 3A).

Regarding the stimulation itself, several candidates have been proposed. For example, it has been shown that a patterned, multi-site and time-delayed stimulation termed coordinated reset (CR) stimulation (Tass, 2003), which forces sub-populations to fire in a precise sequence, transiently generates a nearly uniform phase distribution. In fact, long-lasting desynchronizing effects of CR stimulation have been verified in rat hippocampal slices rendered epileptic by magnesium withdrawal (Tass et al., 2009).

In the present study, we do not focus on a precise stimulation but rather assume that there exists a stimulation that transiently desynchronizes a given population. Furthermore, a given stimulation rarely affects the whole pathologically firing population, but only part of it. We therefore consider two populations: an input population, which is affected by the stimulation (see Figure 2A), and a recurrent population with plastic synapses, which is driven by the input population.

In order to calculate the expected weight change in the recurrent network, we need an expression for the spiking covariance of the recurrent population. This technical step was achieved by Hawkes (1971) and used by Gilson et al. (2009a,b) in the context of recurrent networks with STDP. The present study extends the work of Gilson et al. (2009b) to the case of oscillatory inputs and highlights the conditions under which the network elicits bistability.

Let us consider an input population consisting of *M* neurons and a recurrent population containing *N* neurons (see Figure 2A). Let *x*(*t*)=(*x*_{1}(*t*),…,*x*_{M}(*t*)) denote the spike trains of the input population and *y*(*t*)=(*y*_{1}(*t*),…,*y*_{N}(*t*)) those of the recurrent population. The input population projects onto the recurrent population through the *N*×*M* weight matrix *h*, while *w* denotes the *N*×*N* matrix of recurrent weights.

Let us assume that the input spike trains *x*(*t*) have instantaneous firing rates ρ(*t*)=(ρ_{1}(*t*),…,ρ_{M}(*t*)) which are periodic with period *T*. The input covariance matrix Δ*C*(τ) is then defined as

$$\Delta C(\tau )=\frac{1}{T}{\displaystyle {\int}_{0}^{T}{\langle x(t){x}^{T}(t-\tau )\rangle}_{x(t),x(t-\tau )}dt-\overline{\rho}{\overline{\rho}}^{T}}$$

(1)

where $\overline{\rho}={T}^{-1}{\int}_{0}^{T}\rho (t)\text{\hspace{0.17em}}dt$ is the mean firing rate averaged over one period *T*. Note that the input covariance matrix Δ*C*(τ) has an atomic discontinuity at τ=0, i.e., $\Delta C(0)=\text{diag}(\overline{\rho})\text{\hspace{0.17em}}\delta \text{\hspace{0.17em}}(0).$ Let $u(t)=({u}_{1}(t),\dots ,{u}_{N}(t))$ denote the membrane potential of all the neurons in the recurrent population (see the Spike-Response Model in Gerstner and Kistler, 2002):

$$u(t)=h{x}^{\epsilon}(t)+w{y}^{\epsilon}(t)$$

(2)

where the superscript denotes a convolution^{1} with the EPSP kernel ϵ(*s*), i.e., ${x}^{\epsilon}(t)={\int}_{0}^{\infty}\epsilon (s)\,x(t-s)\,ds$ and ${y}^{\epsilon}(t)={\int}_{0}^{\infty}\epsilon (s)\,y(t-s)\,ds.$ Without loss of generality, we will assume that the EPSP kernel is normalized such that ${\int}_{0}^{\infty}\epsilon (s)\,ds=1.$ More precisely, we take the double-exponential kernel $\epsilon (s)={(\tau -{\tau}_{s})}^{-1}\left({e}^{-s/\tau}-{e}^{-s/{\tau}_{s}}\right)$ for *s*≥0 (and ϵ(*s*)=0 otherwise), where τ denotes the membrane time constant and τ_{s} the synaptic time constant.

In this model, output spikes are generated stochastically, i.e., the higher the membrane potential, the more likely a spike will be emitted. Formally, a spike is generated in neuron *i* at time *t* with an instantaneous probability density given by $g({u}_{i}(t))={\langle {y}_{i}(t)\rangle}_{{y}_{i}(t)|{x}^{t},{y}^{t}}.$ For small fluctuations of the membrane potential around its mean ${\overline{u}}_{i},$ this transfer function can be linearized:

$$g({u}_{i}(t))\simeq g({\overline{u}}_{i})+{g}^{\prime}({\overline{u}}_{i})({u}_{i}(t)-{\overline{u}}_{i})$$

(3)

where ${g}^{\prime}({\overline{u}}_{i})=dg(u)/du{|}_{u={\overline{u}}_{i}}$ with ${\overline{u}}_{i}={T}^{-1}{\int}_{0}^{T}{\langle {u}_{i}(t)\rangle}_{{x}^{t},{y}^{t}}dt$ denoting the expected membrane potential of neuron *i* averaged over a period *T*, and $\overline{u}=({\overline{u}}_{1},\dots ,{\overline{u}}_{N})$ its vectorial representation. The expected output firing rates $\nu (t)={\langle y(t)\rangle}_{y(t)}={\langle {\langle y(t)\rangle}_{y(t)|{x}^{t},{y}^{t}}\rangle}_{{x}^{t},{y}^{t}}$ at time *t* are therefore given by

$$\nu (t)=g(\overline{u})+{D}^{\prime}\left(h{\rho}^{\epsilon}(t)+w{\nu}^{\epsilon}(t)-\overline{u}\right)$$

(4)

where *D*′=diag(*g*′(*ū*)). Note that from Eq. 2, the averaged expected membrane potential *ū* can be expressed as $\overline{u}=h\overline{\rho}+w\overline{\nu},$ where $\overline{\nu}={T}^{-1}{\int}_{0}^{T}\nu (t)dt=g(\overline{u})$ is the averaged output firing rate. Combining those two expressions, we get a self-consistent equation for the averaged output firing rate $\overline{\nu}:$

$$\overline{\nu}=g\left(h\overline{\rho}+w\overline{\nu}\right)$$

(5)
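The self-consistent rate equation (Eq. 5) can be solved by simple fixed-point iteration. A minimal numpy sketch; the softplus transfer function and all dimensions and parameter values are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def softplus(u):
    # Smooth, positive transfer function g(u) (illustrative choice).
    return np.log1p(np.exp(u))

M, N = 4, 3                               # input / recurrent population sizes
rho_bar = np.full(M, 5.0)                 # mean input firing rates
h = np.full((N, M), 0.1)                  # input weights
w = 0.05 * (np.ones((N, N)) - np.eye(N))  # weak recurrent weights, no autapses

# Fixed-point iteration of Eq. 5: nu_bar = g(h rho_bar + w nu_bar)
nu_bar = np.zeros(N)
for _ in range(200):
    nu_bar = softplus(h @ rho_bar + w @ nu_bar)

residual = np.max(np.abs(nu_bar - softplus(h @ rho_bar + w @ nu_bar)))
```

For weak recurrent coupling the iteration is a contraction (|*g*′| ≤ 1 and a small ‖*w*‖), so it converges to the self-consistent averaged rate.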

Note that if the transfer function *g*(*u*) is linear, i.e., *g*(*u*)=*u*, we have $\overline{\nu}={(\mathrm{\U0001d7d9}-w)}^{-1}h\overline{\rho}.$ Let $\Delta \nu (t)=\nu (t)-\overline{\nu}$ denote the amplitude of the oscillation in the recurrent population:

$$\Delta \nu (t)={D}^{\prime}\left(h\Delta {\rho}^{\epsilon}(t)+w\Delta {\nu}^{\epsilon}(t)\right)$$

(6)

where $\Delta \rho (t)=\rho (t)-\overline{\rho}$ denotes the amplitude of the input oscillation and $\Delta {\rho}^{\epsilon}(t)={\int}_{0}^{\infty}\epsilon (s)\,\Delta \rho (t-s)\,ds.$ Since Δν(*t*) appears on the l.h.s. of Eq. 6 and in a convolved form on the r.h.s., we can Fourier transform this equation in order to express explicitly the amplitude $\Delta \tilde{\nu}$ of the oscillation in the recurrent population as a function of the angular frequency *ω*:

$$\Delta \tilde{\nu}(\omega )={\left(\mathrm{\U0001d7d9}-\tilde{J}(\omega )\right)}^{-1}\tilde{K}(\omega )\Delta \tilde{\rho}(\omega )$$

(7)

where $\mathrm{\U0001d7d9}$ denotes the *N*×*N* identity matrix, $\tilde{J}(\omega )=\tilde{\epsilon}(\omega ){D}^{\prime}w,$ and $\tilde{K}(\omega )=\tilde{\epsilon}(\omega ){D}^{\prime}h.$ The tilde symbol $(\u02dc)$ denotes the Fourier transform, i.e., $\Delta \tilde{\nu}(\omega )={\int}_{-\infty}^{\infty}\mathrm{exp}(-i\omega t)\Delta \nu (t)dt.$ The recurrent population is said to be highly synchronous if $\Delta \tilde{\nu}({\omega}_{0})$ is large, where *ω*_{0}=2π/*T*. It is well known that coupled Kuramoto oscillators synchronize if the coupling exceeds a certain threshold (Kuramoto, 1984)^{2}. Here, because we do not consider phase oscillators (such as the Kuramoto oscillators) as neuronal models, this result does not apply directly. However, as we can see in Eq. 7, the stronger the recurrent weights *w*, the larger the amplitude of the output oscillation $\Delta \tilde{\nu}({\omega}_{0}).$ Let Δ*Q*(τ) be the spiking covariance matrix of the recurrent population:

$$\Delta Q(\tau )=\frac{1}{T}{{\displaystyle \underset{0}{\overset{T}{\int}}\langle y(t){y}^{T}(t-\tau )\rangle}}_{y(t),y(t-\tau )}dt-\overline{\nu}\text{\hspace{0.17em}}{\overline{\nu}}^{T}$$

(8)

Note that Δ*Q*(τ) contains an atomic discontinuity at τ=0, i.e., Δ*Q*(0)=*D*δ(0) with $D=\text{diag}(\overline{\nu}).$ As can be seen in “Spiking Covariance Matrices” in Appendix, it is easier to calculate this covariance matrix in the frequency domain, i.e., $\Delta \tilde{Q}(\omega )={\int}_{-\infty}^{\infty}\mathrm{exp}(-i\omega t)\Delta Q(t)\text{\hspace{0.17em}}dt$ rather than in the time domain. So the Fourier transform of this output covariance matrix yields

$$\Delta \tilde{Q}(\omega )={\left(\mathrm{\U0001d7d9}-\tilde{J}(\omega )\right)}^{-1}\left\{\tilde{K}(\omega )\Delta \tilde{C}(\omega ){\tilde{K}}^{T}(-\omega )+D\right\}{\left(\mathrm{\U0001d7d9}-{\tilde{J}}^{T}(-\omega )\right)}^{-1}$$

(9)

where $\Delta \tilde{C}(\omega )={\int}_{-\infty}^{\infty}\mathrm{exp}(-i\omega t)\Delta C(t)\text{\hspace{0.17em}}dt$ denotes the Fourier transform of the input covariance matrix.
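The dependence of the oscillation amplitude on the recurrent coupling (Eq. 7) is easy to check numerically. The sketch below assumes, for illustration only, a simple exponential EPSP kernel ϵ(*s*)=τ^{−1}e^{−s/τ} (so ϵ̃(ω)=1/(1+*i*ωτ)), a linear transfer function (*D*′ equal to the identity), and uniform hypothetical weights:

```python
import numpy as np

def eps_tilde(omega, tau=1.0):
    # Fourier transform of an illustrative exponential EPSP kernel.
    return 1.0 / (1.0 + 1j * omega * tau)

M, N = 4, 3
omega0 = 1.0                              # oscillation frequency (omega0 * tau = 1)
h = np.full((N, M), 0.1)                  # input weights
w0 = 0.2 * (np.ones((N, N)) - np.eye(N))  # base recurrent weights
drho0 = np.ones(M)                        # uniform input oscillation amplitude

def amplitude(c):
    """|Delta nu~(omega0)| from Eq. 7 for recurrent weights c * w0."""
    e = eps_tilde(omega0)
    J = e * (c * w0)          # J~(omega) = eps~(omega) D' w, with D' = identity
    K = e * h                 # K~(omega) = eps~(omega) D' h
    dnu = np.linalg.solve(np.eye(N) - J, K @ drho0)
    return np.linalg.norm(dnu)

amps = np.array([amplitude(c) for c in (0.0, 0.5, 1.0, 1.5)])
```

As long as the spectral radius of $\tilde{J}$ stays below one, the amplitude grows monotonically with the coupling strength, in line with the discussion of Eq. 7.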

In the previous section, we assumed that all the weights were constant in order to calculate the output covariance matrix. Let us now allow the recurrent weights *w* to change very slowly (w.r.t. the neuronal dynamics) and keep the input weights *h* fixed. Because of this separation of time scales, we can still use the results derived so far.

In the same spirit as Kempter et al. (1999), Gerstner and Kistler (2002), Gilson et al. (2009b), we can express the weight change as a Volterra expansion of both the pre- and postsynaptic spike trains. If we keep terms up to second order we get for all *i*≠*j*:

$$\begin{array}{l}{\dot{w}}_{ij}={a}^{\text{pre}}\left({w}_{ij}\right){y}_{j}(t)+{a}^{\text{post}}\left({w}_{ij}\right){y}_{i}(t)\\ \text{\hspace{1em}\hspace{1em}}+{y}_{i}(t){\displaystyle \underset{0}{\overset{\infty}{\int}}W\left(s,{w}_{ij}\right){y}_{j}(t-s)\text{\hspace{0.17em}}ds+{y}_{j}(t)}{\displaystyle \underset{0}{\overset{\infty}{\int}}W\left(-s,{w}_{ij}\right){y}_{i}(t-s)ds}\end{array}$$

(10)

Since we do not allow for self-coupling, we will set ${\dot{w}}_{ii}=0.$ *a*^{pre}(*w*) and *a*^{post}(*w*) denote the weight change induced by a single presynaptic (resp. postsynaptic) spike. *W*(*s*,*w*) denotes the STDP learning window (see Figure 3A) and $\tilde{W}(\omega ,w)$ its Fourier transform (see Figure 3B):

$$W(t,w)=\{\begin{array}{ll}{A}_{+}(w){e}^{-t/\tau +}\hfill & \text{if\hspace{0.17em}}t>0\hfill \\ \hfill \text{0}\hfill & \text{if\hspace{0.17em}}t=0\hfill \\ -{A}_{-}(w){e}^{t/\tau -}\hfill & \text{if\hspace{0.17em}}t<0\hfill \end{array}\text{\hspace{0.17em}\hspace{0.17em}}\Rightarrow \text{\hspace{0.17em}\hspace{0.17em}}\tilde{W}(\omega ,w)=\frac{{A}_{+}(w){\tau}_{+}}{1+i\omega {\tau}_{+}}-\frac{{A}_{-}(w){\tau}_{-}}{1-i\omega {\tau}_{-}}$$

(11)
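The closed-form Fourier transform in Eq. 11 can be verified by direct numerical integration of the learning window. A sketch with illustrative parameter values (not those used in the paper):

```python
import numpy as np

A_plus, A_minus = 1.0, 0.5   # illustrative potentiation/depression amplitudes
tau_p, tau_m = 1.0, 2.0      # illustrative time constants tau_+, tau_-
omega = 1.3

def W(t):
    # STDP window of Eq. 11 (t > 0: potentiation, t < 0: depression, W(0) = 0).
    return np.where(t > 0, A_plus * np.exp(-t / tau_p),
                    np.where(t < 0, -A_minus * np.exp(t / tau_m), 0.0))

# Closed form of Eq. 11.
W_closed = A_plus * tau_p / (1 + 1j * omega * tau_p) \
         - A_minus * tau_m / (1 - 1j * omega * tau_m)

# Numerical Fourier transform (midpoint rule on a truncated domain).
dt = 1e-3
t = np.arange(-40 + dt / 2, 40, dt)
W_num = np.sum(W(t) * np.exp(-1j * omega * t)) * dt

err = abs(W_num - W_closed)
```

The truncation at ±40 time constants and the midpoint rule keep the numerical error far below the magnitude of $\tilde{W}$ itself.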

Consistently with the modeling literature on STDP (van Rossum et al., 2000; Gütig et al., 2003; Morrison et al., 2007, 2008), both the potentiation factor ${A}_{+}(w)$ and the depression factor ${A}_{-}(w)$ are assumed to depend on the weight. Here, the precise dependence on the weight corresponds to the one used by Gütig et al. (2003):

$${A}_{+}(w)={A}_{+}^{0}{\left(1-w/{w}_{\mathrm{max}}\right)}^{\mu}$$

(12)

$${A}_{-}(w)={A}_{-}^{0}{\left(w/{w}_{\mathrm{max}}\right)}^{\mu}$$

(13)

where the parameter $\mu \mathbf{\text{\u2208}}[0,1]$ controls the dependence upon the weight. If μ→0, the rule is additive and therefore independent of the weight *w*. Conversely, if μ→1, the rule becomes multiplicative. Unless specified otherwise, we set μ=0.05. In addition to this learning rule, hard bounds for the weights (0≤*w*_{ij}≤*w*_{max}) are imposed. By taking the expectation of Eq. 10 over the pre- and postsynaptic spike trains, and exploiting the separation of time scales between the neuronal and the synaptic dynamics, we get:
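The effect of μ in Eqs 12 and 13 can be illustrated directly: for μ close to 0 the factors *A*_{±}(*w*) are nearly weight-independent (additive rule), while for μ=1 they scale linearly with the weight (multiplicative rule). A small sketch with illustrative amplitudes:

```python
import numpy as np

A0_plus, A0_minus, w_max = 1.0, 1.0, 0.0075  # illustrative values

def A_plus(w, mu):
    # Potentiation factor, Eq. 12 (Gutig et al., 2003 form).
    return A0_plus * (1 - w / w_max) ** mu

def A_minus(w, mu):
    # Depression factor, Eq. 13.
    return A0_minus * (w / w_max) ** mu

w = 0.5 * w_max
near_additive = (A_plus(w, 0.05), A_minus(w, 0.05))  # both stay close to A0
multiplicative = (A_plus(w, 1.0), A_minus(w, 1.0))   # scale linearly with w
```

At mid-range weights the near-additive factors deviate from their maximal values by only a few percent, whereas the multiplicative ones are halved.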

$$\begin{array}{l}{\dot{w}}_{ij}\simeq {a}^{\text{pre}}\left({w}_{ij}\right){\overline{\nu}}_{j}+{a}^{\text{post}}\left({w}_{ij}\right){\overline{\nu}}_{i}\\ \text{\hspace{1em}\hspace{0.17em}}+\overline{W}\left({w}_{ij}\right){\overline{\nu}}_{i}{\overline{\nu}}_{j}+{\displaystyle {\int}_{-\infty}^{\infty}W\left(s,{w}_{ij}\right)\Delta {Q}_{ij}(s)\text{\hspace{0.17em}}ds}\end{array}$$

(14)

where $\overline{W}({w}_{ij})={\int}_{-\infty}^{\infty}W(s,{w}_{ij})\text{\hspace{0.17em}}ds$ denotes the area under the STDP curve for a given weight *w _{ij}*. Since it is easier to express this covariance matrix in the frequency domain we can rewrite Eq. 14 (in matrix form) as

$$\begin{array}{c}\dot{w}=\varphi \{{a}^{\text{pre}}(w)\circ \left(1{\overline{\nu}}^{T}\right)+{a}^{\text{post}}(w)\circ \left(\overline{\nu}{1}^{T}\right)+\overline{W}(w)\circ \overline{\nu}\text{\hspace{0.17em}}{\overline{\nu}}^{T}\\ +\frac{1}{2\pi}{\displaystyle {\int}_{-\infty}^{\infty}\tilde{W}(-\omega ,w)\circ \Delta \tilde{Q}(\omega )\text{\hspace{0.17em}}d\omega}\}\end{array}$$

(15)

where $\Delta \tilde{Q}(\omega )$ is given by Eq. 9 and the ‘∘’ operator denotes the pointwise matrix multiplication (also known as the Hadamard product). The operator ϕ sets the diagonal elements to zero and keeps the off-diagonal elements unchanged; this implements the fact that we do not allow autapses. **1**=(1,…,1) is a vector containing *N* ones.
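The two matrix operators appearing in Eq. 15 are elementary; a short numpy sketch (with arbitrary example matrices) makes them concrete:

```python
import numpy as np

def phi(A):
    # phi sets the diagonal to zero (no autapses), keeps off-diagonal entries.
    return A - np.diag(np.diag(A))

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[10.0, 20.0], [30.0, 40.0]])

hadamard = A * B       # pointwise (Hadamard) product, the 'o' operator in Eq. 15
no_autapse = phi(A)    # diagonal zeroed, off-diagonal unchanged
```

In numpy, `*` on arrays is already the Hadamard product (matrix multiplication is `@`), so Eq. 15 translates almost literally into code.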

Let us now assume that the input population is exciting the recurrent population with independent Poisson spike trains of sinusoidal intensity $\rho (t)=\overline{\rho}+\Delta {\rho}_{0}\mathrm{sin}({\omega}_{0}t),$ where *ω*_{0}=2π/*T* is the angular frequency. Because of the independence assumption, the input covariance matrix reads

$$\Delta C(\tau )=\frac{1}{2}\mathrm{cos}\left({\omega}_{0}\tau \right)\Delta {\rho}_{0}\Delta {\rho}_{0}^{T}+\text{diag}(\overline{\rho})\text{\hspace{0.17em}}\delta (\tau )$$

(16)

and its Fourier transform yields

$$\Delta \tilde{C}(\omega )=\frac{\pi}{2}\left(\delta \left(\omega \text{\hspace{0.17em}}-\text{\hspace{0.17em}}{\omega}_{0}\right)+\delta \left(\omega \text{\hspace{0.17em}}+\text{\hspace{0.17em}}{\omega}_{0}\right)\right)\Delta {\rho}_{0}\Delta {\rho}_{0}^{T}+\text{diag}(\overline{\rho})$$

(17)
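The cosine term of Eq. 16 follows from averaging the product of two sinusoidal rate profiles over one period; this can be checked by deterministic numerical integration (the rates below are illustrative):

```python
import numpy as np

T = 1.0
omega0 = 2 * np.pi / T
rho_bar_i, rho_bar_j = 10.0, 10.0   # mean rates (illustrative)
drho_i, drho_j = 2.0, 3.0           # oscillation amplitudes (illustrative)
tau = 0.13                          # time lag

# Midpoint-rule average of rho_i(t) * rho_j(t - tau) over one period.
n = 100_000
t = (np.arange(n) + 0.5) * T / n
rho_i = rho_bar_i + drho_i * np.sin(omega0 * t)
rho_j = rho_bar_j + drho_j * np.sin(omega0 * (t - tau))
cov_num = np.mean(rho_i * rho_j) - rho_bar_i * rho_bar_j

# Off-diagonal term of Eq. 16 (the delta term only contributes for i = j, tau = 0).
cov_eq16 = 0.5 * np.cos(omega0 * tau) * drho_i * drho_j
```

Uniform sampling over a full period integrates the sinusoidal cross terms exactly, so the two quantities agree to machine precision.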

By using this expression of the input correlation, we get an explicit expression for the output covariance matrix $\Delta \tilde{Q}(\omega )$ (see Eq. 9) which can be approximated as:

$$\Delta \tilde{Q}(\omega )\simeq \frac{\pi}{2}\Delta \tilde{\nu}(\omega )\Delta \tilde{\nu}{(-\omega )}^{T}\left\{\delta \left(\omega -{\omega}_{0}\right)+\delta \left(\omega +{\omega}_{0}\right)\right\}$$

(18)

with $\Delta \tilde{\nu}(\omega )={(\mathrm{\U0001d7d9}-\tilde{J}(\omega ))}^{-1}\tilde{K}(\omega )\Delta {\rho}_{0};$ see Eq. 7. This approximation is valid in the limit of large networks (*M*≫1, *N*≫1), where the terms involving diagonal matrices can be neglected. Indeed, the term which depends on $\text{diag}(\overline{\rho})$ scales as *M*^{−1} and the term which depends on $D=\text{diag}(\overline{\nu})$ scales as *N*^{−1} if the output firing rate $\overline{\nu}$ is kept constant. For a detailed discussion, see Kempter et al. (1999), Gilson et al. (2009b). By inserting Eq. 18 into Eq. 15, we get:

$$\begin{array}{c}\dot{w}\simeq \varphi \{{a}^{\text{pre}}1{\overline{\nu}}^{T}+{a}^{\text{post}}\overline{\nu}{1}^{T}+\overline{W}(w)\overline{\nu}\text{\hspace{0.17em}}{\overline{\nu}}^{T}\\ +\frac{1}{2}\Re \left(\tilde{W}\left(-{\omega}_{0},w\right)\circ \Delta \tilde{\nu}\left({\omega}_{0}\right)\Delta {\tilde{\nu}}^{T}\left(-{\omega}_{0}\right)\right)\}\text{\hspace{0.17em}}\end{array}$$

(19)

where ℜ(*x*) denotes the real part of *x*. Here, we assumed that both *a*^{pre} and *a*^{post} are independent of the recurrent weights *w*. Furthermore, if *g*(*u*)=*u*, then the diagonal matrix ${D}^{\prime}=\mathrm{\U0001d7d9}$ (which is present in $\tilde{J}$ and $\tilde{K}$) is independent of the recurrent weights *w*.

So far, we derived analytically the neuronal dynamics of a recurrent population that is stimulated by an oscillatory input. Furthermore, we calculated the synaptic dynamics of such a recurrent network when synapses are governed by STDP. This rich synaptic dynamics has several interesting properties that are relevant for therapeutic stimulation, which we describe below.

Firstly, under some conditions that will be detailed below, the synaptic dynamics can elicit bistability (see Figures 4A,B). Concretely, if synapses are initialized above a critical value *w**, they will grow and reach the upper bound *w*_{max}. Conversely, if they are initialized below *w**, they will decrease down to their minimal value. As a consequence, because the amplitude of the oscillations in the recurrent population depends on the strength of the recurrent weights (see Eqs 7 and 22), this bistability at the level of the weights implies a similar bistability at the level of the oscillation amplitude of the recurrent population (see Figure 4C).

Secondly, the presence of the desynchronizing stimulation, which is modeled by setting the amplitude of the oscillatory input Δρ_{0} to 0, removes the bistability and pushes all the weights to their minimal value (see Figures 4A,B). Once the stimulation is removed, the recurrent weights stay at their minimal value because it is a fixed point of the dynamics.

Because the analysis of the non-linear dynamical system given by Eq. 19 can be challenging, let us consider the dynamics of the averaged recurrent weight ${w}_{\text{av}}={N}^{-2}{\sum}_{i,j}{w}_{ij}$ in the homogeneous case.

More precisely, we assume here that all the mean input firing rates ${\overline{\rho}}_{j}$ are close to their averaged value ${\overline{\rho}}_{\text{av}}={M}^{-1}{\sum}_{j}{\overline{\rho}}_{j},$ and that the input weights *h*_{ij} are close to their averaged value ${h}_{\text{av}}={(\text{MN})}^{-1}{\sum}_{ij}{h}_{ij}.$ If we further assume that the initial values of the recurrent weights *w*_{ij} are close to their averaged value *w*_{av}, the dynamics of the averaged recurrent weight given by Eq. 19 can be approximated as

$$\begin{array}{l}{\dot{w}}_{\text{av}}\simeq \left({a}^{\text{pre}}+{a}^{\text{post}}\right){\overline{\nu}}_{\text{av}}+\overline{W}\left({w}_{\text{av}}\right){\overline{\nu}}_{\text{av}}^{\text{2}}\\ \text{\hspace{1em}\hspace{0.17em}\hspace{0.17em}}+\frac{1}{2}\Re \left(\tilde{W}\left({\omega}_{0},{w}_{\text{av}}\right)\right)|\Delta {\tilde{\nu}}_{\text{av}}\left({\omega}_{0}\right){|}^{2}\end{array}$$

(20)

If the transfer function is linear, i.e., *g*(*u*)=*u*, the averaged output firing rate ${\overline{\nu}}_{\text{av}}={N}^{-1}{\sum}_{j}{\overline{\nu}}_{j}$ and the oscillation amplitude of the recurrent population $\Delta {\tilde{\nu}}_{\text{av}}$ can be approximated as:

$${\overline{\nu}}_{\text{av}}\simeq {\left(1-N{w}_{\text{av}}\right)}^{-1}M{h}_{\text{av}}{\overline{\rho}}_{\text{av}}$$

(21)

$$\Delta {\tilde{\nu}}_{\text{av}}\simeq {\left(1-N\tilde{\epsilon}\left({\omega}_{0}\right){w}_{\text{av}}\right)}^{-1}\tilde{\epsilon}\left({\omega}_{0}\right)M{h}_{\text{av}}\Delta {\rho}_{\text{av}}$$

(22)

with $\Delta {\rho}_{\text{av}}={M}^{-1}{\sum}_{j}\Delta {\rho}_{0j}.$ The dynamics of the averaged recurrent weight *w*_{av} given by Eq. 20 is remarkably consistent with numerical simulations performed with spiking neurons (see Figures 4A,B). The small discrepancy between the numerics and the analytics can be attributed to two factors. First, Eq. 20 assumes the limit of large networks, i.e., *N*, *M*→∞, whereas in the numerical simulations we used *N*=*M*=100. Secondly, Eq. 20 is valid in the limit of small recurrent weights, i.e., *w*_{av}≪1/*N* (here we used *Nw*_{max}=0.75). As we can see in Figure 4B, the smaller the averaged weight *w*_{av}, the better the correspondence between the numerical simulations and the analytical calculations. Even for large recurrent weights (*w*_{av}≈*w*_{max}), the discrepancy between numerics and analytics is remarkably small.

So far, we discussed a scenario in which the stimulation effectively acted as a therapeutic stimulation because it shifted the network state from the highly synchronous (and therefore pathological) state to the low synchronous state. In order to have this property, the network must satisfy two conditions:

- C1. Bistability condition. In the absence of stimulation, the network must elicit bistability such that both the synchronized and the desynchronized state are stable. Writing the averaged weight dynamics of Eq. 20 as ${\dot{w}}_{\text{av}}=F({w}_{\text{av}}),$ this formally gives$$\exists {w}_{\text{av}}^{\ast}\mathbf{\text{\u2208}}\left[0,{w}_{\mathrm{max}}\right]\text{\hspace{0.17em}such\hspace{0.17em}that\hspace{0.17em}}F\left({w}_{\text{av}}^{\ast}\right)=0\text{\hspace{0.17em}and\hspace{0.17em}}F\prime \left({w}_{\text{av}}^{\ast}\right)>0$$(C1)
- C2. Desynchronizing stimulation condition. In the presence of a desynchronizing stimulation, the highly synchronous state loses its stability and the network is driven to the low synchronous state. This can be expressed as:$$F\left({w}_{\text{av}}\right)<0\text{\hspace{1em}}\forall {w}_{\text{av}}\mathbf{\text{\u2208}}[0,{w}_{\mathrm{max}}]\text{\hspace{1em}with\hspace{1em}}\Delta {\rho}_{\text{av}}=0$$(C2)

In order to keep the discussion reasonably simple, let us make several assumptions. First, let us consider the near-additive learning rule, i.e., μ≃0. Indeed, in the case of a more multiplicative learning rule (larger μ), the potentiation factor *A*_{+}(*w*) and the depression factor *A*_{−}(*w*) have a strong stabilizing effect and therefore no bistability can be expected; see Figure 5A with μ=0.5. This is consistent with the findings of Gütig et al. (2003), who showed that there is a symmetry breaking in feedforward networks with weight-dependent STDP for μ bigger than a critical value.

With this additivity assumption and the linear transfer function assumption (*g*(*u*)=*u*), the averaged recurrent weight dynamics can be simply expressed from Eq. 20 combined with Eqs 21 and 22:

$${\dot{w}}_{\text{av}}\simeq \frac{{\alpha}_{1}}{1-N{w}_{\text{av}}}+\frac{{\alpha}_{2}}{{\left(1-N{w}_{\text{av}}\right)}^{2}}+\frac{{\alpha}_{3}}{{\left|1-\tilde{\epsilon}\left({\omega}_{0}\right)N{w}_{\text{av}}\right|}^{2}}$$

(23)

where ${\alpha}_{1}=({a}^{\text{pre}}+{a}^{\text{post}})M{h}_{\text{av}}{\overline{\rho}}_{\text{av}},$ ${\alpha}_{2}=\overline{W}{M}^{2}{h}_{\text{av}}^{2}{\overline{\rho}}_{\text{av}}^{2}$ and ${\alpha}_{3}=|\tilde{\epsilon}({\omega}_{0}){|}^{2}{M}^{2}{h}_{\text{av}}^{2}\Delta {\rho}_{\text{av}}^{2}\Re (\tilde{W}({\omega}_{0}))/2$ are constants that do not depend on the averaged recurrent weight *w*_{av}.
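The bistable structure of Eq. 23 can be illustrated by integrating the averaged weight dynamics directly. The sketch below uses hand-picked illustrative values with α₁<0 and α₂+α₃>0, and anticipates the low-frequency limit ϵ̃(ω₀)≈1 discussed below, which merges the last two terms of Eq. 23:

```python
import numpy as np

N, w_max = 100, 0.0075        # so that N * w_max = 0.75, as in the simulations
alpha1, alpha23 = -1.0, 0.5   # illustrative: alpha1 < 0, alpha2 + alpha3 > 0

def F(w):
    # Eq. 23 with eps~(omega0) ~ 1: F(w) = alpha1/u + (alpha2 + alpha3)/u^2.
    u = 1.0 - N * w
    return alpha1 / u + alpha23 / u ** 2

def integrate(w0, dt=1e-4, steps=5000):
    # Euler integration of w_av' = F(w_av) with hard bounds [0, w_max].
    w = w0
    for _ in range(steps):
        w = np.clip(w + dt * F(w), 0.0, w_max)
    return w

w_low = integrate(0.2 * w_max)   # starts below the unstable fixed point
w_high = integrate(0.8 * w_max)  # starts above it
```

With these values the unstable fixed point sits at *w** = (1+α₂₃/α₁)/*N* = 0.5/*N* (cf. Eq. 24): trajectories started below it decay to 0, those started above it saturate at *w*_{max}, which is precisely the bistability shown in Figures 4A,B.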

Another greatly simplifying assumption is to consider the low oscillatory frequency regime, i.e., *ω*_{0}τ≪1, which gives $\tilde{\epsilon}({\omega}_{0})\simeq 1.$

With these assumptions, the fixed point ${w}_{\text{av}}^{\ast}$ expressed in the bistability condition C1 yields

$${w}_{\text{av}}^{\ast}\simeq \frac{1}{N}\left(1+\frac{{\alpha}_{2}+{\alpha}_{3}}{{\alpha}_{1}}\right)$$

(24)

Because this fixed point has to be between 0 and *w*_{max}, we have $-1<({\alpha}_{2}+{\alpha}_{3})/{\alpha}_{1}<N{w}_{\mathrm{max}}-1,$ or equivalently

$$-1<\left(\overline{W}+\frac{1}{2}\Re \left(\tilde{W}\left({\omega}_{0}\right)\right)\frac{\Delta {\rho}_{\text{av}}^{2}}{{\overline{\rho}}_{\text{av}}^{2}}\right)\frac{M{h}_{\text{av}}{\overline{\rho}}_{\text{av}}}{{a}^{\text{pre}}+{a}^{\text{post}}}<N{w}_{\mathrm{max}}-1$$

(25)

If we push the low-frequency assumption further such that *ω*_{0}τ_{+}≪1 and *ω*_{0}τ_{−}≪1, we have $\overline{W}\simeq \Re (\tilde{W}({\omega}_{0}))$ (see Figure 3C). In this case, Eq. 25 yields a simple necessary condition: *a*^{pre}+*a*^{post} and the integral of the learning window $\overline{W}$ must be of opposite sign:

$$\mathrm{sgn}\left({a}^{\text{pre}}+{a}^{\text{post}}\right)=-\mathrm{sgn}\left(\overline{W}\right)$$

(26)

Finally, since the fixed point ${w}_{\text{av}}^{\ast}$ must be unstable (see condition C1), we have $F\prime ({w}_{\text{av}}^{\ast})>0,$ which gives ${\alpha}_{1}<0.$ By using the expression of ${w}_{\text{av}}^{\ast}$ given in Eq. 24, we then have α_{2}+α_{3}>0 and therefore, with Eq. 26, we get:

$$\overline{W}>0\text{\hspace{1em}and\hspace{1em}}{a}^{\text{pre}}+{a}^{\text{post}}<0$$

(27)

which is consistent with the parameters used in Figures 3 and 4. Note that in Figure 5B we violated this condition and, as a consequence, the bistability property is lost.

In the presence of the desynchronizing stimulation (i.e., Δρ_{av}=0), the condition C2 implies that α_{1}(1−*Nw*_{av})+α_{2}<0. Because α_{1}<0 (see Eq. 27), this condition is satisfied for all *w*_{av} if α_{1}(1−*Nw*_{max})+α_{2}<0. By introducing the values for α_{1} and α_{2}, we get:

$${a}^{\text{pre}}+{a}^{\text{post}}<-\overline{W}\frac{M{h}_{\text{av}}{\overline{\rho}}_{\text{av}}}{1-N{w}_{\mathrm{max}}}$$

(28)

By combining the conditions C1 and C2, α_{1} has to obey

$$-\frac{{\alpha}_{2}+{\alpha}_{3}}{1-N{w}_{\mathrm{max}}}<{\alpha}_{1}<\mathrm{min}\left(-\frac{{\alpha}_{2}}{1-N{w}_{\mathrm{max}}},-\left({\alpha}_{2}+{\alpha}_{3}\right)\right)$$

(29)
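Conditions C1 and C2 and the resulting window for α₁ (Eq. 29) can be checked by brute force on a grid of weights. The α values below are illustrative:

```python
import numpy as np

N, w_max = 100, 0.0075
alpha2, alpha3 = 0.2, 0.3
w_grid = np.linspace(0.0, w_max, 1001)

def F(w, a1, a3):
    # Eq. 23 in the low-frequency limit eps~(omega0) ~ 1.
    u = 1.0 - N * w
    return a1 / u + (alpha2 + a3) / u ** 2

def conditions(a1):
    f = F(w_grid, a1, alpha3)
    # C1: F goes from negative (at w = 0) to positive (at w_max), so an
    # unstable fixed point lies strictly inside [0, w_max].
    c1 = bool(f[0] < 0) and bool(f[-1] > 0)
    # C2: stimulation on means Delta rho_av = 0, i.e., alpha3 = 0, and
    # F must then be negative over the whole weight range.
    c2 = bool(np.all(F(w_grid, a1, 0.0) < 0))
    return c1, c2

# Eq. 29 predicts alpha1 must lie in (-2, -0.8) for these alpha2, alpha3.
inside = conditions(-1.0)      # within the window: both conditions hold
too_weak = conditions(-0.6)    # above the upper bound: C2 fails
too_strong = conditions(-2.5)  # below the lower bound: C1 fails
```

Scanning α₁ this way reproduces the interval of Eq. 29: only values inside it make the network bistable without stimulation and monostable (desynchronized) with stimulation.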

In summary, in order to satisfy both conditions C1 and C2, the learning rule must be nearly additive (μ≃0), the learning window must be overall positive $(\overline{W}>0)$ and the sum of the coefficients tuning the effect of single spikes (*a*^{pre} and *a*^{post}) must be negative with upper and lower bounds set by Eq. 29.

Note that the bistability conditions derived above are valid for the averaged recurrent weight and do not apply to the individual weights of the recurrent population. As a consequence, this bistability condition does not necessarily imply a bimodal weight distribution. Indeed, if we consider the same set-up as in Figure 4, but draw the initial weights from a uniform distribution between 0 and *w*_{max}, then all the weights eventually settle into a unimodal distribution around *w*=0 (data not shown).

A bimodal weight distribution has been shown to occur in recurrent networks with an additive STDP learning rule (Gilson et al., 2009b), favoring neurons that receive correlated inputs. For the sake of simplicity we considered identical (and uncorrelated) inputs in the simulations, but the recurrent weight dynamics described by Eq. 15 remains valid for any type of input.

In this paper, we developed a model of a recurrent network with plastic synapses that is sufficiently simple to be treated analytically but that still yields the desired properties. In particular, we showed that when (a) the STDP learning rule is near additive and (b) the learning window is overall positive and (c) the terms in the learning rule involving single spikes (*a*^{pre}+*a*^{post}) have a depressing effect (within some bounds), then a desynchronizing stimulation favors long-term depression in the recurrent synapses and therefore drives the network from a highly synchronous state to a desynchronous state. In this way, our study confirms previously performed simulation studies (Tass and Majtanik, 2006; Hauptmann and Tass, 2007, 2009; Tass and Hauptmann, 2007) and, in particular, contributes to a theoretically sound foundation for the development of desynchronizing and, especially, anti-kindling brain stimulation techniques.

The concept of anti-kindling, i.e., of an unlearning of pathological synchrony and connectivity by desynchronizing stimulation, was introduced by Tass and Majtanik (2006). For this, STDP was incorporated into a generic network of phase oscillators, and both kindling and anti-kindling processes were studied numerically. To approach a more microscopic level of description, a network of bursting neurons was introduced as a simple model for an oscillatory population in the STN (Tass and Hauptmann, 2006, 2007; Hauptmann and Tass, 2007). With this model, different aspects of kindling and anti-kindling processes have been studied numerically, such as the effect of inhibition vs. excitation (Hauptmann and Tass, 2007), the impact of weak and particularly short stimuli (Tass and Hauptmann, 2006), post-stimulus transients and cumulative stimulation effects (Hauptmann and Tass, 2009), as well as the differential effects of different sorts of desynchronizing stimulation protocols (Tass and Hauptmann, 2009). These questions are highly relevant for the clinical application of desynchronizing stimulation to the therapy of diseases with abnormal synchrony, e.g., Parkinson's disease. In forthcoming studies, more microscopic models will be studied. However, beyond this type of numerical simulation analysis, we aim at establishing a thorough analytical framework for the anti-kindling concept. As a first step in that direction, we presented here an analytical treatment of a simple network model, which we validated by numerical simulation.

Different types of neurons and, hence, different target areas in the brain may be associated with different types of STDP learning rules. Accordingly, brain stimulation approaches might be optimized with respect to a particular type of STDP learning rule in order to achieve superior therapeutic effects.

This work is not the first to treat STDP in oscillatory recurrent networks. Karbowski and Ermentrout (2002) model neurons as formal phase oscillators and therefore do not consider spiking neurons as such. Furthermore, they use a learning window with identical potentiation and depression time constants, which limits the generality of their results. Maistrenko et al. (2007) also describe neurons as phase oscillators, but allow the potentiation and depression time constants to differ. Interestingly, they find multistability depending on those time constants and on the weight upper bound. Morrison et al. (2007) consider an STDP learning rule with multiplicative depression and a power-law dependence for potentiation, and show numerically that this STDP rule decouples synchronous inputs from the rest of the network.

Our present model can be seen as an extension of the work of Gilson et al. (2009a,b) in several respects. First, and most importantly, we consider oscillatory input, whereas Gilson et al. (2009b) assume stationary inputs. In this way we are able to derive the conditions on the learning parameters that yield a bistable regime with high and low synchrony. Secondly, our approach does not require calculating the inverse Fourier transform of the spiking covariance matrix. In this way we do not need to make the (unrealistic) assumption that the EPSP time constant is much smaller than the potentiation time constant of the learning window. Finally, we consider a larger class of neuronal models, since we do not restrict ourselves to linear neurons but consider instead (locally linearized) non-linear neurons.

There are several ways in which the present model can be extended. It is known that propagation delays play an important role in neuronal synchrony (Gerstner et al., 1996a; Cassenaer and Laurent, 2007; Lubenov and Siapas, 2008). Our framework can easily be extended to incorporate those propagation delays. A systematic analysis of the influence of those delays is beyond the scope of the current study but would certainly be relevant. According to the arguments of Lubenov and Siapas (2008), one could expect that the longer the synaptic delays, the stronger the effective depression. This could potentially change the stability of highly synchronous states.
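In the Fourier-domain formulation used here, a conduction delay *d* enters only as a phase factor: replacing ε(*s*) by ε(*s*−*d*) turns $\tilde{\epsilon}(\omega )$ into ${e}^{-i\omega d}\tilde{\epsilon}(\omega ),$ so delays can be folded into $\tilde{J}(\omega )$ and $\tilde{K}(\omega )$ without changing the structure of the covariance equations. A minimal numerical check, with a hypothetical exponential EPSP kernel and an arbitrary delay (all values are placeholders):

```python
import numpy as np

tau = 10e-3              # hypothetical EPSP time constant (s)
d = 3e-3                 # hypothetical propagation delay (s)
omega = 2 * np.pi * 8.0  # test frequency (rad/s), e.g. an 8-Hz rhythm

def eps(t):
    """Exponential EPSP kernel: eps(t) = exp(-t/tau)/tau for t > 0, else 0."""
    return np.where(t > 0, np.exp(-t / tau) / tau, 0.0)

# Fourier transform of the undelayed kernel: eps_tilde(w) = 1/(1 + i w tau)
eps_tilde = 1.0 / (1.0 + 1j * omega * tau)

# Numerical Fourier transform of the delayed kernel eps(t - d)
h = 1e-6
t = np.arange(0.0, 0.2, h) + h / 2
eps_tilde_delayed = np.sum(np.exp(-1j * omega * t) * eps(t - d)) * h

# The delay appears only as a phase factor exp(-i w d)
prediction = np.exp(-1j * omega * d) * eps_tilde
print(np.abs(eps_tilde_delayed - prediction))
```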

Another way to extend the current model is to consider learning rules that depend on higher-order statistics of the pre- and postsynaptic spike trains. For example, it is known that a triplet learning rule, which considers one pre- and two postsynaptic spikes (Pfister and Gerstner, 2006a,b; Clopath et al., 2010), is more realistic in terms of reproducing experimental data than the pair-based STDP considered here. Another extension would be to consider second-order terms in the Taylor expansion of the transfer function. Both of those extensions would require a substantial amount of work, since the results of Hawkes (1971) could no longer be used.
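As a sketch of what such an extension involves, the triplet rule of Pfister and Gerstner (2006b) can be written with one presynaptic trace and two postsynaptic traces, the slow one gating extra potentiation. The amplitudes and time constants below are placeholders for illustration, not the fitted values of that paper:

```python
import numpy as np

# Placeholder parameters (illustration only, not fitted values)
tau_plus, tau_minus, tau_y = 16.8e-3, 33.7e-3, 114e-3  # trace time constants (s)
A2_plus, A2_minus, A3_plus = 5e-3, 7e-3, 6e-3          # pair / triplet amplitudes

def run_triplet_stdp(pre_spikes, post_spikes, w0=0.5, dt=1e-4, T=1.0):
    """Minimal Euler integration of the triplet STDP rule.

    r1: presynaptic pair trace; o1: fast postsynaptic trace;
    o2: slow postsynaptic (triplet) trace.
    """
    r1 = o1 = o2 = 0.0
    w = w0
    pre = set(np.round(np.asarray(pre_spikes) / dt).astype(int))
    post = set(np.round(np.asarray(post_spikes) / dt).astype(int))
    for n in range(int(T / dt)):
        # exponential decay of all traces
        r1 -= dt * r1 / tau_plus
        o1 -= dt * o1 / tau_minus
        o2 -= dt * o2 / tau_y
        if n in pre:    # presynaptic spike: pair-based depression
            w -= o1 * A2_minus
            r1 += 1.0
        if n in post:   # postsynaptic spike: pair + triplet potentiation
            w += r1 * (A2_plus + A3_plus * o2)  # o2 read before its own update
            o1 += 1.0
            o2 += 1.0
    return w

# pre->post pairing at +10 ms potentiates; post->pre pairing depresses
print(run_triplet_stdp([0.1], [0.11]), run_triplet_stdp([0.11], [0.1]))
```

With a single postsynaptic spike the slow trace *o*₂ is still zero when it is read, so the rule reduces to pair-based STDP; the triplet term only contributes at higher postsynaptic rates.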

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

We thank Oleksandr Popovych for fruitful discussions. Jean-Pascal Pfister has been supported by the Wellcome Trust.

In order to express the output spiking covariance matrix Δ*Q* as a function of the input spiking covariance matrix Δ*C*, we first need to define the input–output covariance matrix Δ*P*:

$$\Delta P(\tau )=\frac{1}{T}{\int}_{0}^{T}{\langle y(t)\,{x}^{T}(t-\tau )\rangle}_{y(t),x(t-\tau )}\,dt-\overline{\nu}\,{\overline{\rho}}^{T}$$

(30)

By rewriting the average ${\langle \cdot \rangle}_{y(t),x(t-\tau )}$ involved here and by using Eqs 1, 2, and 3, the input–output covariance matrix can be expressed in a self-consistent way as:

$$\begin{array}{rl}\Delta P(\tau )&=\frac{1}{T}{\int}_{0}^{T}{\left\langle {\langle y(t)\rangle}_{y(t)|{x}^{t},{y}^{t}}\,{x}^{T}(t-\tau )\right\rangle}_{{x}^{t},{y}^{t},x(t-\tau )}dt-\overline{\nu}\,{\overline{\rho}}^{T}\\ &=\frac{1}{T}{\int}_{0}^{T}{\left\langle \left(\overline{\nu}+{D}^{\prime}\left(h{x}^{\epsilon}(t)+w{y}^{\epsilon}(t)-\overline{u}\right)\right){x}^{T}(t-\tau )\right\rangle}_{{x}^{t},{y}^{t},x(t-\tau )}dt-\overline{\nu}\,{\overline{\rho}}^{T}\\ &={D}^{\prime}h\,\Delta {C}^{\epsilon}(\tau )+{D}^{\prime}w\,\Delta {P}^{\epsilon}(\tau )\end{array}$$

(31)

where *D*′=diag(*g*′(*ū*)) is a diagonal matrix with ${{D}^{\prime}}_{ii}={g}^{\prime}({\overline{u}}_{i}).$ Here the ε superscript denotes, as before, convolution with ε(*s*), i.e., $\Delta {C}^{\epsilon}(\tau )={\int}_{-\infty}^{\infty}\epsilon (s)\Delta C(\tau -s)\,ds$ and $\Delta {P}^{\epsilon}(\tau )={\int}_{-\infty}^{\infty}\epsilon (s)\Delta P(\tau -s)\,ds.$ In order to get an explicit expression for this input–output covariance matrix, we can Fourier transform Eq. 31 and therefore turn the convolution into a product in the angular frequency domain *ω*:

$$\Delta \tilde{P}(\omega )={\left(\mathbb{1}-\tilde{J}(\omega )\right)}^{-1}\tilde{K}(\omega )\,\Delta \tilde{C}(\omega )$$

(32)

where $\mathbb{1}$ denotes the *N*×*N* identity matrix, $\tilde{J}(\omega )=\tilde{\epsilon}(\omega ){D}^{\prime}w,$ and $\tilde{K}(\omega )=\tilde{\epsilon}(\omega ){D}^{\prime}h.$ As before, the tilde $(\tilde{\ })$ denotes the Fourier transform, i.e., $\Delta \tilde{C}(\omega )={\int}_{-\infty}^{\infty}\mathrm{exp}(-i\omega t)\Delta C(t)\,dt.$
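Equation 32 is simply the solution of the linear self-consistency relation Eq. 31 in the frequency domain. A toy numerical instance (a small random network with an exponential EPSP kernel; all sizes and values are arbitrary illustrations) can confirm that the closed form solves the fixed-point equation $\Delta \tilde{P}=\tilde{K}\Delta \tilde{C}+\tilde{J}\Delta \tilde{P}$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 3              # toy numbers of recurrent neurons / inputs
tau = 10e-3              # EPSP kernel: eps_tilde(w) = 1/(1 + i w tau)
omega = 2 * np.pi * 12.0

Dp = np.diag(rng.uniform(0.5, 1.5, N))   # D' = diag(g'(u_bar))
w = 0.1 * rng.standard_normal((N, N))    # recurrent weights (kept small)
h = 0.1 * rng.standard_normal((N, M))    # feedforward weights
C0 = rng.standard_normal((M, M))
dC = C0 @ C0.T                           # a symmetric input covariance at this omega

eps_tilde = 1.0 / (1.0 + 1j * omega * tau)
J = eps_tilde * Dp @ w                   # J_tilde(omega)
K = eps_tilde * Dp @ h                   # K_tilde(omega)
I = np.eye(N)

# Eq. 32: dP = (1 - J)^{-1} K dC
dP = np.linalg.solve(I - J, K @ dC)

# Self-consistency (Eq. 31 in the frequency domain): dP = K dC + J dP
print(np.allclose(dP, K @ dC + J @ dP))
```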

We recall here for convenience the definition of the output covariance matrix (see Eq. 8):

$$\Delta Q(\tau )=\frac{1}{T}{\int}_{0}^{T}{\langle y(t)\,{y}^{T}(t-\tau )\rangle}_{y(t),y(t-\tau )}\,dt-\overline{\nu}\,{\overline{\nu}}^{T}$$

(33)

For τ≥0, we have:

$$\begin{array}{rl}\Delta Q(\tau )&=\frac{1}{T}{\int}_{0}^{T}{\left\langle {\langle y(t)\rangle}_{y(t)|{x}^{t},{y}^{t}}\,{y}^{T}(t-\tau )\right\rangle}_{{x}^{t},{y}^{t}}dt-\overline{\nu}\,{\overline{\nu}}^{T}+D\delta (\tau )\\ &=\frac{1}{T}{\int}_{0}^{T}{\left\langle \left(\overline{\nu}+{D}^{\prime}\left(h{x}^{\epsilon}(t)+w{y}^{\epsilon}(t)-\overline{u}\right)\right){y}^{T}(t-\tau )\right\rangle}_{{x}^{t},{y}^{t}}dt-\overline{\nu}\,{\overline{\nu}}^{T}+D\delta (\tau )\\ &={D}^{\prime}h\,\Delta {P}^{\epsilon T}(-\tau )+{D}^{\prime}w\,\Delta {Q}^{\epsilon}(\tau )+D\delta (\tau )\end{array}$$

(34)

with *D*=diag(*g*(*ū*)). Since the above identity is only valid for τ≥0, we cannot simply Fourier transform this expression and extract the output covariance matrix in the frequency domain. We will use here the same method as proposed by Hawkes (1971). Let *B*(τ) be a supplementary matrix defined as:

$$B(\tau )={D}^{\prime}\left(h\,\Delta {P}^{\epsilon T}(-\tau )+w\,\Delta {Q}^{\epsilon}(\tau )\right)+D\delta (\tau )-\Delta Q(\tau )$$

(35)

Because this supplementary matrix is defined for all τ (and, by Eq. 34, vanishes for τ≥0), we can Fourier transform it and express the output covariance matrix in the frequency domain:

$$\Delta \tilde{Q}(\omega )={\left(\mathbb{1}-\tilde{J}(\omega )\right)}^{-1}\left(\tilde{K}(\omega )\,\Delta {\tilde{P}}^{T}(-\omega )+D-\tilde{B}(\omega )\right)$$

(36)

From the definition of the output covariance matrix Δ*Q*(τ) in Eq. 33, we have Δ*Q*(−τ)=Δ*Q*^{T}(τ) and hence $\Delta \tilde{Q}(-\omega )=\Delta {\tilde{Q}}^{T}(\omega ).$ By using this property and the fact that ${\left(\mathbb{1}-\tilde{J}(-\omega )\right)}^{-1}\tilde{K}(-\omega )\Delta {\tilde{P}}^{T}(\omega )=\Delta \tilde{P}(-\omega ){\tilde{K}}^{T}(\omega ){\left(\mathbb{1}-{\tilde{J}}^{T}(\omega )\right)}^{-1},$ we have

$${\left(\mathbb{1}-\tilde{J}(-\omega )\right)}^{-1}\left(D-\tilde{B}(-\omega )\right)=\left(D-{\tilde{B}}^{T}(\omega )\right){\left(\mathbb{1}-{\tilde{J}}^{T}(\omega )\right)}^{-1}$$

(37)

or, equivalently

$$\tilde{B}(-\omega )\left(\mathbb{1}-{\tilde{J}}^{T}(\omega )\right)+D{\tilde{J}}^{T}(\omega )=\left(\mathbb{1}-\tilde{J}(-\omega )\right){\tilde{B}}^{T}(\omega )+\tilde{J}(-\omega )D$$

(38)

In order to express $\tilde{B}(\omega ),$ Hawkes (1971) used a regularity argument which goes as follows. Let us assume that ε(*t*) and |*B*(*t*)| are bounded as follows:

$$\epsilon (t)<{\epsilon}_{0}\,\mathrm{exp}(-\eta t)$$

$$|B(t)|<{B}_{0}\,\mathrm{exp}(\eta t)$$

(39)

We can show that $\tilde{\epsilon}(\omega )$ (and hence $\tilde{J}(\omega )$) is regular for $\Im (\omega )<\eta$ and that $\tilde{B}(\omega )$ is regular for $\Im (\omega )>-\eta .$ Let *H*(*ω*) be defined as the l.h.s. of Eq. 38 for $\Im (\omega )<\eta$ and as the r.h.s. of Eq. 38 for $\Im (\omega )>-\eta .$ In this way *H*(*ω*) is regular everywhere, and since *H*(*ω*)→0 when |*ω*|→∞, we know from Liouville's theorem that *H*(*ω*)=0. In particular, setting the r.h.s. of Eq. 38 to zero, we can express the supplementary matrix as:

$$\tilde{B}(\omega )=-D{\tilde{J}}^{T}(-\omega ){\left(\mathbb{1}-{\tilde{J}}^{T}(-\omega )\right)}^{-1}$$

(40)

If we insert this expression back into Eq. 36, we get:

$$\Delta \tilde{Q}(\omega )={\left(\mathbb{1}-\tilde{J}(\omega )\right)}^{-1}\left\{\tilde{K}(\omega )\,\Delta \tilde{C}(\omega )\,{\tilde{K}}^{T}(-\omega )+D\right\}{\left(\mathbb{1}-{\tilde{J}}^{T}(-\omega )\right)}^{-1}$$

(41)
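The algebra from Eq. 36 to Eq. 41 can be spot-checked numerically: substituting $\tilde{B}(\omega )$ of Eq. 40 and $\Delta \tilde{P}(-\omega )$ of Eq. 32 into Eq. 36 must reproduce the closed form of Eq. 41. A toy instance with arbitrary matrices and an exponential EPSP kernel (all values are hypothetical; a frequency-independent symmetric $\Delta \tilde{C}$ is used so that $\Delta \tilde{C}(-\omega )=\Delta {\tilde{C}}^{T}(\omega )$ holds trivially):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 4, 3              # toy network sizes
tau = 10e-3              # exponential EPSP kernel: eps_tilde(w) = 1/(1 + i w tau)
omega = 2 * np.pi * 12.0

Dp = np.diag(rng.uniform(0.5, 1.5, N))   # D' = diag(g'(u_bar))
D = np.diag(rng.uniform(1.0, 5.0, N))    # D  = diag(g(u_bar))
w = 0.1 * rng.standard_normal((N, N))    # recurrent weights (kept small)
h = 0.1 * rng.standard_normal((N, M))    # feedforward weights
C0 = rng.standard_normal((M, M))
dC = C0 @ C0.T   # constant symmetric spectrum: dC(-w) = dC(w)^T trivially

def eps_tilde(om): return 1.0 / (1.0 + 1j * om * tau)
def Jt(om): return eps_tilde(om) * Dp @ w
def Kt(om): return eps_tilde(om) * Dp @ h

I = np.eye(N)

# Eq. 32 evaluated at -omega gives dP_tilde(-omega)
dP_minus = np.linalg.solve(I - Jt(-omega), Kt(-omega) @ dC)

# Eq. 40 (note the transposed resolvent): B(w) = -D J^T(-w) (1 - J^T(-w))^{-1}
JmT = Jt(-omega).T
B = -D @ JmT @ np.linalg.inv(I - JmT)

# Eq. 36: dQ(w) = (1 - J(w))^{-1} (K(w) dP^T(-w) + D - B(w))
dQ_36 = np.linalg.solve(I - Jt(omega), Kt(omega) @ dP_minus.T + D - B)

# Eq. 41: dQ(w) = (1 - J(w))^{-1} {K(w) dC(w) K^T(-w) + D} (1 - J^T(-w))^{-1}
core = Kt(omega) @ dC @ Kt(-omega).T + D
dQ_41 = np.linalg.solve(I - Jt(omega), core) @ np.linalg.inv(I - Jt(-omega).T)

print(np.allclose(dQ_36, dQ_41))
```

The key step is that $D-\tilde{B}(\omega )=D{\left(\mathbb{1}-{\tilde{J}}^{T}(-\omega )\right)}^{-1},$ which produces the right-hand resolvent of Eq. 41.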

^{1}Each component of *x*(*t*) is convolved with ε(*s*), i.e., ${x}_{k}^{\epsilon}(t)={\int}_{0}^{\infty}\epsilon (s)\,{x}_{k}(t-s)\,ds,$ *k*=1,…,*M*.

^{2}Other types of coupled oscillators have the property that strong coupling may destroy synchronization under particular conditions, e.g., in a population of identical diffusively coupled Rössler oscillators (Heagy et al., 1995) or in a system of two coupled chaotic non-invertible maps (Maistrenko et al., 1998). However, such models do not apply to the pathophysiology considered here. Furthermore, these models do not contain STDP, and it remains to be shown whether such phenomena persist in the presence of STDP.

^{3}The last term in Eq. 15 is obtained by defining ${A}_{ij}(t)={\int}_{-\infty}^{\infty}W(s-t,\,{w}_{ij})\Delta {Q}_{ij}(s)\,ds,$ then taking the Fourier transform ${\tilde{A}}_{ij}(\omega )=\tilde{W}(-\omega ,{w}_{ij})\Delta {\tilde{Q}}_{ij}(\omega ),$ and finally taking the inverse Fourier transform ${A}_{ij}(t)={(2\pi )}^{-1}{\int}_{-\infty}^{\infty}\mathrm{exp}(i\omega t){\tilde{A}}_{ij}(\omega )\,d\omega $ evaluated at *t*=0.

^{4}Note that from the definition of the Fourier transform we have $\Re \text{\hspace{0.17em}}(\tilde{W}(-{\omega}_{0},{w}_{\text{av}}))=\Re \text{\hspace{0.17em}}(\tilde{W}({\omega}_{0},{w}_{\text{av}})).$

- Benabid A., Chabardes S., Mitrofanis J., Pollak P. (2009). Deep brain stimulation of the subthalamic nucleus for the treatment of Parkinson's disease. Lancet Neurol. 8, 67–81. doi:10.1016/S1474-4422(08)70291-6
- Benabid A., Pollak P., Gervason C., Hoffmann D., Gao D., Hommel M., Perret J., De Rougemont J. (1991). Long-term suppression of tremor by chronic stimulation of the ventral intermediate thalamic nucleus. Lancet 337, 403. doi:10.1016/0140-6736(91)91175-T
- Bi G., Poo M. (1998). Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci. 18, 10464–10472
- Cassenaer S., Laurent G. (2007). Hebbian STDP in mushroom bodies facilitates the synchronous flow of olfactory information in locusts. Nature 448, 709–713. doi:10.1038/nature05973
- Clopath C., Büsing L., Vasilaki E., Gerstner W. (2010). Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nat. Neurosci. 13, 344–352. doi:10.1038/nn.2479
- Gerstner W., Kempter R., van Hemmen J., Wagner H. (1996a). A neuronal learning rule for sub-millisecond temporal coding. Nature 383, 76–78. doi:10.1038/383076a0
- Gerstner W., Kistler W. K. (2002). Spiking Neuron Models. Cambridge, UK: Cambridge University Press
- Gerstner W., van Hemmen J. L., Cowan J. D. (1996b). What matters in neuronal locking. Neural Comput. 8, 1653–1676. doi:10.1162/neco.1996.8.8.1653
- Gilson M., Burkitt A., Grayden D., Thomas D., van Hemmen J. (2009a). Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks III: partially connected neurons driven by spontaneous activity. Biol. Cybern. 101, 411–426. doi:10.1007/s00422-009-0343-4
- Gilson M., Burkitt A., Grayden D., Thomas D., van Hemmen J. (2009b). Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks IV: structuring synaptic pathways among recurrent connections. Biol. Cybern. 101, 427–444. doi:10.1007/s00422-009-0346-1
- Gütig R., Aharonov R., Rotter S., Sompolinsky H. (2003). Learning input correlations through non-linear temporally asymmetric Hebbian plasticity. J. Neurosci. 23, 3697–3714
- Hauptmann C., Tass P. A. (2007). Therapeutic rewiring by means of desynchronizing brain stimulation. Biosystems 89, 173–181. doi:10.1016/j.biosystems.2006.04.015
- Hauptmann C., Tass P. A. (2009). Cumulative and after-effects of short and weak coordinated reset stimulation: a modeling study. J. Neural Eng. 6, 016004. doi:10.1088/1741-2560/6/1/016004
- Hawkes A. G. (1971). Spectra of some self-exciting and mutually exciting processes. Biometrika 58, 83–90. doi:10.1093/biomet/58.1.83
- Heagy J., Pecora L., Carroll T. (1995). Short wavelength bifurcations and size instabilities in coupled oscillator systems. Phys. Rev. Lett. 74, 4185–4188. doi:10.1103/PhysRevLett.74.4185
- Karbowski J., Ermentrout G. (2002). Synchrony arising from a balanced synaptic plasticity in a network of heterogeneous neural oscillators. Phys. Rev. E 65, 031902. doi:10.1103/PhysRevE.65.031902
- Kempter R., Gerstner W., van Hemmen J. L. (1999). Hebbian learning and spiking neurons. Phys. Rev. E 59, 4498–4514. doi:10.1103/PhysRevE.59.4498
- Kuhn A., Kupsch A., Schneider G., Brown P. (2006). Reduction in subthalamic 8–35 Hz oscillatory activity correlates with clinical improvement in Parkinson's disease. Eur. J. Neurosci. 23, 1956. doi:10.1111/j.1460-9568.2006.04717.x
- Kuramoto Y. (1984). Chemical Oscillations, Waves, and Turbulence. Berlin, Heidelberg, New York: Springer, 68–77
- Lubenov E., Siapas A. (2008). Decoupling through synchrony in neuronal circuits with propagation delays. Neuron 58, 118–131. doi:10.1016/j.neuron.2008.01.036
- Maistrenko Y., Maistrenko V., Popovich A., Mosekilde E. (1998). Transverse instability and riddled basins in a system of two coupled logistic maps. Phys. Rev. E 57, 2713–2724. doi:10.1103/PhysRevE.57.2713
- Maistrenko Y. L., Lysyansky B., Hauptmann C., Burylko O., Tass P. A. (2007). Multistability in the Kuramoto model with synaptic plasticity. Phys. Rev. E 75, 066207. doi:10.1103/PhysRevE.75.066207
- Markram H., Lübke J., Frotscher M., Sakmann B. (1997). Regulation of synaptic efficacy by coincidence of postsynaptic AP and EPSP. Science 275, 213–215. doi:10.1126/science.275.5297.213
- Morrison A., Aertsen A., Diesmann M. (2007). Spike-timing dependent plasticity in balanced random networks. Neural Comput. 19, 1437–1467. doi:10.1162/neco.2007.19.6.1437
- Morrison A., Diesmann M., Gerstner W. (2008). Phenomenological models of synaptic plasticity based on spike timing. Biol. Cybern. 98, 459–478. doi:10.1007/s00422-008-0233-1
- Nini A., Feingold A., Slovin H., Bergman H. (1995). Neurons in the globus pallidus do not show correlated activity in the normal monkey, but phase-locked oscillations appear in the MPTP model of Parkinsonism. J. Neurophysiol. 74, 1800
- Pare D., Curro'Dossi R., Steriade M. (1990). Neuronal basis of the Parkinsonian resting tremor: a hypothesis and its implications for treatment. Neuroscience 35, 217–226. doi:10.1016/0306-4522(90)90077-H
- Pfister J.-P., Gerstner W. (2006a). "Beyond pair-based STDP: a phenomenological rule for spike triplet and frequency effects," in Advances in Neural Information Processing Systems 18, eds Y. Weiss, B. Schölkopf, and J. Platt (Cambridge: MIT Press), 1083–1090
- Pfister J.-P., Gerstner W. (2006b). Triplets of spikes in a model of spike timing-dependent plasticity. J. Neurosci. 26, 9673–9682. doi:10.1523/JNEUROSCI.1425-06.2006
- Smirnov D., Barnikol U., Barnikol T., Bezruchko B. P., Hauptmann C., Bührle C., Maarouf M., Sturm V., Freund H. J., Tass P. A. (2008). The generation of Parkinsonian tremor as revealed by directional coupling analysis. Europhys. Lett. 83, 20003. doi:10.1209/0295-5075/83/20003
- Tass P. A. (1999). Phase Resetting in Medicine and Biology: Stochastic Modelling and Data Analysis. Berlin: Springer
- Tass P. A. (2003). A model of desynchronizing deep brain stimulation with a demand-controlled coordinated reset of neural subpopulations. Biol. Cybern. 89, 81–88. doi:10.1007/s00422-003-0425-7
- Tass P. A., Hauptmann C. (2006). Long-term anti-kindling effects induced by short-term, weak desynchronizing stimulation. Nonlinear Phenomena Complex Syst. 9, 298
- Tass P. A., Hauptmann C. (2007). Therapeutic modulation of synaptic connectivity with desynchronizing brain stimulation. Int. J. Psychophysiol. 64, 53–61. doi:10.1016/j.ijpsycho.2006.07.013
- Tass P. A., Hauptmann C. (2009). Anti-kindling achieved by stimulation targeting slow synaptic dynamics. Restor. Neurol. Neurosci. 27, 589–609
- Tass P. A., Majtanik M. (2006). Long-term anti-kindling effects of desynchronizing brain stimulation: a theoretical study. Biol. Cybern. 94, 58–66. doi:10.1007/s00422-005-0028-6
- Tass P. A., Silchenko A. N., Hauptmann C., Barnikol U. B., Speckmann E. J. (2009). Long-lasting desynchronization in rat hippocampal slice induced by coordinated reset stimulation. Phys. Rev. E 80, 011902. doi:10.1103/PhysRevE.80.011902
- Tass P. A., Smirnov D., Karavaev A., Barnikol U., Barnikol T., Adamchic I., Hauptmann C., Pawelcyzk N., Maarouf M., Sturm V., Freund H., Bezruchko B. (2010). The causal relationship between subcortical local field potential oscillations and Parkinsonian resting tremor. J. Neural Eng. 7, 1–16. doi:10.1088/1741-2560/7/1/016009
- Temperli P., Ghika J., Villemure J., Burkhard P., Bogousslavsky J., Vingerhoets F. (2003). How do Parkinsonian signs return after discontinuation of subthalamic DBS? Neurology 60, 78
- van Rossum M. C. W., Bi G. Q., Turrigiano G. G. (2000). Stable Hebbian learning from spike timing-dependent plasticity. J. Neurosci. 20, 8812–8821

