Spatiotemporal pattern formation in neuronal networks depends on the interplay between cellular and network synchronization properties. The neuronal phase response curve (PRC) is an experimentally obtainable measure that characterizes the cellular response to small perturbations, and can serve as an indicator of cellular propensity for synchronization. Two broad classes of PRCs have been identified for neurons: Type I, in which small excitatory perturbations induce only advances in firing, and Type II, in which small excitatory perturbations can induce both advances and delays in firing. Interestingly, neuronal PRCs are usually attenuated with increased spiking frequency, and Type II PRCs typically exhibit a greater attenuation of the phase delay region than of the phase advance region. We found that this phenomenon arises from an interplay between the time constants of active ionic currents and the interspike interval. As a result, excitatory networks consisting of neurons with Type I PRCs responded very differently to frequency modulation compared to excitatory networks composed of neurons with Type II PRCs. Specifically, increased frequency induced a sharp decrease in synchrony of networks of Type II neurons, while frequency increases only minimally affected synchrony in networks of Type I neurons. These results are demonstrated in networks in which both types of neurons were modeled generically with the Morris-Lecar model, as well as in networks consisting of Hodgkin-Huxley-based model cortical pyramidal cells in which simulated effects of acetylcholine changed PRC type. These results are robust to different network structures, synaptic strengths and modes of driving neuronal activity, and they indicate that Type I and Type II excitatory networks may display two distinct modes of processing information.
Synchronization of the firing of neurons in the brain is related to many cognitive functions, such as recognizing faces, discriminating odors, and coordinating movement. It is therefore important to understand what properties of neuronal networks promote synchrony of neural firing. One measure that is often used to determine the contribution of individual neurons to network synchrony is called the phase response curve (PRC). PRCs describe how the timing of neuronal firing changes depending on when input, such as a synaptic signal, is received by the neuron. A characteristic of PRCs that has previously not been well understood is that they change dramatically as the neuron's firing frequency is modulated. This effect carries potential significance, since cognitive functions are often associated with specific frequencies of network activity in the brain. We showed computationally that the frequency dependence of PRCs can be explained by the relative timing of ionic membrane currents with respect to the time between spike firings. Our simulations also showed that the frequency dependence of neuronal PRCs leads to frequency-dependent changes in network synchronization that can be different for different neuron types. These results further our understanding of how synchronization is generated in the brain to support various cognitive functions.
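The PRC measurement described above can be illustrated with a minimal numerical sketch. The code below uses the theta neuron, a generic Type I oscillator model, purely as a compact stand-in (the papers summarized here use Morris-Lecar and Hodgkin-Huxley-based cells): a brief current kick is delivered at different phases of the cycle and the resulting shift of the next spike is recorded. For an excitatory kick, every phase yields an advance, the signature of a Type I PRC.

```python
import numpy as np

def spike_time(I, dt=2e-4, kick_time=None, kick=0.0):
    """Integrate the theta neuron from theta = -pi until it crosses pi.
    An optional impulsive current kick (applied in the equivalent
    voltage coordinate V = tan(theta/2)) is delivered at kick_time."""
    th, t = -np.pi + 1e-9, 0.0
    kicked = kick_time is None
    while th < np.pi:
        if not kicked and t >= kick_time:
            th = 2.0 * np.arctan(np.tan(th / 2.0) + kick)
            kicked = True
        th += dt * ((1.0 - np.cos(th)) + I * (1.0 + np.cos(th)))
        t += dt
    return t

def prc(I=1.0, kick=0.1, n=16, dt=2e-4):
    """Sample the PRC: normalized advance of the next spike as a
    function of the phase at which the kick is delivered."""
    T0 = spike_time(I, dt)
    phases = np.linspace(0.05, 0.95, n)
    return phases, np.array(
        [(T0 - spike_time(I, dt, phi * T0, kick)) / T0 for phi in phases])
```

With I = 1 the unperturbed period is exactly pi, and every sampled phase returns a positive (advancing) resetting value, as expected for a Type I cell.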
How stable synchrony in neuronal networks is sustained in the presence of conduction delays is an open question. The dynamic clamp was used to measure phase resetting curves (PRCs) for entorhinal cortical cells, and then to construct networks of two such neurons. PRCs were in general Type I (all advances or all delays) or weakly Type II, with a small region at early phases exhibiting the opposite sign of resetting. We used previously developed theoretical methods based on PRCs under the assumption of pulsatile coupling to predict the delays that synchronize these hybrid circuits. For excitatory coupling, synchrony was predicted and observed only with no delay and for delays greater than half a network period, which cause each neuron to receive an input late in its firing cycle and almost immediately fire an action potential. Synchronization for these long delays was surprisingly tight and robust to the noise and heterogeneity inherent in a biological system. In contrast to excitatory coupling, inhibitory coupling led to antiphase for no delay, very short delays, and delays close to a network period, but to near-synchrony for a wide range of relatively short delays. PRC-based methods show that conduction delays can stabilize synchrony in several ways, including neutralizing a discontinuity introduced by strong inhibition, favoring synchrony in the case of noisy bistability, and avoiding an initial destabilizing region of a weakly Type II PRC. PRCs can identify optimal conduction delays favoring synchronization at a given frequency, and also predict robustness to noise and heterogeneity.
Individual oscillators, such as pendulum-based clocks and fireflies, can spontaneously organize into a coherent, synchronized entity with a common frequency. Neurons can oscillate under some circumstances, and can synchronize their firing both within and across brain regions. Synchronized assemblies of neurons are thought to underlie cognitive functions such as recognition, recall, perception and attention. Pathological synchrony can lead to epilepsy, tremor and other dynamical diseases, and synchronization is altered in most mental disorders. Biological neurons synchronize despite conduction delays, heterogeneous circuit composition, and noise. In biological experiments, we built simple networks in which two living neurons could interact via a computer in real time. The computer precisely controlled the nature of the connectivity and the length of the communication delays. We characterized the synchronization tendencies of individual, isolated oscillators by measuring how much a single input delivered by the computer transiently shortened or lengthened the cycle period of the oscillation. We then used this information to correctly predict the strong dependence of the coordination pattern of the firing of the component neurons on the length of the communication delays. Upon this foundation, we can begin to build a theory of the basic principles of synchronization in more complex brain circuits.
A central problem in cortical processing, including sensory binding and attentional gating, is how neurons can synchronize their responses with zero or near-zero time lag. For a spontaneously firing neuron, an input from another neuron can delay or advance the next spike by different amounts depending upon the timing of the input relative to the previous spike. This information constitutes the phase response curve (PRC). We present a simple graphical method for determining the effect of PRC shape on synchronization tendencies and illustrate it using type 1 PRCs, which consist entirely of advances (delays) in response to excitation (inhibition). We obtained the following generic solutions for type 1 PRCs, a class that includes the pulse-coupled leaky integrate-and-fire model. For pairs with mutual excitation, exact synchrony can be stable for strong coupling because of the stabilizing effect of the causal limit region of the PRC, in which an input triggers a spike immediately upon arrival. However, synchrony is unstable for short delays, because delayed inputs arrive during a refractory period and cannot trigger an immediate spike. Right skew destabilizes antiphase and enables modes with time lags that grow as the conduction delay is increased. Therefore, right skew favors near synchrony at short conduction delays and a gradual transition between synchrony and antiphase for pairs coupled by mutual excitation. For pairs with mutual inhibition, zero-time-lag synchrony is stable for conduction delays ranging from zero to a substantial fraction of the period. However, for right skew there is a preferred antiphase mode at short delays. In contrast to mutual excitation, left skew destabilizes antiphase for mutual inhibition, so that synchrony dominates at short delays as well. These pairwise synchronization tendencies constrain the synchronization properties of neurons embedded in larger networks.
synchrony; synchronization; pulsatile coupling; phase locking; phase resetting
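As a concrete instance of the type 1 PRCs discussed above, the PRC of the leaky integrate-and-fire model can be written in closed form. The sketch below (with illustrative parameter values, not taken from the paper) shows that excitatory kicks produce only advances, that the curve is right-skewed, and that it ends in a causal-limit region where a kick triggers an immediate spike.

```python
import numpy as np

def lif_prc(I=1.5, tau=1.0, dv=0.05, n=20):
    """PRC of a leaky integrate-and-fire neuron (threshold 1, reset 0)
    for an instantaneous voltage kick of size dv, in closed form."""
    T = tau * np.log(I / (I - 1.0))            # unperturbed period
    phases = np.linspace(0.0, 0.99, n)
    v = I * (1.0 - np.exp(-phases * T / tau))  # voltage at each phase
    # kicked voltage; the causal limit caps it at threshold (fire now)
    v_k = np.minimum(v + dv, 1.0)
    t_rem = tau * np.log((I - v_k) / (I - 1.0))  # time left to threshold
    advance = (1.0 - phases) - t_rem / T         # normalized phase advance
    return phases, advance
```

Because the membrane leaks, the same kick is worth more late in the cycle, so the advance grows with phase until the causal limit is reached, giving the right-skewed type 1 shape discussed in the abstract.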
Computational models offer a unique tool for understanding the network-dynamical mechanisms that mediate between physiological and biophysical properties and behavioral function. A traditional challenge in computational neuroscience, however, is that simple neuronal models that can be studied analytically fail to reproduce the diversity of electrophysiological behaviors seen in real neurons, while detailed neuronal models that do reproduce such diversity are analytically intractable and computationally expensive. A number of intermediate models have been proposed whose aim is to capture the diversity of firing behaviors and spike times of real neurons while entailing the simplest possible mathematical description. One such model is the exponential integrate-and-fire neuron with spike rate adaptation (aEIF), which consists of two differential equations for the membrane potential (V) and an adaptation current (w). Despite its simplicity, it can reproduce a wide variety of physiologically observed spiking patterns, can be fit quantitatively to physiological recordings, and, once fitted, is able to predict spike times on traces not used for model fitting. Here we compute the steady-state firing rate of the aEIF in the presence of Gaussian synaptic noise, using two approaches. The first approach is based on the 2-dimensional Fokker-Planck equation that describes the (V,w)-probability distribution, which is solved using an expansion in the ratio between the time constants of the two variables. The second is based on the firing rate of the EIF model, which is averaged over the distribution of the w variable. These analytically derived closed-form expressions were tested on simulations from a large variety of model cells quantitatively fitted to in vitro electrophysiological recordings from pyramidal cells and interneurons. Theoretical predictions closely agreed with the firing rate of the simulated cells fed with in-vivo-like synaptic noise.
adaptive exponential integrate-and-fire neuron; mean-field; Fokker-Planck equation; synaptic kinetics; spike-triggered adaptation; firing rate
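The setting of this abstract can be made concrete with a minimal simulation. The aEIF model obeys C dV/dt = -gL(V-EL) + gL*DT*exp((V-VT)/DT) - w + I and tau_w dw/dt = a(V-EL) - w, with V reset and w incremented by b at each spike. The sketch below drives the neuron with a mean current plus Gaussian white noise and estimates the steady-state firing rate from a long run; parameter values are illustrative stand-ins, not the fitted values used in the paper.

```python
import numpy as np

def aeif_rate(I0=400.0, sigma=200.0, a=2.0, b=40.0, T=2000.0, dt=0.05,
              seed=0):
    """Euler-Maruyama simulation of an aEIF neuron driven by a mean
    current I0 (pA) plus Gaussian white noise; returns the rate in Hz.
    Units: mV, ms, nS, pF, pA. Parameter values are illustrative."""
    C, gL, EL, VT, DT = 200.0, 10.0, -70.0, -50.0, 2.0
    Vr, Vcut, tau_w = -58.0, -30.0, 120.0
    rng = np.random.default_rng(seed)
    V, w, spikes = EL, 0.0, 0
    sq = np.sqrt(dt)
    for _ in range(int(T / dt)):
        # exponential spike-initiation current, argument clipped for safety
        Iexp = gL * DT * np.exp(min((V - VT) / DT, 20.0))
        V += dt / C * (-gL * (V - EL) + Iexp - w + I0) \
             + sigma / C * sq * rng.standard_normal()
        w += dt / tau_w * (a * (V - EL) - w)
        if V >= Vcut:                 # spike: reset V, increment adaptation
            V, w, spikes = Vr, w + b, spikes + 1
    return spikes * 1000.0 / T
```

Comparing runs with and without the adaptation variable shows the expected rate reduction produced by w, the quantity whose stationary distribution the paper's averaged-EIF approach exploits.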
High-frequency oscillations (above 30 Hz) have been observed in sensory and higher-order brain areas, and are believed to constitute a general hallmark of functional neuronal activation. Fast inhibition in interneuronal networks has been suggested as a general mechanism for the generation of high-frequency oscillations. Certain classes of interneurons exhibit subthreshold oscillations, but the effect of this intrinsic neuronal property on the population rhythm is not completely understood. We study the influence of intrinsic damped subthreshold oscillations on the emergence of collective high-frequency oscillations, and elucidate the dynamical mechanisms that underlie this phenomenon. We simulate neuronal networks composed of either Integrate-and-Fire (IF) or Generalized Integrate-and-Fire (GIF) neurons. The IF model displays purely passive subthreshold dynamics, while the GIF model exhibits subthreshold damped oscillations. Individual neurons receive inhibitory synaptic currents mediated by spiking activity in their neighbors as well as noisy synaptic bombardment, and fire irregularly at a lower rate than the population frequency. We identify three factors that affect the influence of single-neuron properties on synchronization mediated by inhibition: i) the firing rate response to the noisy background input, ii) the membrane potential distribution, and iii) the shape of Inhibitory Post-Synaptic Potentials (IPSPs). For hyperpolarizing inhibition, the GIF IPSP profile (factor iii) exhibits post-inhibitory rebound, which induces a coherent spike-mediated depolarization across cells that greatly facilitates synchronous oscillations. This effect dominates the network dynamics, hence GIF networks display stronger oscillations than IF networks. However, the restorative current in the GIF neuron lowers firing rates and narrows the membrane potential distribution (factors i and ii, respectively), which tend to decrease synchrony. If inhibition is shunting instead of hyperpolarizing, post-inhibitory rebound is not elicited and factors i and ii dominate, yielding lower synchrony in GIF networks than in IF networks.
Neurons in the brain engage in collective oscillations at different frequencies. Gamma and high-gamma oscillations (30–100 Hz and higher) have been associated with cognitive functions, and are altered in psychiatric disorders such as schizophrenia and autism. Our understanding of how high-frequency oscillations are orchestrated in the brain is still limited, but it is necessary for the development of effective clinical approaches to the treatment of these disorders. Some neuron types exhibit dynamical properties that can favour synchronization. The theory of weakly coupled oscillators showed how the phase response of individual neurons can predict the patterns of phase relationships that are observed at the network level. However, neurons in vivo do not behave like regular oscillators, but fire irregularly in a regime dominated by fluctuations. Hence, which intrinsic dynamical properties matter for synchronization, and in which regime, is still an open question. Here, we show how single-cell damped subthreshold oscillations enhance synchrony in interneuronal networks by introducing a depolarizing component, mediated by post-inhibitory rebound, that is correlated among neurons due to common inhibitory input.
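The rebound mechanism described above can be sketched with a two-variable linear subthreshold model in the GIF spirit: tau_m dV/dt = -V - g*w + I and tau_w dw/dt = V - w, where w is a restorative variable and V is measured relative to rest. With the resonant coupling g switched on, a brief hyperpolarizing pulse is followed by a depolarizing overshoot (post-inhibitory rebound); with g = 0 the membrane is purely passive and relaxes monotonically. All parameter values are illustrative.

```python
import numpy as np

def ipsp_response(g=3.0, tau_m=10.0, tau_w=10.0, T=200.0, dt=0.01):
    """Subthreshold voltage response (mV, relative to rest) to a brief
    hyperpolarizing current pulse. g couples V to a slow restorative
    variable w; g = 0 recovers the passive (IF-like) membrane."""
    n = int(T / dt)
    V = np.zeros(n)
    v = w = 0.0
    for i in range(n):
        I = -50.0 if i * dt < 5.0 else 0.0   # 5 ms hyperpolarizing pulse
        v += dt / tau_m * (-v - g * w + I)
        w += dt / tau_w * (v - w)
        V[i] = v
    return V

V_gif = ipsp_response(g=3.0)   # damped subthreshold oscillation, rebound
V_if = ipsp_response(g=0.0)    # purely passive decay, no rebound
```

With these values the coupled system has complex eigenvalues, so the IPSP is followed by a depolarizing overshoot of a few millivolts, the component that the abstract identifies as being correlated across cells by common inhibition.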
Gamma rhythms (30–100 Hz) are an extensively studied synchronous brain state responsible for a number of sensory, memory, and motor processes. Experimental evidence suggests that fast-spiking interneurons are responsible for carrying the high frequency components of the rhythm, while regular-spiking pyramidal neurons fire sparsely. We propose that a combination of spike frequency adaptation and global inhibition may be responsible for this behavior. Excitatory neurons form several clusters that fire every few cycles of the fast oscillation. This is first shown in a detailed biophysical network model and then analyzed thoroughly in an idealized model. We exploit the fact that the timescale of adaptation is much slower than that of the other variables. Singular perturbation theory is used to derive an approximate periodic solution for a single spiking unit. This is then used to predict the relationship between the number of clusters arising spontaneously in the network as it relates to the adaptation time constant. We compare this to a complementary analysis that employs a weak coupling assumption to predict the first Fourier mode to destabilize from the incoherent state of an associated phase model as the external noise is reduced. Both approaches predict the same scaling of cluster number with respect to the adaptation time constant, which is corroborated in numerical simulations of the full system. Thus, we develop several testable predictions regarding the formation and characteristics of gamma rhythms with sparsely firing excitatory neurons.
Fast periodic synchronized neural spiking corresponds to a variety of functions in many different areas of the brain. Most theories and experiments suggest inhibitory neurons carry the regular rhythm while being driven by excitatory neurons that spike more sparsely in time. We suggest a simple mechanism for the low firing rate of excitatory cells – spike frequency adaptation. Combining this mechanism with strong global inhibition causes excitatory neurons to group their firing into several clusters and, thus, produce a high frequency global rhythm. We study this phenomenon in both a detailed biophysical and an idealized model that preserves these two basic mechanisms. Using analytical tools from dynamical systems theory, we examine why adaptation causes clustering. In fact, we show the number of clusters relates to a simple function of the adaptation time scale over a broad range of parameters. This allows us to develop several predictions regarding the formation of fast spiking rhythms in the brain.
Synchronization of globus pallidus (GP) neurons and cortically-entrained oscillations between GP and other basal ganglia nuclei are key features of the pathophysiology of Parkinson's disease. Phase response curves (PRCs), which tabulate the effects of phasic inputs within a neuron's spike cycle on output spike timing, are efficient tools for predicting the emergence of synchronization in neuronal networks and entrainment to periodic input. In this study we apply physiologically realistic synaptic conductance inputs to a full morphological GP neuron model to determine the phase response properties of the soma and different regions of the dendritic tree. We find that perisomatic excitatory inputs delivered throughout the inter-spike interval advance the phase of the spontaneous spike cycle yielding a type I PRC. In contrast, we demonstrate that distal dendritic excitatory inputs can either delay or advance the next spike depending on whether they occur early or late in the spike cycle. We find this latter pattern of responses, summarized by a biphasic (type II) PRC, was a consequence of dendritic activation of the small conductance calcium-activated potassium current, SK. We also evaluate the spike-frequency dependence of somatic and dendritic PRC shapes, and we demonstrate the robustness of our results to variations of conductance densities, distributions, and kinetic parameters. We conclude that the distal dendrite of GP neurons embodies a distinct dynamical subsystem that could promote synchronization of pallidal networks to excitatory inputs. These results highlight the need to consider different effects of perisomatic and dendritic inputs in the control of network behavior.
dendrite; SK current; synchronization; oscillation; basal ganglia; Parkinson's disease
Cerebellar Purkinje cells display complex intrinsic dynamics. They fire spontaneously, exhibit bistability, and via mutual network interactions are involved in the generation of high frequency oscillations and travelling waves of activity. To probe the dynamical properties of Purkinje cells we measured their phase response curves (PRCs). PRCs quantify the change in spike phase caused by a stimulus as a function of its temporal position within the interspike interval, and are widely used to predict neuronal responses to more complex stimulus patterns. Significant variability in the interspike interval during spontaneous firing can lead to PRCs with a low signal-to-noise ratio, requiring averaging over thousands of trials. We show using electrophysiological experiments and simulations that the PRC calculated in the traditional way by sampling the interspike interval with brief current pulses is biased. We introduce a corrected approach for calculating PRCs which eliminates this bias. Using our new approach, we show that Purkinje cell PRCs change qualitatively depending on the firing frequency of the cell. At high firing rates, Purkinje cells exhibit single-peaked, or monophasic PRCs. Surprisingly, at low firing rates, Purkinje cell PRCs are largely independent of phase, resembling PRCs of ideal non-leaky integrate-and-fire neurons. These results indicate that Purkinje cells can act as perfect integrators at low firing rates, and that the integration mode of Purkinje cells depends on their firing rate.
By observing how brief current pulses injected at different times between spikes change the phase of spiking of a neuron (and thus obtaining the so-called phase response curve), it should be possible to predict a full spike train in response to more complex stimulation patterns. When we applied this traditional protocol to obtain phase response curves in cerebellar Purkinje cells in the presence of noise, we observed a triangular region devoid of data points near the end of the spiking cycle. This “Bermuda Triangle” revealed a flaw in the classical method for constructing phase response curves. We developed a new approach to eliminate this flaw and used it to construct phase response curves of Purkinje cells over a range of spiking rates. Surprisingly, at low firing rates, phase changes were independent of the phase of the injected current pulses, implying that the Purkinje cell is a perfect integrator under these conditions. This mechanism has not yet been described in other cell types and may be crucial for the information processing capabilities of these neurons.
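The perfect-integrator limit described above has a simple signature: for a non-leaky integrate-and-fire neuron, the advance caused by a kick is identical at every phase below the causal limit, so the PRC is flat. A minimal closed-form sketch with illustrative parameters (compare the phase-dependent PRC of a leaky neuron):

```python
import numpy as np

def nonleaky_prc(I=1.0, dv=0.05, n=20):
    """PRC of a perfect (non-leaky) integrator: dV/dt = I, threshold 1,
    reset 0. A kick dv advances the next spike by dv/I at every phase
    below the causal limit, giving a flat, phase-independent PRC."""
    T = 1.0 / I
    phases = np.linspace(0.0, 0.9, n)     # stay below the causal limit
    v = I * phases * T                    # voltage at each phase
    t_rem = (1.0 - np.minimum(v + dv, 1.0)) / I
    advance = (1.0 - phases) - t_rem / T
    return phases, advance
```

A flat measured PRC at low firing rates is therefore exactly what one expects if Purkinje cells behave as perfect integrators in that regime.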
In order to properly capture spike-frequency adaptation with a simplified point-neuron model, we study approximations of Hodgkin-Huxley (HH) models including slow currents by exponential integrate-and-fire (EIF) models that incorporate the same types of currents. We optimize the parameters of the EIF models under the external drive consisting of AMPA-type conductance pulses using the current-voltage curves and the van Rossum metric to best capture the subthreshold membrane potential, firing rate, and jump size of the slow current at the neuron’s spike times. Our numerical simulations demonstrate that, in addition to these quantities, the approximate EIF-type models faithfully reproduce bifurcation properties of the HH neurons with slow currents, which include spike-frequency adaptation, phase-response curves, critical exponents at the transition between a finite and infinite number of spikes with increasing constant external drive, and bifurcation diagrams of interspike intervals in time-periodically forced models. Dynamics of networks of HH neurons with slow currents can also be approximated by corresponding EIF-type networks, with the approximation being at least statistically accurate over a broad range of Poisson rates of the external drive. For the form of external drive resembling realistic, AMPA-like synaptic conductance response to incoming action potentials, the EIF model affords great savings of computation time as compared with the corresponding HH-type model. Our work shows that the EIF model with additional slow currents is well suited for use in large-scale, point-neuron models in which spike-frequency adaptation is important.
Adaptation current; Integrate-and-fire networks; Bifurcations; Numerical methods; Efficient neuronal models
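Spike-frequency adaptation in an EIF model with a slow spike-triggered current can be sketched in a few lines: each spike increments a slow current w, which decays between spikes, so successive interspike intervals lengthen until the jump and the decay balance. Parameter values are illustrative, not those obtained by the paper's fitting procedure.

```python
import numpy as np

def eif_spike_times(I0=400.0, b=40.0, tau_w=120.0, T=600.0, dt=0.05):
    """Deterministic EIF with a slow spike-triggered current w
    (w += b at each spike, exponential decay otherwise).
    Returns spike times in ms. Units: mV, ms, nS, pF, pA."""
    C, gL, EL, VT, DT = 200.0, 10.0, -70.0, -50.0, 2.0
    Vr, Vcut = -58.0, -30.0
    V, w, times = EL, 0.0, []
    for i in range(int(T / dt)):
        Iexp = gL * DT * np.exp(min((V - VT) / DT, 20.0))  # clipped exp term
        V += dt / C * (-gL * (V - EL) + Iexp - w + I0)
        w -= dt / tau_w * w
        if V >= Vcut:             # spike: reset V and jump the slow current
            V, w = Vr, w + b
            times.append(i * dt)
    return np.array(times)
```

The widening interspike intervals produced by this sketch are the hallmark of spike-frequency adaptation that the EIF-with-slow-currents approximation is designed to capture.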
Our goal is to understand how nearly synchronous modes arise in heterogeneous networks of neurons. In heterogeneous networks, instead of exact synchrony, nearly synchronous modes arise, which include both 1:1 and 2:2 phase-locked modes. Existence and stability criteria for 2:2 phase-locked modes in reciprocally coupled two neuron circuits were derived based on the open loop phase resetting curve (PRC) without the assumption of weak coupling. The PRC for each component neuron was generated using the change in synaptic conductance produced by a presynaptic action potential as the perturbation. Separate derivations were required for modes in which the firing order is preserved and for those in which it alternates. Networks composed of two model neurons coupled by reciprocal inhibition were examined to test the predictions. The parameter regimes in which both types of nearly synchronous modes are exhibited were accurately predicted both qualitatively and quantitatively provided that the synaptic time constant is short with respect to the period and that the effect of second order resetting is considered. In contrast, PRC methods based on weak coupling could not predict 2:2 modes and did not predict the 1:1 modes with the level of accuracy achieved by the strong coupling methods. The strong coupling prediction methods provide insight into what manipulations promote near-synchrony in a two neuron network and may also have predictive value for larger networks, which can also manifest changes in firing order. We also identify a novel route by which synchrony is lost in mildly heterogeneous networks.
Synchrony; phase response curve; network oscillation
The dendritic tree contributes significantly to the elementary computations a neuron performs while converting its synaptic inputs into action potential output. Traditionally, these computations have been characterized as both temporally and spatially localized. Under this localist account, neurons compute near-instantaneous mappings from their current input to their current output, brought about by somatic summation of dendritic contributions that are generated in functionally segregated compartments. However, recent evidence about the presence of oscillations in dendrites suggests a qualitatively different mode of operation: the instantaneous phase of such oscillations can depend on a long history of inputs, and under appropriate conditions, even dendritic oscillators that are remote may interact through synchronization. Here, we develop a mathematical framework to analyze the interactions of local dendritic oscillations and the way these interactions influence single cell computations. Combining weakly coupled oscillator methods with cable theoretic arguments, we derive phase-locking states for multiple oscillating dendritic compartments. We characterize how the phase-locking properties depend on key parameters of the oscillating dendrite: the electrotonic properties of the (active) dendritic segment, and the intrinsic properties of the dendritic oscillators. As a direct consequence, we show how input to the dendrites can modulate phase-locking behavior and hence global dendritic coherence. In turn, dendritic coherence is able to gate the integration and propagation of synaptic signals to the soma, ultimately leading to an effective control of somatic spike generation. Our results suggest that dendritic oscillations enable the dendritic tree to operate on more global temporal and spatial scales than previously thought; notably that local dendritic activity may be a mechanism for generating on-going whole-cell voltage oscillations.
A central issue in biology is how local processes yield global consequences. This is especially relevant for neurons since these spatially extended cells process local synaptic inputs to generate global action potential output. The dendritic tree of a neuron, which receives most of the inputs, expresses ion channels that can generate nonlinear dynamics. A prominent phenomenon resulting from such ion channels are voltage oscillations. The distribution of the active membrane channels throughout the cell is often highly non-uniform. This can turn the dendritic tree into a network of sparsely spaced local oscillators. Here we analyze whether local dendritic oscillators can produce cell-wide voltage oscillations. Our mathematical theory shows that indeed even when the dendritic oscillators are weakly coupled, they lock their phases and give global oscillations. We show how the biophysical properties of the dendrites determine the global locking and how it can be controlled by synaptic inputs. As a consequence of global locking, even individual synaptic inputs can affect the timing of action potentials. In fact, dendrites locking in synchrony can lead to sustained firing of the cell. We show that dendritic trees can be bistable, with dendrites locking in either synchrony or asynchrony, which may provide a novel mechanism for single cell-based memory.
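The weakly coupled oscillator analysis invoked above reduces, for a pair of oscillators, to a single equation for their phase difference psi: dpsi/dt = domega + eps*[H(-psi) - H(psi)]. The sketch below assumes the simplest odd interaction function, H(psi) = sin(psi), as a stand-in for the H-function derived in the paper: the pair phase-locks when the frequency mismatch is small enough for the coupling to compensate, and the phases drift apart otherwise.

```python
import numpy as np

def phase_lag_drift(domega, eps=0.1, T=2000.0, dt=0.01):
    """Phase-difference dynamics of two weakly coupled oscillators with an
    assumed odd interaction function H(psi) = sin(psi):
        dpsi/dt = domega - 2*eps*sin(psi).
    Returns the mean drift of psi over the second half of the run
    (zero when the oscillators lock)."""
    psi = 0.5
    traj = np.empty(int(T / dt))
    for i in range(traj.size):
        psi += dt * (domega - 2.0 * eps * np.sin(psi))
        traj[i] = psi
    half = traj.size // 2
    return (traj[-1] - traj[half]) / (T / 2.0)
```

For this H, locking occurs precisely when |domega| < 2*eps, the one-dimensional analogue of the phase-locking conditions derived for the dendritic oscillators.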
The functional significance of correlations between action potentials of neurons is still a matter of vivid debate. In particular, it is presently unclear how much synchrony is caused by afferent synchronized events and how much is intrinsic due to the connectivity structure of cortex. The available analytical approaches based on the diffusion approximation do not allow one to model spike synchrony, preventing a thorough analysis. Here we theoretically investigate to what extent common synaptic afferents and synchronized inputs each contribute to correlated spiking on a fine temporal scale between pairs of neurons. We employ direct simulation and extend earlier analytical methods based on the diffusion approximation to pulse-coupling, allowing us to introduce precisely timed correlations in the spiking activity of the synaptic afferents. We investigate the transmission of correlated synaptic input currents by pairs of integrate-and-fire model neurons, so that the same input covariance can be realized by common inputs or by spiking synchrony. We identify two distinct regimes: In the limit of low correlation, linear perturbation theory accurately determines the correlation transmission coefficient, which is typically smaller than unity, but increases sensitively even for weakly synchronous inputs. In the limit of high input correlation, in the presence of synchrony, a qualitatively new picture arises. As the non-linear neuronal response becomes dominant, the output correlation becomes higher than the total correlation in the input. This transmission coefficient larger than unity is a direct consequence of non-linear neural processing in the presence of noise, elucidating how synchrony-coded signals benefit from these generic properties present in cortical networks.
Whether spike timing conveys information in cortical networks or whether the firing rate alone is sufficient is a matter of controversial debate, touching the fundamental question of how the brain processes, stores, and conveys information. If the firing rate alone is the decisive signal used in the brain, correlations between action potentials are just an epiphenomenon of cortical connectivity, where pairs of neurons share a considerable fraction of common afferents. Due to membrane leakage, small synaptic amplitudes and the non-linear threshold, nerve cells exhibit lossy transmission of correlation originating from shared synaptic inputs. However, the membrane potential of cortical neurons often displays non-Gaussian fluctuations, caused by synchronized synaptic inputs. Moreover, synchronously active neurons have been found to reflect behavior in primates. In this work we therefore contrast the transmission of correlation due to shared afferents and due to synchronously arriving synaptic impulses for leaky neuron models. We not only find that neurons are highly sensitive to synchronous afferents, but that they can suppress noise on signals transmitted by synchrony, a computational advantage over rate signals.
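Correlation transmission via shared afferents can be sketched with two leaky integrate-and-fire neurons whose input noise contains a common component of variance fraction c. The code estimates the output spike-count correlation; parameters are illustrative (fluctuation-driven, subthreshold mean drive), and this sketch covers only the common-input case, not the synchronous-pulse case analyzed in the paper.

```python
import numpy as np

def count_corr(c, mu=0.8, sigma=0.3, tau=10.0, T=50000.0, dt=0.2,
               bin_ms=100.0, seed=1):
    """Spike-count correlation of two LIF neurons (threshold 1, reset 0)
    whose input noise shares a fraction c of its variance through a
    common term. Illustrative parameters, fluctuation-driven regime."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    common = rng.standard_normal(n)
    private = rng.standard_normal((2, n))
    noise = np.sqrt(c) * common + np.sqrt(1.0 - c) * private
    amp = sigma * np.sqrt(2.0 * dt / tau)   # stationary voltage std ~ sigma
    v = np.zeros(2)
    spikes = np.zeros((2, n), dtype=bool)
    for i in range(n):
        v += dt / tau * (mu - v) + amp * noise[:, i]
        fired = v >= 1.0
        v[fired] = 0.0                      # reset at threshold crossing
        spikes[:, i] = fired
    nb = int(bin_ms / dt)
    counts = spikes[:, : (n // nb) * nb].reshape(2, -1, nb).sum(axis=2)
    return float(np.corrcoef(counts)[0, 1])
```

Because of the leak, the threshold, and the private noise, the output correlation is smaller than the input correlation c, illustrating the lossy transmission of shared-afferent correlation described above.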
Networks of model neurons were constructed and their activity was predicted using an iterated map based solely on the phase resetting curves (PRCs). The predictions were quite accurate provided that the resetting to simultaneous inputs was calculated using the sum of the simultaneously active conductances, obviating the need for weak coupling assumptions. Fully synchronous activity was observed only when the slope of the PRC at a phase of zero, corresponding to spike initiation, was positive. A novel stability criterion was developed and tested for all-to-all networks of identical, identically connected neurons. When the PRC generated using N-1 simultaneously active inputs becomes too steep, the fully synchronous mode loses stability in a network of N model neurons. Therefore, the stability of synchrony can be lost by increasing the slope of this PRC either by increasing the network size or the strength of the individual synapses. Existence and stability criteria were also developed and tested for the splay mode in which neurons fire sequentially. Finally, N/M synchronous sub-clusters of M neurons were predicted using the intersection of parameters that supported both between-cluster splay and within-cluster synchrony. Surprisingly, the splay mode between clusters could enforce synchrony on sub-clusters that were incapable of synchronizing themselves. These results can be used to gain insights into the activity of networks of biological neurons whose PRCs can be measured.
Network; Synchronization; Oscillator; Rhythm; Phase shift; Synchrony
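The iterated-map idea can be sketched for the simplest case: two identical pulse-coupled oscillators with no conduction delay. Each spike shifts the partner's phase by the PRC, and iterating the resulting return map for the firing-time lag reveals whether synchrony is stable. The PRC shape below is an assumed toy form, and the sign convention (advance positive) is ours, so the paper's specific slope criteria do not map onto it one-for-one; the point is only the mechanics of the map.

```python
import numpy as np

def lag_map(psi, f):
    """One step of the return map for the firing-time lag between two
    identical pulse-coupled oscillators (period 1, no conduction delay).
    Each spike shifts the partner's phase by the PRC f (advance > 0)."""
    return (psi + f(psi) - f((1.0 - psi - f(psi)) % 1.0)) % 1.0

K = 0.05
f = lambda phi: -K * np.sin(2.0 * np.pi * phi)   # assumed toy PRC shape

psi = 0.25                     # initial firing-time lag
for _ in range(200):
    psi = lag_map(psi, f)
# for this PRC and coupling strength the lag contracts toward synchrony
```

Linearizing the map at its fixed points shows that stability is governed by the PRC slopes at the phases where inputs arrive, the quantity that the stability criteria in the abstract generalize to N inputs.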
Gamma oscillations can synchronize with near zero phase lag over multiple cortical regions and between hemispheres, and between two distal sites in hippocampal slices. How synchronization can take place over long distances in a stable manner is considered an open question. The phase resetting curve (PRC) keeps track of how much an input advances or delays the next spike, depending upon where in the cycle it is received. We use PRCs under the assumption of pulsatile coupling to derive existence and stability criteria for 1:1 phase-locking that arises via bidirectional pulse coupling of two limit cycle oscillators with a conduction delay of any duration for any 1:1 firing pattern. The coupling can be strong as long as the effect of one input dissipates before the next input is received. We show the form that the generic synchronous and anti-phase solutions take in a system of two identical, identically pulse-coupled oscillators with identical delays. The stability criterion has a simple form that depends only on the slopes of the PRCs at the phases at which inputs are received and on the number of cycles required to complete the delayed feedback loop. The number of cycles required to complete the delayed feedback loop depends upon both the value of the delay and the firing pattern. We successfully tested the predictions of our methods on networks of model neurons. The criteria can easily be extended to include the effect of an input on the cycle after the one in which it is received.
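A rough numerical companion to these criteria, with an assumed PRC shape and an arbitrary delay (an illustration, not the paper's derivation): simulate two bidirectionally pulse-coupled phase oscillators with a conduction delay and check that they settle into a fixed 1:1 firing pattern.

```python
import numpy as np

def prc(phi, eps=0.1):
    # assumed PRC shape; the analysis applies to any measured PRC
    return eps * np.sin(2 * np.pi * phi)

def simulate_pair(delay=0.15, T=1.0, t_max=200.0, dt=1e-3):
    """Two identical oscillators, bidirectionally pulse-coupled with a
    conduction delay; returns each cell's spike times."""
    phi = np.array([0.0, 0.37])           # initial phases
    in_flight = [[], []]                  # arrival times of spikes en route
    spikes = ([], [])
    for k in range(int(t_max / dt)):
        t = k * dt
        for i in (0, 1):
            # deliver any spikes whose conduction delay has elapsed
            while in_flight[i] and in_flight[i][0] <= t:
                in_flight[i].pop(0)
                phi[i] += prc(phi[i] % 1.0)
        phi += dt / T
        for i in (0, 1):
            if phi[i] >= 1.0:             # threshold crossing = spike
                phi[i] -= 1.0
                spikes[i].append(t)
                in_flight[1 - i].append(t + delay)
    return spikes

spikes = simulate_pair()
isis = np.diff(spikes[0][-10:])
# once a 1:1 locked pattern is reached, each cell's interspike
# interval is constant, and the locked phase can be compared with
# the PRC-based existence criterion
```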
We review the principal assumptions underlying the application of phase-response curves (PRCs) to synchronization in neuronal networks. The PRC measures how much a given synaptic input perturbs spike timing in a neural oscillator. Among other applications, PRCs make explicit predictions about whether a given network of interconnected neurons will synchronize, as is often observed in cortical structures. Regarding the assumptions of the PRC theory, we conclude: (i) The assumption of noise-tolerant cellular oscillations at or near the network frequency holds in some but not all cases. (ii) Reduced models for PRC-based analysis can be formally related to more realistic models. (iii) Spike-rate adaptation limits PRC-based analysis but does not invalidate it. (iv) The dependence of PRCs on synaptic location emphasizes the importance of improving methods of synaptic stimulation. (v) New methods can distinguish between oscillations that derive from mutual connections and those arising from common drive. (vi) It is helpful to assume linear summation of effects of synaptic inputs; experiments with trains of inputs call this assumption into question. (vii) Relatively subtle changes in network structure can invalidate PRC-based predictions. (viii) Heterogeneity in the preferred frequencies of component neurons does not invalidate PRC analysis, but can annihilate synchronous activity.
neural network; phase-response curve; computational neuroscience
Neural mass signals from in vivo recordings often show oscillations with frequencies ranging from <1 to 100 Hz. Fast rhythmic activity in the beta and gamma range can be generated by network-based mechanisms such as recurrent synaptic excitation-inhibition loops. Slower oscillations might instead depend on neuronal adaptation currents whose timescales range from tens of milliseconds to seconds. Here we investigate how the dynamics of such adaptation currents contribute to spike rate oscillations and resonance properties in recurrent networks of excitatory and inhibitory neurons. Based on a network of sparsely coupled spiking model neurons with two types of adaptation current and conductance-based synapses with heterogeneous strengths and delays, we use a mean-field approach to analyze oscillatory network activity. For constant external input, we find that spike-triggered adaptation currents provide a mechanism to generate slow oscillations over a wide range of adaptation timescales as long as recurrent synaptic excitation is sufficiently strong. Faster rhythms occur when recurrent inhibition is slower than excitation and oscillation frequency increases with the strength of inhibition. Adaptation facilitates such network-based oscillations for fast synaptic inhibition and leads to decreased frequencies. For oscillatory external input, adaptation currents amplify a narrow band of frequencies and cause phase advances for low frequencies in addition to phase delays at higher frequencies. Our results therefore identify the different key roles of neuronal adaptation dynamics for rhythmogenesis and selective signal propagation in recurrent networks.
spike frequency adaptation; adaptation; oscillations; rate models; network dynamics; Fokker–Planck; mean-field; recurrent network
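The resonance and low-frequency phase-advance effects can be illustrated with the transfer function of a linearized rate model carrying one adaptation variable. Parameters here are illustrative stand-ins, not the paper's mean-field values:

```python
import numpy as np

# Linearized rate-plus-adaptation model (illustrative parameters):
#   tau_r * dr/dt = -r - a + I(t),   tau_a * da/dt = -a + b * r
tau_r, tau_a, b = 0.01, 0.5, 3.0

def transfer(omega):
    # complex response of r to an oscillatory input I(t) = exp(i*omega*t)
    return 1.0 / (1.0 + 1j * omega * tau_r + b / (1.0 + 1j * omega * tau_a))

omega = np.linspace(0.01, 300.0, 3000)      # rad/s
R = transfer(omega)
gain, phase = np.abs(R), np.angle(R)
peak_omega = omega[gain.argmax()]           # resonance: band-pass amplification
# phase > 0 (advance) at low frequencies, phase < 0 (delay) at high frequencies
```

For these values the gain peaks at a nonzero frequency (a few Hz), while slow inputs are attenuated by adaptation and phase-advanced, matching the qualitative picture in the abstract.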
We investigate why electrically coupled neuronal oscillators synchronize or fail to synchronize using the theory of weakly coupled oscillators. Stability of synchrony and antisynchrony is predicted analytically and verified using numerical bifurcation diagrams. The shape of the phase response curve (PRC), the shape of the voltage time course, and the frequency of spiking are freely varied to map out regions of parameter spaces that hold stable solutions. We find that type-1 and type-2 PRCs can both hold synchronous and antisynchronous solutions, but the shape of the PRC and the voltage determine the extent of their stability. This is achieved by introducing a five-piece piecewise-linear model for the PRC and a three-piece piecewise-linear model for the voltage time course, and then analyzing the resultant eigenvalue equations that determine the stability of the phase-locked solutions. A single time parameter defines the skewness of the PRC, and another single time parameter defines the spike width and frequency. Our approach gives a comprehensive picture of the relation between the PRC shape, voltage time course and the stability of the resultant synchronous and antisynchronous solutions.
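The weak-coupling machinery can be sketched numerically. Given a PRC Z(t) and a voltage trace V(t) over one cycle (simple sinusoids here, standing in for the piecewise-linear forms analyzed in the paper), the interaction function for electrical coupling and the resulting phase-difference dynamics follow directly:

```python
import numpy as np

T, n = 1.0, 1000
t = np.linspace(0.0, T, n, endpoint=False)
Z = np.sin(2 * np.pi * t)     # illustrative PRC over one cycle
V = np.cos(2 * np.pi * t)     # illustrative voltage time course

def H(psi):
    """Interaction function for electrical (gap-junction) coupling:
    H(psi) = (1/T) * integral of Z(t) * [V(t + psi*T) - V(t)] dt."""
    shift = int(round(psi * n)) % n
    return np.mean(Z * (np.roll(V, -shift) - V))

# Phase-difference dynamics between the two cells: d(psi)/dt = H(-psi) - H(psi).
# Zeros of G are phase-locked states; a zero crossed with negative slope is stable.
psi = np.linspace(0.0, 1.0, 201)
G = np.array([H(-p) - H(p) for p in psi])
```

For these particular waveforms G vanishes at psi = 0 (synchrony) and psi = 0.5 (antisynchrony); inspecting the slope of G at each zero decides which solution is stable, exactly the kind of question the piecewise-linear analysis answers in closed form.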
Phase response curves (PRCs) have been widely used to study synchronization in neural circuits composed of pacemaking neurons. They describe how the timing of the next spike in a given spontaneously firing neuron is affected by the phase at which an input from another neuron is received. Here we study two reciprocally coupled clusters of pulse-coupled oscillatory neurons. The neurons within each cluster are presumed to be identical and identically pulse coupled, but not necessarily identical to those in the other cluster. We investigate a two cluster solution in which all oscillators are synchronized within each cluster, but in which the two clusters are phase locked at nonzero phase with each other. Intuitively, one might expect this solution to be stable only when synchrony within each isolated cluster is stable, but this is not the case. We prove rigorously the stability of the two cluster solution and show how reciprocal coupling can stabilize synchrony within clusters that cannot synchronize in isolation. These stability results for the two cluster solution suggest a mechanism by which reciprocal coupling between brain regions can induce local synchronization via the network feedback loop.
neuronal networks; synchronization; clustering; phase response curves; pulse coupled oscillators
The cortex processes stimuli through a distributed network of specialized brain areas. This processing requires mechanisms that can route neuronal activity across weakly connected cortical regions. Routing models proposed thus far are either limited to propagation of spiking activity across strongly connected networks or require distinct mechanisms that create local oscillations and establish their coherence between distant cortical areas. Here, we propose a novel mechanism which explains how synchronous spiking activity propagates across weakly connected brain areas supported by oscillations. In our model, oscillatory activity unleashes network resonance that amplifies feeble synchronous signals and promotes their propagation along weak connections (“communication through resonance”). The emergence of coherent oscillations is a natural consequence of synchronous activity propagation and therefore the assumption of different mechanisms that create oscillations and provide coherence is not necessary. Moreover, the phase-locking of oscillations is a side effect of communication rather than its requirement. Finally, we show how the state of ongoing activity could affect the communication through resonance and propose that modulations of the ongoing activity state could influence information processing in distributed cortical networks.
The cortex is a highly modular structure with a large number of functionally specialized areas that communicate with each other through long-range cortical connections. It has been suggested that communication between spiking neuronal networks (SNNs) requires synchronization of spiking activity which is either provided by the flow of neuronal activity across divergent/convergent connections, as suggested by computational models of SNNs, or by local oscillations in the gamma frequency band (30–100 Hz). However, such communication requires unphysiologically dense/strong connectivity, and the mechanisms required to synchronize separated local oscillators remain poorly understood. Here, we present a novel mechanism that alleviates these shortcomings and enables the propagation of synchrony across weakly connected SNNs by locally amplifying feeble synchronization through resonance that naturally occurs in oscillating networks of excitatory and inhibitory neurons. We show that oscillatory stimuli at the network resonance frequencies generate a slowly propagating oscillation that is synchronized across the distributed networks. Moreover, communication with such oscillations depends on the dynamical state of the background activity in the SNN. Our results suggest that the emergence of synchronized oscillations can be viewed as a consequence of spiking activity propagation in weakly connected networks that is supported by resonance and modulated by the dynamics of the ongoing activity.
Synchronization between neuronal populations plays an important role in information transmission between brain areas. In particular, collective oscillations emerging from the synchronized activity of thousands of neurons can increase the functional connectivity between neural assemblies by coherently coordinating their phases. This synchrony of neuronal activity can take place within a cortical patch or between different cortical regions. While short-range interactions between neurons involve just a few milliseconds, communication through long-range projections between different regions could take up to tens of milliseconds. How these heterogeneous transmission delays affect communication between neuronal populations is not well known. To address this question, we have studied the dynamics of two bidirectionally delayed-coupled neuronal populations using conductance-based spiking models, examining how different synaptic delays give rise to in-phase/anti-phase transitions at particular frequencies within the gamma range, and how this behavior is related to the phase coherence between the two populations at different frequencies. We have used spectral analysis and information theory to quantify the information exchanged between the two networks. For different transmission delays between the two coupled populations, we analyze how the local field potential and multi-unit activity calculated from one population convey information in response to a set of external inputs applied to the other population. The results confirm that zero-lag synchronization maximizes information transmission, although out-of-phase synchronization allows for efficient communication provided the coupling delay, the phase lag between the populations, and the frequency of the oscillations are properly matched.
The correct operation of the brain requires a carefully orchestrated activity, which includes the establishment of synchronized behavior among multiple neuronal populations. Synchronization of collective neuronal oscillations, in particular, has been suggested to mediate communication between brain areas, with the global oscillations acting as “information carriers” on which signals encoding specific stimuli or brain states are superimposed. But neuronal signals travel at finite speeds across the brain, thus leading to a wide range of delays in the coupling between neuronal populations. How the brain reaches the required level of coordination in the presence of such delays is still unclear. Here we approach this question in the case of two delay-coupled neuronal populations exhibiting collective oscillations in the gamma range. Our results show that effective communication can be reached even in the presence of relatively large delays between the populations, which self-organize in either in-phase or anti-phase synchronized states. In those states the transmission delays, phase difference, and oscillation frequency match to allow for communication at a wide range of coupling delays between brain areas.
Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons over most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally, we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
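The cascade itself is easy to sketch. In this snippet the linear filter is an assumed exponential and the nonlinearity an assumed softplus; in the paper both functions are derived analytically for each spiking model rather than chosen by hand:

```python
import numpy as np

dt = 1e-3
rng = np.random.default_rng(0)
t = np.arange(0.0, 2.0, dt)
stimulus = rng.standard_normal(t.size)     # time-varying input current (a.u.)

# Linear stage: normalized exponential filter (assumed, not derived)
tau = 0.02
kernel = np.exp(-np.arange(0.0, 5 * tau, dt) / tau)
kernel /= kernel.sum()
filtered = np.convolve(stimulus, kernel)[: t.size]

# Static nonlinearity: maps filtered input to an instantaneous rate (Hz);
# a softplus shape is a common, hedged stand-in
def nonlinearity(x, gain=20.0, threshold=0.0):
    return gain * np.log1p(np.exp(x - threshold))

rate = nonlinearity(filtered)              # LN estimate of the firing rate
```

Comparing such an LN estimate against the trial-averaged rate of a simulated spiking neuron driven by the same stimulus is the quantitative test the study carries out.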
Deciphering the encoding of information in the brain implies understanding how individual neurons emit action potentials (APs) in response to time-varying stimuli. This task is made difficult by two facts: (i) although the biophysics of AP generation are well understood, the dynamics of the membrane potential in response to a time-varying input are highly complex; (ii) the firing of APs in response to a given stimulus is inherently stochastic as only a fraction of the inputs to a neuron are directly controlled by the stimulus, the remainder being due to the fluctuating activity of the surrounding network. As a result, the input-output transform of individual neurons is often represented with the help of simplified phenomenological models that do not take into account the biophysical details. In this study, we directly relate a class of such phenomenological models, the so-called linear-nonlinear models, with more biophysically detailed spiking neuron models. We provide a quantitative mapping between the two classes of models, and show that the linear-nonlinear models provide a good approximation of the input-output transform of spiking neurons, as long as the fluctuating inputs from the surrounding network are not exceedingly weak.
Firing-rate models provide an attractive approach for studying large neural networks because they can be simulated rapidly and are amenable to mathematical analysis. Traditional firing-rate models assume a simple form in which the dynamics are governed by a single time constant. These models fail to replicate certain dynamic features of populations of spiking neurons, especially those involving synchronization. We present a complex-valued firing-rate model derived from an eigenfunction expansion of the Fokker-Planck equation and apply it to the linear, quadratic and exponential integrate-and-fire models. Despite being almost as simple as a traditional firing-rate description, this model can reproduce firing-rate dynamics due to partial synchronization of the action potentials in a spiking model, and it successfully predicts the transition to spike synchronization in networks of coupled excitatory and inhibitory neurons.
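A generic sketch of the idea follows, with an illustrative eigenvalue and step input; the paper derives the eigenvalue from the Fokker-Planck spectrum of each integrate-and-fire model. The deviation of the rate from steady state lives in a complex variable whose decay and rotation produce the damped ringing that signals partial spike synchronization:

```python
import numpy as np

# Dominant non-trivial eigenvalue lambda = -alpha + i*omega of the
# Fokker-Planck operator (values illustrative, not derived from any model)
alpha, omega = 5.0, 40.0           # decay rate (1/s) and ringing frequency (rad/s)
lam = complex(-alpha, omega)
r_inf = 10.0                       # steady-state population rate (Hz)

dt = 1e-4
t = np.arange(0.0, 1.0, dt)
I = np.where(t > 0.2, 1.0, 0.0)    # step input at t = 0.2 s (a.u.)

z = 0.0 + 0.0j
rate = np.empty_like(t)
for k in range(t.size):
    z += dt * (lam * z + I[k])     # complex linear filter driven by the input
    rate[k] = r_inf + 2 * z.real   # rate = steady state + real part of the mode
```

Because omega exceeds alpha here, the step response overshoots and rings before settling, the kind of partial-synchronization transient that a single-time-constant rate model cannot reproduce.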
Neuronal responses are often characterized by the rate at which action potentials are generated rather than by the timing of individual spikes. Firing-rate descriptions of neural activity are appealing because of their comparative simplicity, but it is important to develop models that faithfully approximate dynamic features arising from spiking. In particular, synchronization or partial synchronization of spikes is an important feature that cannot be described by typical firing-rate models. Here we develop a model that is nearly as simple as the simplest firing-rate models and yet can account for a number of aspects of spiking dynamics, including partial synchrony. The model matches the dynamic activity of networks of spiking neurons with surprising accuracy. By expanding the range of dynamic phenomena that can be described by simple firing-rate equations, this model should be useful in guiding intuition about and understanding of neural circuit function.
Somatostatin-expressing, low threshold-spiking (LTS) cells and fast-spiking (FS) cells are two common subtypes of inhibitory neocortical interneuron. Excitatory synapses from regular-spiking (RS) pyramidal neurons to LTS cells strongly facilitate when activated repetitively, whereas RS-to-FS synapses depress. This suggests that LTS neurons may be especially relevant at high rate regimes and protect cortical circuits against over-excitation and seizures. However, the inhibitory synapses from LTS cells usually depress, which may reduce their effectiveness at high rates. We ask: by which mechanisms and at what firing rates do LTS neurons control the activity of cortical circuits responding to thalamic input, and how is control by LTS neurons different from that of FS neurons? We study rate models of circuits that include RS cells and LTS and FS inhibitory cells with short-term synaptic plasticity. LTS neurons shift the RS firing-rate vs. current curve to the right at high rates and reduce its slope at low rates; the LTS effect is delayed and prolonged. FS neurons always shift the curve to the right and affect RS firing transiently. In an RS-LTS-FS network, FS neurons reach a quiescent state if they receive weak input, LTS neurons are quiescent if RS neurons receive weak input, and both FS and RS populations are active if they both receive large inputs. In general, FS neurons tend to follow the spiking of RS neurons much more closely than LTS neurons. A novel type of facilitation-induced slow oscillation is observed above the LTS firing threshold, with a frequency determined by the time scale of recovery from facilitation. To conclude, contrary to earlier proposals, LTS neurons affect the transient and steady state responses of cortical circuits over a range of firing rates, not only during the high rate regime; LTS neurons protect against over-activation about as well as FS neurons.
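The short-term plasticity entering such rate models can be sketched with Tsodyks-Markram-style dynamics (all parameters illustrative): a facilitation variable u and a depression variable x gate the effective synaptic drive, so an RS-to-LTS-like synapse strengthens under sustained presynaptic firing.

```python
import numpy as np

def stp_drive(rate, dt=1e-3, U=0.1, tau_f=1.0, tau_d=0.2):
    """Effective synaptic drive u * x * rate with short-term plasticity.
    u: facilitation variable, x: available-resources (depression) variable.
    Parameters are illustrative, not fitted to the paper's model."""
    u, x = U, 1.0
    drive = np.empty_like(rate)
    for k, r in enumerate(rate):
        du = (U - u) / tau_f + U * (1 - u) * r   # facilitation grows with rate
        dx = (1 - x) / tau_d - u * x * r         # resources deplete with use
        u += dt * du
        x += dt * dx
        drive[k] = u * x * r
    return drive
```

With these facilitation-dominated parameters the drive produced by a sustained 20 Hz presynaptic rate grows well above its initial value before depression caps it; depression-dominated parameters (large U, small tau_f) give the opposite, FS-like transient behaviour.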
The brain consists of circuits of neurons that signal to one another via synapses. There are two classes of neurons: excitatory cells, which cause other neurons to become more active, and inhibitory neurons, which cause other neurons to become less active. It is thought that the activity of excitatory neurons is kept in check largely by inhibitory neurons; when such an inhibitory “brake” fails, a seizure can result. Inhibitory neurons of the low-threshold spiking (LTS) subtype can potentially fulfill this braking, or anticonvulsant, role because the synaptic input to these neurons facilitates, i.e., those neurons are active when excitatory neurons are strongly active. Using a computational model we show that, because the synaptic output of LTS neurons onto excitatory neurons depresses (decreases with activity), the ability of LTS neurons to prevent strong cortical activity and seizures is not qualitatively larger than that of inhibitory neurons of another subtype, the fast-spiking (FS) cells. Furthermore, short-term (∼one second) changes in the strength of synapses to and from LTS interneurons allow them to shape the behavior of cortical circuits even at modest rates of activity, and an RS-LTS-FS circuit is capable of producing slow oscillations, on the time scale of these short-term changes.
We used phase resetting methods to predict firing patterns of rat subthalamic nucleus (STN) neurons when their rhythmic firing was densely perturbed by noise. We applied sequences of contiguous brief (0.5–2 ms) current pulses with amplitudes drawn from a Gaussian distribution (10–100 pA standard deviation) to autonomously firing STN neurons in slices. Current noise sequences increased the variability of spike times with little or no effect on the average firing rate. We measured the infinitesimal phase resetting curve (PRC) for each neuron using a noise-based method. A phase model consisting of only a firing rate and PRC was very accurate at predicting spike timing, accounting for more than 80% of spike time variance and reliably reproducing the spike-to-spike pattern of irregular firing. An approximation for the evolution of phase was used to predict the effect of firing rate and noise parameters on spike timing variability. It quantitatively predicted changes in variability of interspike intervals with variation in noise amplitude, pulse duration and firing rate over the normal range of STN spontaneous rates. When constant current was used to drive the cells to higher rates, the PRC was altered in size and shape and accurate predictions of the effects of noise relied on incorporating these changes into the prediction. Application of rate-neutral changes in conductance showed that changes in PRC shape arise from conductance changes known to accompany rate increases in STN neurons, rather than the rate increases themselves. Our results show that firing patterns of densely perturbed oscillators cannot readily be distinguished from those of neurons randomly excited to fire from the rest state. The spike timing of repetitively firing neurons may be quantitatively predicted from the input and their PRCs, even when they are so densely perturbed that they no longer fire rhythmically.
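The phase model used for these predictions has a simple generic form. This sketch uses an assumed PRC shape and Gaussian noise samples rather than the measured STN quantities: phase advances at the baseline rate plus a PRC-weighted contribution from the input, and a spike is emitted at each threshold crossing.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 5e-4
f0 = 10.0                                    # baseline firing rate (Hz)

def prc(phi):
    # assumed infinitesimal PRC (phase advance per unit input); not STN data
    return 0.3 * (1.0 - np.cos(2 * np.pi * phi))

T_total = 5.0
n = int(T_total / dt)
noise = rng.normal(0.0, 1.0, n)              # dense current-noise sequence (a.u.)

phi = 0.0
spike_times = []
for k in range(n):
    # dphi/dt = f0 + Z(phi) * I(t): baseline rate plus PRC-weighted input
    phi += dt * (f0 + prc(phi) * noise[k])
    if phi >= 1.0:                           # phase crosses 1 -> spike
        phi -= 1.0
        spike_times.append(k * dt)
```

Comparing spike times generated this way against a full conductance-based simulation driven by the same noise is the kind of test that, in the paper, accounted for over 80% of the spike-time variance.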
Most neurons receive thousands of synaptic inputs per second. Each of these may be individually weak but collectively they shape the temporal pattern of firing by the postsynaptic neuron. If the postsynaptic neuron fires repetitively, its synaptic inputs need not directly trigger action potentials, but may instead control the timing of action potentials that would occur anyway. The phase resetting curve encapsulates the influence of an input on the timing of the next action potential, depending on its time of arrival. We measured the phase resetting curves of neurons in the subthalamic nucleus and used them to accurately predict the timing of action potentials in a phase model subjected to complex input patterns. A simple approximation to the phase model accurately predicted the changes in firing pattern evoked by dense patterns of noise pulses varying in amplitude and pulse duration, and by changes in firing rate. We also showed that the phase resetting curve changes systematically with changes in total neuron conductance, and doing so predicts corresponding changes in firing pattern. Our results indicate that the phase model may accurately represent the temporal integration of complex patterns of input to repetitively firing neurons.
It has been proposed that synchronized neural assemblies in the antennal lobe of insects encode the identity of olfactory stimuli. In response to an odor, some projection neurons exhibit synchronous firing, phase-locked to the oscillations of the field potential, whereas others do not. Experimental data indicate that neural synchronization and field oscillations are induced by fast GABAA-type inhibition, but it remains unclear how desynchronization occurs. We hypothesize that slow inhibition plays a key role in desynchronizing projection neurons. Because synaptic noise is believed to be the dominant factor that limits neuronal reliability, we consider a computational model of the antennal lobe in which a population of oscillatory neurons interact through unreliable GABAA and GABAB inhibitory synapses. From theoretical analysis and extensive computer simulations, we show that transmission failures at slow GABAB synapses make the neural response unpredictable. Depending on the balance between GABAA and GABAB inputs, particular neurons may either synchronize or desynchronize. These findings suggest a wiring scheme that triggers stimulus-specific synchronized assemblies. Inhibitory connections are set by Hebbian learning and selectively activated by stimulus patterns to form a spiking associative memory whose storage capacity is comparable to that of classical binary-coded models. We conclude that fast inhibition acts in concert with slow inhibition to reformat the glomerular input into odor-specific synchronized neural assemblies.
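The unreliable-synapse ingredient can be sketched as Bernoulli transmission gating an exponential conductance; the time constant and release probability below are illustrative, not fitted to GABAA or GABAB kinetics.

```python
import numpy as np

rng = np.random.default_rng(2)

def unreliable_gsyn(spike_times, t, tau=0.1, p_release=0.5, g_max=1.0):
    """Synaptic conductance from a train of presynaptic spikes, where each
    spike is transmitted with probability p_release and otherwise fails.
    Failures make the conductance fluctuate from trial to trial."""
    g = np.zeros_like(t)
    for ts in spike_times:
        if rng.random() < p_release:         # transmission failure otherwise
            g += np.where(t >= ts, g_max * np.exp(-(t - ts) / tau), 0.0)
    return g
```

Running this with a slow tau (GABAB-like) and p_release < 1 shows how release failures at slow synapses randomize the inhibitory envelope each neuron sees, the mechanism the abstract proposes for stimulus-specific desynchronization.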
A fundamental question in computational neuroscience is to understand how interactions between neurons underlie sensory coding and information storage. In the first relay of the insect olfactory system, odorant stimuli trigger synchronized activities in neuron populations. Synchronized assemblies may arise as a consequence of inhibitory coupling, because they are disrupted when inhibition is pharmacologically blocked. Using computational modelling, we studied the role of inhibitory, noisy interactions in producing stimulus-specific synchrony. So far, experimental data and modelling studies indicate that fast inhibition induces neural synchrony, but it remains unclear how desynchronization occurs. From theoretical analysis and computer simulations, we found that slow inhibition plays a key role in desynchronizing neurons. Depending on the balance between fast and slow inhibitory inputs, particular neurons may either synchronize or desynchronize. The complementary roles of the two synaptic time scales in the formation of neural assemblies suggest a wiring scheme that produces stimulus-specific inhibitory interactions and endows inhibitory sub-circuits with properties of binary memories.