Reconstructing stimuli from the spike trains of neurons is an important approach for understanding the neural code. One of the difficulties associated with this task is that signals that vary continuously in time are encoded into sequences of discrete events or spikes. An important problem is to determine how much information about the continuously varying stimulus can be extracted from the time-points at which spikes were observed, especially if these time-points are subject to some sort of randomness. For the special case of spike trains generated by leaky integrate-and-fire neurons, noise can be introduced by allowing variations in the threshold every time a spike is released. A simple decoding algorithm previously derived for the noiseless case can be extended to the stochastic case, but turns out to be biased. Here, we review a solution to this problem by presenting a simple yet efficient algorithm which greatly reduces the bias, and therefore leads to better decoding performance in the stochastic case.
decoding; spiking neurons; Bayesian inference; population coding; leaky integrate and fire
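A minimal sketch of the encoding step described above, assuming a leaky integrate-and-fire neuron whose threshold is redrawn from a Gaussian after every spike; all parameter values (tau_m, theta_mean, theta_sd) and the test stimulus are illustrative assumptions, not values from the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    dt, tau_m = 1e-4, 20e-3            # time step and membrane time constant (assumed)
    theta_mean, theta_sd = 1.0, 0.05   # mean threshold and its spike-to-spike jitter (assumed)

    def lif_encode(stimulus):
        """Spike times of a leaky integrate-and-fire neuron whose threshold is
        redrawn from a Gaussian after every spike."""
        v, theta, spikes = 0.0, theta_mean, []
        for i, s in enumerate(stimulus):
            v += dt * (-v / tau_m + s)     # leaky integration of the continuous stimulus
            if v >= theta:
                spikes.append(i * dt)      # record the spike time
                v = 0.0                    # reset the membrane potential
                theta = theta_mean + theta_sd * rng.standard_normal()  # noisy new threshold
        return np.array(spikes)

    t = np.arange(0.0, 1.0, dt)
    stim = 60.0 + 25.0 * np.sin(2 * np.pi * 3.0 * t)   # slowly varying test stimulus
    print(lif_encode(stim).size, "spikes")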
The response of a neuron to a time-dependent stimulus, as measured in a Peri-Stimulus-Time-Histogram (PSTH), exhibits an intricate temporal structure that reflects potential temporal coding principles. Here we analyze the encoding and decoding of PSTHs for spiking neurons with arbitrary refractoriness and adaptation. As a modeling framework, we use the spike response model, also known as the generalized linear neuron model. Because of refractoriness, the effect of the most recent spike on the spiking probability a few milliseconds later is very strong. The influence of the last spike therefore needs to be described with high precision, while the rest of the neuronal spiking history merely introduces an average self-inhibition or adaptation that depends on the expected number of past spikes but not on the exact spike timings. Based on these insights, we derive a ‘quasi-renewal equation’ which is shown to yield an excellent description of the firing rate of adapting neurons. We explore the domain of validity of the quasi-renewal equation and compare it with other rate equations for populations of spiking neurons. The problem of decoding the stimulus from the population response (or PSTH) is addressed analogously. We find that for small levels of activity and weak adaptation, a simple accumulator of the past activity is sufficient to decode the original input, but when refractory effects become large decoding becomes a non-linear function of the past activity. The results presented here can be applied to the mean-field analysis of coupled neuron networks, but also to arbitrary point processes with negative self-interaction.
How can information be encoded and decoded in populations of adapting neurons? A quantitative answer to this question requires a mathematical expression relating neuronal activity to the external stimulus, and, conversely, stimulus to neuronal activity. Although widely used equations and models exist for the special problem of relating external stimulus to the action potentials of a single neuron, the analogous problem of relating the external stimulus to the activity of a population has proven more difficult. There is a bothersome gap between the dynamics of single adapting neurons and the dynamics of populations. Moreover, if we ignore the single neurons and describe directly the population dynamics, we are faced with the ambiguity of the adapting neural code. The neural code of adapting populations is ambiguous because it is possible to observe a range of population activities in response to a given instantaneous input. Somehow the ambiguity is resolved by the knowledge of the population history, but how precisely? In this article we use approximation methods to provide mathematical expressions that describe the encoding and decoding of external stimuli in adapting populations. The theory presented here helps to bridge the gap between the dynamics of single neurons and that of populations.
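As a rough illustration of the modeling framework named above (the spike response model, or generalized linear neuron model), the following sketch simulates a single escape-rate neuron whose hazard combines the input with a spike-history kernel: a strong, brief refractory component and a weak, slow adaptation component, mirroring the distinction drawn in the abstract. Kernel shapes and all constants are assumptions for illustration only:

    import numpy as np

    rng = np.random.default_rng(10)
    dt, T = 1e-3, 5.0
    t_axis = np.arange(0.0, T, dt)

    def eta(s):
        """Spike-history kernel: strong brief refractoriness plus weak slow
        adaptation (shapes and constants are illustrative assumptions)."""
        return -5.0 * np.exp(-s / 0.005) - 0.4 * np.exp(-s / 0.5)

    def simulate(drive):
        """Escape-rate (GLM/SRM) neuron: hazard = r0 * exp(drive + history)."""
        r0, spikes = 20.0, []                             # base rate scale (assumed)
        for i, u in enumerate(drive):
            t = i * dt
            h = sum(eta(t - ts) for ts in spikes[-50:])   # recent spikes only, for speed
            if rng.random() < r0 * np.exp(u + h) * dt:    # stochastic spike emission
                spikes.append(t)
        return spikes

    spikes = simulate(1.0 + 0.5 * np.sin(2 * np.pi * 2.0 * t_axis))
    print(len(spikes), "spikes in", T, "s")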
In simulating realistic neuronal circuitry composed of diverse types of neurons, we need an elemental spiking neuron model that is capable of not only quantitatively reproducing the spike times of biological neurons given in vivo-like fluctuating inputs, but also qualitatively representing a variety of firing responses to transient current inputs. Simple models based on the leaky integrate-and-fire mechanism have demonstrated the ability to adapt to biological neurons. In particular, the multi-timescale adaptive threshold (MAT) model reproduces and predicts precise spike times of regular-spiking, intrinsic-bursting, and fast-spiking neurons under any fluctuating current; however, this model is incapable of reproducing such specific firing responses as inhibitory rebound spiking and resonate spiking. In this paper, we augment the MAT model by adding a voltage dependency term to the adaptive threshold so that the model can exhibit the full variety of firing responses to various transient current pulses while maintaining the high adaptability inherent in the original MAT model. Furthermore, with this addition, our model is actually able to better predict spike times. Despite the augmentation, the model has only four free parameters and is implementable in an efficient algorithm for large-scale simulation due to its linearity, serving as an elemental neuron model in the simulation of realistic neuronal circuitry.
spiking neuron model; predicting spike times; reproducing firing patterns; leaky integrate-and-fire model; adaptive threshold; MAT model; voltage dependency; threshold variability
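A hedged sketch of a multi-timescale adaptive-threshold neuron in the spirit of the MAT model: the membrane potential is a leaky integral of the input and is never reset; instead, each spike adds two exponentially decaying components to the threshold. The time constants, jump sizes, and refractory period below are assumed rather than the fitted values of the paper, and the voltage-dependency term of the augmented model is omitted:

    import numpy as np

    dt, tau_m = 1e-4, 5e-3                       # time step, membrane time constant (assumed)
    taus, alphas = (10e-3, 200e-3), (0.5, 0.2)   # two threshold timescales and jumps (assumed)
    omega, refrac = 1.0, 2e-3                    # resting threshold, refractory period (assumed)

    def mat_neuron(stimulus):
        v, h = 0.0, [0.0, 0.0]                   # membrane potential, threshold components
        last_spike, spikes = -np.inf, []
        for i, s in enumerate(stimulus):
            v += dt * (-v / tau_m + s)           # voltage is never reset in the MAT model
            h = [hj * np.exp(-dt / tj) for hj, tj in zip(h, taus)]
            t = i * dt
            if v >= omega + sum(h) and t - last_spike > refrac:
                spikes.append(t)
                h = [hj + aj for hj, aj in zip(h, alphas)]  # threshold jumps at a spike
                last_spike = t
        return spikes

    t = np.arange(0.0, 0.5, dt)
    print(len(mat_neuron(np.full_like(t, 300.0))), "spikes")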
A sensory stimulus evokes activity in many neurons, creating a population response that must be “decoded” by the brain to estimate the parameters of that stimulus. Most decoding models have suggested complex neural circuits that compute optimal estimates of sensory parameters on the basis of responses in many sensory neurons. We propose a slightly suboptimal but practically simpler decoder. Decoding neurons integrate their inputs across 100 ms; incoming spikes are weighted by the preferred stimulus of the neuron of origin; and a local, cellular non-linearity approximates divisive normalization without dividing explicitly. The suboptimal decoder includes two simplifying approximations. It uses estimates of firing rate across the population rather than computing the total population response, and it implements divisive normalization with local cellular mechanisms of single neurons rather than more complicated neural circuit mechanisms. When applied to the practical problem of estimating target speed from a realistic simulation of the population response in extrastriate visual area MT, the suboptimal decoder has almost the same accuracy and precision as traditional decoding models. It succeeds in predicting the precision and imprecision of motor behavior using a suboptimal decoding computation because it adds only a small amount of imprecision to the code for target speed in MT, which is itself imprecise.
population decoding; divisive normalization; spike timing; MT; vector averaging
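The decoding computation described above can be caricatured in a few lines: the decoder leakily integrates incoming spikes over roughly 100 ms, weights them by the (log) preferred speed of the neuron of origin, and forms a vector average. For clarity the sketch performs the normalizing division explicitly; the paper's point is that a local cellular nonlinearity can approximate this step. The toy MT-like tuning curves and all constants are assumptions:

    import numpy as np

    rng = np.random.default_rng(1)
    n, dt, tau = 200, 1e-3, 100e-3          # population size, time step, 100 ms integration
    pref = np.logspace(0, 2, n)             # hypothetical preferred speeds, 1-100 deg/s

    def rates(speed):
        """Toy MT-like population response: log-Gaussian speed tuning (assumed)."""
        return 80.0 * np.exp(-0.5 * ((np.log2(speed) - np.log2(pref)) / 1.5) ** 2)

    def decode(speed, T=0.3):
        """Leaky integration of preferred-speed-weighted spikes; the final division
        implements vector averaging, which the paper approximates with a local
        cellular nonlinearity instead of explicit division."""
        num = den = 0.0
        for _ in range(int(T / dt)):
            counts = rng.poisson(rates(speed) * dt)           # spikes in this time bin
            num += dt * (-num + np.dot(counts, np.log2(pref)) / dt) / tau
            den += dt * (-den + counts.sum() / dt) / tau
        return 2.0 ** (num / den)

    print(decode(16.0))   # estimate close to the true 16 deg/s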
Use of spike timing to encode information requires that neurons respond with high temporal precision and with high reliability. Fast fluctuating stimuli are known to result in highly reproducible spike times across trials, whereas constant stimuli result in variable spike times. Here, we have investigated how spike-time reliability depends on the time scale of fluctuations of the input stimuli in real neurons (mitral cells in the olfactory bulb and pyramidal cells in the neocortex) as well as in neuron models (integrate-and-fire and Hodgkin-Huxley) with intrinsic noise. In all cases we found that for firing frequencies in the beta/gamma range, spike reliability is maximal when the input includes fluctuations on the time scale of a few milliseconds (2-5 ms), coinciding with the time scale of fast synapses, and decreases substantially for faster and slower inputs. In addition, we show mathematically that the existence of an optimal time scale for spike-time reliability is a general feature of neurons. Finally, we comment on how these findings relate to the mechanisms that cause neuronal synchronization.
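A hedged sketch of the kind of numerical experiment described above: a frozen Ornstein-Uhlenbeck stimulus with correlation time tau_s drives a noisy leaky integrate-and-fire neuron over repeated trials, and reliability is scored as the mean pairwise correlation of smoothed spike trains. All parameters are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(2)
    dt, tau_m, T, theta = 1e-4, 10e-3, 2.0, 1.0   # step, membrane constant, duration, threshold

    def ou(tau_s, seed):
        """Ornstein-Uhlenbeck noise with unit variance and correlation time tau_s."""
        g = np.random.default_rng(seed)
        x = np.zeros(int(T / dt))
        for i in range(1, x.size):
            x[i] = x[i - 1] - dt * x[i - 1] / tau_s + np.sqrt(2 * dt / tau_s) * g.standard_normal()
        return x

    def spike_train(stim, trial_seed):
        """Noisy LIF driven by a frozen stimulus plus trial-specific intrinsic noise."""
        noise = ou(2e-3, trial_seed)          # intrinsic noise on a fixed 2 ms timescale
        v, out = 0.0, np.zeros(stim.size, bool)
        for i in range(stim.size):
            v += dt * (-v / tau_m + stim[i] + 30.0 * noise[i])
            if v >= theta:
                out[i], v = True, 0.0
        return out

    def reliability(tau_s, trials=10):
        stim = 120.0 + 60.0 * ou(tau_s, 0)    # frozen (identical) across trials
        k = np.exp(-0.5 * (np.arange(-50, 51) * dt / 2e-3) ** 2)   # 2 ms smoothing kernel
        r = [np.convolve(spike_train(stim, s + 1), k, 'same') for s in range(trials)]
        c = np.corrcoef(r)
        return c[np.triu_indices(trials, 1)].mean()

    for tau_s in (0.5e-3, 3e-3, 20e-3):       # fast, intermediate, and slow fluctuations
        print(tau_s, round(reliability(tau_s), 3))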
Firing-rate models provide a practical tool for studying the dynamics of trial- or population-averaged neuronal signals. A wealth of theoretical and experimental studies has been dedicated to the derivation or extraction of such models by investigating the firing-rate response characteristics of ensembles of neurons. The majority of these studies assumes that neurons receive input spikes at a high rate through weak synapses (diffusion approximation). For many biological neural systems, however, this assumption cannot be justified. So far, it is unclear how time-varying presynaptic firing rates are transmitted by a population of neurons if the diffusion assumption is dropped. Here, we numerically investigate the stationary and non-stationary firing-rate response properties of leaky integrate-and-fire neurons receiving input spikes through excitatory synapses with alpha-function shaped postsynaptic currents for strong synaptic weights. Input spike trains are modeled by inhomogeneous Poisson point processes with sinusoidal rate. Average rates, modulation amplitudes, and phases of the period-averaged spike responses are measured for a broad range of stimulus, synapse, and neuron parameters. Across wide parameter regions, the resulting transfer functions can be approximated by a linear first-order low-pass filter. Below a critical synaptic weight, the cutoff frequencies are approximately constant and determined by the synaptic time constants. Only for synapses with unrealistically strong weights are the cutoff frequencies significantly increased. To account for stimuli with larger modulation depths, we combine the measured linear transfer function with the nonlinear response characteristics obtained for stationary inputs. The resulting linear–nonlinear model accurately predicts the population response for a variety of non-sinusoidal stimuli.
leaky integrate-and-fire neuron; spiking neuron model; firing-rate model; linear response; transfer function; diffusion limit; finite synaptic weights; linear–nonlinear model
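The resulting linear-nonlinear model can be stated very compactly; a sketch assuming a first-order low-pass filter (with an effective cutoff time constant of synaptic origin) followed by a stationary transfer function, which in the study is measured from simulations and is replaced here by an assumed soft-threshold placeholder:

    import numpy as np

    dt, tau_c = 1e-4, 2e-3            # time step; effective low-pass time constant (assumed)

    def stationary_rate(nu_in):
        """Placeholder stationary transfer function nu_out = F(nu_in); in the study
        this curve is measured from simulations, here a soft threshold is assumed."""
        return 40.0 * np.log1p(np.exp((nu_in - 3000.0) / 400.0))

    def ln_population_rate(nu_in):
        """Linear first-order low-pass filter followed by the static nonlinearity."""
        x = np.empty_like(nu_in)
        x[0] = nu_in[0]
        for i in range(1, nu_in.size):
            x[i] = x[i - 1] + dt * (nu_in[i] - x[i - 1]) / tau_c   # low-pass stage
        return stationary_rate(x)                                  # nonlinear stage

    t = np.arange(0.0, 0.2, dt)
    nu = 3200.0 + 800.0 * np.sin(2 * np.pi * 30.0 * t)   # sinusoidally modulated input rate
    r = ln_population_rate(nu)
    print(round(r.min(), 1), round(r.max(), 1))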
In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.
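A hedged reading of the I-learning idea as summarized above: each synapse is potentiated in proportion to its synaptic current at target spike times and depressed in proportion to its current at actual output spike times. The exponential kernel, time constant, and learning rate are assumptions, and this is a caricature of the rule rather than the paper's exact formulation:

    import numpy as np

    tau_s, gamma = 5e-3, 0.01       # synaptic time constant, learning rate (assumed)

    def syn_current(t, pre_times):
        """Unweighted synaptic current from one afferent (exponential kernel, assumed)."""
        d = t - np.asarray(pre_times)
        d = d[d > 0]
        return np.sum(np.exp(-d / tau_s))

    def i_learning_step(w, pre, actual_spikes, target_spikes):
        """Hedged reading of I-learning: potentiate each synapse by its current at
        target spike times, depress it by its current at actual spike times."""
        for j, pre_times in enumerate(pre):
            plus = sum(syn_current(t, pre_times) for t in target_spikes)
            minus = sum(syn_current(t, pre_times) for t in actual_spikes)
            w[j] += gamma * (plus - minus)
        return w

    pre = [[0.002, 0.015], [0.005], [0.011, 0.020]]    # input spike times per afferent
    w = np.array([0.5, 0.5, 0.5])
    print(i_learning_step(w, pre, actual_spikes=[0.018], target_spikes=[0.012]))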
Computational analyses have revealed that precisely timed spikes emitted by somatosensory cortical neuronal populations encode basic stimulus features in the rat's whisker sensory system. Efficient spike-time-based decoding schemes, both for the spatial location of a stimulus and for the kinetic features of complex whisker movements, have been defined. To date, these decoding schemes have been based upon spike times referenced to an external temporal frame – the time of the stimulus itself. Such schemes are limited by the requirement of precise knowledge of the stimulus time signal, and it is not clear whether stimulus times are known to rats making sensory judgments. Here, we first review studies of the information obtained from spike timing referenced to the stimulus time. Then we explore new methods for extracting spike train information independently of any external temporal reference frame. These proposed methods are based on the detection of stimulus-dependent differences in firing times within a neuronal population. We apply them to a data set using single-whisker stimulation in anesthetized rats and find that the stimulus site can be decoded from millisecond-range relative differences in spike times even without knowledge of the stimulus time. If spike counts alone are measured over tens or hundreds of milliseconds rather than milliseconds, such decoders are much less effective. These results suggest that decoding schemes based on millisecond-precise spike times are likely to subserve robust and information-rich transmission in the somatosensory system.
information theory; somatosensation; neural coding; decoding; spike patterns; population coding
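A toy version of decoding without an external time reference: the stimulus site is read out from the sign of the latency difference between two neurons' first spikes, so the unknown stimulus time cancels. All latencies and jitter values are assumed for illustration:

    import numpy as np

    rng = np.random.default_rng(3)

    def first_spikes(site, jitter=1e-3):
        """Toy model: each site drives one neuron a few ms earlier than the other.
        Latencies (in seconds) are assumed for illustration."""
        base = rng.uniform(0, 0.1)      # unknown stimulus time; the decoder never sees it
        lat = (0.008, 0.013) if site == 0 else (0.013, 0.008)
        return (base + lat[0] + jitter * rng.standard_normal(),
                base + lat[1] + jitter * rng.standard_normal())

    def decode(t_a, t_b):
        """Relative-timing decoder: only the difference t_a - t_b is used."""
        return 0 if t_a < t_b else 1

    correct = sum(decode(*first_spikes(s)) == s for s in rng.integers(0, 2, 1000))
    print(correct / 1000)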
Changes of synaptic connections between neurons are thought to be the physiological basis of learning. These changes can be gated by neuromodulators that encode the presence of reward. We study a family of reward-modulated synaptic learning rules for spiking neurons on a learning task in continuous space inspired by the Morris water maze. The synaptic update rule modifies the release probability of synaptic transmission and depends on the timing of presynaptic spike arrival, postsynaptic action potentials, as well as the membrane potential of the postsynaptic neuron. The family of learning rules includes an optimal rule derived from policy gradient methods as well as reward-modulated Hebbian learning. The synaptic update rule is implemented in a population of spiking neurons using a network architecture that combines feedforward input with lateral connections. Actions are represented by a population of hypothetical action cells with strong Mexican-hat connectivity and are read out at theta frequency. We show that in this architecture, a standard policy gradient rule fails to solve the Morris water maze task, whereas a variant with a Hebbian bias can learn the task within 20 trials, consistent with experiments. This result does not depend on implementation details such as the size of the neuronal populations. Our theoretical approach shows how learning new behaviors can be linked to reward-modulated plasticity at the level of single synapses and makes predictions about the voltage and spike-timing dependence of synaptic plasticity and the influence of neuromodulators such as dopamine. It is an important step towards connecting formal theories of reinforcement learning with neuronal and synaptic properties.
Humans and animals learn if they receive reward. Such reward is likely to be communicated throughout the brain by neuromodulatory signals. In this paper we present a network of model neurons, which communicate by short electrical pulses (spikes). Learning is achieved by modifying the input connections depending on the signals they emit and receive, if a sequence of actions is followed by reward. With such a learning rule, a simulated animal learns to find (starting from arbitrary initial conditions) a target location where reward has occurred in the past.
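A minimal sketch of the general family of rules discussed above: an STDP-shaped eligibility trace accumulates during a trial and is converted into a weight change only when a (possibly delayed) reward signal arrives. The constants are assumptions, and the rule is a caricature of reward-modulated plasticity rather than the paper's derived policy-gradient update:

    import numpy as np

    tau_e, a_plus, a_minus, lr = 0.5, 1.0, -0.5, 0.01   # trace decay, STDP gains, learning rate (assumed)

    def rstdp_episode(pairs, reward, w):
        """Accumulate an STDP-shaped eligibility trace over one trial; the weight
        changes only when multiplied by the reward at the end: dw = lr * reward * e."""
        e, t_last = 0.0, 0.0
        for t_pre, t_post in pairs:
            t_now = max(t_pre, t_post)
            e *= np.exp(-(t_now - t_last) / tau_e)   # eligibility trace decays in time
            d = t_post - t_pre
            e += a_plus * np.exp(-d / 0.02) if d > 0 else a_minus * np.exp(d / 0.02)
            t_last = t_now
        return w + lr * reward * e                    # reward gates the plasticity

    w = 0.5
    print(rstdp_episode([(0.010, 0.015), (0.050, 0.056)], reward=1.0, w=w))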
Compelling behavioral evidence suggests that humans can make optimal decisions despite the uncertainty inherent in perceptual or motor tasks. A key question in neuroscience is how populations of spiking neurons can implement such probabilistic computations. In this article, we develop a comprehensive framework for optimal, spike-based sensory integration and working memory in a dynamic environment. We propose that probability distributions are inferred spike-per-spike in recurrently connected networks of integrate-and-fire neurons. As a result, these networks can combine sensory cues optimally, track the state of a time-varying stimulus and memorize accumulated evidence over periods much longer than the time constant of single neurons. Importantly, we propose that population responses and persistent working memory states represent entire probability distributions and not only single stimulus values. These memories are reflected by sustained, asynchronous patterns of activity which make relevant information available to downstream neurons within their short time window of integration. Model neurons act as predictive encoders, only firing spikes which account for new information that has not yet been signaled. Thus, spike times deterministically signal a prediction error, contrary to rate codes in which spike times are considered to be random samples of an underlying firing rate. As a consequence of this coding scheme, a multitude of spike patterns can reliably encode the same information. This results in weakly correlated, Poisson-like spike trains that are sensitive to initial conditions but robust to even high levels of external neural noise. This spike train variability reproduces that observed in cortical sensory spike trains, but cannot be equated to noise. On the contrary, it is a consequence of optimal spike-based inference. In contrast, we show that rate-based models perform poorly when implemented with stochastically spiking neurons.
Most of our daily actions are subject to uncertainty. Behavioral studies have confirmed that humans handle this uncertainty in a statistically optimal manner. A key question then is what neural mechanisms underlie this optimality, i.e. how can neurons represent and compute with probability distributions. Previous approaches have proposed that probabilities are encoded in the firing rates of neural populations. However, such rate codes appear poorly suited to understand perception in a constantly changing environment. In particular, it is unclear how probabilistic computations could be implemented by biologically plausible spiking neurons. Here, we propose a network of spiking neurons that can optimally combine uncertain information from different sensory modalities and keep this information available for a long time. This implies that neural memories not only represent the most likely value of a stimulus but rather a whole probability distribution over it. Furthermore, our model suggests that each spike conveys new, essential information. Consequently, the observed variability of neural responses cannot simply be understood as noise but rather as a necessary consequence of optimal sensory integration. Our results therefore question strongly held beliefs about the nature of neural “signal” and “noise”.
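A one-neuron caricature, loosely inspired by the predictive-encoder idea described above: the cell fires only when its running prediction of the signal undershoots the actual signal by more than a threshold, so each spike conveys only information not yet signaled. This is not the recurrent network of the paper; all constants are assumed:

    import numpy as np

    dt, tau, kick, thresh = 1e-3, 50e-3, 0.1, 0.05   # decoder constant, spike impact (assumed)

    def predictive_encode(x):
        """Neuron fires only when its running prediction undershoots the signal by
        more than `thresh`; each spike increments the prediction by `kick`."""
        xhat, spikes = 0.0, []
        for i, xi in enumerate(x):
            xhat += dt * (-xhat / tau)               # prediction decays between spikes
            if xi - xhat > thresh:                   # prediction error triggers a spike
                spikes.append(i * dt)
                xhat += kick
        return spikes, xhat

    t = np.arange(0.0, 1.0, dt)
    spikes, _ = predictive_encode(0.5 + 0.3 * np.sin(2 * np.pi * 2.0 * t))
    print(len(spikes), "spikes; spike times track the unexplained part of the signal")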
Experimental studies have observed Long Term synaptic Potentiation (LTP) when a presynaptic neuron fires shortly before a postsynaptic neuron, and Long Term Depression (LTD) when the presynaptic neuron fires shortly after, a phenomenon known as Spike Timing Dependent Plasticity (STDP). When a neuron is presented successively with discrete volleys of input spikes, STDP has been shown to learn ‘early spike patterns’, that is, to concentrate synaptic weights on afferents that consistently fire early, with the result that the postsynaptic spike latency decreases until it reaches a minimal and stable value. Here, we show that these results still stand in a continuous regime where afferents fire continuously with a constant population rate. As such, STDP is able to solve a very difficult computational problem: to localize a repeating spatio-temporal spike pattern embedded in equally dense ‘distractor’ spike trains. STDP thus enables some form of temporal coding, even in the absence of an explicit time reference. Given that the mechanism exposed here is simple and cheap, it is hard to believe that the brain did not evolve to use it.
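The underlying additive STDP rule can be sketched directly: afferents firing shortly before the postsynaptic spike are potentiated, afferents firing after it are depressed, so weight concentrates on the reliably early part of a repeating pattern. The constants below are assumptions of the order used in additive-STDP studies, not the paper's exact values:

    import numpy as np

    a_plus, a_minus, tau_p, tau_m = 0.03, 0.025, 17e-3, 34e-3   # assumed STDP parameters

    def stdp_update(w, pre_times, post_time, w_max=1.0):
        """Additive STDP around one postsynaptic spike: pre-before-post -> LTP,
        pre-after-post -> LTD; weights are clipped to [0, w_max]."""
        for j, t_pre in enumerate(pre_times):
            d = post_time - t_pre
            if d >= 0:
                w[j] += a_plus * np.exp(-d / tau_p)
            else:
                w[j] -= a_minus * np.exp(d / tau_m)
        return np.clip(w, 0.0, w_max)

    w = np.full(4, 0.5)
    print(stdp_update(w, pre_times=[0.008, 0.009, 0.012, 0.015], post_time=0.010))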
Temporal filtering is a fundamental operation of nervous systems. In peripheral sensory systems, the temporal pattern of spiking activity can encode various stimulus qualities, and temporal filtering allows postsynaptic neurons to detect behaviorally-relevant stimulus features from these spike trains. Intrinsic excitability, short-term synaptic plasticity, and voltage-dependent dendritic conductances have all been identified as mechanisms that can establish temporal filtering behavior in single neurons. Here we show that synaptic integration of temporally-summating excitation and inhibition can establish diverse temporal filters of presynaptic input. Mormyrid electric fish communicate by varying the intervals between electric organ discharges. The timing of each discharge is coded by peripheral receptors into precisely-timed spikes. Within the midbrain posterior exterolateral nucleus, temporal filtering by individual neurons results in selective responses to a particular range of presynaptic interspike intervals. These neurons are diverse in their temporal filtering properties, reflecting the wide range of intervals that must be detected during natural communication behavior. By manipulating presynaptic spike timing with high temporal resolution, we demonstrate that tuning to behaviorally-relevant patterns of presynaptic input is similar in vivo and in vitro. We reveal that GABAergic inhibition plays a critical role in establishing different temporal filtering properties. Further, our results demonstrate that temporal summation of excitation and inhibition establishes selective responses to high and low rates of synaptic input, respectively. Simple models of synaptic integration reveal that variation in these two competing influences provides a basic mechanism for generating diverse temporal filters of synaptic input.
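A hedged sketch of the basic mechanism named in the last sentence above: temporally summating excitation and delayed, slower inhibition are combined, and varying their relative gains and time constants yields oppositely tuned interval filters. All kernels and constants are assumptions; the example contrasts a toy excitation-dominated cell with an inhibition-dominated one:

    import numpy as np

    def psp(t, onset, g, tau):
        """Exponential postsynaptic kernel switched on at `onset` (assumed shape)."""
        s = np.maximum(t - onset, 0.0)
        return np.where(t > onset, g * np.exp(-s / tau), 0.0)

    def peak_drive(isi, g_e, tau_e, g_i, tau_i, n=8, delay=2e-3):
        """Peak of summed excitation minus delayed inhibition during the last
        interspike interval of a presynaptic train with interval `isi`."""
        t = np.arange(0.0, n * isi + 0.05, 1e-4)
        e = sum(psp(t, k * isi, g_e, tau_e) for k in range(n))
        i_ = sum(psp(t, k * isi + delay, g_i, tau_i) for k in range(n))
        window = (t > (n - 1) * isi) & (t < n * isi)
        return np.max((e - i_)[window])

    for isi in (0.005, 0.02, 0.1):
        high = peak_drive(isi, g_e=0.4, tau_e=20e-3, g_i=0.2, tau_i=5e-3)
        low = peak_drive(isi, g_e=1.0, tau_e=5e-3, g_i=0.6, tau_i=50e-3)
        print(f"isi={isi:5.3f}  excitation-dominated: {high:5.2f}  inhibition-dominated: {low:5.2f}")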
Mathematical models of neurons are widely used to improve understanding of neuronal spiking behavior. These models can produce artificial spike trains that resemble actual spike train data in important ways, but they are not very easy to apply to the analysis of spike train data. Instead, statistical methods based on point process models of spike trains provide a wide range of data-analytical techniques. Two simplified point process models have been introduced in the literature: the time-rescaled renewal process (TRRP) and the multiplicative inhomogeneous Markov interval (m-IMI) model. In this letter we investigate the extent to which the TRRP and m-IMI models are able to fit spike trains produced by stimulus-driven leaky integrate-and-fire (LIF) neurons.
With a constant stimulus, the LIF spike train is a renewal process, and the m-IMI and TRRP models will accurately describe the LIF spike train variability. With a time-varying stimulus, the probability of spiking under all three of these models depends on both the experimental clock time relative to the stimulus and the time since the previous spike, but it does so differently for the LIF, m-IMI, and TRRP models. We assessed the distance between the LIF model and each of the two empirical models in the presence of a time-varying stimulus. We found that while lack of fit of a Poisson model to LIF spike train data can be evident even in small samples, the m-IMI and TRRP models tend to fit well, and much larger samples are required before there is statistical evidence of lack of fit of the m-IMI or TRRP models. We also found that when the mean of the stimulus varies across time, the m-IMI model provides a better fit to the LIF data than the TRRP, and when the variance of the stimulus varies across time, the TRRP provides the better fit.
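Goodness-of-fit assessments of this kind typically rest on the time-rescaling theorem: integrating the model's conditional intensity between successive spikes should produce unit-rate exponential intervals if the model is correct. A generic sketch of that recipe (not necessarily the exact statistics used in the letter), with the intensity left as a user-supplied placeholder:

    import numpy as np

    def rescaled_intervals(spike_times, intensity, dt=1e-4):
        """Time-rescaling: z_k = integral of the conditional intensity between
        consecutive spikes; under a correct model, z_k ~ Exponential(1)."""
        z = []
        for t0, t1 in zip(spike_times[:-1], spike_times[1:]):
            grid = np.arange(t0, t1, dt)
            z.append(np.sum(intensity(grid) * dt))
        return np.array(z)

    def ks_statistic(z):
        """Kolmogorov-Smirnov distance to the Exponential(1) distribution."""
        u = np.sort(1.0 - np.exp(-z))
        k = np.arange(1, u.size + 1)
        return np.max(np.maximum(u - (k - 1) / u.size, k / u.size - u))

    # toy check: homogeneous Poisson spikes against their true intensity
    rng = np.random.default_rng(4)
    spikes = np.cumsum(rng.exponential(1 / 20.0, 500))     # 20 Hz Poisson train
    print(ks_statistic(rescaled_intervals(spikes, lambda t: 20.0 + 0 * t)))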
We consider the constrained optimization of excitatory synaptic input patterns to maximize spike generation in leaky integrate-and-fire (LIF) and theta model neurons. In the case of discrete input kicks with a fixed total magnitude, optimal input timings and strengths are identified for each model using phase plane arguments. In both cases, optimal features relate to finding an input level at which the drop in input between successive spikes is minimized. A bounded minimizing level always exists in the theta model and may or may not exist in the LIF model, depending on parameter tuning. We also provide analytical formulas to estimate the number of spikes resulting from a given input train. In a second case of continuous inputs of fixed total magnitude, we analyze the tuning of an input shape parameter to maximize the number of spikes occurring in a fixed time interval. Results are obtained using numerical solution of a variational boundary value problem that we derive, as well as analysis, for the theta model and using a combination of simulation and analysis for the LIF model. In particular, consistent with the discrete case, the number of spikes in the theta model rises and then falls again as the input becomes more tightly peaked. Under a similar variation in the LIF case, we numerically show that the number of spikes increases monotonically up to some bound and we analytically constrain the times at which spikes can occur and estimate the bound on the number of spikes fired.
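The flavor of the discrete-kick analysis can be conveyed with exact event-based integration of the LIF model: leak between kicks, instantaneous jumps at kick times, and a count of threshold crossings. Parameters are assumptions; the example compares two input trains with the same total magnitude but different distributions over kicks:

    import numpy as np

    tau, theta = 20e-3, 1.0     # membrane time constant and threshold (assumed)

    def spikes_from_kicks(kick_times, kick_sizes):
        """Exact event-based LIF integration: leak between kicks, jump at kicks,
        reset at threshold; returns the number of spikes fired."""
        v, t, count = 0.0, 0.0, 0
        for tk, a in zip(kick_times, kick_sizes):
            v *= np.exp(-(tk - t) / tau)   # membrane leak between consecutive kicks
            v += a                          # instantaneous input kick
            t = tk
            if v >= theta:
                count += 1
                v = 0.0                     # reset after the spike
        return count

    times = np.linspace(0.0, 0.1, 10)
    print(spikes_from_kicks(times, np.full(10, 0.3)))     # 10 kicks of size 0.3
    print(spikes_from_kicks(times[:5], np.full(5, 0.6)))  # 5 kicks of size 0.6, same total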
Stochastic leaky integrate-and-fire models are popular due to their simplicity and statistical tractability. They have been widely applied to gain understanding of the underlying mechanisms for spike timing in neurons, and have served as building blocks for more elaborate models. The Ornstein–Uhlenbeck process is especially popular for describing the stochastic fluctuations in the membrane potential of a neuron, but other models, such as the square-root model or models with a non-linear drift, are also applied. Data that can be described by such models have to be stationary; thus, the simple models can only be applied over short time windows. However, experimental data show varying time constants, state-dependent noise, a graded firing threshold and time-inhomogeneous input. In the present study we build a jump diffusion model that incorporates these features, and introduce a firing mechanism with a state-dependent intensity. In addition, we suggest statistical methods to estimate all unknown quantities and apply these to analyze turtle motoneuron membrane potentials. Finally, simulated and real data are compared and discussed. We find that a square-root diffusion describes the data much better than an Ornstein–Uhlenbeck process with constant diffusion coefficient. Further, the membrane time constant decreases with increasing depolarization, as expected from the increase in synaptic conductance. The network activity to which the neuron is exposed can be reasonably estimated as a thresholded version of the nerve output from the network. Moreover, the spiking characteristics are well described by a Poisson spike train with an intensity depending exponentially on the membrane potential.
Statistical methods in neuroscience; Membrane time constants; State dependent firing intensity; Ornstein–Uhlenbeck process; Square-root model; Synaptic fluctuations
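A hedged Euler-Maruyama sketch of the model class described above: a square-root (state-dependent-noise) diffusion for the membrane potential combined with a firing intensity depending exponentially on the voltage. All parameters are illustrative, not the estimated values of the study:

    import numpy as np

    rng = np.random.default_rng(5)
    dt, T = 1e-4, 5.0
    tau, v_rest, sigma = 20e-3, -65.0, 8.0     # assumed membrane parameters
    v_floor = -80.0                            # lower bound keeping the sqrt argument positive
    lam0, beta, v0 = 5.0, 1.0, -60.0           # exponential firing intensity (assumed)

    v, spikes = v_rest, []
    for i in range(int(T / dt)):
        drift = -(v - v_rest) / tau
        diff = sigma * np.sqrt(max(v - v_floor, 0.0))   # state-dependent (square-root) noise
        v += drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
        if rng.random() < lam0 * np.exp(beta * (v - v0)) * dt:   # exponential escape rate
            spikes.append(i * dt)
            v = v_rest                                           # reset after a spike
    print(len(spikes), "spikes in", T, "s")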
Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel is perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).
The spatio-temporal activity pattern generated by a recurrent neuronal network can provide a rich dynamical basis which allows readout neurons to generate a variety of responses by tuning the synaptic weights of their inputs. The repertoire of possible responses and the response reliability become maximal if the spike trains of individual neurons are uncorrelated. Spike-train correlations in cortical networks can indeed be very small, even for neighboring neurons. This seems to be at odds with the finding that neighboring neurons receive a considerable fraction of inputs from identical presynaptic sources constituting an inevitable source of correlation. In this article, we show that inhibitory feedback, abundant in biological neuronal networks, actively suppresses correlations. The mechanism is generic: It does not depend on the details of the network nodes and decorrelates networks composed of excitatory and inhibitory neurons as well as purely inhibitory networks. For the case of the leaky integrate-and-fire model, we derive the correlation structure analytically. The new toolbox of formal linearization and a basis transformation exposing the feedback component is applicable to a range of biological systems. We confirm our analytical results by direct simulations.
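The one-dimensional negative-feedback argument for fluctuation suppression can be illustrated with a toy linear rate model: the same noise is injected into an open-loop (feedforward) and a closed-loop (feedback) version of the compound dynamics, and the closed loop shows strongly reduced variance. The time constant and feedback gain are assumed, and this is a caricature of the argument, not the paper's derivation:

    import numpy as np

    rng = np.random.default_rng(6)
    dt, tau, g, steps = 1e-4, 5e-3, 8.0, 200_000   # time constant and feedback gain (assumed)
    noise = rng.standard_normal(steps)             # identical noise for both conditions

    def run(feedback):
        """One-dimensional linear dynamics of the compound activity a(t):
        tau * da/dt = -a - g*a*[feedback on] + noise."""
        a, trace = 0.0, np.empty(steps)
        for i in range(steps):
            inhib = g * a if feedback else 0.0     # inhibitory feedback term
            a += dt * (-a - inhib) / tau + np.sqrt(dt / tau) * noise[i]
            trace[i] = a
        return trace

    print("open-loop variance  :", run(False).var())
    print("closed-loop variance:", run(True).var())   # smaller by roughly 1 + g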
Temporal coding of spike-times using oscillatory mechanisms allied to spike-time dependent plasticity could represent a powerful mechanism for neuronal communication. However, it is unclear how temporal coding is constructed at the single neuronal level. Here we investigate a novel class of highly regular, metronome-like neurones in the rat brainstem which form a major source of cerebellar afferents. Stimulation of sensory inputs evoked brief periods of inhibition that interrupted the regular firing of these cells leading to phase-shifted spike-time advancements and delays. Alongside phase-shifting, metronome cells also behaved as band-pass filters during rhythmic sensory stimulation, with maximal spike-stimulus synchronisation at frequencies close to the idiosyncratic firing frequency of each neurone. Phase-shifting and band-pass filtering serve to temporally align ensembles of metronome cells, leading to sustained volleys of near-coincident spike-times, thereby transmitting synchronised sensory information to downstream targets in the cerebellar cortex.
Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of the parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally, we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
Deciphering the encoding of information in the brain implies understanding how individual neurons emit action potentials (APs) in response to time-varying stimuli. This task is made difficult by two facts: (i) although the biophysics of AP generation are well understood, the dynamics of the membrane potential in response to a time-varying input are highly complex; (ii) the firing of APs in response to a given stimulus is inherently stochastic, as only a fraction of the inputs to a neuron are directly controlled by the stimulus, the remainder being due to the fluctuating activity of the surrounding network. As a result, the input-output transform of individual neurons is often represented with the help of simplified phenomenological models that do not take into account the biophysical details. In this study, we directly relate a class of such phenomenological models, the so-called linear-nonlinear models, with more biophysically detailed spiking neuron models. We provide a quantitative mapping between the two classes of models, and show that the linear-nonlinear models provide a good approximation of the input-output transform of spiking neurons, as long as the fluctuating inputs from the surrounding network are not exceedingly weak.
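A sketch of the standard reverse-correlation recipe referred to above: drive a simple spiking model with white noise, estimate the linear filter as the spike-triggered average, and read off the static nonlinearity by relating the filtered stimulus to the observed firing. The noisy LIF stand-in and all constants are assumptions:

    import numpy as np

    rng = np.random.default_rng(7)
    dt, tau_m, theta, T = 1e-3, 20e-3, 1.0, 200.0
    n = int(T / dt)
    stim = rng.standard_normal(n)                   # white-noise input (arbitrary units)

    # simulate a noisy LIF neuron driven by the white noise
    v, spikes = 0.0, np.zeros(n, bool)
    for i in range(n):
        v += dt * (-v / tau_m) + 0.2 * stim[i] + 0.05 * rng.standard_normal()
        if v >= theta:
            spikes[i], v = True, 0.0

    # linear stage: spike-triggered average over a 50 ms window
    lags = 50
    sta = np.zeros(lags)
    idx = np.flatnonzero(spikes)
    idx = idx[idx >= lags]
    for i in idx:
        sta += stim[i - lags:i]
    sta /= idx.size

    # nonlinear stage: firing rate as a function of the filtered stimulus
    filt = np.convolve(stim, sta[::-1], 'full')[:n]   # causal filtering with the STA
    bins = np.quantile(filt, np.linspace(0, 1, 9))
    which = np.digitize(filt, bins[1:-1])
    rate = [spikes[which == b].mean() / dt for b in range(8)]
    print(np.round(rate, 1))   # increasing rate with filtered stimulus = static nonlinearity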
Synaptic plasticity is considered to be the biological substrate of learning and memory. In this document we review phenomenological models of short-term and long-term synaptic plasticity, in particular spike-timing dependent plasticity (STDP). The aim of the document is to provide a framework for classifying and evaluating different models of plasticity. We focus on phenomenological synaptic models that are compatible with integrate-and-fire type neuron models where each neuron is described by a small number of variables. This implies that synaptic update rules for short-term or long-term plasticity can only depend on spike timing and, potentially, on membrane potential, as well as on the value of the synaptic weight, or on low-pass filtered (temporally averaged) versions of the above variables. We examine the ability of the models to account for experimental data and to fulfill expectations derived from theoretical considerations. We further discuss their relations to teacher-based rules (supervised learning) and reward-based rules (reinforcement learning). All models discussed in this paper are suitable for large-scale network simulations.
Spike-timing dependent plasticity; Short term plasticity; Modeling; Simulation; Learning
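As one concrete example of the short-term plasticity models reviewed here, a hedged sketch of a Tsodyks-Markram-style synapse: each presynaptic spike uses a fraction u of the available resources x, giving depression through resource depletion and facilitation through transiently increased utilization. The parameters and the exact update order are assumptions:

    import numpy as np

    tau_rec, tau_fac, U = 0.2, 0.6, 0.2   # recovery, facilitation constants, baseline use (assumed)

    def tm_synapse(spike_times):
        """Relative PSC amplitude u*x at each presynaptic spike."""
        x, u, t_last, amps = 1.0, U, None, []
        for t in spike_times:
            if t_last is not None:
                d = t - t_last
                x = 1.0 - (1.0 - x) * np.exp(-d / tau_rec)   # resources recover toward 1
                u = U + (u - U) * np.exp(-d / tau_fac)       # utilization decays toward U
            u = u + U * (1.0 - u)        # each spike increases utilization (facilitation)
            amps.append(u * x)           # transmitted amplitude
            x = x * (1.0 - u)            # resources consumed by this spike (depression)
            t_last = t
        return amps

    print(np.round(tm_synapse(np.arange(0.0, 0.5, 0.05)), 3))   # response to a 20 Hz train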
We demonstrate bistable attractor dynamics in a spiking neural network implemented with neuromorphic VLSI hardware. The on-chip network consists of three interacting populations (two excitatory, one inhibitory) of leaky integrate-and-fire (LIF) neurons. One excitatory population is distinguished by strong synaptic self-excitation, which sustains meta-stable states of “high” and “low” firing activity. Depending on the overall excitability, transitions to the “high” state may be evoked by external stimulation, or may occur spontaneously due to random activity fluctuations. In the former case, the “high” state retains a “working memory” of a stimulus until well after its release. In the latter case, “high” states remain stable for seconds, three orders of magnitude longer than the longest time-scale implemented in the circuitry. Evoked and spontaneous transitions form a continuum and may exhibit a wide range of latencies, depending on the strength of external stimulation and of recurrent synaptic excitation. In addition, we investigated “corrupted” “high” states comprising neurons of both excitatory populations. Within a “basin of attraction,” the network dynamics “corrects” such states and re-establishes the prototypical “high” state. We conclude that, with effective theoretical guidance, full-fledged attractor dynamics can be realized with comparatively small populations of neuromorphic hardware neurons.
attractor dynamics; neuromorphic chips; working memory; stochastic dynamics; spiking neurons
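A software caricature of the bistable dynamics described above (not the VLSI implementation): a single self-excited population with a sigmoidal gain has two stable activity states, and noise drives spontaneous transitions between them. The gain shape, recurrent weight, and noise level are assumed:

    import numpy as np

    rng = np.random.default_rng(11)
    dt, tau, steps = 1e-3, 20e-3, 100_000
    w_self, noise_sd = 1.1, 0.35          # recurrent self-excitation and noise level (assumed)

    def gain(x):
        """Sigmoidal population gain function (assumed shape)."""
        return 1.0 / (1.0 + np.exp(-8.0 * (x - 0.5)))

    r, high_time = 0.1, 0
    for _ in range(steps):
        r += dt * (-r + gain(w_self * r)) / tau + noise_sd * np.sqrt(dt / tau) * rng.standard_normal()
        high_time += r > 0.5                # time spent in the "high" attractor
    print("fraction of time in the 'high' state:", high_time / steps)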
We consider the problem of reconstructing finite energy stimuli encoded with a population of spiking leaky integrate-and-fire neurons. The reconstructed signal satisfies a consistency condition: when passed through the same neuron, it triggers the same spike train as the original stimulus. The recovered stimulus also has to minimize a quadratic smoothness optimality criterion. We formulate the reconstruction as a spline interpolation problem for scalar as well as vector-valued stimuli and show that the recovery has a unique solution. We provide explicit reconstruction algorithms for stimuli encoded with a single as well as a population of integrate-and-fire neurons. We demonstrate how our reconstruction algorithms can be applied to stimuli encoded with ON-OFF neural circuits with feedback. Finally, we extend the formalism to multi-input multi-output neural circuits and demonstrate that vector-valued finite energy signals can be efficiently encoded by a neural population provided that its size is beyond a threshold value. Examples are given that demonstrate the potential applications of our methodology to systems neuroscience and neuromorphic engineering.
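For an ideal integrate-and-fire neuron, the consistency condition underlying this reconstruction can be written as a set of linear measurements of the stimulus u; a hedged sketch in generic t-transform notation, with bias b, integration constant C, and threshold \delta as assumed symbols:

    \int_{t_k}^{t_{k+1}} u(s)\,ds \;=\; C\,\delta \;-\; b\,(t_{k+1} - t_k), \qquad k = 1, \dots, n-1.

Each interspike interval thus contributes one linear constraint on u, and the reconstruction selects the smoothest signal (in the quadratic criterion above) satisfying all of them; for the leaky neuron the integral acquires an exponential weighting kernel.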
We constructed a simulated spiking neural network model to investigate the effects of random background stimulation on the dynamics of network activity patterns and tetanus induced network plasticity. The simulated model was a “leaky integrate-and-fire” (LIF) neural model with spike-timing-dependent plasticity (STDP) and frequency-dependent synaptic depression. Spontaneous and evoked activity patterns were compared with those of living neuronal networks cultured on multi-electrode arrays. To help visualize activity patterns and plasticity in our simulated model, we introduced new population measures called Center of Activity (CA) and Center of Weights (CW) to describe the spatio-temporal dynamics of network-wide firing activity and network-wide synaptic strength, respectively. Without random background stimulation, the network synaptic weights were unstable and often drifted after tetanization. In contrast, with random background stimulation, the network synaptic weights remained close to their values immediately after tetanization. The simulation suggests that the effects of tetanization on network synaptic weights were difficult to control because of ongoing synchronized spontaneous bursts of action potentials, or “barrages.” Random background stimulation helped maintain network synaptic stability after tetanization by reducing the number and thus the influence of spontaneous barrages. We used our simulated network to model the interaction between ongoing neural activity, external stimulation and plasticity, and to guide our choice of sensory-motor mappings for adaptive behavior in hybrid neural-robotic systems or “hybrots.”
Cultured neural network; spike-timing-dependent plasticity (STDP); frequency-dependent depression; multi-electrode array (MEA); spatio-temporal dynamics; tetanization; model; plasticity; cortex; bursting; population coding
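The Center of Activity measure admits a one-line definition: the firing-rate-weighted mean of electrode positions (the Center of Weights is the analogous weighted mean over synaptic strengths). A sketch with an assumed 8x8 array layout:

    import numpy as np

    def center_of_activity(rates, positions):
        """Center of Activity: the firing-rate-weighted mean electrode position.
        rates: (n,) spike counts or rates; positions: (n, 2) electrode coordinates."""
        w = rates / rates.sum()
        return w @ positions

    # toy 8x8 multi-electrode array with activity concentrated near one corner (assumed)
    xs, ys = np.meshgrid(np.arange(8), np.arange(8))
    pos = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    r = np.exp(-((pos[:, 0] - 6.0) ** 2 + (pos[:, 1] - 1.0) ** 2) / 4.0)
    print(center_of_activity(r, pos))   # close to (6, 1)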
Correlations between spike trains of neurons can indicate neural coding rules in the visual system. In this paper, the relationship between spike timing correlation and spike pattern correlation is discussed, and their ability to represent stimulus features is compared, to examine the coding strategies not only of individual neurons but also of populations. Two kinds of stimuli, natural movies and checkerboards, are used to evoke firing activity in chicken retinal ganglion cells. The spike timing correlation and the pattern correlation are calculated by the cross-correlation function and the Lempel–Ziv distance, respectively. The correlation values demonstrate that spike trains with similar spike patterns are not necessarily concerted in firing time. Moreover, spike pattern correlation values between individual neurons’ responses reflect the difference between natural movies and checkerboards; neurons cooperate with each other with higher pattern correlation values, representing spatiotemporal correlations, during responses to natural movies. Spike timing does not reflect stimulus features as clearly as spike patterns do, owing to their particular coding properties or physiological foundations. As a result, separating pattern correlation from the traditional timing correlation concept uncovers additional insight into neural coding.
Neural coding; Spike pattern correlation; Spike timing correlation; Cross-correlation; Lempel–Ziv distance
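A sketch of the spike-timing side of such an analysis: the normalized cross-correlation function between two binned spike trains, evaluated over a range of lags. The bin size, lag range, and toy data are assumptions:

    import numpy as np

    def cross_correlation(spikes_a, spikes_b, bin_size=1e-3, max_lag=50, T=10.0):
        """Normalized cross-correlation of two spike trains at integer-bin lags."""
        n = int(T / bin_size)
        a, _ = np.histogram(spikes_a, bins=n, range=(0, T))
        b, _ = np.histogram(spikes_b, bins=n, range=(0, T))
        a = a - a.mean()
        b = b - b.mean()
        lags = np.arange(-max_lag, max_lag + 1)
        cc = [np.dot(a[max(0, -l):n - max(0, l)], b[max(0, l):n - max(0, -l)]) for l in lags]
        return lags * bin_size, np.array(cc) / (np.std(a) * np.std(b) * n)

    rng = np.random.default_rng(8)
    s1 = np.sort(rng.uniform(0, 10.0, 400))
    s2 = np.clip(s1 + 0.002 + 0.001 * rng.standard_normal(400), 0, 10.0)  # s2 lags by ~2 ms
    lags, cc = cross_correlation(s1, s2)
    print(lags[np.argmax(cc)])   # peak near +0.002 s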
Temporal integration of input is essential to the accumulation of information in various cognitive and behavioral processes, and gradually increasing neuronal activity, typically occurring within a range of seconds, is considered to reflect such computation by the brain. Some psychological evidence suggests that temporal integration by the brain is nearly perfect, that is, the integration is non-leaky, and the output of a neural integrator is accurately proportional to the strength of input. Neural mechanisms of perfect temporal integration, however, remain largely unknown. Here, we propose a recurrent network model of cortical neurons that perfectly integrates partially correlated, irregular input spike trains. We demonstrate that the rate of this temporal integration changes proportionately to the probability of spike coincidences in synaptic inputs. We analytically prove that this highly accurate integration of synaptic inputs emerges from integration of the variance of the fluctuating synaptic inputs, when their mean component is kept constant. Highly irregular neuronal firing and spike coincidences are the major features of cortical activity, but they have been separately addressed so far. Our results suggest that the efficient protocol of information integration by cortical networks essentially requires both features and hence is heterotic.
Spikes are the words that neurons use for communicating with one another through their networks. While individual cortical neurons generate highly irregular spike trains, coincidently arriving spikes are considered to exert a strong impact on postsynaptic-cell firing and hence to play an active role in neural information processing. However, little is known about whether computations by the brain benefit from such coincident spikes. Here, we show in a recurrent network model that coincident spikes embedded in random spike trains provide a neural code useful for highly accurate temporal integration of external input. In fact, the proposed neural integration is almost perfectly accurate in the mathematical sense. A wide range of cognitive behavior relies on temporal integration. For instance, it is a central player in sensory discrimination tasks and interval timing perception. Our model provides a neural basis for the quantitative understanding of animals' decision behavior. In addition, it may account for why cortical activity shows a heterotic feature with irregular firing and synchronous spikes.
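The variance-coded integration described above relies on fluctuation-driven firing: with the mean input clamped below threshold, the output rate is governed almost entirely by the input variance. A hedged toy demonstration with a leaky integrate-and-fire neuron, all parameters assumed:

    import numpy as np

    rng = np.random.default_rng(12)
    dt, tau, theta, T = 1e-4, 20e-3, 1.0, 20.0   # step, membrane constant, threshold, duration

    def rate_for_sd(sigma, mu=0.8):
        """Firing rate of a fluctuation-driven LIF neuron: the mean input `mu` is
        clamped below threshold, so the rate is controlled by the input SD `sigma`."""
        v, n = 0.0, 0
        for _ in range(int(T / dt)):
            v += dt * (mu - v) / tau + sigma * np.sqrt(2.0 * dt / tau) * rng.standard_normal()
            if v >= theta:
                n, v = n + 1, 0.0
        return n / T

    for sigma in (0.1, 0.2, 0.3):                # same mean, increasing variance
        print(sigma, round(rate_for_sd(sigma), 1), "Hz")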
In vivo studies have shown that neurons in the neocortex can generate action potentials at high temporal precision. The mechanisms controlling timing and reliability of action potential generation in neocortical neurons, however, are still poorly understood. Here we investigated the temporal precision and reliability of spike firing in cortical layer V pyramidal cells at near-threshold membrane potentials. Timing and reliability of spike responses were a function of EPSC kinetics, temporal jitter of population excitatory inputs, and of background synaptic noise. We used somatic current injection to mimic population synaptic input events and measured spike probability and spike time precision (STP), the latter defined as the time window (Δt) holding 80% of response spikes. EPSC rise and decay times were varied over the known physiological spectrum. At spike threshold level, EPSC decay time had a stronger influence on STP than rise time. Generally, STP was highest (≤2.45 ms) in response to synchronous compounds of EPSCs with fast rise and decay kinetics. Compounds with slow EPSC kinetics (decay time constants > 6 ms) triggered spikes at lower temporal precision (≥6.58 ms). We found an overall linear relationship between STP and spike delay. The difference in STP between fast and slow compound EPSCs could be reduced by incrementing the amplitude of slow compound EPSCs. The introduction of a temporal jitter to compound EPSCs had a comparatively small effect on STP, with a tenfold increase in jitter resulting in only a fivefold decrease in STP. In the presence of simulated synaptic background activity, precisely timed spikes could still be induced by fast EPSCs, but not by slow EPSCs.
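The STP measure defined above (the shortest time window holding 80% of response spikes) can be computed directly from spike times pooled across trials; a brief sketch with assumed toy data:

    import numpy as np

    def spike_time_precision(spike_times, fraction=0.8):
        """Shortest time window holding `fraction` of the response spikes,
        following the STP definition given above."""
        t = np.sort(np.asarray(spike_times))
        k = int(np.ceil(fraction * t.size))        # spikes that must fall in the window
        return np.min(t[k - 1:] - t[:t.size - k + 1])

    rng = np.random.default_rng(9)
    jittered = 0.010 + 0.001 * rng.standard_normal(100)   # spikes jittered ~1 ms around 10 ms
    print(spike_time_precision(jittered))                  # a window of roughly 2-3 ms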