The timing of action potentials in spiking neurons depends on the temporal dynamics of their inputs and contains information about temporal fluctuations in the stimulus. Leaky integrate-and-fire neurons constitute a popular class of encoding models, in which spike times depend directly on the temporal structure of the inputs. However, optimal decoding rules for these models have only been studied explicitly in the noiseless case. Here, we study decoding rules for probabilistic inference of a continuous stimulus from the spike times of a population of leaky integrate-and-fire neurons with threshold noise. We derive three algorithms for approximating the posterior distribution over stimuli as a function of the observed spike trains. In addition to a reconstruction of the stimulus, we thus obtain an estimate of its uncertainty. Furthermore, we derive a ‘spike-by-spike’ online decoding scheme that recursively updates the posterior with the arrival of each new spike. We use these decoding rules to reconstruct time-varying stimuli represented by a Gaussian process from spike trains of single neurons as well as neural populations.
Bayesian decoding; population coding; spiking neurons; approximate inference
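The encoding model underlying this abstract, a leaky integrate-and-fire neuron whose threshold is redrawn after each spike, can be sketched in a few lines. All parameter values below (membrane time constant, threshold statistics, drive strength) are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_spike_times(stim, dt=1e-3, tau=0.02, theta=1.0, sigma=0.1):
    """LIF neuron with threshold noise: after each spike the threshold
    is redrawn from N(theta, sigma^2)."""
    v = 0.0
    thresh = theta + sigma * rng.standard_normal()
    spikes = []
    for i, s in enumerate(stim):
        v += dt * (-v / tau + s)           # leaky integration of the input
        if v >= thresh:                    # threshold crossing -> spike
            spikes.append(i * dt)
            v = 0.0                        # reset membrane potential
            thresh = theta + sigma * rng.standard_normal()  # new noisy threshold
    return np.array(spikes)

# constant drive strong enough to make the neuron fire repeatedly
stim = np.full(2000, 80.0)                 # 2 s of input at dt = 1 ms
spikes = lif_spike_times(stim)
```

The threshold noise makes the interspike intervals jitter around the deterministic value, which is exactly what makes the decoding problem in the abstract probabilistic.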
The impulse discharge of single on-off neurons and a graded field potential, the proximal negative response (PNR), were simultaneously recorded with an extracellular microelectrode in the inner frog retina. Normalized amplitude-intensity functions for the on-response of the PNR and the neuron's post-stimulus time histogram (PSTH) were nearly coincident and typically showed a dynamic range spanning approximately 2 log units of intensity. Thus a nearly linear relation is found between the amplitude of the PNR and the neuron's PSTH. A neuron's PSTH amplitude and maximum instantaneous frequency of discharge were usually highly correlated, but occasional marked disparities indicate that temporal jitter of the first spike latency is an additional, relatively independent variable influencing PSTH amplitude. It typically changes by a factor of 20–30 over the intensity range. These and other findings have implications for the functional significance of the PNR and the PSTH, for a possible linear link between amacrine and on-off ganglion cells, and for a mechanism of intensity coding in which temporal jitter of latency exerts a major role.
The striatal medium spiny neuron (MSN) network is sparsely connected by fairly weak GABAergic collaterals and receives an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTHs) of MSN population responses investigated in various experimental studies display strong firing-rate modulations distributed throughout behavioral task epochs. In previous work we showed by numerical simulation that sparse random networks of inhibitory spiking neurons, with characteristics appropriate for UP-state MSNs, form cell assemblies that fire together coherently in sequences on long, behaviorally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell-assembly dynamics are still observed when the input is sufficiently weak. If cortical excitation strength is increased, however, more regularly firing and completely quiescent cells appear, depending on the cortical stimulation. We then extend the previous work further to consider what happens when the excitatory input varies, as it would while the animal is engaged in behavior, and investigate how sudden switches in excitation interact with network-generated patterned activity. We show that sequences of cell-assembly activations can be locked to the excitatory input sequence and outline the range of parameters in which this behavior occurs. Model cell-population PSTHs display both stimulus and temporal specificity, with large population firing-rate modulations locked to the time elapsed since task events. Thus the random network can generate a large diversity of temporally evolving, stimulus-dependent responses even though the input is fixed between switches. We suggest that the MSN network is well suited to the generation of such slow, coherent, task-dependent responses, which could be utilized by the animal in behavior.
striatum; computational modeling; inhibition; medium spiny neuron; cell assembly; population dynamics; spiking network
Recording single-neuron activity from a specific brain region across multiple trials in response to the same stimulus or execution of the same behavioral task is a common neurophysiology protocol. The raster plots of the spike trains often show strong between-trial and within-trial dynamics, yet the standard analysis of these data with the peristimulus time histogram (PSTH) and ANOVA does not consider between-trial dynamics. By itself, the PSTH does not provide a framework for statistical inference. We present a state-space generalized linear model (SS-GLM) to formulate a point process representation of between-trial and within-trial neural spiking dynamics. Our model has the PSTH as a special case. We provide a framework for model estimation, model selection, goodness-of-fit analysis, and inference. In an analysis of hippocampal neural activity recorded from a monkey performing a location-scene association task, we demonstrate how the SS-GLM may be used to answer frequently posed neurophysiological questions, including: What is the nature of the between-trial and within-trial task-specific modulation of the neural spiking activity? How can we characterize learning-related neural dynamics? What are the timescales and characteristics of the neuron’s biophysical properties? Our results demonstrate that the SS-GLM is a more informative tool than the PSTH and ANOVA for the analysis of multiple-trial neural responses, and that it provides a quantitative characterization of the between-trial and within-trial neural dynamics readily visible in raster plots, as well as the less apparent fast (1–10 ms), intermediate (11–20 ms), and longer (>20 ms) timescale features of the neuron’s biophysical properties.
Most neurons in the primary visual cortex initially respond vigorously when a preferred stimulus is presented, but adapt as stimulation continues. The functional consequences of adaptation are unclear. Typically a reduction of firing rate would reduce single-neuron accuracy, as fewer spikes are available for decoding, but it has been suggested that at the population level adaptation increases coding accuracy. This question requires careful analysis, as adaptation not only changes the firing rates of neurons, but also the neural variability and the correlations between neurons, which affect coding accuracy as well. We calculate the coding accuracy using a computational model that implements two forms of adaptation: spike-frequency adaptation and synaptic adaptation in the form of short-term synaptic plasticity. We find that the net effect of adaptation is subtle and heterogeneous. Depending on the adaptation mechanism and the test stimulus, adaptation can either increase or decrease coding accuracy. We discuss the neurophysiological and psychophysical implications of these findings and relate them to published experimental data.
Visual adaptation; Primary visual cortex; Population coding; Fisher Information; Cortical circuit; Computational model; Short-term synaptic depression; Spike-frequency adaptation
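The interplay of tuning and correlations discussed above is often quantified by the linear Fisher information, J = f'(s)ᵀ C⁻¹ f'(s), where f'(s) holds the tuning-curve derivatives and C the noise covariance. A minimal numerical sketch; the derivatives and covariances below are made-up numbers, not the circuit model of the paper:

```python
import numpy as np

# Hypothetical tuning-curve derivatives f'(s) for a 3-neuron population
fprime = np.array([1.0, 0.5, -0.8])
C_indep = np.diag([0.5, 0.5, 0.5])                       # independent noise
C_corr = C_indep + 0.2 * (np.ones((3, 3)) - np.eye(3))   # add pairwise correlations

def linear_fisher(fprime, C):
    """Linear Fisher information J = f'(s)^T C^{-1} f'(s)."""
    return fprime @ np.linalg.solve(C, fprime)

J_indep = linear_fisher(fprime, C_indep)
J_corr = linear_fisher(fprime, C_corr)
```

Whether correlations raise or lower J depends on how the correlation structure aligns with the tuning derivatives, which is why adaptation-induced changes in both rates and correlations must be evaluated jointly.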
Spike timing-dependent plasticity (STDP) has been shown to enable single neurons to detect repeatedly presented spatiotemporal spike patterns. This holds even when such patterns are embedded in equally dense random spiking activity, that is, in the absence of external reference times such as a stimulus onset. Here we demonstrate, both analytically and numerically, that STDP can also learn repeating rate-modulated patterns, which have received more experimental evidence, for example, through post-stimulus time histograms (PSTHs). Each input spike train is generated from a rate function using a stochastic sampling mechanism, chosen to be an inhomogeneous Poisson process here. Learning is feasible provided significant covarying rate modulations occur within the typical timescale of STDP (∼10–20 ms) for sufficiently many inputs (∼100 among 1000 in our simulations), a condition that is met by many experimental PSTHs. Repeated pattern presentations induce spike-time correlations that are captured by STDP. Despite imprecise input spike times and even variable spike counts, a single trained neuron robustly detects the pattern just a few milliseconds after its presentation. Therefore, temporal imprecision and Poisson-like firing variability are not an obstacle to fast temporal coding. STDP provides an appealing mechanism to learn such rate patterns, which, beyond sensory processing, may also be involved in many cognitive tasks.
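The stochastic sampling mechanism described here, an inhomogeneous Poisson process driven by a rate function, is commonly simulated by thinning. A sketch with an arbitrary 50 Hz rate modulation (period 20 ms, matching the STDP timescale mentioned above); all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def inhomogeneous_poisson(rate_fn, t_max, rate_max):
    """Sample spike times from an inhomogeneous Poisson process by
    thinning: draw candidate events at rate_max, keep each with
    probability rate_fn(t) / rate_max."""
    t, spikes = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)   # next candidate event
        if t > t_max:
            break
        if rng.random() < rate_fn(t) / rate_max:
            spikes.append(t)
    return np.array(spikes)

# rate modulation on the ~20 ms timescale relevant for STDP
rate = lambda t: 20.0 + 15.0 * np.sin(2 * np.pi * 50.0 * t)
spikes = inhomogeneous_poisson(rate, t_max=10.0, rate_max=35.0)
```

Each afferent in the model would receive an independent train drawn this way from its own rate function.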
In vivo neural responses to stimuli are known to show considerable variability across trials. If the same number of spikes is emitted from trial to trial, the neuron is said to be reliable. If the timing of those spikes is roughly preserved across trials, the neuron is said to be precise. Here we demonstrate both analytically and numerically that the well-established Hebbian learning rule of spike-timing-dependent plasticity (STDP) can learn response patterns despite relatively low reliability (Poisson-like variability) and low temporal precision (10–20 ms). These features are in line with many experimental observations, in which a poststimulus time histogram (PSTH) is evaluated over multiple trials. In our model, however, information is extracted from the relative spike times between afferents, without the need for an absolute reference time such as a stimulus onset. Notably, recent experiments show that relative timing is often more informative than absolute timing. Furthermore, the scope of application for our study is not restricted to sensory systems. Taken together, our results suggest a fine temporal resolution for the neural code, and that STDP is an appropriate candidate for encoding and decoding such activity.
As multi-electrode and imaging technology begin to provide us with simultaneous recordings of large neuronal populations, new methods for modeling such data must also be developed. Here, we present a model for the type of data commonly recorded in early sensory pathways: responses to repeated trials of a sensory stimulus in which each neuron has its own time-varying spike rate (as described by its PSTH) and the dependencies between cells are characterized by both signal and noise correlations. This model is an extension of previous attempts to model population spike trains designed to control only the total correlation between cells. In our model, the response of each cell is represented as a binary vector given by the dichotomized sum of a deterministic “signal” that is repeated on each trial and a Gaussian random “noise” that is different on each trial. This model allows the simulation of population spike trains with PSTHs, trial-to-trial variability, and pairwise correlations that match those measured experimentally. Furthermore, the model also allows the noise correlations in the spike trains to be manipulated independently of the signal correlations and single-cell properties. To demonstrate the utility of the model, we use it to simulate and manipulate experimental responses from the mammalian auditory and visual systems. We also present a general form of the model in which both the signal and noise are Gaussian random processes, allowing the mean spike rate, trial-to-trial variability, and pairwise signal and noise correlations to be specified independently. Together, these methods for modeling spike trains comprise a potentially powerful set of tools for both theorists and experimentalists studying population responses in sensory systems.
population; correlation; noise correlation; simulation; model
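The dichotomized-sum construction described above can be sketched directly: a repeated deterministic signal plus fresh correlated Gaussian noise on each trial, thresholded at zero to give binary responses. The signal shape, correlation value, and population size below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

n_cells, n_bins, n_trials = 3, 200, 50
# deterministic "signal", identical on every trial (hypothetical PSTH-like trace)
signal = 0.5 * np.sin(2 * np.pi * np.arange(n_bins) / 50.0)[None, :] * np.ones((n_cells, 1))
# noise covariance across cells: unit variance, pairwise correlation 0.3
C = 0.3 + 0.7 * np.eye(n_cells)
L = np.linalg.cholesky(C)

# binary response: 1 wherever signal + correlated Gaussian noise crosses 0
trials = np.empty((n_trials, n_cells, n_bins), dtype=int)
for k in range(n_trials):
    noise = L @ rng.standard_normal((n_cells, n_bins))   # fresh noise each trial
    trials[k] = (signal + noise > 0).astype(int)

psth = trials.mean(axis=0)   # trial-averaged firing probability per bin
```

Because the signal and the noise covariance are specified separately, noise correlations can be changed (via C) without touching the PSTHs, which is the manipulation the abstract highlights.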
At the single-neuron level, precisely timed spikes can either constitute firing-rate codes or spike-pattern codes that utilize the relative timing between consecutive spikes. There has been little experimental support for the hypothesis that such temporal patterns contribute substantially to information transmission. By using grasshopper auditory receptors as a model system, we show that correlations between spikes can be used to represent behaviorally relevant stimuli. The correlations reflect the inner structure of the spike train: a succession of burst-like patterns. We demonstrate that bursts with different spike counts encode different stimulus features, such that about 20% of the transmitted information corresponds to discriminating between different features, and the remaining 80% is used to allocate these features in time. In this spike-pattern code, the what and the when of the stimuli are encoded in the duration of each burst and the time of burst onset, respectively. Given the ubiquity of burst firing, we expect similar findings also for other neural systems.
burst spiking; neural code; sensory encoding; information theory; auditory receptor
Interpreting messages encoded in single neuronal responses requires knowing which features of the responses carry information. That the number of spikes is an important part of the code has long been obvious. In recent years, it has been shown that modulation of the firing rate with about 25 ms precision carries information that is not available from the total number of spikes across the whole response. It has been proposed that patterns of exactly timed (1 ms precision) spikes, such as repeating triplets or quadruplets, might carry information that is not available from knowing about spike count and rate modulation. A model using the spike count distribution, the low-pass filtered PSTH (bandwidth below 30 Hz), and, to a small degree, the interspike interval distribution predicts numbers and types of exactly timed triplets and quadruplets that are indistinguishable from those found in the data. From this it can be concluded that the coarse (<30 Hz) sequential correlation structure over time gives rise to the exactly timed patterns present in the recorded spike trains. Because the coarse temporal structure predicts the fine temporal structure, the information carried by the fine temporal structure must be completely redundant with that carried by the coarse structure. Thus, the existence of precisely timed spike patterns carrying stimulus-related information does not imply control of spike timing at precise time scales.
During an inspiration the output of hypoglossal (XII) motoneurons (HMs) in vitro is characterized by synchronous oscillatory firing in the 20 to 40 Hz range. In order to maintain synchronicity it is important that the cells fire with high reliability and precision. It is not known whether the intrinsic properties of HMs are tuned to maintain synchronicity when stimulated with time-varying inputs. We intracellularly recorded from HMs in an in vitro brainstem slice preparation from juvenile mice. Cells were held at or near spike threshold and were stimulated with steady or swept (ZAP) sine wave current functions (10 s duration; 0-40 Hz range). Peri-stimulus time histograms (PSTHs) were constructed from spike times based on threshold crossings. Synaptic transmission was suppressed by including blockers of GABAergic, glycinergic and glutamatergic neurotransmission in the bath solution. Cells responded to sine wave stimulation with bursts of action potentials at low (<3-5 Hz) sine wave frequencies, while they phase-locked 1:1 to the stimulus at intermediate frequencies (3-25 Hz). Beyond the 1:1 frequency range, cells were able to phase-lock to sub-harmonics (1:2, 1:3 or 1:4) of the input frequency. The 1:1 phase-locking range increased with increasing stimulus amplitude and membrane depolarization. Reliability and spike timing precision were highest when the cells phase-locked 1:1 to the stimulus.
Our findings suggest that the coding of time-varying inspiratory synaptic inputs by individual HMs is most reliable and precise at frequencies that are generally lower than the frequency of the synchronous inspiratory oscillatory activity recorded from the XII nerve.
respiration; motoneuron; patch-clamp
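Constructing a PSTH from threshold crossings over repeated trials, as described above, can be sketched on synthetic data. The toy "voltage" model (a noisy sine crossing a fixed threshold) and all parameters here are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)

dt, t_max, f = 1e-3, 2.0, 10.0            # 2 s of a 10 Hz sine stimulus
t = np.arange(0.0, t_max, dt)

def trial_spikes():
    """Toy threshold-crossing model: a noisy, sine-driven trace 'spikes'
    at upward crossings of a fixed threshold (hypothetical parameters)."""
    v = np.sin(2 * np.pi * f * t) + 0.3 * rng.standard_normal(t.size)
    above = v > 0.9
    crossings = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return t[crossings]

# peri-stimulus time histogram over repeated trials, 10 ms bins
edges = np.arange(0.0, t_max + 0.01, 0.01)
counts = np.zeros(edges.size - 1)
n_trials = 100
for _ in range(n_trials):
    counts += np.histogram(trial_spikes(), bins=edges)[0]
psth_rate = counts / (n_trials * 0.01)     # spikes/s per bin
```

Peaks of the PSTH near the sine maxima reflect phase locking; spike-time precision would show up as the sharpness of those peaks.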
This report introduces a system for the objective physiological classification of single-unit activity in the anteroventral cochlear nucleus (AVCN) of anesthetized CBA/129 and CBA/CaJ mice. As in previous studies, the decision criteria are based on the temporal properties of responses to short tone bursts that are visualized in the form of peri-stimulus time histograms (PSTHs). Individual unit types are defined by the statistical distribution of quantifiable metrics that relate to the onset latency, regularity, and adaptation of sound-driven discharge rates. Variations of these properties reflect the unique synaptic organizations and intrinsic membrane properties that dictate the selective tuning of sound coding in the AVCN. When these metrics are applied to the mouse AVCN, responses to best frequency (BF) tones reproduce the major PSTH patterns that have been previously demonstrated in other mammalian species. The consistency of response types in two genetically diverse strains of laboratory mice suggests that the present classification system is appropriate for additional strains with normal peripheral function. The general agreement of the present findings with established classifications validates laboratory mice as an adequate model for general principles of mammalian sound coding. Nevertheless, important differences are noted in the reliability of specialized endbulb transmission within the AVCN, suggesting less secure temporal coding in this high-frequency species.
peri-stimulus time histograms; onset latency; regularity analysis; prepotential; rate adaptation
In simulating realistic neuronal circuitry composed of diverse types of neurons, we need an elemental spiking neuron model that is capable of not only quantitatively reproducing the spike times of biological neurons given in vivo-like fluctuating inputs, but also qualitatively representing a variety of firing responses to transient current inputs. Simplistic models based on the leaky integrate-and-fire mechanism have demonstrated the ability to be fitted closely to biological neurons. In particular, the multi-timescale adaptive threshold (MAT) model reproduces and predicts precise spike times of regular-spiking, intrinsic-bursting, and fast-spiking neurons under any fluctuating current; however, this model is incapable of reproducing such specific firing responses as inhibitory rebound spiking and resonate spiking. In this paper, we augment the MAT model by adding a voltage-dependency term to the adaptive threshold, so that the model can exhibit the full variety of firing responses to various transient current pulses while maintaining the high adaptability inherent in the original MAT model. Furthermore, with this addition, our model is actually able to better predict spike times. Despite the augmentation, the model has only four free parameters and, owing to its linearity, is implementable in an efficient algorithm for large-scale simulation, serving as an elemental neuron model in the simulation of realistic neuronal circuitry.
spiking neuron model; predicting spike times; reproducing firing patterns; leaky integrate-and-fire model; adaptive threshold; MAT model; voltage dependency; threshold variability
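The core mechanism of the original MAT model, a non-resetting membrane potential compared against a threshold that jumps after each spike and relaxes on multiple timescales, can be sketched as follows. The amplitudes, time constants, and drive are illustrative assumptions, not the fitted parameters of the paper, and the voltage-dependency extension is omitted:

```python
import numpy as np

def mat_spikes(current, dt=1e-3, tau_m=5e-3, R=50.0,
               omega=1.0, alphas=(0.3, 0.1), taus=(10e-3, 200e-3)):
    """Multi-timescale adaptive threshold (MAT) sketch: the membrane
    potential integrates the input without reset; each spike adds
    decaying components to the threshold on two timescales."""
    v, h = 0.0, np.zeros(len(alphas))      # potential and threshold components
    decay = np.exp(-dt / np.array(taus))
    spikes, refractory = [], 0
    for i, I in enumerate(current):
        v += dt * (-v + R * I) / tau_m     # leaky integration, never reset
        h *= decay                          # threshold components relax
        if refractory > 0:
            refractory -= 1
        elif v >= omega + h.sum():          # adaptive threshold crossed
            spikes.append(i * dt)
            h += np.array(alphas)           # jump both threshold components
            refractory = 2                  # 2 ms absolute refractory period
    return np.array(spikes)

spikes = mat_spikes(np.full(1000, 0.05))    # 1 s of constant drive
```

Under constant drive the interspike intervals lengthen as the slow threshold component accumulates, reproducing spike-frequency adaptation; the linearity of both equations is what keeps the model cheap in large-scale simulation.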
Use of spike timing to encode information requires that neurons respond with high temporal precision and with high reliability. Fast fluctuating stimuli are known to result in highly reproducible spike times across trials, whereas constant stimuli result in variable spike times. Here, we have investigated how spike-time reliability depends on the time scale of fluctuations of the input stimuli in real neurons (mitral cells in the olfactory bulb and pyramidal cells in the neocortex) as well as in neuron models (integrate-and-fire and Hodgkin-Huxley) with intrinsic noise. In all cases we found that, for firing frequencies in the beta/gamma range, spike reliability is maximal when the input includes fluctuations on the time scale of a few milliseconds (2-5 ms), coinciding with the time scale of fast synapses, and decreases substantially for faster and slower inputs. In addition, we show mathematically that the existence of an optimal time scale for spike-time reliability is a general feature of neurons. Finally, we comment on how these findings relate to the mechanisms that cause neuronal synchronization.
While oscillations of the local field potential (LFP) are commonly attributed to the synchronization of neuronal firing rate on the same time scale, their relationship to coincident spiking in the millisecond range is unknown. Here, we present experimental evidence to reconcile the notions of synchrony at the level of spiking and at the mesoscopic scale. We demonstrate that only in time intervals of significant spike synchrony that cannot be explained on the basis of firing rates, coincident spikes are better phase locked to the LFP than predicted by the locking of the individual spikes. This effect is enhanced in periods of large LFP amplitudes. A quantitative model explains the LFP dynamics by the orchestrated spiking activity in neuronal groups that contribute the observed surplus synchrony. From the correlation analysis, we infer that neurons participate in different constellations but contribute only a fraction of their spikes to temporally precise spike configurations. This finding provides direct evidence for the hypothesized relation that precise spike synchrony constitutes a major temporally and spatially organized component of the LFP.
motor cortex; oscillation; population signals; synchrony
Mathematical models of neurons are widely used to improve understanding of neuronal spiking behavior. These models can produce artificial spike trains that resemble actual spike train data in important ways, but they are not very easy to apply to the analysis of spike train data. Instead, statistical methods based on point process models of spike trains provide a wide range of data-analytical techniques. Two simplified point process models have been introduced in the literature: the time-rescaled renewal process (TRRP) and the multiplicative inhomogeneous Markov interval (m-IMI) model. In this letter we investigate the extent to which the TRRP and m-IMI models are able to fit spike trains produced by stimulus-driven leaky integrate-and-fire (LIF) neurons.
With a constant stimulus, the LIF spike train is a renewal process, and the m-IMI and TRRP models will describe accurately the LIF spike train variability. With a time-varying stimulus, the probability of spiking under all three of these models depends on both the experimental clock time relative to the stimulus and the time since the previous spike, but it does so differently for the LIF, m-IMI, and TRRP models. We assessed the distance between the LIF model and each of the two empirical models in the presence of a time-varying stimulus. We found that while lack of fit of a Poisson model to LIF spike train data can be evident even in small samples, the m-IMI and TRRP models tend to fit well, and much larger samples are required before there is statistical evidence of lack of fit of the m-IMI or TRRP models. We also found that when the mean of the stimulus varies across time, the m-IMI model provides a better fit to the LIF data than the TRRP, and when the variance of the stimulus varies across time, the TRRP provides the better fit.
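The goodness-of-fit comparisons above rest on the time-rescaling theorem: under the true conditional intensity, rescaled interspike intervals are unit-exponential. A self-contained numerical check on a known inhomogeneous Poisson train (rate function and durations are arbitrary choices, not the models of the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

# Inhomogeneous Poisson train with a known intensity, via a fine Bernoulli grid
dt, t_max = 1e-4, 100.0
t = np.arange(0.0, t_max, dt)
rate = 30.0 + 20.0 * np.sin(2 * np.pi * 2.0 * t)        # known intensity (Hz)
spikes = t[rng.random(t.size) < rate * dt]

# Time rescaling: z_k = Lambda(t_k) - Lambda(t_{k-1}) should be Exp(1),
# so u = 1 - exp(-z) should be Uniform(0, 1)
Lambda = np.cumsum(rate) * dt                           # integrated intensity
L_at_spikes = np.interp(spikes, t, Lambda)
z = np.diff(L_at_spikes)
u = 1.0 - np.exp(-z)

# Kolmogorov-Smirnov distance from the uniform CDF
u_sorted = np.sort(u)
grid = (np.arange(1, u.size + 1) - 0.5) / u.size
ks = np.abs(u_sorted - grid).max()
```

For a fitted model (Poisson, m-IMI, or TRRP), the same transformation with the model's intensity gives the KS statistic used to detect lack of fit.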
Temporal coding of spike-times using oscillatory mechanisms allied to spike-time dependent plasticity could represent a powerful mechanism for neuronal communication. However, it is unclear how temporal coding is constructed at the single neuronal level. Here we investigate a novel class of highly regular, metronome-like neurones in the rat brainstem which form a major source of cerebellar afferents. Stimulation of sensory inputs evoked brief periods of inhibition that interrupted the regular firing of these cells leading to phase-shifted spike-time advancements and delays. Alongside phase-shifting, metronome cells also behaved as band-pass filters during rhythmic sensory stimulation, with maximal spike-stimulus synchronisation at frequencies close to the idiosyncratic firing frequency of each neurone. Phase-shifting and band-pass filtering serve to temporally align ensembles of metronome cells, leading to sustained volleys of near-coincident spike-times, thereby transmitting synchronised sensory information to downstream targets in the cerebellar cortex.
Neurons in the ventral cochlear nucleus (VCN) that respond primarily at the onset of a pure tone stimulus show diversity in terms of peri-stimulus time histograms (PSTHs), rate-level functions, frequency tuning, and also their responses to broadband noise. A number of different mechanisms have been proposed as contributing to the onset characteristic: e.g. coincidence, depolarisation block, and low-threshold potassium currents. We show that a simple point neuron receiving convergent inputs from high-spontaneous-rate auditory nerve (AN) fibers, with no special currents and no peri-stimulatory shifts in firing threshold, is sufficient to produce much of the diversity seen experimentally. Three sub-classes of onset PSTHs, onset-ideal (OI), onset-chopper (OC) and onset-locker (OL), are reproduced by variations in innervation patterns and dendritic filtering. The factors shaping responses were explored by systematically varying key parameters. An OI response in this model requires a narrow range of AN input best frequencies (BF) which only produce supra-threshold depolarizations during the stimulus onset. For OC and OL responses, receptive fields were wider. Considerable low-pass filtering of AN inputs away from BF results in an OL response, whilst relatively unfiltered inputs produce an OC response. Rate-level functions in response to pure tones can be sloping or plateau-shaped. These, too, can be reproduced in the model by manipulation of the AN innervation. The model supports the coincidence detection hypothesis, and suggests that differences in excitatory innervation and dendritic filtering are important factors to consider when accounting for the variation in response characteristics seen in VCN onset units.
Onset; Stellate; Cochlear nucleus; Point-neuron; PSTH; Rate-level functions
Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming, both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present, the library uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models.
model fitting; electrophysiology; spiking models; simulation; GPU; distributed computing; adaptive threshold; optimization
Thalamic firing synchrony is thought to ensure selective transmission of relevant sensory information to the recipient cortical neurons by rendering them more responsive to temporally correlated input spikes. However, direct evidence for a synchrony code in the thalamus is limited. Here, we directly measure thalamic firing synchrony and its stimulus-induced modulation over time, using simultaneous single unit recordings from individual thalamic barreloids in the rat somatosensory whisker/barrel system. Employing whisker deflections varying in velocity or frequency and a cross-correlation approach, we find systematic changes in both time-course and strength of thalamic firing synchrony as a function of stimulus parameters and sensory adaptation. Synchrony develops faster and is greater with higher velocity deflections. Greater firing synchrony reflects stimulus-dependent increases in instantaneous firing rates, greater spike time precision relative to stimulus onset as well as common input that likely arises from divergent trigeminothalamic and corticothalamic neurons. With adaptation, synchrony decreases and takes longer to develop but is more dependent on the cells’ common inputs. Rapid, sharp increases in thalamic synchrony mirroring quick increases in whisker velocity occur also during ongoing random, high-frequency whisker vibrations. Together, results demonstrate millisecond by millisecond changes in thalamic near-synchronous firing during complex patterns of ongoing vibrissa movements that may ensure transmission of preferred sensory information in local thalamocortical circuits during whisking and active touch.
thalamocortical; firing synchrony; somatosensory; barreloid; barrel cortex; whiskers
Spike-frequency adaptation is known to enhance the transmission of information in sensory spiking neurons by rescaling the dynamic range for input processing, matching it to the temporal statistics of the sensory stimulus. Achieving maximal information transmission has also been recently postulated as a role for spike-timing-dependent plasticity (STDP). However, the link between optimal plasticity and STDP in cortex remains loose, as does the relationship between STDP and adaptation processes. We investigate how STDP, as described by recent minimal models derived from experimental data, influences the quality of information transmission in an adapting neuron. We show that a phenomenological model based on triplets of spikes yields almost the same information rate as an optimal model specially designed to this end. In contrast, the standard pair-based model of STDP does not improve information transmission as much. This result holds not only for additive STDP with hard weight bounds, known to produce bimodal distributions of synaptic weights, but also for weight-dependent STDP in the context of unimodal but skewed weight distributions. We analyze the similarities between the triplet model and the optimal learning rule, and find that the triplet effect is an important feature of the optimal model when the neuron is adaptive. If STDP is optimized for information transmission, it must take into account the dynamical properties of the postsynaptic cell, which might explain the target-cell specificity of STDP. In particular, it accounts for the differences found in vitro between STDP at excitatory synapses onto principal cells and those onto fast-spiking interneurons.
STDP; plasticity; spike-frequency adaptation; information theory; optimality
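The contrast drawn above between pair-based and triplet STDP can be sketched with spike traces in the spirit of phenomenological triplet models: depression uses a pair term at presynaptic spikes, while potentiation at postsynaptic spikes multiplies a presynaptic trace by a slow postsynaptic "triplet" trace. The time constants and amplitudes echo published fits but are used here purely for illustration:

```python
import numpy as np

def triplet_stdp(pre, post, dt=1e-3, a2_minus=7e-3, a3_plus=6.5e-3,
                 tau_plus=16.8e-3, tau_minus=33.7e-3, tau_y=114e-3):
    """Minimal triplet-STDP sketch with exponentially decaying traces.
    Traces are read *before* their own spike-triggered increments."""
    r1 = o1 = o2 = 0.0          # pre trace, fast post trace, slow post trace
    w = 0.5
    for i in range(max(len(pre), len(post))):
        r1 *= np.exp(-dt / tau_plus)
        o1 *= np.exp(-dt / tau_minus)
        o2 *= np.exp(-dt / tau_y)
        if i < len(pre) and pre[i]:
            w -= o1 * a2_minus          # pair-based depression
            r1 += 1.0
        if i < len(post) and post[i]:
            w += r1 * o2 * a3_plus      # triplet potentiation
            o1 += 1.0
            o2 += 1.0
    return w

# pre-before-post pairings at 10 ms lag, repeated at 20 Hz for 1 s
steps = 1000
pre = np.zeros(steps, bool); post = np.zeros(steps, bool)
pre[::50] = True                 # presynaptic spikes every 50 ms
post[10::50] = True              # postsynaptic spikes 10 ms later
w = triplet_stdp(pre, post)
```

Because potentiation scales with the slow trace o2, repeated pairings at high frequency potentiate more than isolated pairs, the frequency dependence that pair-based STDP cannot capture.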
Statistical dependencies in the responses of sensory neurons govern both the amount of stimulus information conveyed and the means by which downstream neurons can extract it. Although a variety of measurements indicate the existence of such dependencies [1–3], their origin and importance for neural coding are poorly understood. Here we analyse the functional significance of correlated firing in a complete population of macaque parasol retinal ganglion cells using a model of multi-neuron spike responses [4,5]. The model, with parameters fit directly to physiological data, simultaneously captures both the stimulus dependence and detailed spatio-temporal correlations in population responses, and provides two insights into the structure of the neural code. First, neural encoding at the population level is less noisy than one would expect from the variability of individual neurons: spike times are more precise, and can be predicted more accurately when the spiking of neighbouring neurons is taken into account. Second, correlations provide additional sensory information: optimal, model-based decoding that exploits the response correlation structure extracts 20% more information about the visual scene than decoding under the assumption of independence, and preserves 40% more visual information than optimal linear decoding [6]. This model-based approach reveals the role of correlated activity in the retinal coding of visual stimuli, and provides a general framework for understanding the importance of correlated activity in populations of neurons.
Reconstructing stimuli from the spike trains of neurons is an important approach for understanding the neural code. One of the difficulties associated with this task is that signals that vary continuously in time are encoded into sequences of discrete events or spikes. An important problem is to determine how much information about the continuously varying stimulus can be extracted from the time-points at which spikes were observed, especially if these time-points are subject to some form of randomness. For the special case of spike trains generated by leaky integrate and fire neurons, noise can be introduced by allowing variations in the threshold each time a spike is emitted. A simple decoding algorithm previously derived for the noiseless case can be extended to the stochastic case, but turns out to be biased. Here, we review a solution to this problem, presenting a simple yet efficient algorithm which greatly reduces the bias, and therefore leads to better decoding performance in the stochastic case.
decoding; spiking neurons; Bayesian inference; population coding; leaky integrate and fire
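The encoding model described above, a leaky integrate-and-fire neuron whose threshold is redrawn after every spike, can be sketched as follows. This is a minimal forward simulation only (not the decoding algorithm), and all parameter values are illustrative assumptions.

```python
import numpy as np

def lif_encode(stimulus, dt=0.001, tau=0.02, theta_mean=1.0, theta_std=0.05, seed=0):
    """Encode a sampled continuous stimulus into spike times with a LIF
    neuron subject to threshold noise (new random threshold per spike)."""
    rng = np.random.default_rng(seed)
    v = 0.0
    theta = theta_mean + theta_std * rng.standard_normal()
    spike_times = []
    for i, s in enumerate(stimulus):
        v += dt * (-v / tau + s)       # leaky integration of the input
        if v >= theta:                 # threshold crossing -> emit a spike
            spike_times.append(i * dt)
            v = 0.0                    # reset membrane potential
            theta = theta_mean + theta_std * rng.standard_normal()
    return spike_times

# A constant suprathreshold input yields a roughly regular spike train
# whose interval jitter reflects the threshold noise.
spikes = lif_encode(np.full(1000, 60.0))
```

Each observed spike time constrains the stimulus through the condition that the integrated, leaked input reached the (random) threshold exactly then, which is what the decoding rules invert probabilistically.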
Many physiological responses elicited by neuronal spikes—intracellular calcium transients, synaptic potentials, muscle contractions—are built up of discrete, elementary responses to each spike. However, the spikes occur in trains of arbitrary temporal complexity, and each elementary response not only sums with previous ones, but can itself be modified by the previous history of the activity. A basic goal in system identification is to characterize the spike-response transform in terms of a small number of functions—the elementary response kernel and additional kernels or functions that describe the dependence on previous history—that will predict the response to any arbitrary spike train. Here we do this by developing further and generalizing the “synaptic decoding” approach of Sen et al. (J Neurosci 16:6307-6318, 1996). Given the spike times in a train and the observed overall response, we use least-squares minimization to construct the best estimated response and at the same time best estimates of the elementary response kernel and the other functions that characterize the spike-response transform. We avoid the need for any specific initial assumptions about these functions by using techniques of mathematical analysis and linear algebra that allow us to solve simultaneously for all of the numerical function values treated as independent parameters. The functions are such that they may be interpreted mechanistically. We examine the performance of the method as applied to synthetic data. We then use the method to decode real synaptic and muscle contraction transforms.
Motor control; spike trains; synaptic transmission; synaptic plasticity; neuromuscular; nonlinear system identification; neurophysiological input-output transform; mathematical modeling
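The least-squares strategy described above can be illustrated in its simplest, history-free form: model the observed response as a sum of one elementary kernel per spike, treat the kernel's sample values as free parameters, and solve for them all simultaneously. This is a simplified sketch of that linear core (the full method also fits the history-dependence functions); names and sizes are illustrative assumptions.

```python
import numpy as np

def estimate_kernel(spike_bins, response, kernel_len):
    """Least-squares estimate of an elementary response kernel, with every
    kernel sample treated as an independent parameter."""
    n = len(response)
    # Design matrix: column j holds the spike train delayed by j bins,
    # so X @ k is the superposition of one kernel per spike.
    X = np.zeros((n, kernel_len))
    for j in range(kernel_len):
        X[j:, j] = spike_bins[: n - j]
    k, *_ = np.linalg.lstsq(X, response, rcond=None)
    return k

# Synthetic check: recover a known decaying kernel from a noisy response.
rng = np.random.default_rng(1)
true_k = np.exp(-np.arange(30) / 10.0)
spike_bins = (rng.random(500) < 0.05).astype(float)
response = np.convolve(spike_bins, true_k)[:500] + 0.01 * rng.standard_normal(500)
est_k = estimate_kernel(spike_bins, response, 30)
```

Because the kernel values enter linearly, no initial guess about the kernel's shape is needed, which mirrors the assumption-free character of the method described in the abstract.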
Experimental and computational studies emphasize the role of the millisecond precision of neuronal spike times as an important coding mechanism for transmitting and representing information in the central nervous system. We investigate the spike time precision of a multicompartmental pyramidal neuron model of the CA3 region of the hippocampus under the influence of various sources of neuronal noise. We describe differences in the contribution to noise originating from voltage-gated ion channels, synaptic vesicle release, and vesicle quantal size. We analyze the effect of interspike intervals and the voltage course preceding the firing of spikes on the spike-timing jitter. The main finding of this study is the ranking of different noise sources according to their contribution to spike time precision. The most influential is synaptic vesicle release noise, causing the spike jitter to vary from 1 ms to 7 ms around a mean value of 2.5 ms. Of second importance was the noise incurred by vesicle quantal size variation, causing the spike time jitter to vary from 0.03 ms to 0.6 ms. Least influential was the voltage-gated channel noise, generating spike jitter from 0.02 ms to 0.15 ms.
The contour of the postsynaptic potential (PSP) produced in a neurone by an afferent volley can be derived from the contour of the post-stimulus time histogram (PSTH) of that neurone when it is discharging rhythmically. In the present study the PSTH of the firing of individual soleus motor units after stimulation of the popliteal or peroneal nerve was used to explore the effects of extensor and flexor group I afferent volleys on the excitability of single soleus motoneurones in man. Extensor group I volleys resulted in an early peak of increased impulse density in the PSTH of 75% of soleus motoneurones. The latency suggests an analogy with the Ia EPSP. The mean duration of the peak of increased impulse density, equivalent to the rise time of the EPSP, was 3.6 ms. Flexor group I volleys resulted in a period of reduced impulse density in the PSTH of five out of nine soleus motoneurones. The latency suggests an analogy with the Ia IPSP. We conclude that this method could be used to explore the afferent connections to single motoneurones in man and to derive some of the characteristics of the postsynaptic potentials from a variety of afferent nerve fibres in single human motoneurones.