Spike timing-dependent plasticity (STDP) has been shown to enable single neurons to detect repeatedly presented spatiotemporal spike patterns. This holds even when such patterns are embedded in equally dense random spiking activity, that is, in the absence of external reference times such as a stimulus onset. Here we demonstrate, both analytically and numerically, that STDP can also learn repeating rate-modulated patterns, which have received more experimental evidence, for example, through post-stimulus time histograms (PSTHs). Each input spike train is generated from a rate function using a stochastic sampling mechanism, chosen to be an inhomogeneous Poisson process here. Learning is feasible provided significant covarying rate modulations occur within the typical timescale of STDP (∼10–20 ms) for sufficiently many inputs (∼100 among 1000 in our simulations), a condition that is met by many experimental PSTHs. Repeated pattern presentations induce spike-time correlations that are captured by STDP. Despite imprecise input spike times and even variable spike counts, a single trained neuron robustly detects the pattern just a few milliseconds after its presentation. Therefore, temporal imprecision and Poisson-like firing variability are not an obstacle to fast temporal coding. STDP provides an appealing mechanism to learn such rate patterns, which, beyond sensory processing, may also be involved in many cognitive tasks.
In vivo neural responses to stimuli are known to exhibit substantial variability across trials. If the same number of spikes is emitted from trial to trial, the neuron is said to be reliable. If the timing of such spikes is roughly preserved across trials, the neuron is said to be precise. Here we demonstrate both analytically and numerically that the well-established Hebbian learning rule of spike-timing-dependent plasticity (STDP) can learn response patterns despite relatively low reliability (Poisson-like variability) and low temporal precision (10–20 ms). These features are in line with many experimental observations, in which a poststimulus time histogram (PSTH) is evaluated over multiple trials. In our model, however, information is extracted from the relative spike times between afferents, without the need for an absolute reference time, such as a stimulus onset. Notably, recent experiments show that relative timing is often more informative than absolute timing. Furthermore, the scope of application for our study is not restricted to sensory systems. Taken together, our results suggest a fine temporal resolution for the neural code, and that STDP is an appropriate candidate for encoding and decoding such activity.
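The two ingredients above, rate-modulated inputs sampled by an inhomogeneous Poisson process and a pair-based STDP window, can be sketched in a few lines. This is an illustrative toy, not the paper's model: the window parameters (a_plus, a_minus, tau) and the rate function are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def inhomogeneous_poisson(rate_fn, t_max, rate_max, rng):
    """Sample spike times on [0, t_max) by thinning a homogeneous process."""
    t, spikes = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t >= t_max:
            break
        if rng.random() < rate_fn(t) / rate_max:
            spikes.append(t)
    return np.array(spikes)

def stdp_dw(pre, post, a_plus=0.01, a_minus=0.012, tau=0.02):
    """Net pair-based additive STDP weight change for two spike trains (s)."""
    dw = 0.0
    for tp in post:
        dt = tp - pre                                     # post minus pre
        dw += a_plus * np.exp(-dt[dt > 0] / tau).sum()    # pre-before-post: LTP
        dw -= a_minus * np.exp(dt[dt <= 0] / tau).sum()   # post-before-pre: LTD
    return dw

# a rate-modulated input: 20 Hz baseline with a Gaussian bump every 100 ms
rate = lambda t: 20.0 + 80.0 * np.exp(-((t % 0.1) - 0.05) ** 2 / (2 * 0.01 ** 2))
pre = inhomogeneous_poisson(rate, 1.0, 100.0, rng)
print(round(stdp_dw(np.array([0.0]), np.array([0.005])), 4))  # 0.0078
```

A single pre-then-post pair 5 ms apart yields potentiation; reversing the order yields depression, which is the asymmetry the learning mechanism exploits.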
Compelling behavioral evidence suggests that humans can make optimal decisions despite the uncertainty inherent in perceptual or motor tasks. A key question in neuroscience is how populations of spiking neurons can implement such probabilistic computations. In this article, we develop a comprehensive framework for optimal, spike-based sensory integration and working memory in a dynamic environment. We propose that probability distributions are inferred spike-per-spike in recurrently connected networks of integrate-and-fire neurons. As a result, these networks can combine sensory cues optimally, track the state of a time-varying stimulus and memorize accumulated evidence over periods much longer than the time constant of single neurons. Importantly, we propose that population responses and persistent working memory states represent entire probability distributions and not only single stimulus values. These memories are reflected by sustained, asynchronous patterns of activity which make relevant information available to downstream neurons within their short time window of integration. Model neurons act as predictive encoders, only firing spikes which account for new information that has not yet been signaled. Thus, spike times signal deterministically a prediction error, contrary to rate codes in which spike times are considered to be random samples of an underlying firing rate. As a consequence of this coding scheme, a multitude of spike patterns can reliably encode the same information. This results in weakly correlated, Poisson-like spike trains that are sensitive to initial conditions but robust to even high levels of external neural noise. This spike train variability reproduces the one observed in cortical sensory spike trains, but cannot be equated to noise. On the contrary, it is a consequence of optimal spike-based inference. In contrast, we show that rate-based models perform poorly when implemented with stochastically spiking neurons.
Most of our daily actions are subject to uncertainty. Behavioral studies have confirmed that humans handle this uncertainty in a statistically optimal manner. A key question then is what neural mechanisms underlie this optimality, i.e. how can neurons represent and compute with probability distributions. Previous approaches have proposed that probabilities are encoded in the firing rates of neural populations. However, such rate codes appear poorly suited to understand perception in a constantly changing environment. In particular, it is unclear how probabilistic computations could be implemented by biologically plausible spiking neurons. Here, we propose a network of spiking neurons that can optimally combine uncertain information from different sensory modalities and keep this information available for a long time. This implies that neural memories not only represent the most likely value of a stimulus but rather a whole probability distribution over it. Furthermore, our model suggests that each spike conveys new, essential information. Consequently, the observed variability of neural responses cannot simply be understood as noise but rather as a necessary consequence of optimal sensory integration. Our results therefore question strongly held beliefs about the nature of neural “signal” and “noise”.
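Optimal combination of uncertain cues, as described above, reduces in the Gaussian case to precision-weighted averaging. A minimal sketch; the cue values and their uncertainties are hypothetical:

```python
import numpy as np

def combine_cues(mu, sigma):
    """Precision-weighted fusion of independent Gaussian cues: returns the
    posterior mean and standard deviation of the combined estimate."""
    w = 1.0 / np.asarray(sigma, float) ** 2       # precisions
    mu_post = np.sum(w * np.asarray(mu, float)) / np.sum(w)
    return mu_post, np.sqrt(1.0 / np.sum(w))

# visual cue: 10 +/- 2; auditory cue: 14 +/- 4 (hypothetical units)
mu, sd = combine_cues([10.0, 14.0], [2.0, 4.0])
print(round(mu, 2), round(sd, 2))   # 10.8 1.79
```

Note that the combined uncertainty (1.79) is smaller than that of the better single cue (2.0), the signature of statistically optimal integration.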
One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
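The encoding-model approach to MAP decoding can be illustrated with a toy Poisson population and an exact grid-based posterior. The Gaussian tuning curves, gain, and prior below are hypothetical illustrations, not the paper's fitted models:

```python
import numpy as np

rng = np.random.default_rng(1)

prefs = np.linspace(-2.0, 2.0, 9)          # preferred stimuli (hypothetical)
def rates(s, gain=20.0, width=0.5):
    """Gaussian tuning curves for the whole population at stimulus s."""
    return gain * np.exp(-(s - prefs) ** 2 / (2 * width ** 2))

s_true = 0.4
counts = rng.poisson(rates(s_true))        # one observed population response

# posterior over a stimulus grid under a Gaussian prior; MAP = argmax
grid = np.linspace(-2.0, 2.0, 401)
lam = np.array([rates(s) for s in grid])             # (grid points, neurons)
loglik = (counts * np.log(lam) - lam).sum(axis=1)    # Poisson log-likelihood
logprior = -grid ** 2 / 2.0                          # N(0, 1) prior
s_map = grid[np.argmax(loglik + logprior)]
print(s_map)   # close to s_true = 0.4
```

For a one-dimensional stimulus, exhaustive grid evaluation is exact; the concavity exploited in the paper matters when the stimulus is high-dimensional and gradient-based optimization replaces the grid.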
A sensory stimulus evokes activity in many neurons, creating a population response that must be “decoded” by the brain to estimate the parameters of that stimulus. Most decoding models have suggested complex neural circuits that compute optimal estimates of sensory parameters on the basis of responses in many sensory neurons. We propose a slightly suboptimal but practically simpler decoder. Decoding neurons integrate their inputs across 100 ms; incoming spikes are weighted by the preferred stimulus of the neuron of origin; and a local, cellular non-linearity approximates divisive normalization without dividing explicitly. The suboptimal decoder includes two simplifying approximations. It uses estimates of firing rate across the population rather than computing the total population response, and it implements divisive normalization with local cellular mechanisms of single neurons rather than more complicated neural circuit mechanisms. When applied to the practical problem of estimating target speed from a realistic simulation of the population response in extrastriate visual area MT, the suboptimal decoder has almost the same accuracy and precision as traditional decoding models. It succeeds in predicting the precision and imprecision of motor behavior using a suboptimal decoding computation because it adds only a small amount of imprecision to the code for target speed in MT, which is itself imprecise.
population decoding; divisive normalization; spike timing; MT; vector averaging
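The vector-averaging readout with implicit divisive normalization described above amounts to a preferred-stimulus-weighted sum divided by total population activity. A minimal sketch with hypothetical tuning parameters:

```python
import numpy as np

def vector_average(preferred, rates):
    """Weight each neuron's rate by its preferred stimulus, then divide by
    the summed population activity (implicit divisive normalization)."""
    rates = np.asarray(rates, float)
    return float(np.sum(np.asarray(preferred, float) * rates) / np.sum(rates))

# hypothetical MT-like population tuned to log2 target speed
prefs = np.linspace(0.0, 6.0, 13)
true_s = 3.2
pop_rates = 30.0 * np.exp(-(true_s - prefs) ** 2 / 2.0)
print(round(vector_average(prefs, pop_rates), 2))   # close to 3.2
```

The division by summed activity is what a local cellular nonlinearity would have to approximate; no explicit circuit-level divider is required.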
Recording single-neuron activity from a specific brain region across multiple trials in response to the same stimulus or execution of the same behavioral task is a common neurophysiology protocol. The raster plots of the spike trains often show strong between-trial and within-trial dynamics, yet the standard analyses of these data with the peristimulus time histogram (PSTH) and ANOVA do not consider between-trial dynamics. By itself, the PSTH does not provide a framework for statistical inference. We present a state-space generalized linear model (SS-GLM) to formulate a point process representation of between-trial and within-trial neural spiking dynamics. Our model has the PSTH as a special case. We provide a framework for model estimation, model selection, goodness-of-fit analysis, and inference. In an analysis of hippocampal neural activity recorded from a monkey performing a location-scene association task, we demonstrate how the SS-GLM may be used to answer frequently posed neurophysiological questions including, What is the nature of the between-trial and within-trial task-specific modulation of the neural spiking activity? How can we characterize learning-related neural dynamics? What are the timescales and characteristics of the neuron’s biophysical properties? Our results demonstrate that the SS-GLM is a more informative tool than the PSTH and ANOVA for analysis of multiple trial neural responses and that it provides a quantitative characterization of the between-trial and within-trial neural dynamics readily visible in raster plots, as well as the less apparent fast (1–10 ms), intermediate (11–20 ms), and longer (>20 ms) timescale features of the neuron’s biophysical properties.
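Since the SS-GLM contains the PSTH as a special case, it helps to fix what the basic PSTH computation is: bin the spikes, sum across trials, and normalize by trial count and bin width. A minimal sketch (bin width and spike times hypothetical):

```python
import numpy as np

def psth(trials, t_max, bin_ms=10.0):
    """Trial-averaged firing rate (spikes/s) in fixed bins.
    `trials` is a list of spike-time arrays, in seconds, one per trial."""
    bw = bin_ms / 1000.0
    n_bins = int(round(t_max / bw))
    edges = np.linspace(0.0, t_max, n_bins + 1)
    counts = sum(np.histogram(t, edges)[0] for t in trials)
    return edges[:-1], counts / (len(trials) * bw)

trials = [np.array([0.012, 0.015, 0.042]),
          np.array([0.011, 0.044]),
          np.array([0.013, 0.016, 0.041, 0.048])]
t, rate = psth(trials, t_max=0.05)
print(rate)   # peaks in the 10-20 ms and 40-50 ms bins
```

The PSTH averages away exactly the between-trial structure the SS-GLM is built to capture, which is the motivation for the more general model.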
A major open problem in systems neuroscience is to understand the relationship between behavior and the detailed spiking properties of neural populations. We assess how faithfully velocity information can be decoded from a population of spiking model retinal neurons whose spatiotemporal receptive fields and ensemble spike train dynamics are closely matched to real data. We describe how to compute the optimal Bayesian estimate of image velocity given the population spike train response and show that, in the case of global translation of an image with known intensity profile, on average the spike train ensemble signals speed with a fractional standard deviation of about 2% across a specific set of stimulus conditions. We further show how to compute the Bayesian velocity estimate in the case where we only have some a priori information about the (naturalistic) spatial correlation structure of the image but do not know the image explicitly. As expected, the performance of the Bayesian decoder is shown to be less accurate with decreasing prior image information. There turns out to be a close mathematical connection between a biologically plausible “motion energy” method for decoding the velocity and the Bayesian decoder in the case that the image is not known. Simulations using the motion energy method and the Bayesian decoder with unknown image reveal that they result in fractional standard deviations of 10% and 6%, respectively, across the same set of stimulus conditions. Estimation performance is rather insensitive to the details of the precise receptive field location, correlated activity between cells, and spike timing.
The voltage trace of neuronal activities can follow multiple timescale dynamics that arise from correlated membrane conductances. Such processes can result in power-law behavior in which the membrane voltage cannot be characterized with a single time constant. The emergent effect of these membrane correlations is a non-Markovian process that can be modeled with a fractional derivative. A fractional derivative is a non-local process in which the value of the variable is determined by integrating a temporally weighted voltage trace, also called the memory trace. Here we developed and analyzed a fractional leaky integrate-and-fire model in which the exponent of the fractional derivative can vary from 0 to 1, with 1 representing the normal derivative. As the exponent of the fractional derivative decreases, the weights of the voltage trace increase. Thus, the value of the voltage is increasingly correlated with the trajectory of the voltage in the past. By varying only the fractional exponent, our model can reproduce upward and downward spike adaptations found experimentally in neocortical pyramidal cells and tectal neurons in vitro. The model also produces spikes with longer first-spike latency and high inter-spike variability with power-law distribution. We further analyze spike adaptation and the responses to noisy and oscillatory input. The fractional model generates reliable spike patterns in response to noisy input. Overall, the spiking activity of the fractional leaky integrate-and-fire model deviates from the spiking activity of the Markovian model and reflects the temporal accumulated intrinsic membrane dynamics that affect the response of the neuron to external stimulation.
Spike adaptation is a property of most neurons. When spike time adaptation occurs over multiple time scales, the dynamics can be described by a power-law. We study the computational properties of a leaky integrate-and-fire model with power-law adaptation. Instead of explicitly modeling the adaptation process by the contribution of slowly changing conductances, we use a fractional temporal derivative framework. The exponent of the fractional derivative represents the degree of adaptation of the membrane voltage, where 1 is the normal leaky integrator while values less than 1 produce increasing correlations in the voltage trace. The temporal correlation is interpreted as a memory trace that depends on the value of the fractional derivative. We identify the memory trace in the fractional model as the sum of the instantaneous differentiation weighted by a function that depends on the fractional exponent, and it provides non-local information to the incoming stimulus. The spiking dynamics of the fractional leaky integrate-and-fire model show memory dependence that can result in downward or upward spike adaptation. Our model provides a framework for understanding how long-range membrane voltage correlations affect spiking dynamics and information integration in neurons.
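One standard way to discretize such a model is the Grünwald-Letnikov scheme, in which the memory trace appears as a weighted sum over the entire past voltage trajectory. The sketch below is an illustrative implementation under that assumption, with hypothetical parameters; alpha = 1 recovers the ordinary Euler-integrated leaky integrator:

```python
import numpy as np

def gl_coeffs(alpha, n):
    """Grunwald-Letnikov coefficients c_k = (-1)^k * binom(alpha, k)."""
    c = np.empty(n + 1)
    c[0] = 1.0
    for k in range(1, n + 1):
        c[k] = c[k - 1] * (k - 1 - alpha) / k
    return c

def fractional_lif(alpha, I=1.5, dt=1e-4, tau=0.02, v_th=1.0, n_steps=2000):
    """Fractional LIF, d^alpha v / dt^alpha = (I - v)/tau, via the GL scheme.
    alpha = 1 recovers the ordinary Euler-integrated leaky integrator."""
    c = gl_coeffs(alpha, n_steps)
    v = np.zeros(n_steps + 1)
    spikes = []
    for n in range(1, n_steps + 1):
        # memory trace: weighted sum over the entire past voltage trajectory
        memory = np.dot(c[1:n + 1], v[n - 1::-1])
        v[n] = dt ** alpha * (I - v[n - 1]) / tau - memory
        if v[n] >= v_th:
            spikes.append(n * dt)
            v[n] = 0.0               # reset after the spike
    return np.array(spikes)

print(fractional_lif(1.0)[:3])   # regular spiking, first spike near 22 ms
```

For alpha = 1 only the c_1 coefficient is nonzero and the memory collapses to the previous voltage sample; for alpha < 1 every past sample contributes, which is the non-local dependence described above.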
Sensory information is encoded in the response of neuronal populations. How might this information be decoded by downstream neurons? Here we analyzed the responses of simultaneously recorded barrel cortex neurons to sinusoidal vibrations of varying amplitudes preceded by three adapting stimuli of 0, 6 and 12 µm in amplitude. Using the framework of signal detection theory, we quantified the performance of a linear decoder which sums the responses of neurons after applying an optimum set of weights. Optimum weights were found by the analytical solution that maximized the average signal-to-noise ratio based on Fisher linear discriminant analysis. This provided a biologically plausible decoder that took into account the neuronal variability, covariability, and signal correlations. The optimal decoder achieved consistent improvement in discrimination performance over simple pooling. Decorrelating neuronal responses by trial shuffling revealed that, unlike pooling, the performance of the optimal decoder was minimally affected by noise correlation. In the non-adapted state, noise correlation enhanced the performance of the optimal decoder for some populations. Under adaptation, however, noise correlation always degraded the performance of the optimal decoder. Nonetheless, sensory adaptation improved the performance of the optimal decoder mainly by increasing signal correlation more than noise correlation. Adaptation induced little systematic change in the relative direction of signal and noise. Thus, a decoder which was optimized under the non-adapted state generalized well across states of adaptation.
In the natural environment, animals are constantly exposed to sensory stimulation. A key question in systems neuroscience is how attributes of a sensory stimulus can be “read out” from the activity of a population of brain cells. We chose to investigate this question in the whisker-mediated touch system of rats because of its well-established anatomy and exquisite functionality. The whisker system is one of the major channels through which rodents acquire sensory information about their surrounding environment. The response properties of brain cells dynamically adjust to the prevailing diet of sensory stimulation, a process termed sensory adaptation. Here, we applied a biologically plausible scheme whereby different brain cells contribute to sensory readout with different weights. We established the set of weights that provide the optimal readout under different states of adaptation. The results yield an upper bound for the efficiency of coding sensory information. We found that the ability to decode sensory information improves with adaptation. However, a readout mechanism that does not adjust to the state of adaptation can still perform remarkably well.
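The optimal linear readout described above, weights w = C^-1 (mu_b - mu_a) from Fisher linear discriminant analysis, can be sketched on synthetic correlated responses. The population size, mean responses, and covariance below are hypothetical:

```python
import numpy as np

def fld_weights(r_a, r_b):
    """Fisher-linear-discriminant readout weights w = C^-1 (mu_b - mu_a),
    with C the average noise covariance of the two response sets."""
    cov = 0.5 * (np.cov(r_a.T) + np.cov(r_b.T))
    return np.linalg.solve(cov, r_b.mean(axis=0) - r_a.mean(axis=0))

def dprime(x, y):
    """Discriminability of two sets of projected responses."""
    return (y.mean() - x.mean()) / np.sqrt(0.5 * (x.var() + y.var()))

rng = np.random.default_rng(2)
# two stimulus amplitudes, 3 correlated neurons, 500 trials each (synthetic)
C = np.array([[1.0, 0.3, 0.1], [0.3, 1.0, 0.3], [0.1, 0.3, 1.0]])
L = np.linalg.cholesky(C)
r_a = np.array([4.0, 5.0, 6.0]) + rng.standard_normal((500, 3)) @ L.T
r_b = np.array([5.0, 6.5, 6.5]) + rng.standard_normal((500, 3)) @ L.T

w = fld_weights(r_a, r_b)
d_opt = dprime(r_a @ w, r_b @ w)            # optimally weighted readout
d_pool = dprime(r_a.sum(1), r_b.sum(1))     # simple unweighted pooling
print(d_opt > d_pool)   # True: optimal weights beat pooling in-sample
```

Because the FLD weights maximize the signal-to-noise ratio of any linear projection, the unweighted sum is guaranteed to do no better on the same data, mirroring the improvement over simple pooling reported above.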
The timing of action potentials in spiking neurons depends on the temporal dynamics of their inputs and contains information about temporal fluctuations in the stimulus. Leaky integrate-and-fire neurons constitute a popular class of encoding models, in which spike times depend directly on the temporal structure of the inputs. However, optimal decoding rules for these models have only been studied explicitly in the noiseless case. Here, we study decoding rules for probabilistic inference of a continuous stimulus from the spike times of a population of leaky integrate-and-fire neurons with threshold noise. We derive three algorithms for approximating the posterior distribution over stimuli as a function of the observed spike trains. In addition to a reconstruction of the stimulus we thus obtain an estimate of the uncertainty as well. Furthermore, we derive a ‘spike-by-spike’ online decoding scheme that recursively updates the posterior with the arrival of each new spike. We use these decoding rules to reconstruct time-varying stimuli represented by a Gaussian process from spike trains of single neurons as well as neural populations.
Bayesian decoding; population coding; spiking neurons; approximate inference
Interaural time differences (ITDs) are the major cue for localizing low-frequency sounds. The activity of neuronal populations in the brainstem encodes ITDs with exquisite temporal acuity. The response of single neurons, however, also changes with other stimulus properties like the spectral composition of sound. The influence of stimulus frequency is very different across neurons, and thus it is unclear how ITDs are encoded independently of stimulus frequency by populations of neurons. Here we fitted a statistical model to single-cell rate responses of the dorsal nucleus of the lateral lemniscus. The model was used to evaluate the impact of single-cell response characteristics on the frequency-invariant mutual information between rate response and ITD. We found a rough correspondence between the measured cell characteristics and those predicted by computing mutual information. Furthermore, we studied two readout mechanisms, a linear classifier and a two-channel rate difference decoder. The latter turned out to be better suited to decode the population patterns obtained from the fitted model.
Neuronal codes are usually studied by estimating how much information the brain activity carries about the stimulus. On a single cell level, the relevant features of neuronal activity such as the firing rate or spike timing are readily available. On a population level, where many neurons together encode a stimulus property, finding the most appropriate activity features is less obvious, particularly because the neurons respond with a huge cell-to-cell variability. Here, using the example of the neuronal representation of interaural time differences, we show that the quality of the population code strongly depends on the assumption — or the model — of the population readout. We argue that invariances are useful constraints to identify “good” population codes. Based on these ideas, we suggest that the representation of interaural time differences serves a two-channel code in which the difference between the summed activities of the neurons in the two hemispheres exhibits an invariant and linear dependence on interaural time difference.
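A toy version of the two-channel readout proposed above: subtract the summed activities of the two hemispheric populations and invert an assumed linear dependence on ITD. All numbers here are hypothetical:

```python
import numpy as np

def two_channel_decode(rate_left, rate_right, slope, offset=0.0):
    """Read ITD from the difference of the summed hemispheric activities,
    assuming that difference is linear in ITD over the relevant range."""
    return (np.sum(rate_right) - np.sum(rate_left) - offset) / slope

# toy hemispheric populations whose rates vary linearly with ITD (in us)
n, gain, itd = 50, 0.05, 120.0
rate_left = np.full(n, 20.0) - gain * itd
rate_right = np.full(n, 20.0) + gain * itd
print(two_channel_decode(rate_left, rate_right, slope=2 * n * gain))  # 120.0
```

The decoder only ever sees the two summed channels, which is what makes the code invariant to the cell-to-cell variability within each hemisphere.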
Computational analyses have revealed that precisely timed spikes emitted by somatosensory cortical neuronal populations encode basic stimulus features in the rat's whisker sensory system. Efficient spike-time-based decoding schemes both for the spatial location of a stimulus and for the kinetic features of complex whisker movements have been defined. To date, these decoding schemes have been based upon spike times referenced to an external temporal frame – the time of the stimulus itself. Such schemes are limited by the requirement of precise knowledge of the stimulus time signal, and it is not clear whether stimulus times are known to rats making sensory judgments. Here, we first review studies of the information obtained from spike timing referenced to the stimulus time. Then we explore new methods for extracting spike train information independently of any external temporal reference frame. These proposed methods are based on the detection of stimulus-dependent differences in the firing time within a neuronal population. We apply them to a data set using single-whisker stimulation in anesthetized rats and find that stimulus site can be decoded based on the millisecond-range relative differences in spike times even without knowledge of stimulus time. If spike counts alone are measured over tens or hundreds of milliseconds rather than milliseconds, such decoders are much less effective. These results suggest that decoding schemes based on millisecond-precise spike times are likely to subserve robust and information-rich transmission of information in the somatosensory system.
information theory; somatosensation; neural coding; decoding; spike patterns; population coding
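The idea of decoding stimulus site from relative spike times, with no external onset reference, can be illustrated with a two-neuron latency-difference toy. The latency offsets and jitter below are hypothetical:

```python
import numpy as np

def relative_latency_decode(t_a, t_b):
    """Which site was stimulated, judged only from the sign of the relative
    first-spike time between two neurons (no stimulus-onset reference)."""
    return 'site_A' if t_a < t_b else 'site_B'

rng = np.random.default_rng(3)
correct = 0
for _ in range(200):
    onset = rng.uniform(0.0, 0.1)           # unknown to the decoder
    # site A drives neuron a ~3 ms before neuron b; 1 ms jitter per neuron
    t_a = onset + 0.010 + rng.normal(0.0, 0.001)
    t_b = onset + 0.013 + rng.normal(0.0, 0.001)
    correct += relative_latency_decode(t_a, t_b) == 'site_A'
print(correct / 200)   # well above chance despite the unknown onset
```

The unknown onset cancels in the difference t_a - t_b, which is why millisecond-range relative timing survives even when absolute stimulus time is unavailable.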
How can the central nervous system make accurate decisions about external stimuli at short times on the basis of the noisy responses of nerve cell populations? It has been suggested that spike time latency is the source of fast decisions. Here, we propose a simple and fast readout mechanism, the temporal Winner-Take-All (tWTA), and undertake a study of its accuracy. The tWTA is studied in the framework of a statistical model for the dynamic response of a nerve cell population to an external stimulus. Each cell is characterized by a preferred stimulus, a unique value of the external stimulus for which it responds fastest. The tWTA estimate for the stimulus is the preferred stimulus of the cell that fired the first spike in the entire population. We then pose the questions: How accurate is the tWTA readout? What are the parameters that govern this accuracy? What are the effects of noise correlations and baseline firing? We find that tWTA sensitivity to the stimulus grows algebraically fast with the number of cells in the population, N, in contrast to the slow logarithmic scaling of the conventional rate-WTA sensitivity with N. Noise correlations in first-spike times of different cells can limit the accuracy of the tWTA readout, even in the limit of large N, similar to the effect that has been observed in population coding theory. We show that baseline firing also has a detrimental effect on tWTA accuracy. We suggest a generalization of the tWTA, the n-tWTA, which estimates the stimulus by the identity of the group of cells firing the first n spikes, and show how this simple generalization can overcome the detrimental effect of baseline firing. Thus, the tWTA can provide fast and accurate responses discriminating between a small number of alternatives. High accuracy in estimation of a continuous stimulus can be obtained using the n-tWTA.
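A sketch of the tWTA and its n-tWTA generalization. The latency model, tuning, and pooling-by-averaging choice are illustrative assumptions, not the statistical model analyzed above:

```python
import numpy as np

def t_wta(first_spike_times, preferred):
    """tWTA: the estimate is the preferred stimulus of the first cell to fire."""
    return preferred[np.argmin(first_spike_times)]

def n_t_wta(spike_times, cell_ids, preferred, n):
    """n-tWTA: pool the preferred stimuli of the cells emitting the first n
    spikes (averaging is one possible pooling; illustrative)."""
    first_n = np.argsort(spike_times)[:n]
    return preferred[cell_ids[first_n]].mean()

rng = np.random.default_rng(4)
prefs = np.linspace(0.0, 180.0, 19)       # preferred orientations (deg)
s_true = 90.0
# toy latency model: cells tuned near the stimulus tend to fire first
latency = 0.01 + 1e-4 * (prefs - s_true) ** 2 + rng.normal(0.0, 0.002, prefs.size)
print(t_wta(latency, prefs))   # close to 90
```

Pooling over the first n spikes averages out both latency jitter and spurious early baseline spikes, which is why the generalization tolerates baseline firing that defeats the single-spike readout.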
Considerable experimental as well as theoretical effort has been devoted to the investigation of the neural code. The traditional approach has been to study the information content of the total neural spike count during a long period of time. However, in many cases, the central nervous system is required to estimate the external stimulus at much shorter times. What readout mechanism could account for such fast decisions? We suggest a readout mechanism that estimates the external stimulus by the first spike in the population, the tWTA. We show that the tWTA can account for accurate discriminations between a small number of choices. We find that the accuracy of the tWTA is limited by the neuronal baseline firing. We further find that, due to baseline firing, the single first spike does not encode sufficient information for estimating a continuous variable, such as the direction of motion of a visual stimulus, with fine resolution. In such cases, fast and accurate decisions can be obtained by a generalization of the tWTA to a readout that estimates the stimulus by the first n spikes fired by the population, where n is larger than the mean number of baseline spikes in the population.
The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals and receives an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTHs) of MSN population responses investigated in various experimental studies display strong firing rate modulations distributed throughout behavioral task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long, behaviorally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics is still observed when the input is sufficiently weak. However, if cortical excitation strength is increased, more regularly firing and completely quiescent cells appear, depending on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behavior. We investigate how sudden switches in excitation interact with network-generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and outline the range of parameters over which this behavior occurs. Model cell population PSTHs display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving, stimulus-dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow, coherent, task-dependent responses, which could be utilized by the animal in behavior.
striatum; computational modeling; inhibition; medium spiny neuron; cell assembly; population dynamics; spiking network
It has been proposed that the dense excitatory local connectivity of the neocortex plays a specific role in the transformation of spatial stimulus information into a temporal representation or a temporal population code (TPC). TPC provides for a rapid, robust, and high-capacity encoding of salient stimulus features with respect to position, rotation, and distortion. The TPC hypothesis gives a functional interpretation to a core feature of the cortical anatomy: its dense local and sparse long-range connectivity. Thus far, the question of how the TPC encoding can be decoded in downstream areas has not been addressed. Here, we present a neural circuit that decodes the spectral properties of the TPC using a biologically plausible implementation of a Haar transform. We perform a systematic investigation of our model in a recognition task using a standardized stimulus set. We consider alternative implementations using either regular spiking or bursting neurons and a range of spectral bands. Our results show that our wavelet readout circuit provides for the robust decoding of the TPC and further compresses the code without losing speed or quality of decoding. We show that in the TPC signal the relevant stimulus information is present in the frequencies around 100 Hz. Our results show that the TPC is constructed around a small number of coding components that can be well decoded by wavelet coefficients in a neuronal implementation. The solution to the TPC decoding problem proposed here suggests that cortical processing streams might well consist of sequential operations in which spatio-temporal transformations at lower levels form a compact stimulus encoding (the TPC) that is subsequently decoded back to a spatial representation using wavelet transforms.
In addition, the results presented here show that different properties of the stimulus might be transmitted to further processing stages using different frequency components that are captured by appropriately tuned wavelet-based decoders.
temporal coding; visual system; wavelet transform; pattern recognition; spiking neural network; Haar wavelets
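The Haar transform at the heart of the proposed readout circuit is, mathematically, a recursive split into pairwise averages and differences. A minimal sketch on a toy rate trace (this is the bare transform, not the neuronal implementation):

```python
import numpy as np

def haar_transform(x):
    """Full Haar decomposition of a length-2^k signal: recursively split
    into pairwise averages (approximation) and differences (details)."""
    x = np.asarray(x, float)
    levels = []
    while x.size > 1:
        levels.append((x[0::2] - x[1::2]) / 2.0)   # detail coefficients
        x = (x[0::2] + x[1::2]) / 2.0              # next approximation
    levels.append(x)                               # coarsest approximation
    return np.concatenate(levels[::-1])

# toy population firing-rate trace: coarse coefficients carry the slow
# envelope, fine coefficients the fast structure
trace = np.array([4.0, 4.0, 8.0, 8.0, 2.0, 2.0, 2.0, 2.0])
print(haar_transform(trace))   # [ 4.  2. -2.  0.  0.  0.  0.  0.]
```

For this trace all fine-scale detail coefficients vanish, so the signal compresses to three numbers, the kind of compression the wavelet readout exploits when most TPC information sits in a narrow frequency band.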
Channel noise is the dominant intrinsic noise source of neurons causing variability in the timing of action potentials and interspike intervals (ISI). Slow adaptation currents are observed in many cells and strongly shape response properties of neurons. These currents are mediated by finite populations of ionic channels and may thus carry a substantial noise component. Here we study the effect of such adaptation noise on the ISI statistics of an integrate-and-fire model neuron by means of analytical techniques and extensive numerical simulations. We contrast this stochastic adaptation with the commonly studied case of a fast fluctuating current noise and a deterministic adaptation current (corresponding to an infinite population of adaptation channels). We derive analytical approximations for the ISI density and ISI serial correlation coefficient for both cases. For fast fluctuations and deterministic adaptation, the ISI density is well approximated by an inverse Gaussian (IG) and the ISI correlations are negative. In marked contrast, for stochastic adaptation, the density is more peaked and has a heavier tail than an IG density and the serial correlations are positive. A numerical study of the mixed case where both fast fluctuations and adaptation channel noise are present reveals a smooth transition between the analytically tractable limiting cases. Our conclusions are furthermore supported by numerical simulations of a biophysically more realistic Hodgkin-Huxley type model. Our results could be used to infer the dominant source of noise in neurons from their ISI statistics.
Neurons of sensory systems encode signals from the environment by sequences of electrical pulses — so-called spikes. This coding of information is fundamentally limited by the presence of intrinsic neural noise. One major noise source is “channel noise” that is generated by the random activity of various types of ion channels in the cell membrane. Slow adaptation currents can also be a source of channel noise. Adaptation currents profoundly shape the signal transmission properties of a neuron by emphasizing fast changes in the stimulus but adapting the spiking frequency to slow stimulus components. Here, we mathematically analyze the effects of both slow adaptation channel noise and fast channel noise on the statistics of spike times in adapting neuron models. Surprisingly, the two noise sources result in qualitatively different distributions and correlations of time intervals between spikes. Our findings add a novel aspect to the function of adaptation currents and can also be used to experimentally distinguish adaptation noise and fast channel noise on the basis of spike sequences.
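The contrast drawn above, between fast current noise with deterministic adaptation and stochastic adaptation, can be probed in simulation. The sketch below (dimensionless illustrative parameters, not the authors' model) simulates the deterministic-adaptation case and estimates the lag-1 ISI serial correlation coefficient, which the analysis predicts to be negative in this regime:

```python
import numpy as np

# Leaky integrate-and-fire neuron with a deterministic adaptation
# variable a and fast Gaussian current noise (Euler-Maruyama scheme).
rng = np.random.default_rng(1)

dt = 0.01                    # integration time step
n_steps = 500_000
mu, D = 2.0, 0.1             # mean drive, fast-noise intensity
tau_a, delta_a = 5.0, 0.2    # adaptation time constant and per-spike kick
v_th, v_reset = 1.0, 0.0

noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(n_steps)
v, a = 0.0, 0.0
spike_times = []
for i in range(n_steps):
    v += dt * (mu - v - a) + noise[i]
    a += dt * (-a / tau_a)
    if v >= v_th:            # threshold crossing: spike, reset, adapt
        spike_times.append(i * dt)
        v = v_reset
        a += delta_a

isis = np.diff(spike_times)
rho1 = float(np.corrcoef(isis[:-1], isis[1:])[0, 1])  # lag-1 serial correlation
cv = float(isis.std() / isis.mean())                  # coefficient of variation
```

Replacing the deterministic update of `a` with a finite pool of stochastically opening adaptation channels would give the contrasting, positively correlated case studied in the abstract.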
How neurons pay attention: Top-down selective attention mediates feature selection by reducing the noise correlations in neural populations and enhancing the synchronized activity across subpopulations that encode the relevant features of sensory stimuli.
Studies in vision show that attention enhances the firing rates of cells when it is directed towards their preferred stimulus feature. However, it is unknown whether other sensory systems employ this mechanism to mediate feature selection within their modalities. Moreover, whether feature-based attention modulates the correlated activity of a population is unclear. Indeed, temporal correlation codes such as spike-synchrony and spike-count correlations (rsc) are believed to play a role in stimulus selection by increasing the signal and reducing the noise in a population, respectively. Here, we investigate (1) whether feature-based attention biases the correlated activity between neurons when attention is directed towards their common preferred feature, (2) the interplay between spike-synchrony and rsc during feature selection, and (3) whether feature attention effects are common across the visual and tactile systems. Single-unit recordings were made in secondary somatosensory cortex of three non-human primates while animals engaged in tactile feature (orientation and frequency) and visual discrimination tasks. We found that both firing rate and spike-synchrony between neurons with similar feature selectivity were enhanced when attention was directed towards their preferred feature. However, attention effects on spike-synchrony were twice as large as those on firing rate, and had a tighter relationship with behavioral performance. Further, we observed increased rsc when attention was directed towards the visual modality (i.e., away from touch). These data suggest that similar feature selection mechanisms are employed in vision and touch, and that temporal correlation codes such as spike-synchrony play a role in mediating feature selection. 
We posit that feature-based selection operates by implementing multiple mechanisms that reduce the overall noise levels in the neural population and synchronize activity across subpopulations that encode the relevant features of sensory stimuli.
Attention can select stimuli in space based on the stimulus features most relevant for a task. Attention effects have been linked to several important phenomena such as modulations in neuronal spiking rate (i.e., the average number of spikes per unit time) and spike-spike synchrony between neurons. Attention has also been associated with spike count correlations, a measure that is thought to reflect correlated noise in the population of neurons. Here, we studied whether feature-based attention biases the correlated activity between neurons when attention is directed towards their common preferred feature. Simultaneous single-unit recordings were obtained from multiple neurons in secondary somatosensory cortex in non-human primates performing feature-attention tasks. Both firing rate and spike-synchrony were enhanced when attention was directed towards the preferred feature of cells. However, attention effects on spike-synchrony had a tighter relationship with behavior. Further, attention decreased spike-count correlations when it was directed towards the receptive field of cells. Our data indicate that temporal correlation codes play a role in mediating feature selection, and are consistent with a feature-based selection model that operates by reducing the overall noise in a population and synchronizing activity across subpopulations that encode the relevant features of sensory stimuli.
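The spike-count correlation (r_sc) referred to above is simply the Pearson correlation of trial-by-trial spike counts between two simultaneously recorded neurons. A minimal sketch with synthetic data (a shared gain fluctuation stands in for correlated noise; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_count_correlation(counts_a, counts_b):
    """Pearson correlation of trial-by-trial spike counts (r_sc)."""
    return float(np.corrcoef(counts_a, counts_b)[0, 1])

# Toy data: two neurons share a fluctuating gain across 400 trials,
# which induces positive correlated variability ("noise correlation").
n_trials = 400
shared_gain = rng.gamma(shape=20.0, scale=1.0 / 20.0, size=n_trials)  # mean 1
counts_a = rng.poisson(20.0 * shared_gain)
counts_b = rng.poisson(15.0 * shared_gain)
r_sc = spike_count_correlation(counts_a, counts_b)

# Independent control: same marginal rates, no shared gain.
control_a = rng.poisson(20.0, size=n_trials)
control_b = rng.poisson(15.0, size=n_trials)
r_control = spike_count_correlation(control_a, control_b)
```

An attentional reduction of r_sc, as reported above, would correspond to shrinking the shared-gain variance in this toy picture.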
Transduction of graded synaptic input into trains of all-or-none action potentials (spikes) is a crucial step in neural coding. Hodgkin identified three classes of neurons with qualitatively different analog-to-digital transduction properties. Despite widespread use of this classification scheme, a generalizable explanation of its biophysical basis has not been described. We recorded from spinal sensory neurons representing each class and reproduced their transduction properties in a minimal model. With phase plane and bifurcation analysis, each class of excitability was shown to derive from distinct spike initiating dynamics. Excitability could be converted between all three classes by varying single parameters; moreover, several parameters, when varied one at a time, had functionally equivalent effects on excitability. From this, we conclude that the spike-initiating dynamics associated with each of Hodgkin's classes represent different outcomes in a nonlinear competition between oppositely directed, kinetically mismatched currents. Class 1 excitability occurs through a saddle node on invariant circle bifurcation when net current at perithreshold potentials is inward (depolarizing) at steady state. Class 2 excitability occurs through a Hopf bifurcation when, despite net current being outward (hyperpolarizing) at steady state, spike initiation occurs because inward current activates faster than outward current. Class 3 excitability occurs through a quasi-separatrix crossing when fast-activating inward current overpowers slow-activating outward current during a stimulus transient, although slow-activating outward current dominates during constant stimulation. Experiments confirmed that different classes of spinal lamina I neurons express the subthreshold currents predicted by our simulations and, further, that those currents are necessary for the excitability in each cell class. Thus, our results demonstrate that all three classes of excitability arise from a continuum in the direction and magnitude of subthreshold currents. Through detailed analysis of the spike-initiating process, we have explained a fundamental link between biophysical properties and qualitative differences in how neurons encode sensory input.
Information is transmitted through the nervous system in the form of action potentials or spikes. Contrary to popular belief, a spike is not generated instantaneously when membrane potential crosses some preordained threshold. In fact, different neurons employ different rules to determine when and why they spike. These different rules translate into diverse spiking patterns that have been observed experimentally and replicated time and again in computational models. In this study, our aim was not simply to replicate different spiking patterns; instead, we sought to provide deeper insight into the connection between biophysics and neural coding by relating each to the process of spike initiation. We show that Hodgkin's three classes of excitability result from a nonlinear competition between oppositely directed, kinetically mismatched currents; the outcome of that competition is manifested as dynamically distinct spike-initiating mechanisms. Our results highlight the benefits of forward engineering minimal models capable of reproducing phenomena of interest and then dissecting those models in order to identify general explanations of how those phenomena arise. Furthermore, understanding nonlinear dynamical processes such as spike initiation is crucial for definitively explaining how biophysical properties impact neural coding.
We examined the extent to which temporal encoding may be implemented by single neurons in the cercal sensory system of the house cricket Acheta domesticus. We found that these neurons exhibit a greater-than-expected coding capacity, due in part to an increased precision in brief patterns of action potentials. We developed linear and non-linear models for decoding the activity of these neurons. We found that the stimuli associated with short-interval patterns of spikes (ISIs of 8 ms or less) could be predicted better by second-order models than by linear models. Finally, we characterized the difference between these linear and second-order models in a low-dimensional subspace, and showed that modification of the linear models along only a few dimensions improved their predictive power to parity with the second-order models. Together these results show that single neurons are capable of using temporal patterns of spikes as fundamental symbols in their neural code, and that they communicate specific stimulus distributions to subsequent neural structures.
The information coding schemes used within nervous systems have been the focus of an entire field within neuroscience. An unresolved issue within the general coding problem is the determination of the neural “symbols” with which information is encoded in neural spike trains, analogous to the determination of the nucleotide sequences used to represent proteins in molecular biology. The goal of our study was to determine if pairs of consecutive action potentials contain more or different information about the stimuli that elicit them than would be predicted from an analysis of individual action potentials. We developed linear and non-linear coding models and used likelihood analysis to address this question for sensory interneurons in the cricket cercal sensory system. Our results show that these neurons' spike trains can be decomposed into sequences of two neural symbols: isolated single spikes and short-interval spike doublets. Given the ubiquitous nature of similar neural activity reported in other systems, we suspect that the implementation of such temporal encoding schemes may be widespread across animal phyla. Knowledge of the basic coding units used by single cells will help in building the large-scale neural network models necessary for understanding how nervous systems function.
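Decomposing a spike train into the two symbols proposed above, isolated single spikes and short-interval doublets, amounts to a simple parse over interspike intervals. A sketch follows (the 8 ms criterion is taken from the abstract; the greedy left-to-right parsing rule itself is our assumption):

```python
import numpy as np

def parse_symbols(spike_times_ms, doublet_max_isi=8.0):
    """Greedy left-to-right parse of a spike train into two neural
    symbols: short-interval doublets (ISI <= doublet_max_isi, in ms)
    and isolated single spikes."""
    t = np.asarray(spike_times_ms, dtype=float)
    symbols = []
    i = 0
    while i < len(t):
        if i + 1 < len(t) and t[i + 1] - t[i] <= doublet_max_isi:
            symbols.append(("doublet", t[i], t[i + 1]))
            i += 2
        else:
            symbols.append(("single", t[i]))
            i += 1
    return symbols

# Example train (ms): a single, two doublets, and a final single spike.
train = [3.0, 50.0, 55.0, 120.0, 126.5, 200.0]
symbols = parse_symbols(train)
labels = [s[0] for s in symbols]
```

Each symbol could then be associated with its own stimulus estimate, as in the likelihood analysis the authors describe.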
The impulse discharge of single on-off neurons and a graded field potential, the proximal negative response (PNR), were simultaneously recorded with an extracellular microelectrode in the inner frog retina. Normalized amplitude-intensity functions for the on-response of the PNR and the neuron's post-stimulus time histogram (PSTH) were nearly coincident and typically showed a dynamic range spanning approximately 2 log units of intensity. Thus a nearly linear relation is found between the amplitude of the PNR and the neuron's PSTH. A neuron's PSTH amplitude and maximum instantaneous frequency of discharge were usually highly correlated, but occasional marked disparities indicate that temporal jitter of the first spike latency is an additional, relatively independent variable influencing PSTH amplitude. This jitter typically changes by a factor of 20–30 over the intensity range. These and other findings have implications for the functional significance of the PNR and the PSTH, for a possible linear link between amacrine and on-off ganglion cells, and for a mechanism of intensity coding in which temporal jitter of latency exerts a major role.
Statistical dependencies in the responses of sensory neurons govern both the amount of stimulus information conveyed and the means by which downstream neurons can extract it. Although a variety of measurements indicate the existence of such dependencies [1–3], their origin and importance for neural coding are poorly understood. Here we analyse the functional significance of correlated firing in a complete population of macaque parasol retinal ganglion cells using a model of multi-neuron spike responses [4,5]. The model, with parameters fit directly to physiological data, simultaneously captures both the stimulus dependence and detailed spatio-temporal correlations in population responses, and provides two insights into the structure of the neural code. First, neural encoding at the population level is less noisy than one would expect from the variability of individual neurons: spike times are more precise, and can be predicted more accurately when the spiking of neighbouring neurons is taken into account. Second, correlations provide additional sensory information: optimal, model-based decoding that exploits the response correlation structure extracts 20% more information about the visual scene than decoding under the assumption of independence, and preserves 40% more visual information than optimal linear decoding [6]. This model-based approach reveals the role of correlated activity in the retinal coding of visual stimuli, and provides a general framework for understanding the importance of correlated activity in populations of neurons.
While sensory neurons carry behaviorally relevant information in responses that often extend over hundreds of milliseconds, the key units of neural information likely consist of much shorter and temporally precise spike patterns. The mechanisms and temporal reference frames by which sensory networks partition responses into these shorter units of information remain unknown. One hypothesis holds that slow oscillations provide a network-intrinsic reference to temporally partitioned spike trains without exploiting the millisecond-precise alignment of spikes to sensory stimuli. We tested this hypothesis on neural responses recorded in visual and auditory cortices of macaque monkeys in response to natural stimuli. Comparing different schemes for response partitioning revealed that theta band oscillations provide a temporal reference that permits extracting significantly more information than can be obtained from spike counts, and sometimes almost as much information as obtained by partitioning spike trains using precisely stimulus-locked time bins. We further tested the robustness of these partitioning schemes to temporal uncertainty in the decoding process and to noise in the sensory input. This revealed that partitioning using an oscillatory reference provides greater robustness than partitioning using precisely stimulus-locked time bins. Overall, these results provide a computational proof of concept for the hypothesis that slow rhythmic network activity may serve as internal reference frame for information coding in sensory cortices and they foster the notion that slow oscillations serve as key elements for the computations underlying perception.
Neurons in sensory cortices encode objects in our sensory environment by varying the timing and number of action potentials that they emit. Brain networks that ‘decode’ this information need to partition those spike trains into their individual informative units. Experimenters achieve such partitioning by exploiting their knowledge about the millisecond precise timing of individual spikes relative to externally presented sensory stimuli. The brain, however, does not have access to this information and has to partition and decode spike trains using intrinsically available temporal reference frames. We show that slow (4–8 Hz) oscillatory network activity can provide such an intrinsic temporal reference. Specifically, we analyzed neural responses recorded in primary auditory and visual cortices. This revealed that the oscillatory reference frame performs nearly as well as the precise stimulus-locked reference frame and renders neural encoding robust to sensory noise and temporal uncertainty that naturally occurs during decoding. These findings provide a computational proof-of-concept that slow oscillatory network activity may serve the crucial function as temporal reference frame for sensory coding.
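Partitioning a spike train by the phase of a slow oscillation, as described above, can be sketched as follows (an idealized constant-frequency 6 Hz reference is assumed here; an actual analysis would estimate phase from the recorded LFP):

```python
import numpy as np

def phase_of(times_ms, freq_hz):
    """Instantaneous phase (radians, in [0, 2*pi)) of an ideal
    constant-frequency oscillation at each spike time."""
    return (2.0 * np.pi * freq_hz * np.asarray(times_ms, float) / 1000.0) \
        % (2.0 * np.pi)

def partition_by_phase(spike_times_ms, freq_hz=6.0, n_bins=4):
    """Spike counts per phase bin of a theta-band reference oscillation."""
    phases = phase_of(spike_times_ms, freq_hz)
    bins = np.floor(phases / (2.0 * np.pi / n_bins)).astype(int)
    return np.bincount(bins, minlength=n_bins)

# Spikes locked to the early phase of a 6 Hz rhythm (period ~166.7 ms):
period = 1000.0 / 6.0
spikes = [k * period + 10.0 for k in range(12)]
counts = partition_by_phase(spikes, freq_hz=6.0, n_bins=4)
```

The resulting phase-bin count vector is the oscillation-referenced "word" whose information content the study compares against stimulus-locked time bins.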
Sensory processing is associated with gamma frequency oscillations (30–80 Hz) in sensory cortices. This raises the question whether gamma oscillations can be directly involved in the representation of time-varying stimuli, including stimuli whose time scale is longer than a gamma cycle. We are interested in the ability of the system to reliably distinguish different stimuli while being robust to stimulus variations such as uniform time-warp. We address this issue with a dynamical model of spiking neurons and study the response to an asymmetric sawtooth input current over a range of shape parameters. These parameters describe how fast the input current rises and falls in time. Our network consists of inhibitory and excitatory populations that are sufficient for generating oscillations in the gamma range. The oscillation period is about one-third of the stimulus duration. Embedded in this network is a subpopulation of excitatory cells that respond to the sawtooth stimulus and a subpopulation of cells that respond to an onset cue. The intrinsic gamma oscillations generate a temporally sparse code for the external stimuli. In this code, an excitatory cell may fire a single spike during a gamma cycle, depending on its tuning properties and on the temporal structure of the specific input; the identity of the stimulus is coded by the list of excitatory cells that fire during each cycle. We quantify the properties of this representation in a series of simulations and show that the sparseness of the code makes it robust to uniform warping of the time scale. We find that resetting of the oscillation phase at stimulus onset is important for a reliable representation of the stimulus and that there is a tradeoff between the resolution of the neural representation of the stimulus and robustness to time-warp.
Sensory processing of time-varying stimuli, such as speech, is associated with high-frequency oscillatory cortical activity, the functional significance of which is still unknown. One possibility is that the oscillations are part of a stimulus-encoding mechanism. Here, we investigate a computational model of such a mechanism, a spiking neuronal network whose intrinsic oscillations interact with external input (waveforms simulating short speech segments in a single acoustic frequency band) to encode stimuli that extend over a time interval longer than the oscillation's period. The network implements a temporally sparse encoding, whose robustness to time warping and neuronal noise we quantify. To our knowledge, this study is the first to demonstrate that a biophysically plausible model of oscillations occurring in the processing of auditory input may generate a representation of signals that span multiple oscillation cycles.
The locust olfactory system interfaces with the external world through antennal receptor neurons (ORNs), which represent odors in a distributed, combinatorial manner. ORN axons bundle together to form the antennal nerve, which relays sensory information centrally to the antennal lobe (AL). Within the AL, an odor generates a dynamically evolving ensemble of active cells, leading to a stimulus-specific temporal progression of neuronal spiking. This experimental observation has led to the hypothesis that an odor is encoded within the AL by a dynamically evolving trajectory of projection neuron (PN) activity that can be decoded piecewise to ascertain odor identity. In order to study information coding within the locust AL, we developed a scaled-down model of the locust AL using Hodgkin–Huxley-type neurons and biologically realistic connectivity parameters and current components. Using our model, we examined correlations in the precise timing of spikes across multiple neurons, and our results suggest an alternative to the dynamic trajectory hypothesis. We propose that the dynamical interplay of fast and slow inhibition within the locust AL induces temporally stable correlations in the spiking activity of an odor-dependent neural subset, giving rise to a temporal binding code that allows rapid stimulus detection by downstream elements.
antennal lobe; temporal binding; computational neuroscience; odor coding; slow temporal patterns; oscillations; synchrony; time scales of inhibition
Simultaneous spike-counts of neural populations are typically modeled by a Gaussian distribution. On short time scales, however, this distribution is too restrictive to describe and analyze multivariate distributions of discrete spike-counts. We present an alternative that is based on copulas and can account for arbitrary marginal distributions, including Poisson and negative binomial distributions as well as second and higher-order interactions. We describe maximum likelihood-based procedures for fitting copula-based models to spike-count data, and we derive a so-called flashlight transformation which makes it possible to move the tail dependence of an arbitrary copula into an arbitrary orthant of the multivariate probability distribution. Mixtures of copulas that combine different dependence structures and thereby model different driving processes simultaneously are also introduced. First, we apply copula-based models to populations of integrate-and-fire neurons receiving partially correlated input and show that the best fitting copulas provide information about the functional connectivity of coupled neurons which can be extracted using the flashlight transformation. We then apply the new method to data which were recorded from macaque prefrontal cortex using a multi-tetrode array. We find that copula-based distributions with negative binomial marginals provide an appropriate stochastic model for the multivariate spike-count distributions rather than the multivariate Poisson latent variables distribution and the often used multivariate normal distribution. The dependence structure of these distributions provides evidence for common inhibitory input to all recorded stimulus encoding neurons. Finally, we show that copula-based models can be successfully used to evaluate neural codes, e.g., to characterize stimulus-dependent spike-count distributions with information measures. 
This demonstrates that copula-based models are not only a versatile class of models for multivariate distributions of spike-counts, but that those models can be exploited to understand functional dependencies.
The brain has an enormous number of neurons that do not work alone but in an ensemble. Yet, mostly individual neurons were measured in the past and therefore models were restricted to independent neurons. With the advent of new multi-electrode techniques, however, it becomes possible to measure a great number of neurons simultaneously. As a result, models of how populations of neurons co-vary are becoming increasingly important. Here, we describe such a framework based on so-called copulas. Copulas allow to separate the neural variation structure of the population from the variability of the individual neurons. Contrary to standard models, versatile dependence structures can be described using this approach. We explore what additional information is provided by the detailed dependence. For simulated neurons, we show that the variation structure of the population allows inference of the underlying connectivity structure of the neurons. The power of the approach is demonstrated on a memory experiment in macaque monkey. We show that our framework describes the measurements better than the standard models and identify possible network connections of the measured neurons.
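The copula construction described above, which separates the dependence structure from arbitrary marginals, can be sketched in a few lines for the Gaussian-copula case (assuming SciPy; the marginals and the latent correlation are illustrative, not fitted to data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def gaussian_copula_counts(n_samples, marginals, corr, rng):
    """Sample correlated spike counts: a latent multivariate normal is
    pushed through its CDF (the Gaussian copula, giving coupled uniform
    variables), then through each marginal's inverse CDF (ppf)."""
    d = len(marginals)
    cov = np.full((d, d), corr) + (1.0 - corr) * np.eye(d)
    z = rng.multivariate_normal(np.zeros(d), cov, size=n_samples)
    u = stats.norm.cdf(z)                        # uniforms with coupled ranks
    return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])

# Two units with different marginals, coupled at latent correlation 0.6:
marginals = [stats.poisson(10.0), stats.nbinom(5, 0.4)]
counts = gaussian_copula_counts(5000, marginals, 0.6, rng)
r = float(np.corrcoef(counts[:, 0], counts[:, 1])[0, 1])
```

Fitting such a model by maximum likelihood, or swapping in copula families with tail dependence, follows the same separation of marginals and dependence exploited in the study.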
Most neurons in peripheral sensory pathways initially respond vigorously when a preferred stimulus is presented, but adapt as stimulation continues. It is unclear how this phenomenon affects stimulus coding in the later stages of sensory processing. Here, we show that a temporally sparse and reliable stimulus representation develops naturally in sequential stages of a sensory network with adapting neurons. As a modeling framework we employ a mean-field approach together with an adaptive population density treatment, accompanied by numerical simulations of spiking neural networks. We find that cellular adaptation plays a critical role in the dynamic reduction of the trial-by-trial variability of cortical spike responses by transiently suppressing self-generated fast fluctuations in the cortical balanced network. This provides an explanation for a widespread cortical phenomenon by a simple mechanism. We further show that in the insect olfactory system cellular adaptation is sufficient to explain the emergence of the temporally sparse and reliable stimulus representation in the mushroom body. Our results reveal a generic, biophysically plausible mechanism that can explain the emergence of a temporally sparse and reliable stimulus representation within a sequential processing architecture.
Many lines of evidence suggest that few spikes carry the relevant stimulus information at later stages of sensory processing. Yet mechanisms for the emergence of a robust and temporally sparse sensory representation remain elusive. Here, we introduce a mechanism by which a temporally sparse and reliable stimulus representation develops naturally in spiking networks. It combines principles of signal propagation with the commonly observed mechanism of neuronal firing rate adaptation. Using a stringent numerical and mathematical approach, we show how a dense rate code at the periphery translates into a temporally sparse representation in the cortical network. At the same time, it dynamically suppresses trial-by-trial variability, matching experimental observations in sensory cortices. Computational modelling of the insect olfactory pathway suggests that the same principle underlies the prominent example of temporally sparse coding in the mushroom body. Our results reveal a computational principle that relates neuronal firing rate adaptation to temporally sparse coding and variability suppression in nervous systems.
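The core mechanism, firing-rate adaptation turning a sustained input into a transient, temporally sparse response, can be illustrated with a one-unit rate model (all parameters hypothetical, chosen only to make the effect visible):

```python
import numpy as np

# Rectified firing-rate unit with subtractive spike-frequency adaptation:
# a step stimulus produces a strong onset transient that decays to a
# much lower sustained rate, i.e. a temporally sparse response.
dt, T = 0.001, 1.0
t = np.arange(0.0, T, dt)
stim = (t >= 0.2).astype(float)          # step stimulus at 200 ms
tau_a, g_a = 0.1, 4.0                    # adaptation time constant, strength

r = np.zeros_like(t)
a = 0.0
for i, s in enumerate(stim):
    drive = 10.0 * s - g_a * a           # feedforward drive minus adaptation
    r[i] = max(drive, 0.0)               # rectified rate
    a += dt * (r[i] - a) / tau_a         # adaptation tracks the rate

peak_rate = float(r.max())               # onset transient
steady_rate = float(r[-1])               # adapted sustained rate
sparseness = peak_rate / max(steady_rate, 1e-9)
```

In a layered network, each stage's onset transient is what propagates reliably, which is the intuition behind the emergence of sparse, reliable responses in sequential processing stages.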