
Results 1-25 (1325863)


1.  STDP Allows Fast Rate-Modulated Coding with Poisson-Like Spike Trains 
PLoS Computational Biology  2011;7(10):e1002231.
Spike timing-dependent plasticity (STDP) has been shown to enable single neurons to detect repeatedly presented spatiotemporal spike patterns. This holds even when such patterns are embedded in equally dense random spiking activity, that is, in the absence of external reference times such as a stimulus onset. Here we demonstrate, both analytically and numerically, that STDP can also learn repeating rate-modulated patterns, which have received more experimental evidence, for example, through post-stimulus time histograms (PSTHs). Each input spike train is generated from a rate function using a stochastic sampling mechanism, chosen to be an inhomogeneous Poisson process here. Learning is feasible provided significant covarying rate modulations occur within the typical timescale of STDP (∼10–20 ms) for sufficiently many inputs (∼100 among 1000 in our simulations), a condition that is met by many experimental PSTHs. Repeated pattern presentations induce spike-time correlations that are captured by STDP. Despite imprecise input spike times and even variable spike counts, a single trained neuron robustly detects the pattern just a few milliseconds after its presentation. Therefore, temporal imprecision and Poisson-like firing variability are not an obstacle to fast temporal coding. STDP provides an appealing mechanism to learn such rate patterns, which, beyond sensory processing, may also be involved in many cognitive tasks.
Author Summary
In vivo neural responses to stimuli are known to exhibit substantial variability across trials. If the same number of spikes is emitted from trial to trial, the neuron is said to be reliable. If the timing of such spikes is roughly preserved across trials, the neuron is said to be precise. Here we demonstrate both analytically and numerically that the well-established Hebbian learning rule of spike-timing-dependent plasticity (STDP) can learn response patterns despite relatively low reliability (Poisson-like variability) and low temporal precision (10–20 ms). These features are in line with many experimental observations, in which a poststimulus time histogram (PSTH) is evaluated over multiple trials. In our model, however, information is extracted from the relative spike times between afferents without the need for an absolute reference time, such as a stimulus onset. Notably, recent experiments show that relative timing is often more informative than absolute timing. Furthermore, the scope of application for our study is not restricted to sensory systems. Taken together, our results suggest a fine temporal resolution for the neural code, and that STDP is an appropriate candidate for encoding and decoding such activity.
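The two ingredients of this study, inhomogeneous-Poisson input spike trains and a pairwise exponential STDP window, can be sketched in a few lines. This is a minimal illustration: the thinning sampler is a standard technique, and the rate function, learning constants `a_plus`/`a_minus`, and time constant below are illustrative assumptions, not the authors' values.

```python
import math
import random

def inhom_poisson(rate_fn, t_max, rate_max, rng):
    """Sample spike times from an inhomogeneous Poisson process by thinning."""
    spikes, t = [], 0.0
    while True:
        t += rng.expovariate(rate_max)            # candidate spike
        if t > t_max:
            return spikes
        if rng.random() < rate_fn(t) / rate_max:  # keep with prob rate(t)/rate_max
            spikes.append(t)

def stdp_dw(dt, a_plus=0.005, a_minus=0.00525, tau=0.020):
    """Pairwise exponential STDP; dt = t_post - t_pre in seconds."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # pre before post: potentiation
    return -a_minus * math.exp(dt / tau)       # post before pre: depression

rng = random.Random(0)
# rate modulation confined to ~10 ms, i.e. within the STDP window
rate = lambda t: 20.0 + 30.0 * math.exp(-0.5 * ((t - 0.05) / 0.01) ** 2)
pre = inhom_poisson(rate, t_max=0.2, rate_max=60.0, rng=rng)
# net weight change for one postsynaptic spike at t = 55 ms
dw = sum(stdp_dw(0.055 - t_pre) for t_pre in pre)
```

Presynaptic spikes that cluster just before the postsynaptic spike yield net potentiation, which is how repeated rate-modulated patterns get captured.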
PMCID: PMC3203056  PMID: 22046113
2.  Spike-Based Population Coding and Working Memory 
PLoS Computational Biology  2011;7(2):e1001080.
Compelling behavioral evidence suggests that humans can make optimal decisions despite the uncertainty inherent in perceptual or motor tasks. A key question in neuroscience is how populations of spiking neurons can implement such probabilistic computations. In this article, we develop a comprehensive framework for optimal, spike-based sensory integration and working memory in a dynamic environment. We propose that probability distributions are inferred spike-per-spike in recurrently connected networks of integrate-and-fire neurons. As a result, these networks can combine sensory cues optimally, track the state of a time-varying stimulus and memorize accumulated evidence over periods much longer than the time constant of single neurons. Importantly, we propose that population responses and persistent working memory states represent entire probability distributions and not only single stimulus values. These memories are reflected by sustained, asynchronous patterns of activity which make relevant information available to downstream neurons within their short time window of integration. Model neurons act as predictive encoders, only firing spikes which account for new information that has not yet been signaled. Thus, spike times signal deterministically a prediction error, contrary to rate codes in which spike times are considered to be random samples of an underlying firing rate. As a consequence of this coding scheme, a multitude of spike patterns can reliably encode the same information. This results in weakly correlated, Poisson-like spike trains that are sensitive to initial conditions but robust to even high levels of external neural noise. This spike train variability reproduces the one observed in cortical sensory spike trains, but cannot be equated to noise. On the contrary, it is a consequence of optimal spike-based inference. In contrast, we show that rate-based models perform poorly when implemented with stochastically spiking neurons.
Author Summary
Most of our daily actions are subject to uncertainty. Behavioral studies have confirmed that humans handle this uncertainty in a statistically optimal manner. A key question then is what neural mechanisms underlie this optimality, i.e. how can neurons represent and compute with probability distributions. Previous approaches have proposed that probabilities are encoded in the firing rates of neural populations. However, such rate codes appear poorly suited to understand perception in a constantly changing environment. In particular, it is unclear how probabilistic computations could be implemented by biologically plausible spiking neurons. Here, we propose a network of spiking neurons that can optimally combine uncertain information from different sensory modalities and keep this information available for a long time. This implies that neural memories not only represent the most likely value of a stimulus but rather a whole probability distribution over it. Furthermore, our model suggests that each spike conveys new, essential information. Consequently, the observed variability of neural responses cannot simply be understood as noise but rather as a necessary consequence of optimal sensory integration. Our results therefore question strongly held beliefs about the nature of neural “signal” and “noise”.
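The predictive-encoder idea, that a neuron fires only when its spike corrects a prediction error, can be caricatured with a single readout unit. The sketch below is a heavily simplified assumption (one neuron, one-dimensional signal, hypothetical `threshold` and `kernel` parameters), not the authors' recurrent network model; it only shows why spike times become deterministic given the input.

```python
def predictive_encode(signal, dt=0.001, tau=0.02, threshold=0.05, kernel=0.1):
    """Greedy predictive encoder: emit a spike whenever the readout's
    reconstruction falls behind the signal by more than `threshold`.
    The estimate x_hat decays with time constant tau and jumps by `kernel`
    at each spike, so spikes signal prediction errors, not rate samples."""
    x_hat, spikes, recon = 0.0, [], []
    for i, x in enumerate(signal):
        x_hat += dt * (-x_hat / tau)   # leaky decay of the running estimate
        if x - x_hat > threshold:      # prediction error crossed threshold
            x_hat += kernel
            spikes.append(i * dt)
        recon.append(x_hat)
    return spikes, recon

sig = [0.5] * 500                      # constant target signal
spikes, recon = predictive_encode(sig)
# after a brief transient, the reconstruction tracks the signal closely
err = max(abs(x - r) for x, r in zip(sig[100:], recon[100:]))
```

In the full model many such neurons share the error signal, so any one of a multitude of spike patterns can encode the same information.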
PMCID: PMC3040643  PMID: 21379319
3.  A thesaurus for a neural population code 
eLife  2015;4:e06134.
Information is carried in the brain by the joint spiking patterns of large groups of noisy, unreliable neurons. This noise limits the capacity of the neural code and determines how information can be transmitted and read-out. To accurately decode, the brain must overcome this noise and identify which patterns are semantically similar. We use models of network encoding noise to learn a thesaurus for populations of neurons in the vertebrate retina responding to artificial and natural videos, measuring the similarity between population responses to visual stimuli based on the information they carry. This thesaurus reveals that the code is organized in clusters of synonymous activity patterns that are similar in meaning but may differ considerably in their structure. This organization is highly reminiscent of the design of engineered codes. We suggest that the brain may use this structure and show how it allows accurate decoding of novel stimuli from novel spiking patterns.
eLife digest
Our ability to perceive the world is dependent on information from our senses being passed between different parts of the brain. The information is encoded as patterns of electrical pulses or ‘spikes’, which other brain regions must be able to decipher. Cracking this code would thus enable us to predict the patterns of nerve impulses that would occur in response to specific stimuli, and ‘decode’ which stimuli had produced particular patterns of impulses.
This task is challenging in part because of its scale—vast numbers of stimuli are encoded by huge numbers of neurons that can send their spikes in many different combinations. Furthermore, neurons are inherently noisy and their response to identical stimuli may vary considerably in the number of spikes and their timing. This means that the brain cannot simply link a single unchanging pattern of firing with each stimulus, because these firing patterns are often distorted by biophysical noise.
Ganmor et al. have now modeled the effects of noise in a network of neurons in the retina (found at the back of the eye), and, in doing so, have provided insights into how the brain solves this problem. This has brought us a step closer to cracking the neural code. First, 10 second video clips of natural scenes and artificial stimuli were played on a loop to a sample of retina taken from a salamander, and the responses of nearly 100 neurons in the sample were recorded for two hours. Dividing the 10 second clip into short segments provided a series of 500 stimuli, which the network had been exposed to more than 600 times.
Ganmor et al. analyzed the responses of groups of 20 cells to each stimulus and found that physically similar firing patterns were not particularly likely to encode the same stimulus. This can be likened to the way that words such as ‘light’ and ‘night’ have similar structures but different meanings. Instead, the model reveals that each stimulus was represented by a cluster of firing patterns that bore little physical resemblance to one another, but which nevertheless conveyed the same meaning. To continue with the previous example, this is similar to the way that ‘light’ and ‘illumination’ have the same meaning but different structures.
Ganmor et al. use these new data to map the organization of the ‘vocabulary’ of populations of cells in the retina, and put together a kind of ‘thesaurus’ that enables new activity patterns of the retina to be decoded and could be used to crack the neural code. Furthermore, the organization of ‘synonyms’ is strikingly similar to codes that are favored in many forms of telecommunication. In these man-made codes, codewords that represent different items are chosen to be so distinct from each other that even if they were corrupted by noise, they could be correctly deciphered. Correspondingly, in the retina, patterns that carry the same meaning occupy a distinct area, and new patterns can be interpreted based on their proximity to these clusters.
PMCID: PMC4562117  PMID: 26347983
neural code; information; noise; entropy; natural stimuli; metric; retina; salamander
4.  A neurally-efficient implementation of sensory population decoding 
A sensory stimulus evokes activity in many neurons, creating a population response that must be “decoded” by the brain to estimate the parameters of that stimulus. Most decoding models have suggested complex neural circuits that compute optimal estimates of sensory parameters on the basis of responses in many sensory neurons. We propose a slightly suboptimal but practically simpler decoder. Decoding neurons integrate their inputs across 100 ms; incoming spikes are weighted by the preferred stimulus of the neuron of origin; and a local, cellular non-linearity approximates divisive normalization without dividing explicitly. The suboptimal decoder includes two simplifying approximations. It uses estimates of firing rate across the population rather than computing the total population response, and it implements divisive normalization with local cellular mechanisms of single neurons rather than more complicated neural circuit mechanisms. When applied to the practical problem of estimating target speed from a realistic simulation of the population response in extrastriate visual area MT, the suboptimal decoder has almost the same accuracy and precision as traditional decoding models. It succeeds in predicting the precision and imprecision of motor behavior using a suboptimal decoding computation because it adds only a small amount of imprecision to the code for target speed in MT, which is itself imprecise.
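For comparison, the explicit-division baseline that the proposed decoder approximates, vector averaging of preferred stimuli weighted by firing rates, can be written compactly. The tuning curves and speed range below are invented placeholders rather than the MT simulation used in the paper, and the division is performed explicitly here rather than by the local cellular nonlinearity the authors propose.

```python
import math

def population_response(speeds, target):
    """Gaussian tuning in log2(speed): a toy stand-in for an MT population."""
    return [math.exp(-0.5 * ((math.log2(target) - math.log2(s)) / 0.5) ** 2)
            for s in speeds]

def vector_average(preferred, rates):
    """Weighted-sum readout: each unit votes for its preferred (log) speed,
    and the vote is normalized by total activity. The explicit division in
    the last line is the step the paper replaces with a local nonlinearity."""
    num = sum(math.log2(p) * r for p, r in zip(preferred, rates))
    return 2.0 ** (num / sum(rates))

preferred = [2.0 ** (k / 4.0) for k in range(-8, 25)]  # 0.25-64 deg/s, log-spaced
rates = population_response(preferred, target=8.0)
estimate = vector_average(preferred, rates)
```

With a dense, symmetric population the estimate lands on the true target speed; the paper's contribution is showing how little precision is lost when the division is only approximated.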
PMCID: PMC3085943  PMID: 21451025
population decoding; divisive normalization; spike timing; MT; vector averaging
5.  Analysis of Between-Trial and Within-Trial Neural Spiking Dynamics 
Journal of neurophysiology  2008;99(5):2672-2693.
Recording single-neuron activity from a specific brain region across multiple trials in response to the same stimulus or execution of the same behavioral task is a common neurophysiology protocol. The raster plots of the spike trains often show strong between-trial and within-trial dynamics, yet the standard analysis of these data with the peristimulus time histogram (PSTH) and ANOVA does not consider between-trial dynamics. By itself, the PSTH does not provide a framework for statistical inference. We present a state-space generalized linear model (SS-GLM) to formulate a point process representation of between-trial and within-trial neural spiking dynamics. Our model has the PSTH as a special case. We provide a framework for model estimation, model selection, goodness-of-fit analysis, and inference. In an analysis of hippocampal neural activity recorded from a monkey performing a location-scene association task, we demonstrate how the SS-GLM may be used to answer frequently posed neurophysiological questions, including: What is the nature of the between-trial and within-trial task-specific modulation of the neural spiking activity? How can we characterize learning-related neural dynamics? What are the timescales and characteristics of the neuron’s biophysical properties? Our results demonstrate that the SS-GLM is a more informative tool than the PSTH and ANOVA for the analysis of multiple-trial neural responses and that it provides a quantitative characterization of the between-trial and within-trial neural dynamics readily visible in raster plots, as well as the less apparent fast (1–10 ms), intermediate (11–20 ms), and longer (>20 ms) timescale features of the neuron’s biophysical properties.
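The PSTH that the SS-GLM generalizes is simple to compute: bin spike times relative to stimulus onset and average the counts across trials. A minimal sketch (bin width and spike times are arbitrary examples):

```python
def psth(trials, t_max, bin_width):
    """Peristimulus time histogram: trial-averaged firing rate per time bin.
    trials: list of spike-time lists (seconds, relative to stimulus onset)."""
    n_bins = int(round(t_max / bin_width))
    counts = [0] * n_bins
    for spikes in trials:
        for t in spikes:
            b = int(t / bin_width)
            if 0 <= b < n_bins:
                counts[b] += 1
    return [c / (len(trials) * bin_width) for c in counts]  # spikes/s

trials = [[0.012, 0.055, 0.061], [0.010, 0.058], [0.015, 0.052, 0.090]]
rate = psth(trials, t_max=0.1, bin_width=0.02)
```

Note how the trial average discards exactly the between-trial structure the SS-GLM is designed to capture.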
PMCID: PMC2430469  PMID: 18216233
6.  Neuronal Spike Timing Adaptation Described with a Fractional Leaky Integrate-and-Fire Model 
PLoS Computational Biology  2014;10(3):e1003526.
The voltage trace of neuronal activities can follow multiple timescale dynamics that arise from correlated membrane conductances. Such processes can result in power-law behavior in which the membrane voltage cannot be characterized with a single time constant. The emergent effect of these membrane correlations is a non-Markovian process that can be modeled with a fractional derivative. A fractional derivative is a non-local process in which the value of the variable is determined by integrating a temporal weighted voltage trace, also called the memory trace. Here we developed and analyzed a fractional leaky integrate-and-fire model in which the exponent of the fractional derivative can vary from 0 to 1, with 1 representing the normal derivative. As the exponent of the fractional derivative decreases, the weights of the voltage trace increase. Thus, the value of the voltage is increasingly correlated with the trajectory of the voltage in the past. By varying only the fractional exponent, our model can reproduce upward and downward spike adaptations found experimentally in neocortical pyramidal cells and tectal neurons in vitro. The model also produces spikes with longer first-spike latency and high inter-spike variability with power-law distribution. We further analyze spike adaptation and the responses to noisy and oscillatory input. The fractional model generates reliable spike patterns in response to noisy input. Overall, the spiking activity of the fractional leaky integrate-and-fire model deviates from the spiking activity of the Markovian model and reflects the temporal accumulated intrinsic membrane dynamics that affect the response of the neuron to external stimulation.
Author Summary
Spike adaptation is a property of most neurons. When spike time adaptation occurs over multiple time scales, the dynamics can be described by a power-law. We study the computational properties of a leaky integrate-and-fire model with power-law adaptation. Instead of explicitly modeling the adaptation process by the contribution of slowly changing conductances, we use a fractional temporal derivative framework. The exponent of the fractional derivative represents the degree of adaptation of the membrane voltage, where 1 is the normal leaky integrator while values less than 1 produce increasing correlations in the voltage trace. The temporal correlation is interpreted as a memory trace that depends on the value of the fractional derivative. We identify the memory trace in the fractional model as the sum of the instantaneous differentiation weighted by a function that depends on the fractional exponent, and it provides non-local information to the incoming stimulus. The spiking dynamics of the fractional leaky integrate-and-fire model show memory dependence that can result in downward or upward spike adaptation. Our model provides a framework for understanding how long-range membrane voltage correlations affect spiking dynamics and information integration in neurons.
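The fractional derivative can be discretized so that the memory trace appears explicitly as a weighted sum over the past voltage trajectory. The sketch below uses a Grünwald-Letnikov-style step with a hard threshold-and-reset rule; it is one common discretization under illustrative parameters, not necessarily the scheme or parameter values used in the paper. Setting `alpha=1` recovers the ordinary leaky integrate-and-fire model.

```python
def fractional_lif(alpha, I, t_max=0.5, dt=0.001,
                   tau=0.02, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Fractional leaky integrate-and-fire via a Grunwald-Letnikov step.
    alpha=1 recovers the ordinary LIF; alpha<1 makes the update depend on a
    weighted sum over the whole past voltage trace (the memory trace)."""
    n_steps = int(t_max / dt)
    # Grunwald-Letnikov coefficients c_k = (-1)^k * binom(alpha, k)
    c = [1.0]
    for k in range(1, n_steps):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / k))
    v_hist, spikes = [v_rest], []
    for n in range(1, n_steps):
        drift = (-(v_hist[-1] - v_rest) + I) / tau
        memory = sum(c[k] * v_hist[n - k] for k in range(1, n))  # memory trace
        v = dt ** alpha * drift - memory
        if v >= v_th:
            spikes.append(n * dt)
            v = v_reset
        v_hist.append(v)
    return spikes

spikes_markov = fractional_lif(alpha=1.0, I=1.5)   # tonic, evenly spaced spikes
spikes_frac = fractional_lif(alpha=0.8, I=1.5)     # history-dependent firing
```

For `alpha=1` the coefficients reduce to `[1, -1, 0, 0, ...]` and the update collapses to a plain Euler step, which is why spacing between spikes is perfectly regular in that case.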
PMCID: PMC3967934  PMID: 24675903
7.  Bayesian Population Decoding of Spiking Neurons 
The timing of action potentials in spiking neurons depends on the temporal dynamics of their inputs and contains information about temporal fluctuations in the stimulus. Leaky integrate-and-fire neurons constitute a popular class of encoding models, in which spike times depend directly on the temporal structure of the inputs. However, optimal decoding rules for these models have only been studied explicitly in the noiseless case. Here, we study decoding rules for probabilistic inference of a continuous stimulus from the spike times of a population of leaky integrate-and-fire neurons with threshold noise. We derive three algorithms for approximating the posterior distribution over stimuli as a function of the observed spike trains. In addition to a reconstruction of the stimulus we thus obtain an estimate of the uncertainty as well. Furthermore, we derive a ‘spike-by-spike’ online decoding scheme that recursively updates the posterior with the arrival of each new spike. We use these decoding rules to reconstruct time-varying stimuli represented by a Gaussian process from spike trains of single neurons as well as neural populations.
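The 'spike-by-spike' recursive update has a simple grid-based caricature: maintain a discretized posterior over the stimulus and multiply in, at each time bin, the likelihood of the observed spikes and silences. The sketch below swaps the paper's leaky integrate-and-fire encoder with threshold noise for Poisson neurons with Gaussian tuning curves, an assumption made purely for brevity.

```python
import math

def gauss_tuning(pref, s, r_max=50.0, sigma=0.1):
    """Hypothetical Gaussian tuning curve (spikes/s)."""
    return r_max * math.exp(-0.5 * ((s - pref) / sigma) ** 2)

def decode_online(spike_events, prefs, grid, dt=0.001):
    """Spike-by-spike posterior update on a discretized stimulus grid.
    spike_events: list of (time_bin, neuron_index) pairs, at most one per bin.
    Each bin multiplies the posterior by the probability of the observed
    silences (exp(-rate*dt) for every neuron) and of the observed spike."""
    post = [1.0 / len(grid)] * len(grid)
    events = dict(spike_events)
    for b in range(max(events) + 1):
        for j, s in enumerate(grid):
            like = math.exp(-sum(gauss_tuning(p, s) for p in prefs) * dt)
            if b in events:
                like *= gauss_tuning(prefs[events[b]], s) * dt
            post[j] *= like
        z = sum(post)
        post = [p / z for p in post]
    return post

prefs = [i / 10.0 for i in range(11)]        # 11 neurons, preferred stimuli 0.0-1.0
grid = [i / 100.0 for i in range(101)]
events = [(2, 5), (7, 5), (12, 4), (18, 6), (25, 5)]  # spikes from cells near 0.5
post = decode_online(events, prefs, grid)
s_hat = grid[max(range(len(grid)), key=lambda j: post[j])]
```

Because the whole posterior is carried forward, the decoder delivers both a stimulus estimate and its uncertainty, the key point of the abstract.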
PMCID: PMC2790948  PMID: 20011217
Bayesian decoding; population coding; spiking neurons; approximate inference
8.  Population Decoding in Rat Barrel Cortex: Optimizing the Linear Readout of Correlated Population Responses 
PLoS Computational Biology  2014;10(1):e1003415.
Sensory information is encoded in the response of neuronal populations. How might this information be decoded by downstream neurons? Here we analyzed the responses of simultaneously recorded barrel cortex neurons to sinusoidal vibrations of varying amplitudes preceded by three adapting stimuli of 0, 6 and 12 µm in amplitude. Using the framework of signal detection theory, we quantified the performance of a linear decoder which sums the responses of neurons after applying an optimum set of weights. Optimum weights were found by the analytical solution that maximized the average signal-to-noise ratio based on Fisher linear discriminant analysis. This provided a biologically plausible decoder that took into account the neuronal variability, covariability, and signal correlations. The optimal decoder achieved consistent improvement in discrimination performance over simple pooling. Decorrelating neuronal responses by trial shuffling revealed that, unlike pooling, the performance of the optimal decoder was minimally affected by noise correlation. In the non-adapted state, noise correlation enhanced the performance of the optimal decoder for some populations. Under adaptation, however, noise correlation always degraded the performance of the optimal decoder. Nonetheless, sensory adaptation improved the performance of the optimal decoder mainly by increasing signal correlation more than noise correlation. Adaptation induced little systematic change in the relative direction of signal and noise. Thus, a decoder which was optimized under the non-adapted state generalized well across states of adaptation.
Author Summary
In the natural environment, animals are constantly exposed to sensory stimulation. A key question in systems neuroscience is how attributes of a sensory stimulus can be “read out” from the activity of a population of brain cells. We chose to investigate this question in the whisker-mediated touch system of rats because of its well-established anatomy and exquisite functionality. The whisker system is one of the major channels through which rodents acquire sensory information about their surrounding environment. The response properties of brain cells dynamically adjust to the prevailing diet of sensory stimulation, a process termed sensory adaptation. Here, we applied a biologically plausible scheme whereby different brain cells contribute to sensory readout with different weights. We established the set of weights that provide the optimal readout under different states of adaptation. The results yield an upper bound for the efficiency of coding sensory information. We found that the ability to decode sensory information improves with adaptation. However, a readout mechanism that does not adjust to the state of adaptation can still perform remarkably well.
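The analytical weights referred to here are the classic Fisher linear discriminant solution, w = C^-1 (mu_a - mu_b), where mu_a and mu_b are the mean population responses to the two stimuli and C is the noise covariance. A two-neuron toy example with invented rates and covariance shows why this beats simple pooling (equal weights) when noise is correlated:

```python
def fisher_weights(mu_a, mu_b, cov):
    """Optimal linear readout for two stimuli: w = C^-1 (mu_a - mu_b),
    written out for a 2-neuron population with 2x2 noise covariance C."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    dmu0, dmu1 = mu_a[0] - mu_b[0], mu_a[1] - mu_b[1]
    return [(d * dmu0 - b * dmu1) / det, (-c * dmu0 + a * dmu1) / det]

def snr(w, dmu, cov):
    """Squared signal (w . dmu) over noise variance (w^T C w)."""
    signal = (w[0] * dmu[0] + w[1] * dmu[1]) ** 2
    noise = (w[0] * (cov[0][0] * w[0] + cov[0][1] * w[1])
             + w[1] * (cov[1][0] * w[0] + cov[1][1] * w[1]))
    return signal / noise

mu_a, mu_b = [10.0, 4.0], [8.0, 5.0]   # mean responses under the two stimuli
cov = [[4.0, 1.5], [1.5, 2.0]]         # correlated trial-to-trial noise
dmu = [mu_a[0] - mu_b[0], mu_a[1] - mu_b[1]]
w_opt = fisher_weights(mu_a, mu_b, cov)
gain = snr(w_opt, dmu, cov) / snr([1.0, 1.0], dmu, cov)  # vs simple pooling
```

Here the two neurons' mean rates move in opposite directions, so equal-weight pooling largely cancels the signal while the Fisher weights exploit it.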
PMCID: PMC3879135  PMID: 24391487
9.  Frequency-Invariant Representation of Interaural Time Differences in Mammals 
PLoS Computational Biology  2011;7(3):e1002013.
Interaural time differences (ITDs) are the major cue for localizing low-frequency sounds. The activity of neuronal populations in the brainstem encodes ITDs with exquisite temporal acuity. The response of single neurons, however, also changes with other stimulus properties like the spectral composition of sound. The influence of stimulus frequency is very different across neurons, and thus it is unclear how ITDs are encoded independently of stimulus frequency by populations of neurons. Here we fitted a statistical model to single-cell rate responses of the dorsal nucleus of the lateral lemniscus. The model was used to evaluate the impact of single-cell response characteristics on the frequency-invariant mutual information between rate response and ITD. We found a rough correspondence between the measured cell characteristics and those predicted by computing mutual information. Furthermore, we studied two readout mechanisms, a linear classifier and a two-channel rate difference decoder. The latter turned out to be better suited to decode the population patterns obtained from the fitted model.
Author Summary
Neuronal codes are usually studied by estimating how much information the brain activity carries about the stimulus. On a single cell level, the relevant features of neuronal activity such as the firing rate or spike timing are readily available. On a population level, where many neurons together encode a stimulus property, finding the most appropriate activity features is less obvious, particularly because the neurons respond with a huge cell-to-cell variability. Here, using the example of the neuronal representation of interaural time differences, we show that the quality of the population code strongly depends on the assumption — or the model — of the population readout. We argue that invariances are useful constraints to identify “good” population codes. Based on these ideas, we suggest that the representation of interaural time differences serves a two-channel code in which the difference between the summed activities of the neurons in the two hemispheres exhibits an invariant and linear dependence on interaural time difference.
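The two-channel rate difference readout can be illustrated with two hypothetical hemispheric tuning curves: the summed activity of each hemisphere is a broad function of ITD, and the difference between the two sums is monotonic (and roughly linear) around the midline. The tuning shapes and parameters below are invented for illustration only.

```python
import math

def hemisphere_rate(itd_us, best_itd_us, sigma_us=300.0):
    """Summed activity of one hemisphere's population (arbitrary units)."""
    return math.exp(-0.5 * ((itd_us - best_itd_us) / sigma_us) ** 2)

def two_channel_decode(itd_us):
    """Difference of the two hemispheric pools; monotonic in ITD near midline."""
    left = hemisphere_rate(itd_us, +200.0)    # prefers sound leading in one ear
    right = hemisphere_rate(itd_us, -200.0)   # mirror-image tuning
    return left - right

# the difference signal rises monotonically across the physiological ITD range
diffs = [two_channel_decode(float(itd)) for itd in range(-300, 301, 50)]
```

Because only the difference of two pooled rates is read out, the code is insensitive to the huge cell-to-cell variability within each pool.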
PMCID: PMC3060160  PMID: 21445227
10.  Information Carried by Population Spike Times in the Whisker Sensory Cortex can be Decoded Without Knowledge of Stimulus Time 
Computational analyses have revealed that precisely timed spikes emitted by somatosensory cortical neuronal populations encode basic stimulus features in the rat's whisker sensory system. Efficient spike-time-based decoding schemes, both for the spatial location of a stimulus and for the kinetic features of complex whisker movements, have been defined. To date, these decoding schemes have been based upon spike times referenced to an external temporal frame – the time of the stimulus itself. Such schemes are limited by the requirement of precise knowledge of the stimulus time signal, and it is not clear whether stimulus times are known to rats making sensory judgments. Here, we first review studies of the information obtained from spike timing referenced to the stimulus time. Then we explore new methods for extracting spike train information independently of any external temporal reference frame. These proposed methods are based on the detection of stimulus-dependent differences in the firing time within a neuronal population. We apply them to a data set using single-whisker stimulation in anesthetized rats and find that stimulus site can be decoded based on the millisecond-range relative differences in spike times even without knowledge of stimulus time. If spike counts alone are measured over tens or hundreds of milliseconds rather than milliseconds, such decoders are much less effective. These results suggest that decoding schemes based on millisecond-precise spike times are likely to subserve robust and information-rich transmission in the somatosensory system.
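The core idea, that relative spike times survive ignorance of stimulus onset because a shared latency shift cancels in within-trial differences, can be checked with a toy simulation. The latencies, jitter, and 5 ms site-dependent lead below are invented values, not the paper's data.

```python
import random

def decode_relative(first_spike_a, first_spike_b):
    """Decode from relative timing only: a shared (unknown) latency shift
    affects both units equally, so it cancels in the difference."""
    return "site1" if first_spike_a - first_spike_b < 0 else "site2"

rng = random.Random(1)
trials, correct = 200, 0
for _ in range(trials):
    onset = rng.uniform(0.0, 0.1)                 # stimulus time, unknown to decoder
    site = rng.choice(["site1", "site2"])
    lead = -0.005 if site == "site1" else 0.005   # 5 ms site-dependent lead
    a = onset + 0.015 + lead / 2 + rng.gauss(0.0, 0.001)
    b = onset + 0.015 - lead / 2 + rng.gauss(0.0, 0.001)
    if decode_relative(a, b) == site:
        correct += 1
accuracy = correct / trials
```

The decoder never sees `onset`, yet performance stays near perfect because the millisecond-range spike-time difference is stimulus specific.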
PMCID: PMC3059688  PMID: 21423503
information theory; somatosensation; neural coding; decoding; spike patterns; population coding
11.  The Temporal Winner-Take-All Readout 
PLoS Computational Biology  2009;5(2):e1000286.
How can the central nervous system make accurate decisions about external stimuli at short times on the basis of the noisy responses of nerve cell populations? It has been suggested that spike time latency is the source of fast decisions. Here, we propose a simple and fast readout mechanism, the temporal Winner-Take-All (tWTA), and undertake a study of its accuracy. The tWTA is studied in the framework of a statistical model for the dynamic response of a nerve cell population to an external stimulus. Each cell is characterized by a preferred stimulus, a unique value of the external stimulus for which it responds fastest. The tWTA estimate for the stimulus is the preferred stimulus of the cell that fired the first spike in the entire population. We then pose the questions: How accurate is the tWTA readout? What are the parameters that govern this accuracy? What are the effects of noise correlations and baseline firing? We find that tWTA sensitivity to the stimulus grows algebraically fast with the number of cells in the population, N, in contrast to the logarithmic slow scaling of the conventional rate-WTA sensitivity with N. Noise correlations in first-spike times of different cells can limit the accuracy of the tWTA readout, even in the limit of large N, similar to the effect that has been observed in population coding theory. We show that baseline firing also has a detrimental effect on tWTA accuracy. We suggest a generalization of the tWTA, the n-tWTA, which estimates the stimulus by the identity of the group of cells firing the first n spikes and show how this simple generalization can overcome the detrimental effect of baseline firing. Thus, the tWTA can provide fast and accurate responses discriminating between a small number of alternatives. High accuracy in estimation of a continuous stimulus can be obtained using the n-tWTA.
Author Summary
Considerable experimental as well as theoretical effort has been devoted to the investigation of the neural code. The traditional approach has been to study the information content of the total neural spike count during a long period of time. However, in many cases, the central nervous system is required to estimate the external stimulus at much shorter times. What readout mechanism could account for such fast decisions? We suggest a readout mechanism that estimates the external stimulus by the first spike in the population, the tWTA. We show that the tWTA can account for accurate discriminations between a small number of choices. We find that the accuracy of the tWTA is limited by the neuronal baseline firing. We further find that, due to baseline firing, the single first spike does not encode sufficient information for estimating a continuous variable, such as the direction of motion of a visual stimulus, with fine resolution. In such cases, fast and accurate decisions can be obtained by a generalization of the tWTA to a readout that estimates the stimulus by the first n spikes fired by the population, where n is larger than the mean number of baseline spikes in the population.
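Both readouts are easy to state in code: the tWTA reports the preferred stimulus of the first cell to fire, and the n-tWTA averages the preferred stimuli of the cells contributing the first n spikes. The latency model below (tuning-dependent delay plus exponential jitter) is an invented stand-in for the paper's statistical model.

```python
import random

def t_wta(first_spikes, preferred):
    """tWTA: report the preferred stimulus of the first cell to fire."""
    return preferred[min(range(len(first_spikes)), key=first_spikes.__getitem__)]

def n_t_wta(first_spikes, preferred, n):
    """n-tWTA: average the preferred stimuli of the cells firing the first
    n spikes, which dilutes spurious early baseline spikes."""
    order = sorted(range(len(first_spikes)), key=first_spikes.__getitem__)
    return sum(preferred[i] for i in order[:n]) / n

rng = random.Random(2)
n_cells = 200
preferred = [i / n_cells for i in range(n_cells)]
stimulus = 0.5
# first-spike latency: shorter for cells tuned near the stimulus, plus jitter
latencies = [0.010 + 0.050 * abs(p - stimulus) + rng.expovariate(200.0)
             for p in preferred]
estimate = n_t_wta(latencies, preferred, n=10)
```

With n = 1 this reduces to the plain tWTA; increasing n trades a little speed for robustness against baseline spikes.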
PMCID: PMC2633619  PMID: 19229309
12.  Input Dependent Cell Assembly Dynamics in a Model of the Striatal Medium Spiny Neuron Network 
The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals and receives an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTHs) of the MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioral task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long, behaviorally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics are still observed when the input is sufficiently weak. However, if cortical excitation strength is increased, more regularly firing and completely quiescent cells are found, depending on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behavior. We investigate how sudden switches in excitation interact with network-generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and outline the range of parameters where this behavior is shown. PSTHs of the model cell population display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving, stimulus-dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow, coherent, task-dependent responses which could be utilized by the animal in behavior.
PMCID: PMC3306002  PMID: 22438838
striatum; computational modeling; inhibition; medium spiny neuron; cell assembly; population dynamics; spiking network
13.  A wavelet-based neural model to optimize and read out a temporal population code 
It has been proposed that the dense excitatory local connectivity of the neo-cortex plays a specific role in the transformation of spatial stimulus information into a temporal representation or a temporal population code (TPC). TPC provides for a rapid, robust, and high-capacity encoding of salient stimulus features with respect to position, rotation, and distortion. The TPC hypothesis gives a functional interpretation to a core feature of the cortical anatomy: its dense local and sparse long-range connectivity. Thus far, the question of how the TPC encoding can be decoded in downstream areas has not been addressed. Here, we present a neural circuit that decodes the spectral properties of the TPC using a biologically plausible implementation of a Haar transform. We perform a systematic investigation of our model in a recognition task using a standardized stimulus set. We consider alternative implementations using either regular spiking or bursting neurons and a range of spectral bands. Our results show that our wavelet readout circuit provides for the robust decoding of the TPC and further compresses the code without losing speed or quality of decoding. We show that in the TPC signal the relevant stimulus information is present in the frequencies around 100 Hz. Our results show that the TPC is constructed around a small number of coding components that can be well decoded by wavelet coefficients in a neuronal implementation. The solution to the TPC decoding problem proposed here suggests that cortical processing streams might well consist of sequential operations in which spatio-temporal transformations at lower levels form a compact stimulus encoding (the TPC) that is subsequently decoded back to a spatial representation using wavelet transforms.
In addition, the results presented here show that different properties of the stimulus might be transmitted to further processing stages using different frequency components that are captured by appropriately tuned wavelet-based decoders.
PMCID: PMC3342589  PMID: 22563314
temporal coding; visual system; wavelet transform; pattern recognition; spike neural network; Haar wavelets
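The wavelet readout above is built on the Haar transform. A minimal pure-Python sketch of an orthonormal Haar decomposition applied to a binned rate signal, with a per-band energy readout (illustrative only; function names are ours, and this does not reproduce the paper's spiking-neuron implementation):

```python
import math

def haar_transform(signal):
    """Full orthonormal Haar decomposition of a signal whose length is a power of two.

    Returns [scaled overall average, coarsest detail, ..., finest details];
    total energy (sum of squares) is preserved by orthonormality.
    """
    n = len(signal)
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    approx = [float(x) for x in signal]
    levels = []  # detail coefficients, finest level first
    s2 = math.sqrt(2.0)
    while len(approx) > 1:
        pairs = list(zip(approx[0::2], approx[1::2]))
        levels.append([(a - b) / s2 for a, b in pairs])
        approx = [(a + b) / s2 for a, b in pairs]
    out = approx[:]  # single remaining coefficient: the scaled overall average
    for detail in reversed(levels):
        out.extend(detail)
    return out

def band_energies(coeffs):
    """Energy per Haar band: [approximation, coarsest detail band, ..., finest]."""
    energies = [coeffs[0] ** 2]
    start, size = 1, 1
    while start < len(coeffs):
        energies.append(sum(c * c for c in coeffs[start:start + size]))
        start += size
        size *= 2
    return energies
```

A readout like the paper's could then compare energy across bands (e.g., the band covering roughly 100 Hz for a given bin width) to classify stimuli.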
14.  How Noisy Adaptation of Neurons Shapes Interspike Interval Histograms and Correlations 
PLoS Computational Biology  2010;6(12):e1001026.
Channel noise is the dominant intrinsic noise source of neurons causing variability in the timing of action potentials and interspike intervals (ISI). Slow adaptation currents are observed in many cells and strongly shape response properties of neurons. These currents are mediated by finite populations of ionic channels and may thus carry a substantial noise component. Here we study the effect of such adaptation noise on the ISI statistics of an integrate-and-fire model neuron by means of analytical techniques and extensive numerical simulations. We contrast this stochastic adaptation with the commonly studied case of a fast fluctuating current noise and a deterministic adaptation current (corresponding to an infinite population of adaptation channels). We derive analytical approximations for the ISI density and ISI serial correlation coefficient for both cases. For fast fluctuations and deterministic adaptation, the ISI density is well approximated by an inverse Gaussian (IG) and the ISI correlations are negative. In marked contrast, for stochastic adaptation, the density is more peaked and has a heavier tail than an IG density and the serial correlations are positive. A numerical study of the mixed case where both fast fluctuations and adaptation channel noise are present reveals a smooth transition between the analytically tractable limiting cases. Our conclusions are furthermore supported by numerical simulations of a biophysically more realistic Hodgkin-Huxley type model. Our results could be used to infer the dominant source of noise in neurons from their ISI statistics.
Author Summary
Neurons of sensory systems encode signals from the environment by sequences of electrical pulses — so-called spikes. This coding of information is fundamentally limited by the presence of intrinsic neural noise. One major noise source is “channel noise” that is generated by the random activity of various types of ion channels in the cell membrane. Slow adaptation currents can also be a source of channel noise. Adaptation currents profoundly shape the signal transmission properties of a neuron by emphasizing fast changes in the stimulus but adapting the spiking frequency to slow stimulus components. Here, we mathematically analyze the effects of both slow adaptation channel noise and fast channel noise on the statistics of spike times in adapting neuron models. Surprisingly, the two noise sources result in qualitatively different distributions and correlations of time intervals between spikes. Our findings add a novel aspect to the function of adaptation currents and can also be used to experimentally distinguish adaptation noise and fast channel noise on the basis of spike sequences.
PMCID: PMC3002986  PMID: 21187900
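The two ISI statistics this abstract contrasts are easy to compute from data. A minimal sketch of the inverse Gaussian ISI density and the lag-k serial correlation coefficient (pure Python; names and parameterization are ours, using the standard mean/shape form of the inverse Gaussian):

```python
import math

def inverse_gaussian_pdf(t, mu, lam):
    """Inverse Gaussian ISI density with mean mu and shape parameter lam."""
    return math.sqrt(lam / (2.0 * math.pi * t ** 3)) * \
        math.exp(-lam * (t - mu) ** 2 / (2.0 * mu ** 2 * t))

def serial_correlation(isis, lag=1):
    """Serial correlation coefficient rho_lag of an interspike-interval sequence.

    Negative values indicate long ISIs tend to follow short ones (and vice
    versa); positive values indicate like follows like.
    """
    n = len(isis) - lag
    mean = sum(isis) / len(isis)
    var = sum((x - mean) ** 2 for x in isis) / len(isis)
    cov = sum((isis[i] - mean) * (isis[i + lag] - mean) for i in range(n)) / n
    return cov / var
```

Per the abstract, deterministic adaptation with fast noise should yield negative values from `serial_correlation`, while stochastic adaptation should yield positive ones.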
15.  Temporal Correlation Mechanisms and Their Role in Feature Selection: A Single-Unit Study in Primate Somatosensory Cortex 
PLoS Biology  2014;12(11):e1002004.
How neurons pay attention: Top-down selective attention mediates feature selection by reducing the noise correlations in neural populations and enhancing the synchronized activity across subpopulations that encode the relevant features of sensory stimuli.
Studies in vision show that attention enhances the firing rates of cells when it is directed towards their preferred stimulus feature. However, it is unknown whether other sensory systems employ this mechanism to mediate feature selection within their modalities. Moreover, whether feature-based attention modulates the correlated activity of a population is unclear. Indeed, temporal correlation codes such as spike-synchrony and spike-count correlations (rsc) are believed to play a role in stimulus selection by increasing the signal and reducing the noise in a population, respectively. Here, we investigate (1) whether feature-based attention biases the correlated activity between neurons when attention is directed towards their common preferred feature, (2) the interplay between spike-synchrony and rsc during feature selection, and (3) whether feature attention effects are common across the visual and tactile systems. Single-unit recordings were made in secondary somatosensory cortex of three non-human primates while animals engaged in tactile feature (orientation and frequency) and visual discrimination tasks. We found that both firing rate and spike-synchrony between neurons with similar feature selectivity were enhanced when attention was directed towards their preferred feature. However, attention effects on spike-synchrony were twice as large as those on firing rate, and had a tighter relationship with behavioral performance. Further, we observed increased rsc when attention was directed towards the visual modality (i.e., away from touch). These data suggest that similar feature selection mechanisms are employed in vision and touch, and that temporal correlation codes such as spike-synchrony play a role in mediating feature selection. 
We posit that feature-based selection operates by implementing multiple mechanisms that reduce the overall noise levels in the neural population and synchronize activity across subpopulations that encode the relevant features of sensory stimuli.
Author Summary
Attention can select stimuli in space based on the stimulus features most relevant for a task. Attention effects have been linked to several important phenomena such as modulations in neuronal spiking rate (i.e., the average number of spikes per unit time) and spike-spike synchrony between neurons. Attention has also been associated with spike count correlations, a measure that is thought to reflect correlated noise in the population of neurons. Here, we studied whether feature-based attention biases the correlated activity between neurons when attention is directed towards their common preferred feature. Simultaneous single-unit recordings were obtained from multiple neurons in secondary somatosensory cortex in non-human primates performing feature-attention tasks. Both firing rate and spike-synchrony were enhanced when attention was directed towards the preferred feature of cells. However, attention effects on spike-synchrony had a tighter relationship with behavior. Further, attention decreased spike-count correlations when it was directed towards the receptive field of cells. Our data indicate that temporal correlation codes play a role in mediating feature selection, and are consistent with a feature-based selection model that operates by reducing the overall noise in a population and synchronizing activity across subpopulations that encode the relevant features of sensory stimuli.
PMCID: PMC4244037  PMID: 25423284
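The spike-count correlation (rsc) measured here is simply the Pearson correlation of trial-by-trial spike counts from a pair of simultaneously recorded neurons. A minimal sketch (pure Python; the function name is ours):

```python
import math

def spike_count_correlation(counts_a, counts_b):
    """Pearson correlation (r_sc) of trial-by-trial spike counts from two neurons."""
    assert len(counts_a) == len(counts_b) and len(counts_a) > 1
    n = len(counts_a)
    ma = sum(counts_a) / n
    mb = sum(counts_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(counts_a, counts_b)) / n
    sa = math.sqrt(sum((a - ma) ** 2 for a in counts_a) / n)
    sb = math.sqrt(sum((b - mb) ** 2 for b in counts_b) / n)
    return cov / (sa * sb)
```

In practice rsc is usually computed after z-scoring counts within each stimulus condition, so that stimulus-driven modulation does not masquerade as noise correlation.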
16.  A stimulus-dependent spike threshold is an optimal neural coder 
A neural code based on sequences of spikes can consume a significant portion of the brain's energy budget. Thus, energy considerations would dictate that spiking activity be kept as low as possible. However, a high spike-rate improves the coding and representation of signals in spike trains, particularly in sensory systems. These are competing demands, and selective pressure has presumably worked to optimize coding by apportioning a minimum number of spikes so as to maximize coding fidelity. The mechanisms by which a neuron generates spikes while maintaining a fidelity criterion are not known. Here, we show that a signal-dependent neural threshold, similar to a dynamic or adapting threshold, optimizes the trade-off between spike generation (encoding) and fidelity (decoding). The threshold mimics a post-synaptic membrane (a low-pass filter) and serves as an internal decoder. Further, it sets the average firing rate (the energy constraint). The decoding process provides an internal copy of the coding error to the spike-generator which emits a spike when the error equals or exceeds a spike threshold. When optimized, the trade-off leads to a deterministic spike firing-rule that generates optimally timed spikes so as to maximize fidelity. The optimal coder is derived in closed-form in the limit of high spike-rates, when the signal can be approximated as a piece-wise constant signal. The predicted spike-times are close to those obtained experimentally in the primary electrosensory afferent neurons of weakly electric fish (Apteronotus leptorhynchus) and pyramidal neurons from the somatosensory cortex of the rat. We suggest that KCNQ/Kv7 channels (underlying the M-current) are good candidates for the decoder. They are widely coupled to metabolic processes and do not inactivate. We conclude that the neural threshold is optimized to generate an energy-efficient and high-fidelity neural code.
PMCID: PMC4451370  PMID: 26082710
neural coding; coding fidelity; energy-efficient coding; dynamic threshold; spike-timing; source coding; spike-threshold
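The encoder described above, in which an internal low-pass decoder feeds a copy of the coding error back to the spike generator, can be sketched in a few lines. This is a simplified discrete-time caricature (pure Python; the unit kernel amplitude, parameter names, and the simple exponential filter are our assumptions, not the paper's exact model):

```python
import math

def threshold_encode(signal, dt, tau, theta):
    """Emit a spike whenever the coding error (signal minus reconstruction)
    reaches the threshold theta.

    The internal decoder is an exponential, post-synaptic-membrane-like filter
    of the emitted spike train; each spike adds a unit-amplitude kernel.
    Returns the indices of time steps at which spikes are emitted.
    """
    decay = math.exp(-dt / tau)
    recon = 0.0
    spikes = []
    for i, s in enumerate(signal):
        recon *= decay
        if s - recon >= theta:
            spikes.append(i)
            recon += 1.0
    return spikes
```

Raising theta lowers the firing rate (the energy constraint) at the cost of a larger reconstruction error, which is the encode/decode trade-off the paper optimizes.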
17.  Biophysical Basis for Three Distinct Dynamical Mechanisms of Action Potential Initiation 
PLoS Computational Biology  2008;4(10):e1000198.
Transduction of graded synaptic input into trains of all-or-none action potentials (spikes) is a crucial step in neural coding. Hodgkin identified three classes of neurons with qualitatively different analog-to-digital transduction properties. Despite widespread use of this classification scheme, a generalizable explanation of its biophysical basis has not been described. We recorded from spinal sensory neurons representing each class and reproduced their transduction properties in a minimal model. With phase plane and bifurcation analysis, each class of excitability was shown to derive from distinct spike initiating dynamics. Excitability could be converted between all three classes by varying single parameters; moreover, several parameters, when varied one at a time, had functionally equivalent effects on excitability. From this, we conclude that the spike-initiating dynamics associated with each of Hodgkin's classes represent different outcomes in a nonlinear competition between oppositely directed, kinetically mismatched currents. Class 1 excitability occurs through a saddle node on invariant circle bifurcation when net current at perithreshold potentials is inward (depolarizing) at steady state. Class 2 excitability occurs through a Hopf bifurcation when, despite net current being outward (hyperpolarizing) at steady state, spike initiation occurs because inward current activates faster than outward current. Class 3 excitability occurs through a quasi-separatrix crossing when fast-activating inward current overpowers slow-activating outward current during a stimulus transient, although slow-activating outward current dominates during constant stimulation. Experiments confirmed that different classes of spinal lamina I neurons express the subthreshold currents predicted by our simulations and, further, that those currents are necessary for the excitability in each cell class. 
Thus, our results demonstrate that all three classes of excitability arise from a continuum in the direction and magnitude of subthreshold currents. Through detailed analysis of the spike-initiating process, we have explained a fundamental link between biophysical properties and qualitative differences in how neurons encode sensory input.
Author Summary
Information is transmitted through the nervous system in the form of action potentials or spikes. Contrary to popular belief, a spike is not generated instantaneously when membrane potential crosses some preordained threshold. In fact, different neurons employ different rules to determine when and why they spike. These different rules translate into diverse spiking patterns that have been observed experimentally and replicated time and again in computational models. In this study, our aim was not simply to replicate different spiking patterns; instead, we sought to provide deeper insight into the connection between biophysics and neural coding by relating each to the process of spike initiation. We show that Hodgkin's three classes of excitability result from a nonlinear competition between oppositely directed, kinetically mismatched currents; the outcome of that competition is manifested as dynamically distinct spike-initiating mechanisms. Our results highlight the benefits of forward engineering minimal models capable of reproducing phenomena of interest and then dissecting those models in order to identify general explanations of how those phenomena arise. Furthermore, understanding nonlinear dynamical processes such as spike initiation is crucial for definitively explaining how biophysical properties impact neural coding.
PMCID: PMC2551735  PMID: 18846205
18.  Temporal Encoding in a Nervous System 
PLoS Computational Biology  2011;7(5):e1002041.
We examined the extent to which temporal encoding may be implemented by single neurons in the cercal sensory system of the house cricket Acheta domesticus. We found that these neurons exhibit a greater-than-expected coding capacity, due in part to an increased precision in brief patterns of action potentials. We developed linear and non-linear models for decoding the activity of these neurons. We found that the stimuli associated with short-interval patterns of spikes (ISIs of 8 ms or less) could be predicted better by second-order models as compared to linear models. Finally, we characterized the difference between these linear and second-order models in a low-dimensional subspace, and showed that modification of the linear models along only a few dimensions improved their predictive power to parity with the second order models. Together these results show that single neurons are capable of using temporal patterns of spikes as fundamental symbols in their neural code, and that they communicate specific stimulus distributions to subsequent neural structures.
Author Summary
The information coding schemes used within nervous systems have been the focus of an entire field within neuroscience. An unresolved issue within the general coding problem is the determination of the neural “symbols” with which information is encoded in neural spike trains, analogous to the determination of the nucleotide sequences used to represent proteins in molecular biology. The goal of our study was to determine if pairs of consecutive action potentials contain more or different information about the stimuli that elicit them than would be predicted from an analysis of individual action potentials. We developed linear and non-linear coding models and used likelihood analysis to address this question for sensory interneurons in the cricket cercal sensory system. Our results show that these neurons' spike trains can be decomposed into sequences of two neural symbols: isolated single spikes and short-interval spike doublets. Given the ubiquitous nature of similar neural activity reported in other systems, we suspect that the implementation of such temporal encoding schemes may be widespread across animal phyla. Knowledge of the basic coding units used by single cells will help in building the large-scale neural network models necessary for understanding how nervous systems function.
PMCID: PMC3088658  PMID: 21573206
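Decomposing a spike train into the two neural symbols identified above, isolated single spikes and short-interval doublets, can be done with a greedy left-to-right parse. A minimal sketch (pure Python; the greedy parsing rule and function name are our assumptions, using the 8 ms doublet criterion from the abstract):

```python
def decompose_spike_train(spike_times, max_doublet_isi=8.0):
    """Greedy left-to-right parse of a spike train (times in ms) into
    'doublet' symbols (ISI <= max_doublet_isi) and isolated 'single' spikes."""
    symbols = []
    i = 0
    while i < len(spike_times):
        if (i + 1 < len(spike_times)
                and spike_times[i + 1] - spike_times[i] <= max_doublet_isi):
            symbols.append(("doublet", spike_times[i], spike_times[i + 1]))
            i += 2
        else:
            symbols.append(("single", spike_times[i]))
            i += 1
    return symbols
```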
19.  Intensity Coding in the Frog Retina  
The Journal of General Physiology  1973;61(3):305-322.
The impulse discharge of single on-off neurons and a graded field potential, the proximal negative response (PNR), were simultaneously recorded with an extracellular microelectrode in the inner frog retina. Normalized amplitude-intensity functions for the on-response of the PNR and the neuron's post-stimulus time histogram (PSTH) were nearly coincident and typically showed a dynamic range spanning approximately 2 log units of intensity. Thus a nearly linear relation is found between the amplitude of the PNR and the neuron's PSTH. A neuron's PSTH amplitude and maximum instantaneous frequency of discharge were usually highly correlated, but occasional marked disparities indicate that temporal jitter of the first spike latency is an additional, relatively independent variable influencing PSTH amplitude. It typically changes by a factor of 20–30 over the intensity range. These and other findings have implications for the functional significance of the PNR and the PSTH, for a possible linear link between amacrine and on-off ganglion cells, and for a mechanism of intensity coding in which temporal jitter of latency exerts a major role.
PMCID: PMC2203456  PMID: 4540179
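The PSTH used here (and in several of the other studies listed) is a trial-averaged histogram of spike times relative to stimulus onset. A minimal sketch (pure Python; function name and the rate normalization are ours):

```python
def psth(trials, bin_width, t_max):
    """Post-stimulus time histogram: mean firing rate per bin across trials.

    trials: list of per-trial spike-time lists, in the same units as
    bin_width and t_max; returns rates in spikes per unit time.
    """
    n_bins = int(round(t_max / bin_width))
    counts = [0] * n_bins
    for trial in trials:
        for t in trial:
            if 0.0 <= t < t_max:
                counts[int(t // bin_width)] += 1
    norm = len(trials) * bin_width
    return [c / norm for c in counts]
```

With times in seconds, a bin containing one spike from each of two trials at `bin_width=0.01` reads out as 100 spikes/s.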
20.  Spatio-temporal correlations and visual signalling in a complete neuronal population 
Nature  2008;454(7207):995-999.
Statistical dependencies in the responses of sensory neurons govern both the amount of stimulus information conveyed and the means by which downstream neurons can extract it. Although a variety of measurements indicate the existence of such dependencies [1–3], their origin and importance for neural coding are poorly understood. Here we analyse the functional significance of correlated firing in a complete population of macaque parasol retinal ganglion cells using a model of multi-neuron spike responses [4,5]. The model, with parameters fit directly to physiological data, simultaneously captures both the stimulus dependence and detailed spatio-temporal correlations in population responses, and provides two insights into the structure of the neural code. First, neural encoding at the population level is less noisy than one would expect from the variability of individual neurons: spike times are more precise, and can be predicted more accurately when the spiking of neighbouring neurons is taken into account. Second, correlations provide additional sensory information: optimal, model-based decoding that exploits the response correlation structure extracts 20% more information about the visual scene than decoding under the assumption of independence, and preserves 40% more visual information than optimal linear decoding [6]. This model-based approach reveals the role of correlated activity in the retinal coding of visual stimuli, and provides a general framework for understanding the importance of correlated activity in populations of neurons.
PMCID: PMC2684455  PMID: 18650810
21.  Representation of Time-Varying Stimuli by a Network Exhibiting Oscillations on a Faster Time Scale 
PLoS Computational Biology  2009;5(5):e1000370.
Sensory processing is associated with gamma frequency oscillations (30–80 Hz) in sensory cortices. This raises the question whether gamma oscillations can be directly involved in the representation of time-varying stimuli, including stimuli whose time scale is longer than a gamma cycle. We are interested in the ability of the system to reliably distinguish different stimuli while being robust to stimulus variations such as uniform time-warp. We address this issue with a dynamical model of spiking neurons and study the response to an asymmetric sawtooth input current over a range of shape parameters. These parameters describe how fast the input current rises and falls in time. Our network consists of inhibitory and excitatory populations that are sufficient for generating oscillations in the gamma range. The oscillation period is about one-third of the stimulus duration. Embedded in this network is a subpopulation of excitatory cells that respond to the sawtooth stimulus and a subpopulation of cells that respond to an onset cue. The intrinsic gamma oscillations generate a temporally sparse code for the external stimuli. In this code, an excitatory cell may fire a single spike during a gamma cycle, depending on its tuning properties and on the temporal structure of the specific input; the identity of the stimulus is coded by the list of excitatory cells that fire during each cycle. We quantify the properties of this representation in a series of simulations and show that the sparseness of the code makes it robust to uniform warping of the time scale. We find that resetting of the oscillation phase at stimulus onset is important for a reliable representation of the stimulus and that there is a tradeoff between the resolution of the neural representation of the stimulus and robustness to time-warp.
Author Summary
Sensory processing of time-varying stimuli, such as speech, is associated with high-frequency oscillatory cortical activity, the functional significance of which is still unknown. One possibility is that the oscillations are part of a stimulus-encoding mechanism. Here, we investigate a computational model of such a mechanism, a spiking neuronal network whose intrinsic oscillations interact with external input (waveforms simulating short speech segments in a single acoustic frequency band) to encode stimuli that extend over a time interval longer than the oscillation's period. The network implements a temporally sparse encoding, whose robustness to time warping and neuronal noise we quantify. To our knowledge, this study is the first to demonstrate that a biophysically plausible model of oscillations occurring in the processing of auditory input may generate a representation of signals that span multiple oscillation cycles.
PMCID: PMC2671161  PMID: 19412531
22.  Analysis of Slow (Theta) Oscillations as a Potential Temporal Reference Frame for Information Coding in Sensory Cortices 
PLoS Computational Biology  2012;8(10):e1002717.
While sensory neurons carry behaviorally relevant information in responses that often extend over hundreds of milliseconds, the key units of neural information likely consist of much shorter and temporally precise spike patterns. The mechanisms and temporal reference frames by which sensory networks partition responses into these shorter units of information remain unknown. One hypothesis holds that slow oscillations provide a network-intrinsic reference to temporally partitioned spike trains without exploiting the millisecond-precise alignment of spikes to sensory stimuli. We tested this hypothesis on neural responses recorded in visual and auditory cortices of macaque monkeys in response to natural stimuli. Comparing different schemes for response partitioning revealed that theta band oscillations provide a temporal reference that permits extracting significantly more information than can be obtained from spike counts, and sometimes almost as much information as obtained by partitioning spike trains using precisely stimulus-locked time bins. We further tested the robustness of these partitioning schemes to temporal uncertainty in the decoding process and to noise in the sensory input. This revealed that partitioning using an oscillatory reference provides greater robustness than partitioning using precisely stimulus-locked time bins. Overall, these results provide a computational proof of concept for the hypothesis that slow rhythmic network activity may serve as internal reference frame for information coding in sensory cortices and they foster the notion that slow oscillations serve as key elements for the computations underlying perception.
Author Summary
Neurons in sensory cortices encode objects in our sensory environment by varying the timing and number of action potentials that they emit. Brain networks that ‘decode’ this information need to partition those spike trains into their individual informative units. Experimenters achieve such partitioning by exploiting their knowledge about the millisecond precise timing of individual spikes relative to externally presented sensory stimuli. The brain, however, does not have access to this information and has to partition and decode spike trains using intrinsically available temporal reference frames. We show that slow (4–8 Hz) oscillatory network activity can provide such an intrinsic temporal reference. Specifically, we analyzed neural responses recorded in primary auditory and visual cortices. This revealed that the oscillatory reference frame performs nearly as well as the precise stimulus-locked reference frame and renders neural encoding robust to sensory noise and temporal uncertainty that naturally occurs during decoding. These findings provide a computational proof-of-concept that slow oscillatory network activity may serve the crucial function as temporal reference frame for sensory coding.
PMCID: PMC3469413  PMID: 23071429
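The core operation tested above, partitioning a spike train into windows defined by an intrinsic oscillation rather than by stimulus-locked bins, amounts to counting spikes between successive oscillation landmarks (e.g., theta troughs or zero crossings). A minimal sketch (pure Python; the use of trough times as cycle boundaries and the function name are our assumptions):

```python
import bisect

def counts_per_cycle(spike_times, cycle_boundaries):
    """Spike counts within windows delimited by successive oscillation
    landmark times (e.g., theta troughs), given in ascending order.

    Spikes falling outside all cycles are ignored.
    """
    counts = [0] * (len(cycle_boundaries) - 1)
    for t in spike_times:
        k = bisect.bisect_right(cycle_boundaries, t) - 1
        if 0 <= k < len(counts):
            counts[k] += 1
    return counts
```

A stimulus-locked partitioning is the same computation with evenly spaced boundaries anchored to stimulus onset; the paper's comparison is between decoders built on these two sets of windows.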
23.  Coding of odors by temporal binding within a model network of the locust antennal lobe 
The locust olfactory system interfaces with the external world through antennal receptor neurons (ORNs), which represent odors in a distributed, combinatorial manner. ORN axons bundle together to form the antennal nerve, which relays sensory information centrally to the antennal lobe (AL). Within the AL, an odor generates a dynamically evolving ensemble of active cells, leading to a stimulus-specific temporal progression of neuronal spiking. This experimental observation has led to the hypothesis that an odor is encoded within the AL by a dynamically evolving trajectory of projection neuron (PN) activity that can be decoded piecewise to ascertain odor identity. In order to study information coding within the locust AL, we developed a scaled-down model of the locust AL using Hodgkin–Huxley-type neurons and biologically realistic connectivity parameters and current components. Using our model, we examined correlations in the precise timing of spikes across multiple neurons, and our results suggest an alternative to the dynamic trajectory hypothesis. We propose that the dynamical interplay of fast and slow inhibition within the locust AL induces temporally stable correlations in the spiking activity of an odor-dependent neural subset, giving rise to a temporal binding code that allows rapid stimulus detection by downstream elements.
PMCID: PMC3635028  PMID: 23630495
antennal lobe; temporal binding; computational neuroscience; odor coding; slow temporal patterns; oscillations; synchrony; time scales of inhibition
24.  Analyzing Short-Term Noise Dependencies of Spike-Counts in Macaque Prefrontal Cortex Using Copulas and the Flashlight Transformation 
PLoS Computational Biology  2009;5(11):e1000577.
Simultaneous spike-counts of neural populations are typically modeled by a Gaussian distribution. On short time scales, however, this distribution is too restrictive to describe and analyze multivariate distributions of discrete spike-counts. We present an alternative that is based on copulas and can account for arbitrary marginal distributions, including Poisson and negative binomial distributions as well as second and higher-order interactions. We describe maximum likelihood-based procedures for fitting copula-based models to spike-count data, and we derive a so-called flashlight transformation which makes it possible to move the tail dependence of an arbitrary copula into an arbitrary orthant of the multivariate probability distribution. Mixtures of copulas that combine different dependence structures and thereby model different driving processes simultaneously are also introduced. First, we apply copula-based models to populations of integrate-and-fire neurons receiving partially correlated input and show that the best fitting copulas provide information about the functional connectivity of coupled neurons which can be extracted using the flashlight transformation. We then apply the new method to data which were recorded from macaque prefrontal cortex using a multi-tetrode array. We find that copula-based distributions with negative binomial marginals provide a more appropriate stochastic model for the multivariate spike-count distributions than either the multivariate Poisson latent-variable distribution or the often used multivariate normal distribution. The dependence structure of these distributions provides evidence for common inhibitory input to all recorded stimulus encoding neurons. Finally, we show that copula-based models can be successfully used to evaluate neural codes, e.g., to characterize stimulus-dependent spike-count distributions with information measures. 
This demonstrates that copula-based models are not only a versatile class of models for multivariate distributions of spike-counts, but that those models can be exploited to understand functional dependencies.
Author Summary
The brain has an enormous number of neurons that do not work alone but in an ensemble. Yet, in the past mostly individual neurons were measured, and models were therefore restricted to independent neurons. With the advent of new multi-electrode techniques, however, it becomes possible to measure a great number of neurons simultaneously. As a result, models of how populations of neurons co-vary are becoming increasingly important. Here, we describe such a framework based on so-called copulas. Copulas make it possible to separate the variation structure of the population from the variability of the individual neurons. Contrary to standard models, versatile dependence structures can be described using this approach. We explore what additional information is provided by the detailed dependence. For simulated neurons, we show that the variation structure of the population allows inference of the underlying connectivity structure of the neurons. The power of the approach is demonstrated on a memory experiment in macaque monkeys. We show that our framework describes the measurements better than the standard models and identifies possible network connections of the measured neurons.
PMCID: PMC2776173  PMID: 19956759
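The key idea above, coupling arbitrary count marginals through a copula, can be illustrated with the simplest case: a Gaussian copula gluing together two Poisson marginals by inverse-CDF sampling. A minimal sketch (pure Python; function names are ours, and this shows only sampling from such a model, not the paper's maximum-likelihood fitting or flashlight transformation):

```python
import math
import random

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def poisson_quantile(u, lam):
    """Smallest k with Poisson(lam) CDF(k) >= u (inverse-CDF sampling)."""
    k = 0
    p = math.exp(-lam)
    cdf = p
    while cdf < u:
        k += 1
        p *= lam / k
        cdf += p
    return k

def gaussian_copula_poisson_pairs(lam_a, lam_b, rho, n, rng):
    """n pairs of correlated Poisson spike counts: a Gaussian copula with
    latent correlation rho couples the two Poisson marginals."""
    pairs = []
    c = math.sqrt(1.0 - rho * rho)
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + c * rng.gauss(0.0, 1.0)
        pairs.append((poisson_quantile(normal_cdf(z1), lam_a),
                      poisson_quantile(normal_cdf(z2), lam_b)))
    return pairs
```

The marginals stay exactly Poisson regardless of rho; only the dependence structure changes, which is precisely the separation the copula framework provides.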
25.  Spatio-temporal conditional inference and hypothesis tests for neural ensemble spiking precision 
Neural computation  2015;27(1):104-150.
The collective dynamics of neural ensembles create complex spike patterns with many spatial and temporal scales. Understanding the statistical structure of these patterns can help resolve fundamental questions about neural computation and neural dynamics. Spatio-temporal conditional inference (STCI) is introduced here as a semiparametric statistical framework for investigating the nature of precise spiking patterns from collections of neurons that is robust to arbitrarily complex and nonstationary coarse spiking dynamics. The main idea is to focus statistical modeling and inference, not on the full distribution of the data, but rather on families of conditional distributions of precise spiking given different types of coarse spiking. The framework is then used to develop families of hypothesis tests for probing the spatio-temporal precision of spiking patterns. Relationships among different conditional distributions are used to improve multiple hypothesis testing adjustments and to design novel Monte Carlo spike resampling algorithms. Of special note are algorithms that can locally jitter spike times while still preserving the instantaneous peri-stimulus time histogram (PSTH) or the instantaneous total spike count from a group of recorded neurons. The framework can also be used to test whether first-order maximum entropy models with possibly random and time-varying parameters can account for observed patterns of spiking. STCI provides a detailed example of the generic principle of conditional inference, which may be applicable in other areas of neurostatistical analysis.
PMCID: PMC4457305  PMID: 25380339
jitter; maximum entropy; network; resampling; spike timing; spike train; synchrony
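The simplest member of the resampling family discussed above is interval jitter: each spike is redrawn uniformly within its own fixed window, which exactly preserves window-level spike counts (the PSTH at that resolution) while destroying finer timing. A minimal sketch (pure Python; the function name is ours, and this is the basic fixed-window variant, not the paper's full STCI machinery):

```python
import random

def interval_jitter(spike_times, delta, rng):
    """Resample each spike uniformly within its window [k*delta, (k+1)*delta).

    Window-level spike counts, and hence the PSTH at resolution delta, are
    preserved exactly; timing structure finer than delta is destroyed.
    """
    jittered = [int(t // delta) * delta + rng.random() * delta
                for t in spike_times]
    return sorted(jittered)
```

Comparing a statistic (e.g., synchrony counts) on observed data against its distribution over many jittered surrogates tests whether that statistic reflects timing precision finer than delta.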
