Along most neural pathways, the spike trains transmitted from one neuron to the next are altered. In the process, neurons can achieve a more efficient stimulus representation, extract a biologically important stimulus parameter, or both. We recorded the inputs from single retinal ganglion cells and the outputs from connected lateral geniculate neurons in the macaque to examine how visual signals are relayed from retina to cortex. We found that geniculate neurons re-encoded multiple temporal stimulus features to yield output spikes that carried more information about stimuli than was available in each input spike. The coding transformation of some relay neurons occurred with no decrement in information rate, despite output spike rates that averaged half the input spike rates. This preservation of transmitted information was achieved by the short-term summation of inputs that geniculate neurons require in order to spike. A reduced model of the retinal and geniculate visual responses, based on two stimulus features and their associated nonlinearities, could account for more than 85% of the total information available in the spike trains and for the preserved information transmission. These results apply to neurons operating on a single time-varying input, suggesting that synaptic temporal integration can alter temporal receptive field properties to create a more efficient representation of visual signals in the thalamus than in the retina.
neural coding; synaptic transmission; retinal ganglion cell; information theory; receptive field center; macaque
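The information-per-spike comparison above can be illustrated with the lower-bound estimator of Brenner et al. (2000), which converts a time-varying firing rate into bits per spike. This is a hedged sketch: the two rate profiles below are invented for illustration, not taken from the recordings, and the toy "relay" simply preserves the high-rate epochs, as short-term summation would.

```python
import math

# Hypothetical, illustrative PSTHs (spikes/s); not data from the study.
retina_rate = [10, 80, 10, 120, 10, 60, 10, 100] * 50
# Toy "relay" that keeps mostly the high-rate epochs, lowering the mean
# rate while sharpening the modulation, as temporal summation would.
lgn_rate = [1, 60, 1, 100, 1, 40, 1, 80] * 50

def bits_per_spike(rate):
    """Lower-bound information per spike from a time-varying rate r(t):
    I = <(r/rbar) * log2(r/rbar)> (Brenner et al., 2000)."""
    rbar = sum(rate) / len(rate)
    return sum((r / rbar) * math.log2(r / rbar) for r in rate if r > 0) / len(rate)

retina_bps = bits_per_spike(retina_rate)
lgn_bps = bits_per_spike(lgn_rate)
```

In this toy example the sparser output carries more bits per spike, which is the sense in which a roughly halved spike rate can still sustain a comparable information rate.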
Recordings of local field potentials (LFPs) reveal that the sensory cortex displays rhythmic activity and fluctuations over a wide range of frequencies and amplitudes. Yet, the role of this kind of activity in encoding sensory information remains largely unknown. To understand the rules of translation between the structure of sensory stimuli and the fluctuations of cortical responses, we simulated a sparsely connected network of excitatory and inhibitory neurons modeling a local cortical population, and we determined how the LFPs generated by the network encode information about input stimuli. We first considered simple static and periodic stimuli and then naturalistic input stimuli based on electrophysiological recordings from the thalamus of anesthetized monkeys watching natural movie scenes. We found that the simulated network produced stimulus-related LFP changes that were in striking agreement with the LFPs obtained from the primary visual cortex. Moreover, our results demonstrate that the network encoded static input spike rates into gamma-range oscillations generated by inhibitory–excitatory neural interactions and encoded slow dynamic features of the input into slow LFP fluctuations mediated by stimulus–neural interactions. The model cortical network processed dynamic stimuli with naturalistic temporal structure by using low and high response frequencies as independent communication channels, again in agreement with recent reports from visual cortex responses to naturalistic movies. One potential function of this frequency decomposition into independent information channels operated by the cortical network may be to enhance the capacity of the cortical column to encode our complex sensory environment.
The brain displays rhythmic activity in almost all areas and over a wide range of frequencies and amplitudes. However, the role of these rhythms in the processing of sensory information is still unclear. To study the interplay between visual stimuli and ongoing oscillations in the brain, we developed a model of a local circuit of the visual cortex. We injected into the network the signal recorded in the thalamus of an anesthetized monkey watching a movie, to mimic the effect of a naturalistic stimulus arriving at the visual cortex. Our results are in striking agreement with recordings from the visual cortex. Furthermore, through manipulations of the signal and information analysis, we found that two specific frequency bands of the neurons' activity are used to encode independent stimulus features. These results describe how sensory stimuli can modulate the frequency and amplitude of ongoing neural activity and how these modulations can be used to convey sensory information through the different layers of the brain.
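A minimal caricature of the gamma-generating inhibitory–excitatory loop, assuming nothing about the paper's actual spiking network: a single rate unit with delayed, rectified inhibitory feedback settles into a sustained oscillation. All parameters (time constant, loop delay, gain) are illustrative.

```python
# Minimal sketch: one rate unit under delayed, rectified inhibitory feedback,
# a common caricature of the E-I loop behind gamma-band LFPs.
# All parameters are illustrative, not fitted to any recording.
dt = 0.1        # integration step (ms)
tau = 10.0      # rate time constant (ms)
delay = 5.0     # E -> I -> E loop delay (ms)
gain = 10.0     # inhibitory feedback gain
steps = int(500 / dt)
dsteps = int(delay / dt)

E = [0.0] * steps
for t in range(1, steps):
    e_delayed = E[t - dsteps] if t >= dsteps else 0.0
    target = max(0.0, 1.0 - gain * e_delayed)   # rectified net drive
    E[t] = E[t - 1] + dt * (-E[t - 1] + target) / tau

# Dominant period from inter-peak intervals in the steady-state half.
half = steps // 2
peaks = [i for i in range(half, steps - 1) if E[i - 1] < E[i] >= E[i + 1]]
period_ms = (peaks[-1] - peaks[0]) / (len(peaks) - 1) * dt
freq_hz = 1000.0 / period_ms
```

With these illustrative numbers the unit oscillates in the gamma range; weakening the feedback gain or shortening the loop delay below the instability threshold lets the oscillation die out.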
Local field potential (LFP) oscillations are often accompanied by synchronization of activity within a widespread cerebral area. Thus, the LFP and neuronal coherence appear to be the result of a common mechanism that underlies neuronal assembly formation. We used the olfactory bulb as a model to investigate: (1) the extent to which unitary dynamics and LFP oscillations can be correlated and (2) how accurately a model of the hypothesized underlying mechanisms can explain the experimental data. For this purpose, we analyzed simultaneous recordings of mitral cell (MC) activity and LFPs in anesthetized and freely breathing rats in response to odorant stimulation. Spike trains were found to be phase-locked to the gamma oscillation at specific firing rates and to form odor-specific temporal patterns. Using a conductance-based MC model driven by an approximately balanced excitatory-inhibitory input conductance, together with a relatively small inhibitory conductance that oscillated at the gamma frequency, we could explain the experimental data via a mode-locking mechanism. This work sheds light on how network and intrinsic MC properties combine to lock MCs to the gamma oscillation in a realistic physiological context, and how this locking may give rise to a particular time-locked assembly. Finally, we discuss how a self-synchronization process with such entrainment properties can explain, under experimental conditions: (1) why the gamma bursts emerge transiently with a maximal amplitude position relative to the stimulus time course; (2) why the oscillations are prominent at a specific gamma frequency; and (3) why the oscillation amplitude depends on specific stimulus properties. We also discuss information processing and functional consequences derived from this mechanism.
Olfactory function relies on a chain of neural relays that extends from the periphery to the central nervous system and involves neural activity on various timescales. A central question in neuroscience is how information is encoded by neural activity. In the mammalian olfactory bulb, local oscillations of neural activity in the 40–80 Hz range (gamma) may influence the timing of individual neurons' activity, so that olfactory information may be encoded in this timing. In this study, we first characterize in vivo the detailed activity of individual neurons relative to the oscillation and find that, depending on their state, neurons can exhibit periodic activity patterns. We also find, at least qualitatively, a relation between this activity and a particular odor. This is reminiscent of a general physical phenomenon, entrainment by an oscillation, and to test this hypothesis, in a second phase, we build a biologically realistic model mimicking these in vivo conditions. Our model quantitatively confirms this hypothesis and reveals that entrainment is maximal in the gamma range. Taken together, our results suggest that neuronal activity may be specifically formatted in time during the gamma oscillation in such a way that it could, at this stage, encode the odor.
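The entrainment idea can be sketched with a leaky integrate-and-fire neuron, a much simpler stand-in for the conductance-based mitral cell model: steady excitation plus a small gamma-frequency (40 Hz) oscillatory input locks the spikes to a preferred oscillation phase. Parameter values are invented for illustration.

```python
import cmath
import math

# Hedged sketch, not the paper's conductance-based model: an LIF neuron
# driven by steady excitation plus a small 40 Hz oscillatory input.
dt, tau = 0.05, 20.0              # time step, membrane time constant (ms)
i0, amp, f = 1.42, 0.35, 40.0     # mean drive, modulation depth, gamma freq (Hz)
v, spikes = 0.0, []
for step in range(int(2000 / dt)):         # 2 s of simulated time
    t = step * dt                          # ms
    drive = i0 + amp * math.sin(2 * math.pi * f * t / 1000.0)
    v += dt * (-v + drive) / tau
    if v >= 1.0:                           # threshold crossing
        v = 0.0                            # reset
        if t > 500.0:                      # discard the initial transient
            spikes.append(t)

# Vector strength: 1 = all spikes at one oscillation phase, 0 = no locking.
phases = [2.0 * math.pi * f * t / 1000.0 for t in spikes]
vs = abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)
```

The unforced firing rate (about 41 Hz here) lies inside the 1:1 locking range of the 40 Hz drive, so spikes collapse onto a narrow range of phases; widening the detuning or weakening the modulation degrades the locking.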
Relay cells in the mammalian lateral geniculate nucleus (LGN) are driven primarily by single retinal ganglion cells (RGCs). However, an LGN cell typically responds to less than half of the spikes it receives from the RGC that drives it, and without retinal drive the LGN is silent (Kaplan and Shapley, 1984). Recent studies, which used stimuli restricted to the receptive field (RF) center, show that despite this great loss of spikes, more than half of the information carried by the RGC discharge is typically preserved in the LGN discharge (Sincich et al., 2009), suggesting that the retinal spikes deleted by the LGN carry less information than those transmitted to the cortex. To determine how LGN relay neurons decide which retinal spikes to respond to, we recorded extracellularly from cat LGN relay cells, capturing their spikes together with the slow synaptic (‘S’) potentials that signal the arrival of retinal spikes. We investigated the influence of the inhibitory surround of the LGN RF by stimulating the eyes with spots of various sizes, the largest of which covered both the center and the surround of the LGN relay cell's RF. We found that for stimuli that activated mostly the RF center, each LGN spike delivered more information than each retinal spike, but this difference was reduced as stimulus size increased to cover the RF surround. To evaluate how optimally the LGN edits retinal spikes, we created artificial spike trains from the retinal ones using various deletion schemes. We found that single LGN cells transmitted less information than an optimal detector could.
retina; information transmission; thalamus; stimulus size
In a wide range of studies, the emergence of orientation selectivity in primary visual cortex has been attributed to a complex interaction between feed-forward thalamic input and inhibitory mechanisms at the level of cortex. Although it is well known that layer 4 cortical neurons are highly sensitive to the timing of thalamic inputs, the role of the stimulus-driven timing of thalamic inputs in cortical orientation selectivity is not well understood. Here we show that the synchronization of thalamic firing contributes directly to the orientation-tuned responses of primary visual cortex in a way that optimizes the stimulus information per cortical spike. From the recorded responses of geniculate X-cells in the anesthetized cat, we synthesized thalamic sub-populations that would likely serve as the synaptic input to a common layer 4 cortical neuron based on anatomical constraints. We used this synchronized input as the driving input to an integrate-and-fire model of cortical responses and demonstrated that the resulting tuning properties closely match those measured in primary visual cortex. By modulating the overall level of synchronization at the preferred orientation, we show that the efficiency of information transmission in the cortex is maximized for levels of synchronization that match those reported in thalamic recordings in response to naturalistic stimuli, a property that is relatively invariant to the orientation tuning width. These findings provide evidence for a more prominent role of feed-forward thalamic input in cortical feature selectivity, based on thalamic synchronization.
While the visual system is selective for a wide range of different inputs, orientation selectivity has been considered the preeminent property of the mammalian visual cortex. Existing models of this selectivity vary in the relative importance they assign to feedforward thalamic input and intracortical influences. Recently, we have shown that pairwise timing relationships between single thalamic neurons can be predictive of a high degree of orientation selectivity. Here we have constructed a computational model that predicts cortical orientation tuning from thalamic populations. We show that this arrangement, which relies on precise timing differences between thalamic responses, accurately predicts tuning properties and demonstrates that certain timing relationships are optimal for transmitting information about the stimulus to cortex.
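The synchrony sensitivity at the heart of the model can be sketched with a toy version, not the authors' implementation: a leaky integrate-and-fire "cortical" neuron receives ten jittered copies of a common 40 Hz "thalamic" event train, and tight synchrony yields far more output spikes than the same inputs desynchronized. All parameters are invented.

```python
import random

# Hedged toy, not the authors' model: an LIF "cortical" neuron receiving
# 10 "thalamic" trains that are jittered copies of a common 40 Hz event
# sequence. Every parameter here is invented for illustration.
random.seed(1)

def lif_output_spikes(jitter_ms, n_inputs=10, epsp=0.13, tau=8.0,
                      dt=0.1, t_max=1000.0, period=25.0):
    """Count output spikes for a given spread (SD, ms) of input spike times."""
    spike_bins = {}
    for _ in range(n_inputs):
        for k in range(1, int(t_max / period)):
            t = k * period + random.gauss(0.0, jitter_ms)
            b = int(t / dt)
            spike_bins[b] = spike_bins.get(b, 0) + 1
    v, out = 0.0, 0
    for step in range(int(t_max / dt)):
        v += -v * dt / tau + epsp * spike_bins.get(step, 0)
        if v >= 1.0:          # threshold: reset and count an output spike
            v, out = 0.0, out + 1
    return out

n_sync = lif_output_spikes(jitter_ms=0.5)    # tightly synchronized inputs
n_loose = lif_output_spikes(jitter_ms=8.0)   # desynchronized inputs
```

With the same total number of input spikes, only the synchronized condition reliably drives the cell, which is the sense in which input timing, rather than input rate, can carry the tuning.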
Sensory processing is associated with gamma frequency oscillations (30–80 Hz) in sensory cortices. This raises the question whether gamma oscillations can be directly involved in the representation of time-varying stimuli, including stimuli whose time scale is longer than a gamma cycle. We are interested in the ability of the system to reliably distinguish different stimuli while being robust to stimulus variations such as uniform time-warp. We address this issue with a dynamical model of spiking neurons and study the response to an asymmetric sawtooth input current over a range of shape parameters. These parameters describe how fast the input current rises and falls in time. Our network consists of inhibitory and excitatory populations that are sufficient for generating oscillations in the gamma range. The oscillation period is about one-third of the stimulus duration. Embedded in this network is a subpopulation of excitatory cells that respond to the sawtooth stimulus and a subpopulation of cells that respond to an onset cue. The intrinsic gamma oscillations generate a temporally sparse code for the external stimuli. In this code, an excitatory cell may fire a single spike during a gamma cycle, depending on its tuning properties and on the temporal structure of the specific input; the identity of the stimulus is coded by the list of excitatory cells that fire during each cycle. We quantify the properties of this representation in a series of simulations and show that the sparseness of the code makes it robust to uniform warping of the time scale. We find that resetting of the oscillation phase at stimulus onset is important for a reliable representation of the stimulus and that there is a tradeoff between the resolution of the neural representation of the stimulus and robustness to time-warp.
Sensory processing of time-varying stimuli, such as speech, is associated with high-frequency oscillatory cortical activity, the functional significance of which is still unknown. One possibility is that the oscillations are part of a stimulus-encoding mechanism. Here, we investigate a computational model of such a mechanism, a spiking neuronal network whose intrinsic oscillations interact with external input (waveforms simulating short speech segments in a single acoustic frequency band) to encode stimuli that extend over a time interval longer than the oscillation's period. The network implements a temporally sparse encoding, whose robustness to time warping and neuronal noise we quantify. To our knowledge, this study is the first to demonstrate that a biophysically plausible model of oscillations occurring in the processing of auditory input may generate a representation of signals that span multiple oscillation cycles.
Because of fast recovery from synaptic depression and fast-initiated action potentials, neuronal information transfer can have a substantially higher bandwidth in human neocortical circuits than in those of rodents.
Neuronal firing, synaptic transmission, and its plasticity form the building blocks for processing and storage of information in the brain. It is unknown whether adult human synapses are more efficient in transferring information between neurons than rodent synapses. To test this, we recorded from connected pairs of pyramidal neurons in acute brain slices of adult human and mouse temporal cortex and probed the dynamical properties of use-dependent plasticity. We found that human synaptic connections were purely depressing and that they recovered three to four times more swiftly from depression than synapses in rodent neocortex. Thereby, during realistic spike trains, the temporal resolution of synaptic information exchange in human synapses substantially surpasses that in mice. Using information theory, we calculate that information transfer between human pyramidal neurons exceeds that of mouse pyramidal neurons by four to nine times, well into the beta and gamma frequency range. In addition, we found that human principal cells tracked the fine temporal features of their synaptic inputs at a wider bandwidth than rodent neurons. Action potential firing probability was reliably phase-locked to input transients up to 1,000 cycles/s because of a steep onset of action potentials in human pyramidal neurons during spike trains, unlike in rodent neurons. Our data show that, in contrast to the widely held view of limited information transfer through rodent depressing synapses, the fast-recovering synapses of human neurons can actually transfer substantial amounts of information during spike trains. In addition, human pyramidal neurons are equipped to encode high synaptic information content. Thus, adult human cortical microcircuits relay information at a wider bandwidth than rodent microcircuits.
Our ability to think, memorize information, and act appropriately depends on circuits of connected neurons in the brain. In these circuits, neurons pass information to each other using electric pulses (action potentials) that cause the release of chemical neurotransmitters, which alter the membrane electric potential of receiving neurons. Based on the inputs they receive, neurons decide whether to transmit action potentials to other neurons in the circuit to pass on information. During sequences of repeated information transfer, the synaptic connections between two neurons temporarily weaken, a process called synaptic depression. Our knowledge of neuronal information transfer is based largely on rodent neurons; the properties of synaptic information transfer and synaptic depression in humans were not known. Here, we show that adult human neurons can transfer information at up to ten times higher rates than mouse neurons, because of a three- to four-times faster recovery from depression. Furthermore, we found that human neurons can respond faster to synaptic inputs, owing to faster initiation of action potentials. Human neurons can thereby reliably encode high input frequencies in their output. Thus, neuronal information transfer can have a substantially higher bandwidth in human neocortical circuits than in rodent brains.
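The depression-and-recovery arithmetic can be sketched with a depression-only Tsodyks-Markram synapse (a standard phenomenological model, not necessarily the one fitted in the study); the recovery time constants below are illustrative stand-ins for the "three to four times faster" human recovery.

```python
import math

# Hedged sketch: depression-only Tsodyks-Markram synapse. The recovery time
# constants are illustrative stand-ins for the finding that human synapses
# recover from depression roughly 3-4x faster than rodent synapses.
def steady_psp(rate_hz, tau_rec_ms, u=0.5, n_spikes=50):
    """Relative PSP amplitude at the end of a long regular spike train."""
    isi = 1000.0 / rate_hz
    x = 1.0                          # fraction of available resources
    for _ in range(n_spikes):
        psp = u * x                  # PSP proportional to released resources
        x -= u * x                   # depletion on each spike
        x = 1.0 - (1.0 - x) * math.exp(-isi / tau_rec_ms)   # recovery
    return psp

human = {f: steady_psp(f, tau_rec_ms=150.0) for f in (5, 20, 50)}
mouse = {f: steady_psp(f, tau_rec_ms=500.0) for f in (5, 20, 50)}
```

The faster-recovering synapse keeps a larger steady-state PSP at every rate, and its advantage grows with firing rate, which is why faster recovery translates into a wider usable bandwidth.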
Gamma-band peaks in the power spectrum of local field potentials (LFP) are found in multiple brain regions. It has been theorized that gamma oscillations may serve as a 'clock' signal for the purposes of precise temporal encoding of information and 'binding' of stimulus features across regions of the brain. Neurons in model networks may exhibit periodic spike firing or synchronized membrane potentials that give rise to a gamma-band oscillation that could operate as a 'clock'. The phase of the oscillation in such models is conserved over the length of the stimulus. We define these types of oscillations to be autocoherent. We investigated the hypothesis that autocoherent oscillations are the basis of the experimentally observed gamma-band peaks: the autocoherent oscillator (ACO) hypothesis. To test the ACO hypothesis, we developed a new technique for analyzing the autocoherence of a time-varying signal. This analysis used the continuous Gabor transform to examine the time evolution of the phase of each frequency component in the power spectrum. Using this analysis method, we formulated a statistical test to compare the ACO hypothesis with measurements of the LFP in macaque primary visual cortex, V1. The experimental data were not consistent with the ACO hypothesis: gamma-band activity recorded in V1 did not have the properties of a 'clock' signal during visual stimulation. We propose instead that the source of the gamma-band spectral peak is the resonant V1 network driven by random inputs.
Visual Cortex; Local Field Potential; extracellular; V1; time-frequency analysis; data analysis
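The phase-consistency idea behind the autocoherence test can be sketched with a plain windowed Fourier phase, a simplification of the continuous Gabor transform used in the study: an autocoherent 'clock' keeps a predictable phase from one analysis window to the next, whereas narrowband noise does not. Both signals below are synthetic.

```python
import cmath
import math
import random

# Synthetic signals only; the study's analysis uses the continuous Gabor
# transform, which this windowed-DFT phase check merely approximates.
random.seed(0)
fs, f0, n = 1000.0, 50.0, 5000     # sample rate (Hz), analysis freq, 5 s

# Autocoherent 'clock': fixed-phase 50 Hz sinusoid plus weak noise.
clock = [math.sin(2 * math.pi * f0 * i / fs) + 0.1 * random.gauss(0, 1)
         for i in range(n)]
# Narrowband noise: many fixed random-phase components spread around 50 Hz.
comps = [(random.uniform(40, 60), random.uniform(0, 2 * math.pi))
         for _ in range(20)]
noise = [sum(math.sin(2 * math.pi * fk * i / fs + pk) for fk, pk in comps)
         for i in range(n)]

def phase_consistency(x, win=200):
    """Resultant length of window-to-window phase increments at f0:
    near 1 for a predictable 'clock' phase, lower for drifting phase."""
    phases = []
    for start in range(0, len(x) - win + 1, win):
        c = sum(x[start + k] * cmath.exp(-2j * math.pi * f0 * (start + k) / fs)
                for k in range(win))
        phases.append(cmath.phase(c))
    diffs = [b - a for a, b in zip(phases, phases[1:])]
    return abs(sum(cmath.exp(1j * d) for d in diffs)) / len(diffs)

r_clock = phase_consistency(clock)
r_noise = phase_consistency(noise)
```

Under the ACO hypothesis the LFP should behave roughly like the first signal; the paper reports that V1 gamma instead behaves like the second, a resonance driven by random inputs.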
While sensory neurons carry behaviorally relevant information in responses that often extend over hundreds of milliseconds, the key units of neural information likely consist of much shorter and temporally precise spike patterns. The mechanisms and temporal reference frames by which sensory networks partition responses into these shorter units of information remain unknown. One hypothesis holds that slow oscillations provide a network-intrinsic reference by which to temporally partition spike trains without exploiting the millisecond-precise alignment of spikes to sensory stimuli. We tested this hypothesis on neural responses recorded in visual and auditory cortices of macaque monkeys in response to natural stimuli. Comparing different schemes for response partitioning revealed that theta band oscillations provide a temporal reference that permits extracting significantly more information than can be obtained from spike counts, and sometimes almost as much information as obtained by partitioning spike trains using precisely stimulus-locked time bins. We further tested the robustness of these partitioning schemes to temporal uncertainty in the decoding process and to noise in the sensory input. This revealed that partitioning using an oscillatory reference provides greater robustness than partitioning using precisely stimulus-locked time bins. Overall, these results provide a computational proof of concept for the hypothesis that slow rhythmic network activity may serve as an internal reference frame for information coding in sensory cortices, and they foster the notion that slow oscillations serve as key elements for the computations underlying perception.
Neurons in sensory cortices encode objects in our sensory environment by varying the timing and number of action potentials that they emit. Brain networks that ‘decode’ this information need to partition those spike trains into their individual informative units. Experimenters achieve such partitioning by exploiting their knowledge about the millisecond precise timing of individual spikes relative to externally presented sensory stimuli. The brain, however, does not have access to this information and has to partition and decode spike trains using intrinsically available temporal reference frames. We show that slow (4–8 Hz) oscillatory network activity can provide such an intrinsic temporal reference. Specifically, we analyzed neural responses recorded in primary auditory and visual cortices. This revealed that the oscillatory reference frame performs nearly as well as the precise stimulus-locked reference frame and renders neural encoding robust to sensory noise and temporal uncertainty that naturally occurs during decoding. These findings provide a computational proof-of-concept that slow oscillatory network activity may serve the crucial function as temporal reference frame for sensory coding.
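A toy version of the partitioning comparison, assuming nothing about the authors' actual decoding pipeline: two stimuli evoke spikes at distinct phases of an internal 5 Hz oscillation, trial onsets are jittered relative to the experimenter's clock, and a nearest-centroid decoder reads out either stimulus-locked time bins or oscillation-phase bins. All parameters are invented.

```python
import random

# Invented toy, not the authors' pipeline: two stimuli evoke spikes at
# different phases of an internal 5 Hz (theta) oscillation, and trial onset
# is jittered relative to the experimenter's clock. The theta phase of each
# spike is assumed known to the decoder (the network-intrinsic reference).
random.seed(7)
period, trial_len = 200.0, 400.0        # theta period and trial length (ms)

def make_trial(stim):
    onset = random.uniform(-60.0, 60.0)            # unknown to the decoder
    base = (0.10, 0.20) if stim == 'A' else (0.60, 0.70)
    spikes = []
    for cyc in range(2):
        for ph in base:
            ph_j = ph + random.gauss(0.0, 0.01)    # small phase jitter
            spikes.append(((cyc + ph_j) * period + onset, ph_j))
    return spikes                                  # (clock time, theta phase)

def features(spikes, scheme):
    if scheme == 'time':                 # eight 50 ms stimulus-locked bins
        v = [0] * 8
        for t, _ in spikes:
            if 0.0 <= t < trial_len:
                v[int(t / 50.0)] += 1
    else:                                # four theta-phase quadrants
        v = [0] * 4
        for _, ph in spikes:
            v[int((ph % 1.0) * 4)] += 1
    return v

def accuracy(scheme, n=40):
    trials = [(s, features(make_trial(s), scheme))
              for s in 'AB' for _ in range(n)]
    train, test = trials[::2], trials[1::2]
    cent = {}
    for s in 'AB':
        vs = [v for lab, v in train if lab == s]
        cent[s] = [sum(col) / len(vs) for col in zip(*vs)]
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    hits = sum(min('AB', key=lambda s: dist(v, cent[s])) == lab
               for lab, v in test)
    return hits / len(test)

acc_phase = accuracy('phase')
acc_time = accuracy('time')
```

Phase-partitioned decoding stays near perfect because the onset jitter moves spikes in clock time but not in oscillation phase, while the stimulus-locked bins are typically smeared by that same jitter.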
High-gamma (80–200 Hz) activity can be dissociated from gamma rhythms in the monkey cortex, and appears largely to reflect spiking activity in the vicinity of the electrode.
During cognitive tasks, electrical activity in the brain shows changes in power in specific frequency ranges, such as the alpha (8–12 Hz) or gamma (30–80 Hz) bands, as well as in a broad range above ∼80 Hz, called the high-gamma band. The role or significance of this broadband high-gamma activity is unclear. One hypothesis states that high-gamma oscillations serve just like gamma oscillations, operating at a higher frequency and consequently at a faster timescale. Another hypothesis states that high-gamma power is related to spiking activity. Because gamma power and spiking activity tend to co-vary during most stimulus manipulations (such as contrast modulations) or cognitive tasks (such as attentional modulation), it is difficult to dissociate these two hypotheses. We studied the relationship between high-gamma power, the gamma rhythm, and spiking activity in the primary visual cortex (V1) of awake monkeys while varying the stimulus size, which increased the gamma power but decreased the firing rate, permitting a dissociation. We found that gamma power became anti-correlated with high-gamma power, suggesting that the two phenomena are distinct and have different origins. On the other hand, high-gamma power remained tightly correlated with spiking activity under a wide range of stimulus manipulations. We studied this relationship using a signal processing technique called Matching Pursuit and found that action potentials are associated with sharp transients in the LFP with broadband power, which is visible at frequencies as low as ∼50 Hz. These results distinguish broadband high-gamma activity from gamma rhythms as an easily obtained and reliable electrophysiological index of neuronal firing near the microelectrode. Further, they highlight the importance of making a careful dissociation between gamma rhythms and spike-related transients that could be incorrectly decomposed as rhythms using traditional signal processing techniques.
Electrical activity in the brain often shows oscillations at distinct frequencies, such as the alpha (8–12 Hz) or gamma (30–80 Hz) bands, which have been linked with distinct cognitive states. In addition, changes in power are seen in a broad range above ∼80 Hz, called the “high-gamma” band. High-gamma power could arise either from sustained oscillations (similar to gamma rhythms but operating at higher frequencies) or from brief bursts of power associated with spikes generated near the electrode (“spike bleed-through”). It is difficult to dissociate these two hypotheses because gamma oscillations and spiking are correlated during most stimulus or cognitive manipulations. Further, most signal processing techniques decompose any signal into a set of oscillatory functions, making it difficult to represent any transient power fluctuations that occur at the time of spikes. We address the first issue by using a stimulus manipulation for which gamma oscillations and spiking activity are anti-correlated, permitting dissociation. To address the second issue, we use a signal processing technique called Matching Pursuit, which is well suited to capture transient activity. We show that gamma and high-gamma power become anti-correlated, suggesting different biophysical origins. Spikes and high-gamma power, however, remain tightly correlated. Broadband high-gamma activity could therefore be an easily obtained and reliable electrophysiological index of neuronal firing in the vicinity of an electrode.
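The greedy core of Matching Pursuit can be sketched with a tiny, hand-built Gabor dictionary (the real analyses use far larger dictionaries): project the signal onto the best-matching atom, subtract, and repeat on the residual. The illustrative point is that a brief spike-like transient is captured by a short atom whose energy spreads across frequencies, while a sustained rhythm is captured by a long, narrowband atom.

```python
import math

# Toy Matching Pursuit with a tiny hand-built Gabor dictionary. Atoms are
# Gaussian-windowed cosines indexed by center, width (samples), and
# frequency (cycles per n-sample window).
n = 512

def atom(center, width, freq):
    a = [math.exp(-0.5 * ((i - center) / width) ** 2)
         * math.cos(2 * math.pi * freq * i / n) for i in range(n)]
    norm = math.sqrt(sum(x * x for x in a))
    return [x / norm for x in a]

dictionary = [(c, w, f, atom(c, w, f))
              for c in range(0, n, 32)
              for w in (4, 16, 64)
              for f in (8, 32, 64)]

def matching_pursuit(signal, n_iter):
    """Greedy decomposition: repeatedly subtract the best-matching atom."""
    residual = list(signal)
    picks = []
    for _ in range(n_iter):
        dot = lambda g: sum(x * y for x, y in zip(g, residual))
        c, w, f, g = max(dictionary, key=lambda d: abs(dot(d[3])))
        coef = dot(g)
        residual = [r - coef * gi for r, gi in zip(residual, g)]
        picks.append((c, w, f))
    return picks, residual

transient = [math.exp(-0.5 * ((i - 256) / 3.0) ** 2) for i in range(n)]
rhythm = [math.cos(2 * math.pi * 32 * i / n) for i in range(n)]
picks_t, _ = matching_pursuit(transient, 1)
picks_r, res_r = matching_pursuit(rhythm, 1)
```

The spike-like transient is matched by the shortest atom (broadband in frequency) and the rhythm by the longest, narrowband one; this is the distinction the papers draw between spike bleed-through and genuine oscillations.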
Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on their firing activity. A network-level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depend on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing-fundamental pitch perception in the auditory brainstem.
Our brain's ability to perform cognitive processes, such as object identification, problem solving, and decision making, comes from the specific connections between neurons. The neurons carry information as spikes that are transmitted to other neurons via connections with different strengths and propagation delays. Experimentally observed learning rules can modify the strengths of connections between neurons based on the timing of their spikes. The learning that occurs in neuronal networks due to these rules is thought to be vital to creating the structures necessary for different cognitive processes as well as for memory. The spiking rate of populations of neurons has been observed to oscillate at particular frequencies in various brain regions, and there is evidence that these oscillations play a role in cognition. Here, we use analytical and numerical methods to investigate the changes to the network structure caused by a specific learning rule during oscillatory neural activity. We find the conditions under which connections with propagation delays that resonate with the oscillations are strengthened relative to the other connections. We demonstrate that networks learn to oscillate more strongly to oscillations at the frequency they were presented with during learning. We discuss the possible application of these results to specific areas of the brain.
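The delay-selection argument can be made concrete with a textbook additive STDP window applied to periodic pre- and postsynaptic firing at 40 Hz (the window parameters below are generic, not those used in the study): delays that deliver the presynaptic spike just before the postsynaptic one accumulate the most potentiation.

```python
import math

# Generic additive STDP window: potentiation when the pre spike arrives
# before the post spike, depression otherwise. Parameters are textbook
# values, not those fitted in the study.
a_plus, a_minus = 1.0, 0.5
tau_plus = tau_minus = 20.0      # ms

def stdp(dt_ms):
    """Weight change for a post-minus-pre spike time difference dt_ms."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)
    return -a_minus * math.exp(dt_ms / tau_minus)

period = 25.0                    # both neurons fire periodically at 40 Hz
def total_dw(axonal_delay_ms):
    """Nearest-neighbour pairing: each delivered pre spike interacts with
    the following and the preceding post spike."""
    arrival = axonal_delay_ms % period    # arrival phase within the cycle
    return stdp(period - arrival) + stdp(-arrival)

dw = {d: total_dw(d) for d in range(1, 25)}
best_delay = max(dw, key=dw.get)
```

The winning delay is the one whose spike lands just before the next postsynaptic spike, which is the resonance that makes delay selection frequency dependent: changing the oscillation period moves the set of favoured delays.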
Cortical spike trains often appear noisy, with the timing and number of spikes varying across repetitions of stimuli. Spiking variability can arise from internal (behavioral state, unreliable neurons, or chaotic dynamics in neural circuits) and external (uncontrolled behavior or sensory stimuli) sources. The amount of irreducible internal noise in spike trains, an important constraint on models of cortical networks, has been difficult to estimate, since behavior and brain state must be precisely controlled or tracked. We recorded from excitatory barrel cortex neurons in layer 4 during active behavior, where mice control tactile input through learned whisker movements. Touch was the dominant sensorimotor feature, with >70% of spikes occurring in millisecond-timescale epochs after touch onset. The variance of touch responses was smaller than expected from Poisson processes, often reaching the theoretical minimum. Layer 4 spike trains thus reflect the millisecond-timescale structure of tactile input with little noise.
Cells called neurons connect to form large networks that process information in the brain. A region of the brain called the cerebral cortex receives information about touch from sensors in the skin. A series of neurons relay the touch information to the cerebral cortex as patterns of electrical activity called ‘spike trains’. Understanding how these spike trains represent information about the world around us is one of the greatest challenges facing neuroscience.
At first glance, the number and timing of the individual spikes within the trains appear to be random. It is possible that the irregularity within spike trains is ‘noise’ that is generated within the cortex itself. This noise could represent uncertainty about the nature of the stimulus from the sensors, or random fluctuations in brain activity. However, other findings have challenged this view and argued that these erratic spike trains actually carry hidden information.
Hires et al. investigated this possibility by recording how neurons within a region of the mouse brain called the somatosensory cortex responded to sensory information coming from the mouse's whiskers. Mice sweep their whiskers across objects to locate and identify them, much like how humans feel objects with their fingertips. Here, the mice used their whiskers to judge the location of an object by touch alone, while the electrical activity of the neurons was measured using electrodes. Importantly, the movements of the whiskers and contact with the object were tracked to one millisecond precision.
Similar to previous studies, sensory information from the whiskers triggered irregular spike trains in neurons within the somatosensory cortex. Hires et al. found that the apparently irregular spikes coincided precisely with the moments when the whiskers contacted the object. Other spikes aligned perfectly with the movement of the whiskers into particular positions. Furthermore, the patterns of electrical activity in the spike trains precisely predicted when and how the object was contacted, and which whisker was involved.
These findings suggest that the timing of individual spikes within spike trains carries important information to the brain. Future studies will develop our understanding of how the brain interprets and responds to the rich data contained in these spike trains to identify objects and decide how to interact with them.
sensory coding; somatosensation; barrel cortex; noise; mouse
The timing of spiking activity across neurons is a fundamental aspect of the neural population code. Individual neurons in the retina, thalamus, and cortex can have very precise and repeatable responses but exhibit degraded temporal precision in response to suboptimal stimuli. To investigate the functional implications for neural populations in natural conditions, we recorded in vivo the simultaneous responses, to movies of natural scenes, of multiple thalamic neurons likely converging to a common neuronal target in primary visual cortex. We show that the response of individual neurons is less precise at lower contrast, but that spike timing precision across neurons is relatively insensitive to global changes in visual contrast. Overall, spike timing precision within and across cells is on the order of 10 ms. Since closely timed spikes are more efficient in inducing a spike in downstream cortical neurons, and since fine temporal precision is necessary to represent the more slowly varying natural environment, we argue that preserving relative spike timing at a ∼10-ms resolution is a crucial property of the neural code entering cortex.
Neurons convey information about the world in the form of trains of action potentials (spikes). These trains are highly repeatable when the same stimulus is presented multiple times, and this temporal precision across repetitions can be as fine as a few milliseconds. It is usually assumed that this time scale also corresponds to the timing precision of several neighboring neurons firing in concert. However, the relative timing of spikes emitted by different neurons in a local population is not necessarily as fine as the temporal precision across repetitions within a single neuron. In the visual system of the brain, the level of contrast in the image entering the retina can affect single-neuron temporal precision, but the effects of contrast on the neural population code are unknown. Here we show that the temporal scale of the population code entering visual cortex is on the order of 10 ms and is largely insensitive to changes in visual contrast. Since closely timed spikes are more efficient in inducing a spike in downstream cortical neurons, and since fine temporal precision is necessary in representing the more slowly varying natural environment, preserving relative spike timing at a ∼10-ms resolution may be a crucial property of the neural code entering cortex.
Early neural representation of visual scenes occurs with a temporal precision on the order of 10 ms, which is precise enough to strongly drive downstream neurons in the visual pathway. Unlike individual neurons, the neural population code is largely insensitive to pronounced changes in visual contrast.
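The ∼10-ms precision estimates described above can be illustrated with a small simulation: pool spikes from repeated trials, group them into stimulus-locked events, and measure the across-trial jitter. This is a minimal sketch, not the authors' analysis pipeline; the event times, true jitter, and gap-based grouping threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
event_times = np.array([20.0, 60.0, 100.0])   # ms, hypothetical stimulus-locked events
n_trials, sigma = 50, 3.0                     # number of repeats and true jitter (ms)

# Pool spikes across trials: each trial fires once per event with Gaussian jitter.
pooled = np.sort((event_times[None, :] + rng.normal(0, sigma, (n_trials, 3))).ravel())

# Split the pooled spike train into events wherever the gap exceeds 10 ms,
# then take the mean within-event standard deviation as the precision estimate.
breaks = np.where(np.diff(pooled) > 10.0)[0] + 1
clusters = np.split(pooled, breaks)
jitter_estimate = np.mean([c.std(ddof=1) for c in clusters])
```

With well-separated events, the estimate recovers the simulated jitter to within sampling error; with real data, assigning spikes to events is the hard part this sketch assumes away.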
Relay cells are prevalent throughout sensory systems and receive two types of inputs: driving and modulating. The driving input contains receptive field properties that must be transmitted while the modulating input alters the specifics of transmission. For example, the visual thalamus contains relay neurons that receive driving inputs from the retina that encode a visual image, and modulating inputs from the reticular activating system and layer 6 of visual cortex that control what aspects of the image will be relayed back to visual cortex for perception. What gets relayed depends on several factors such as attentional demands and a subject's goals. In this paper, we analyze a biophysically based model of a relay cell and use systems theoretic tools to construct analytic bounds on how well the cell transmits a driving input as a function of the neuron's electrophysiological properties, the modulating input, and the driving signal parameters. We assume that the modulating input belongs to a class of sinusoidal signals and that the driving input is an irregular train of pulses with inter-pulse intervals obeying an exponential distribution. Our analysis applies to a model of any order as long as the neuron does not spike without a driving input pulse and exhibits a refractory period. Our bounds on relay reliability contain the performance obtained through simulation of second- and third-order models, and suggest, for instance, that if the frequency of the modulating input increases or the DC offset decreases, then relay reliability increases. Our analysis also shows, for the first time, how the biophysical properties of the neuron (e.g. ion channel dynamics) define the oscillatory patterns needed in the modulating input for appropriately timed relay of sensory information. In our discussion, we describe how our bounds predict experimentally observed neural activity in the basal ganglia in (i) health, (ii) Parkinson's disease (PD), and (iii) PD during therapeutic deep brain stimulation. Our bounds also predict different rhythms that emerge in the lateral geniculate nucleus in the thalamus during different attentional states.
In cellular biology, it is important to characterize the electrophysiological dynamics of a cell as a function of the cell type and its inputs. Typically, these dynamics are modeled as a set of parametric nonlinear ordinary differential equations which are not always easy to analyze. Previous studies performed phase-plane analysis and/or simulations to understand how constant inputs impact a cell's output for a given cell type. In this paper, we use systems theoretic tools to compute analytic bounds on how well a single neuron's output relays a driving input signal as a function of the neuron type, modulating input signal, and driving signal parameters. The methods used here are generally applicable to understanding cell behavior under various conditions and enable rigorous analysis of electrophysiological changes that occur in health and in disease.
Analyzing brain activity in songbirds suggests that the nervous system controls behavior by precisely modulating the timing pattern of electrical events.
Studies of motor control have almost universally examined firing rates to investigate how the brain shapes behavior. In principle, however, neurons could encode information through the precise temporal patterning of their spike trains as well as (or instead of) through their firing rates. Although the importance of spike timing has been demonstrated in sensory systems, it is largely unknown whether timing differences in motor areas could affect behavior. We tested the hypothesis that significant information about trial-by-trial variations in behavior is represented by spike timing in the songbird vocal motor system. We found that neurons in motor cortex convey information via spike timing far more often than via spike rate and that the amount of information conveyed at the millisecond timescale greatly exceeds the information available from spike counts. These results demonstrate that information can be represented by spike timing in motor circuits and suggest that timing variations evoke differences in behavior.
A central question in neuroscience is how neurons use patterns of electrical events to represent sensory information and control behavior. Neurons might use two different codes to transmit information. First, signals might be conveyed by the total number of electrical events (called “action potentials”) that a neuron produces. Alternately, the timing pattern of action potentials, as distinct from the total number of action potentials produced, might be used to transmit information. Although many studies have shown that timing can convey information about sensory inputs, such as visual scenery or sound waveforms, the role of action potential timing in the control of complex, learned behaviors is largely unknown. Here, by analyzing the pattern of action potentials produced in a songbird's brain as it precisely controls vocal behavior, we demonstrate that far more information about upcoming behavior is present in spike timing than in the total number of spikes fired. This work suggests that timing can be equally (or more) important in motor systems as in sensory systems.
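The rate-versus-timing contrast above can be made concrete with a toy example: two stimuli that evoke the same spike count but different millisecond-scale patterns. A plugin mutual-information estimate then recovers roughly one bit from the timing "words" and essentially nothing from the counts. The patterns, noise level, and 4-bin words are invented for illustration, not taken from the songbird data.

```python
import numpy as np

rng = np.random.default_rng(1)
patterns = {0: np.array([1, 0, 0, 1]), 1: np.array([0, 1, 1, 0])}  # same count (2 spikes)
n_trials, p_flip = 2000, 0.05

def plugin_mi(x, y):
    """Plugin estimate of I(X;Y) in bits from paired discrete samples."""
    xs, ys = np.unique(x), np.unique(y)
    joint = np.array([[np.mean((x == a) & (y == b)) for b in ys] for a in xs])
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

stims = rng.integers(0, 2, n_trials)
words = np.array([patterns[s] ^ (rng.random(4) < p_flip) for s in stims]).astype(int)
word_ids = words @ np.array([8, 4, 2, 1])      # encode each 4-bin word as an integer
counts = words.sum(axis=1)

mi_timing = plugin_mi(stims, word_ids)         # near 1 bit: timing separates the stimuli
mi_count = plugin_mi(stims, counts)            # near 0 bits: counts are identical
```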
Synchronized oscillation is commonly observed in many neuronal systems and might play an important role in the response properties of the system. We have studied how the spontaneous oscillatory activity affects the responsiveness of a neuronal network, using a neural network model of the visual cortex built from Hodgkin-Huxley type excitatory (E-) and inhibitory (I-) neurons. When the isotropic local E-I and I-E synaptic connections were sufficiently strong, the network commonly generated gamma frequency oscillatory firing patterns in response to random feed-forward (FF) input spikes. This spontaneous oscillatory network activity injects a periodic local current that could amplify a weak synaptic input and enhance the network's responsiveness. When E-E connections were added, we found that the strength of oscillation can be modulated by varying the FF input strength without any changes in single neuron properties or interneuron connectivity. The response modulation is proportional to the oscillation strength, which leads to self-regulation such that the cortical network selectively amplifies various FF inputs according to their strength, without requiring any adaptation mechanism. We show that this selective cortical amplification is controlled by E-E cell interactions. We also found that this response amplification is spatially localized, which suggests that the responsiveness modulation may also be spatially selective. This suggests a generalized mechanism by which neural oscillatory activity can enhance the selectivity of a neural network to FF inputs.
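The inhibitory-excitatory oscillation mechanism invoked above can be sketched with a two-population Wilson-Cowan rate model. This is the standard textbook limit-cycle parameterization, not the Hodgkin-Huxley network used in the study; with a 10-ms time constant it produces sustained rhythmic activity from E-I interactions alone.

```python
import numpy as np

def sigmoid(x, a, theta):
    # Logistic gain function shifted so that S(0) = 0 (classic Wilson-Cowan form).
    return 1.0 / (1.0 + np.exp(-a * (x - theta))) - 1.0 / (1.0 + np.exp(a * theta))

tau, dt, steps = 10.0, 0.1, 20000            # ms; simulate 2 s with Euler steps
wEE, wEI, wIE, wII, P = 16.0, 12.0, 15.0, 3.0, 1.25   # textbook limit-cycle values
E, I = 0.1, 0.1
E_trace = np.empty(steps)
for t in range(steps):
    dE = (-E + (1.0 - E) * sigmoid(wEE * E - wEI * I + P, 1.3, 4.0)) / tau
    dI = (-I + (1.0 - I) * sigmoid(wIE * E - wII * I, 2.0, 3.7)) / tau
    E, I = E + dt * dE, I + dt * dI
    E_trace[t] = E

# Sustained oscillation: the E rate keeps fluctuating in the second half of the run.
oscillation_amplitude = E_trace[steps // 2:].std()
```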
In the nervous system, information is delivered and processed digitally via voltage spikes transmitted between cells. A neural system is characterized by its input/output spike signal patterns. Generally, a network of neurons shows a very different response pattern than that of a single neuron. In some cases, a neural network generates interesting population activities, such as synchronized oscillations, which are thought to modulate the response properties of the network. However, the exact role of these neural oscillations is unknown. We investigated the relationship between the oscillatory activity and the response modulation in neural networks using computational simulation modeling. We found that the response of the system is significantly modified by the oscillations in the network. In particular, the responsiveness to weak inputs is remarkably enhanced. This suggests that the oscillation can differentially amplify sensory information depending on the input signal conditions. We conclude that a neural network can dynamically modify its response properties by the selective amplification of sensory signals due to oscillation activity, which may explain some experimental observations and help us to better understand neural information processing.
GABAergic interneurons (INs) in the dorsal lateral geniculate nucleus (dLGN) shape the information flow from retina to cortex, presumably by controlling the number of visually evoked spikes in geniculate thalamocortical (TC) neurons, and refining their receptive field. The INs exhibit a rich variety of firing patterns: Depolarizing current injections to the soma may induce tonic firing, periodic bursting or an initial burst followed by tonic spiking, sometimes with prominent spike-time adaptation. When released from hyperpolarization, some INs elicit rebound bursts, while others return more passively to the resting potential. A full mechanistic understanding that explains the function of the dLGN on the basis of neuronal morphology, physiology and circuitry is currently lacking. One way to approach such an understanding is by developing a detailed mathematical model of the involved cells and their interactions. Limitations of the previous models for the INs of the dLGN region prevent an accurate representation of the conceptual framework needed to understand the computational properties of this region. We here present a detailed compartmental model of INs using, for the first time, a morphological reconstruction and a set of active dendritic conductances constrained by experimental somatic recordings from INs under several different current-clamp conditions. The model makes a number of experimentally testable predictions about the role of specific mechanisms for the firing properties observed in these neurons. In addition to accounting for the significant features of all experimental traces, it quantitatively reproduces the experimental recordings of the action-potential firing frequency as a function of injected current. We show how and why relative differences in conductance values, rather than differences in ion channel composition, could account for the distinct differences between the responses observed in two different neurons, suggesting that INs may be individually tuned to optimize network operation under different input conditions.
The dorsal lateral geniculate nucleus (dLGN) is a part of the visual thalamus. This region contains two types of neurons: thalamocortical neurons and local interneurons. Thalamocortical neurons receive information from the retina and transmit information to visual cortex. The interneurons regulate the activity of thalamocortical neurons through inhibitory connections. This regulation is not properly understood, but it is believed to promote contrast enhancement and other vital visual functions. A powerful tool for developing a mechanistic understanding of dLGN function is computer modeling that includes the involved neurons, their interconnections and their interactions. Quite sophisticated models are available for thalamocortical neurons, but previous interneuron models are too simple for adequate mechanistic understanding of the functional properties of interneurons. We here present a detailed compartmental interneuron-model based on experimental data. The typical response patterns vary between different interneurons, but also within a given neuron, depending on the stimulus it receives. The model identifies a set of ionic mechanisms that can explain this diversity of activity patterns. In addition to being a useful building block for future network simulations of the dLGN, the model gives useful insight into the operating principles of dLGN interneurons.
In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.
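The I-learning idea, weight changes proportional to each input's synaptic current at the target and actual output spike times, can be sketched with a simplified current-based neuron trained to fire at one desired time. Kernel shape, population size, learning rate, and the reset-by-subtraction spiking rule below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, T, dt = 30, 80.0, 0.5                 # inputs, trial length (ms), time step
tau_m, tau_s, theta = 10.0, 2.5, 1.0        # membrane/synaptic constants, threshold
t_grid = np.arange(0.0, T, dt)
in_spikes = rng.uniform(0.0, T - 10.0, n_in)  # one spike per input neuron
target = 60.0                                 # desired output spike time (ms)

def kernel(s):
    # Double-exponential postsynaptic current, scaled to roughly unit peak.
    s = np.maximum(s, 0.0)
    return (np.exp(-s / tau_m) - np.exp(-s / tau_s)) / 0.474

lam = kernel(t_grid[None, :] - in_spikes[:, None])   # input current traces (n_in x time)

def simulate(w):
    """Threshold crossings with reset-by-subtraction; returns output spike times."""
    out = []
    drive = w @ lam
    for j, t in enumerate(t_grid):
        if drive[j] - theta * len(out) >= theta:
            out.append(t)
    return out

w = rng.uniform(0.0, 0.1, n_in)
target_idx = int(round(target / dt))
for _ in range(300):
    out = simulate(w)
    dw = lam[:, target_idx].copy()                 # potentiate toward the target spike
    for ta in out:
        dw -= lam[:, int(round(ta / dt))]          # depress at actual output spikes
    w += 0.05 * dw

final = simulate(w)
timing_error = min(abs(t - target) for t in final) if final else float("inf")
```

Training drives the output spike toward the target time; when actual and target spikes coincide, the two terms cancel and the weights stop changing.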
Spontaneous retinal activity (known as “waves”) remodels synaptic connectivity to the lateral geniculate nucleus (LGN) during development. Analysis of retinal waves recorded with multielectrode arrays in mouse suggested that a cue for the segregation of functionally distinct (ON and OFF) retinal ganglion cells (RGCs) in the LGN may be a desynchronization in their firing, where ON cells precede OFF cells by one second. Using the recorded retinal waves as input, with two different modeling approaches we explore timing-based plasticity rules for the evolution of synaptic weights to identify key features underlying ON/OFF segregation. First, we analytically derive a linear model for the evolution of ON and OFF weights, to understand how synaptic plasticity rules extract input firing properties to guide segregation. Second, we simulate postsynaptic activity with a nonlinear integrate-and-fire model to compare findings with the linear model. We find that spike-time-dependent plasticity, which modifies synaptic weights based on millisecond-long timing and order of pre- and postsynaptic spikes, fails to segregate ON and OFF retinal inputs in the absence of normalization. Implementing homeostatic mechanisms results in segregation, but only with carefully tuned parameters. Furthermore, extending spike integration timescales to match the second-long input correlation timescales always leads to ON segregation because ON cells fire before OFF cells. We show that burst-time-dependent plasticity can robustly guide ON/OFF segregation in the LGN without normalization, by integrating pre- and postsynaptic bursts irrespective of their firing order and over second-long timescales. We predict that an LGN neuron will become ON- or OFF-responsive based on a local competition of the firing patterns of neighboring RGCs connecting to it. Finally, we demonstrate consistency with ON/OFF segregation in ferret, despite differences in the firing properties of retinal waves. Our model suggests that diverse input statistics of retinal waves can be robustly interpreted by a burst-based rule, which underlies retinogeniculate plasticity across different species.
Many central targets in the brain are involved in the processing of information from the outside world. Before information about the visual scene reaches the visual cortex, it is preprocessed in the retina and the lateral geniculate nucleus. Connections which relay this information between the different brain targets are not determined at birth, but undergo a developmental period during which they are guided by molecular cues to the correct locations, and refined by activity to the appropriate numbers and strengths. Before the onset of vision, spontaneous activity generated within the retina plays an important role in the remodeling of these connections. In a computational and theoretical model, we used recorded spontaneous retinal activity patterns with several plasticity rules at the retinogeniculate synapse to identify the key properties underlying the selective refinement of connections. Our model shows robust behavior when applied to both mouse and ferret data, demonstrating that a common plasticity rule across species may underlie synaptic refinements in the visual system driven by spontaneous retinal activity.
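The two rule families contrasted in this work can be written down in a few lines: classic STDP is order-sensitive on a millisecond timescale, while a burst-time-dependent rule ignores firing order and integrates over second-long timescales. The amplitudes and time constants below are illustrative, not the fitted values.

```python
import math

def stdp(dt_ms, a_plus=1.0, a_minus=1.0, tau=20.0):
    """Weight change for post-minus-pre delay dt_ms: LTP if pre leads, LTD if pre lags."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau)
    return -a_minus * math.exp(dt_ms / tau)

def btdp(dt_ms, a=1.0, tau=1000.0):
    """Order-independent burst rule: potentiation decays with |burst-time difference|."""
    return a * math.exp(-abs(dt_ms) / tau)
```

The symmetry of `btdp` is what lets second-long ON/OFF latency differences be read out without the order sensitivity that defeats millisecond STDP on these inputs.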
All visual signals the cortex receives are influenced by the perigeniculate sector (PGN) of the thalamic reticular nucleus, which receives input from relay cells in the lateral geniculate and provides feedback inhibition in return. Relay cells have been studied in quantitative depth; they behave in a roughly linear fashion and have receptive fields with a stereotyped center-surround structure. We know far less about reticular neurons. Qualitative studies indicate that they simply pool ascending input to generate non-selective gain control. Yet the perigeniculate is complicated; local cells are densely interconnected and fire lengthy bursts. Thus, we employed quantitative methods to explore the perigeniculate using relay cells as controls. By adapting methods of spike-triggered averaging and covariance analysis for bursts, we identified both first and second order features that build reticular receptive fields. The shapes of these spatiotemporal subunits varied widely; no stereotyped pattern emerged. Companion experiments showed that the shape of the first but not second order features could be explained by the overlap of On and Off inputs to a given cell. Moreover, we assessed the predictive power of the receptive field and how much information each component subunit conveyed. Linear-non-linear (LN) models including multiple subunits performed better than those made with just one; further, each subunit encoded different visual information. Model performance for reticular cells was always lower than for relay cells, however, indicating that reticular cells process inputs non-linearly. All told, our results suggest that the perigeniculate encodes diverse visual features to selectively modulate activity transmitted downstream.
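Spike-triggered averaging, the first-order method the study adapts for bursts, can be sketched on synthetic data: a known temporal filter drives a threshold neuron with Gaussian white noise, and averaging the stimulus segments preceding spikes recovers the filter. The filter shape, threshold nonlinearity, and data length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, flen = 200000, 20
t = np.arange(flen)
true_filter = np.exp(-t / 5.0) * np.sin(t / 3.0)   # hypothetical biphasic filter

stim = rng.normal(size=n)
drive = np.convolve(stim, true_filter)[:n]          # drive[t] = sum_k f[k] * s[t-k]
spikes = np.where(drive > 2.0 * drive.std())[0]
spikes = spikes[spikes >= flen]                     # keep spikes with a full history

# Average the stimulus segments preceding each spike; reverse to align with f[k].
sta = np.mean([stim[s - flen + 1 : s + 1][::-1] for s in spikes], axis=0)
similarity = np.corrcoef(sta, true_filter)[0, 1]
```

For Gaussian stimuli and a nonlinearity acting on a single filtered projection, the STA is proportional to the true filter; covariance (second-order) analysis extends this to multiple subunits.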
LGN; TRN; inhibition; receptive field; thalamus
Information is carried in the brain by the joint spiking patterns of large groups of noisy, unreliable neurons. This noise limits the capacity of the neural code and determines how information can be transmitted and read out. To accurately decode, the brain must overcome this noise and identify which patterns are semantically similar. We use models of network encoding noise to learn a thesaurus for populations of neurons in the vertebrate retina responding to artificial and natural videos, measuring the similarity between population responses to visual stimuli based on the information they carry. This thesaurus reveals that the code is organized in clusters of synonymous activity patterns that are similar in meaning but may differ considerably in their structure. This organization is highly reminiscent of the design of engineered codes. We suggest that the brain may use this structure and show how it allows accurate decoding of novel stimuli from novel spiking patterns.
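The notion of "synonymous" patterns above amounts to comparing responses by the stimulus beliefs they induce rather than by their physical overlap. A minimal sketch: measure the distance between two response patterns as the Jensen-Shannon divergence between their stimulus posteriors P(s|r). The response patterns and posterior table here are invented, not the retinal data.

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two distributions, in bits."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        nz = a > 0
        return float((a[nz] * np.log2(a[nz] / b[nz])).sum())
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical P(s | r) for three population patterns under two stimuli.
posteriors = {
    "1100": [0.9, 0.1],   # physically close to "1010" ...
    "1010": [0.1, 0.9],   # ... but opposite in meaning
    "0011": [0.9, 0.1],   # physically far from "1100", yet a synonym
}
d_similar_form = js_divergence(posteriors["1100"], posteriors["1010"])
d_synonyms = js_divergence(posteriors["1100"], posteriors["0011"])
```

Patterns with identical posteriors sit at zero distance no matter how different their spikes look, which is exactly the clustering structure the thesaurus uncovers.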
Our ability to perceive the world is dependent on information from our senses being passed between different parts of the brain. The information is encoded as patterns of electrical pulses or ‘spikes’, which other brain regions must be able to decipher. Cracking this code would thus enable us to predict the patterns of nerve impulses that would occur in response to specific stimuli, and ‘decode’ which stimuli had produced particular patterns of impulses.
This task is challenging in part because of its scale—vast numbers of stimuli are encoded by huge numbers of neurons that can send their spikes in many different combinations. Furthermore, neurons are inherently noisy and their response to identical stimuli may vary considerably in the number of spikes and their timing. This means that the brain cannot simply link a single unchanging pattern of firing with each stimulus, because these firing patterns are often distorted by biophysical noise.
Ganmor et al. have now modeled the effects of noise in a network of neurons in the retina (found at the back of the eye), and, in doing so, have provided insights into how the brain solves this problem. This has brought us a step closer to cracking the neural code. First, 10 second video clips of natural scenes and artificial stimuli were played on a loop to a sample of retina taken from a salamander, and the responses of nearly 100 neurons in the sample were recorded for two hours. Dividing the 10 second clip into short segments provided a series of 500 stimuli, which the network had been exposed to more than 600 times.
Ganmor et al. analyzed the responses of groups of 20 cells to each stimulus and found that physically similar firing patterns were not particularly likely to encode the same stimulus. This can be likened to the way that words such as ‘light’ and ‘night’ have similar structures but different meanings. Instead, the model reveals that each stimulus was represented by a cluster of firing patterns that bore little physical resemblance to one another, but which nevertheless conveyed the same meaning. To continue with the previous example, this is similar to the way that ‘light’ and ‘illumination’ have the same meaning but different structures.
Ganmor et al. use these new data to map the organization of the ‘vocabulary’ of populations of cells in the retina, and put together a kind of ‘thesaurus’ that enables new activity patterns of the retina to be decoded and could be used to crack the neural code. Furthermore, the organization of ‘synonyms’ is strikingly similar to codes that are favored in many forms of telecommunication. In these man-made codes, codewords that represent different items are chosen to be so distinct from each other that even if they were corrupted by noise, they could be correctly deciphered. Correspondingly, in the retina, patterns that carry the same meaning occupy a distinct area, and new patterns can be interpreted based on their proximity to these clusters.
neural code; information; noise; entropy; natural stimuli; metric; retina; salamander
The temporal dynamics of inhibition within a neural network is a crucial determinant of information processing. Here, the authors describe in the visual thalamus how neuromodulation governs the magnitude and time course of inhibition in an input-dependent way.
In many brain regions, inhibition is mediated by numerous classes of specialized interneurons, but within the rodent dorsal lateral geniculate nucleus (dLGN), a single class of interneuron is present. dLGN interneurons inhibit thalamocortical (TC) neurons and regulate the activity of TC neurons evoked by retinal ganglion cells (RGCs), thereby controlling the visually evoked signals reaching the cortex. It is not known whether neuromodulation can regulate interneuron firing mode and the resulting inhibition. Here, we examine this in brain slices. We find that cholinergic modulation regulates the output mode of these interneurons and controls the resulting inhibition in a manner that is dependent on the level of afferent activity. When few RGCs are activated, acetylcholine suppresses synaptically evoked interneuron spiking, and strongly reduces disynaptic inhibition. In contrast, when many RGCs are coincidently activated, single stimuli promote the generation of a calcium spike, and stimulation with a brief train evokes prolonged plateau potentials lasting for many seconds that in turn lead to sustained inhibition. These findings indicate that cholinergic modulation regulates feedforward inhibition in a context-dependent manner.
Within the visual thalamus, a single type of inhibitory interneuron regulates activity evoked by retinal ganglion cells and controls the visual signals that reach the cortex. Here, we find that neuromodulation, of the sort thought to occur when an animal is attending to a task, regulates the firing mode of these interneurons and controls the resulting inhibition in an input-dependent manner. When few ganglion cells are activated, neuromodulation greatly decreases the number of spikes in interneurons, and as a result, strongly reduces the inhibition of relay neurons. This favors the lossless transmission of weak visual signals to the cortex by virtually eliminating inhibition within the thalamus. In contrast, when many ganglion cells are activated, the same neuromodulator leads to strong and prolonged inhibition. This is accomplished by promoting the generation of calcium spikes and prolonged depolarizations in interneurons. In this way, a modulator can regulate the flow of visual information in a context-dependent manner.
Fast-spiking (FS) cells in the neocortex are interconnected both by inhibitory chemical synapses and by electrical synapses, or gap-junctions. Synchronized firing of FS neurons is important in the generation of gamma oscillations, at frequencies between 30 and 80 Hz. To understand how these synaptic interactions control synchronization, artificial synaptic conductances were injected in FS cells, and the synaptic phase-resetting function (SPRF), describing how the compound synaptic input perturbs the phase of gamma-frequency spiking as a function of the phase at which it is applied, was measured. GABAergic and gap junctional conductances made distinct contributions to the SPRF, which had a surprisingly simple piecewise linear form, with a sharp midcycle break between phase delay and advance. Analysis of the SPRF showed how the intrinsic biophysical properties of FS neurons and their interconnections allow entrainment of firing over a wide gamma frequency band, whose upper and lower frequency limits are controlled by electrical synapses and GABAergic inhibition respectively.
Oscillations of the electrical field in the brain at 30–80 Hz (gamma oscillations) reflect coordinated firing of neurons during cognitive, sensory, and motor activity, and are thought to be a key phenomenon in the organization of neural processing in the cortex. Synchronous firing of a particular type of neuron, the inhibitory fast-spiking (FS) cell, imposes the gamma rhythm on other cells in the network. FS cells are highly interconnected by both gap junctions and chemical inhibition. In this study, we probed FS cells with a synthetic conductance stimulus which mimics the electrical effect of these complex connections in a controlled way, and directly measured how the timing of their firing should be affected by nearby FS neighbors. We were able to fit a mathematically simple but accurate model to these measurements, the “synaptic phase-resetting function”, which predicts how FS neurons synchronize at different frequencies, noise levels, and synaptic connection strengths. This model gives us deeper insight into how the FS cells synchronize so effectively during gamma oscillations, and will be a building block in large-scale simulations of the FS cell network aimed at understanding the onset and stability of patterns of gamma oscillation in the cortex.
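How a phase-resetting function predicts entrainment can be sketched with a one-dimensional phase map: a 40-Hz oscillator receives periodic 35-Hz input through a piecewise linear function with a midcycle break between delay and advance, loosely in the spirit of the SPRF described above. The slopes and frequencies are illustrative, not the measured values.

```python
def prf(phi, a=0.5, b=0.5):
    """Phase shift per input: delay in the first half-cycle, advance in the second."""
    return -a * phi if phi < 0.5 else b * (phi - 0.5)

f0, f_in = 40.0, 35.0                 # intrinsic and input frequencies (Hz)
phi, history = 0.3, []
for _ in range(300):
    # Each input advances the phase by f0/f_in cycles plus the PRF correction.
    phi = (phi + f0 / f_in + prf(phi)) % 1.0
    history.append(phi)

# Entrainment: the phase at which inputs arrive settles to a fixed point (2/7 here,
# where the delay branch exactly cancels the 1/7-cycle detuning per input period).
final_drift = abs(history[-1] - history[-2])
```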
Understanding how neural and behavioral timescales interact to influence cortical activity and stimulus coding is an important issue in sensory neuroscience. In air-breathing animals, voluntary changes in respiratory frequency alter the temporal patterning of olfactory input. In the olfactory bulb, these behavioral timescales are reflected in the temporal properties of mitral/tufted (M/T) cell spike trains. As the odor information contained in these spike trains is relayed from the bulb to the cortex, interactions between presynaptic spike timing and short-term synaptic plasticity dictate how stimulus features are represented in cortical spike trains. Here we demonstrate how the timescales associated with respiratory frequency, spike timing and short-term synaptic plasticity interact to shape cortical responses. Specifically, we quantified the timescales of short-term synaptic facilitation and depression at excitatory synapses between bulbar M/T cells and cortical neurons in slices of mouse olfactory cortex. We then used these results to generate simulated M/T population synaptic currents that were injected into real cortical neurons. M/T population inputs were modulated at frequencies consistent with passive respiration or active sniffing. We show how the differential recruitment of short-term plasticity at breathing versus sniffing frequencies alters cortical spike responses. For inputs at sniffing frequencies, cortical neurons linearly encoded increases in presynaptic firing rates with increased phase-locked firing rates. In contrast, at passive breathing frequencies, cortical responses saturated with changes in presynaptic rate. Our results suggest that changes in respiratory behavior can gate the transfer of stimulus information between the olfactory bulb and cortex.
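The depression side of the rate-dependent transfer described above can be sketched with a Tsodyks-Markram-style resource model: steady-state synaptic efficacy falls as presynaptic rate rises. The parameters are illustrative, not fits to olfactory cortex synapses, and facilitation (the other component the study quantifies) is omitted for brevity.

```python
import math

def steady_state_efficacy(rate_hz, u=0.4, tau_rec=0.5):
    """Steady-state amplitude u*R for regular spiking at rate_hz (tau_rec in s)."""
    dt = 1.0 / rate_hz
    decay = math.exp(-dt / tau_rec)
    # Fixed point of R -> 1 - (1 - R*(1-u)) * exp(-dt/tau_rec) between spikes.
    r_ss = (1.0 - decay) / (1.0 - (1.0 - u) * decay)
    return u * r_ss

breathing = steady_state_efficacy(2.0)   # passive respiration (~2 Hz)
sniffing = steady_state_efficacy(8.0)    # active sniffing (~8 Hz)
```

The drop in efficacy at higher rates is one way input frequency can gate how much presynaptic rate information survives the synapse.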
Thalamic relay cells transmit information from retina to cortex by firing either rapid bursts or tonic trains of spikes. Bursts occur when the membrane voltage is low, as during sleep, because they depend on channels that cannot respond to excitatory input unless they are primed by strong hyperpolarization. Cells fire tonically when depolarized, as during waking. Thus, mode of firing is usually associated with behavioral state. Growing evidence, however, suggests that sensory processing involves both burst and tonic spikes. To ask if visually evoked synaptic responses induce each type of firing, we recorded intracellular responses to natural movies from relay cells and developed methods to map the receptive fields of the excitation and inhibition that the images evoked. In addition to tonic spikes, the movies routinely elicited lasting inhibition from the center of the receptive field that permitted bursts to fire. Therefore, naturally evoked patterns of synaptic input engage dual modes of firing.