The conversion of an analog stimulus into the digital form of spikes is a fundamental step in encoding sensory information. Here, we investigate this transformation in the visual system of fish by in vivo calcium imaging and electrophysiology of retinal bipolar cells, which have been assumed to be purely graded neurons.
Synapses of all major classes of retinal bipolar cell encode visual information by using a combination of spikes and graded signals. Spikes are triggered within the synaptic terminal and, although sparse, phase-lock to a stimulus with a jitter as low as 2–3 ms. Spikes in bipolar cells encode a visual stimulus less reliably than spikes in ganglion cells but with similar temporal precision. The spike-generating mechanism does not alter the temporal filtering of a stimulus compared with the generator potential. The amplitude of the graded component of the presynaptic calcium signal can vary in time, and small fluctuations in resting membrane potential alter spike frequency and even switch spiking on and off.
In the retina of fish, the millisecond precision of spike coding begins in the synaptic terminal of bipolar cells. This neural compartment regulates the frequency of digital signals transmitted to the inner retina as well as the strength of graded signals.
► The spike code of vision begins in retinal bipolar cells ► Spikes in bipolar cells phase-lock to visual stimuli with millisecond precision ► Spiking and graded calcium signals can switch on and off at individual synapses ► Spikes in bipolar cells encode a stimulus less reliably than spikes in ganglion cells
The dendritic tree contributes significantly to the elementary computations a neuron performs while converting its synaptic inputs into action potential output. Traditionally, these computations have been characterized as both temporally and spatially localized. Under this localist account, neurons compute near-instantaneous mappings from their current input to their current output, brought about by somatic summation of dendritic contributions that are generated in functionally segregated compartments. However, recent evidence about the presence of oscillations in dendrites suggests a qualitatively different mode of operation: the instantaneous phase of such oscillations can depend on a long history of inputs, and under appropriate conditions, even dendritic oscillators that are remote may interact through synchronization. Here, we develop a mathematical framework to analyze the interactions of local dendritic oscillations and the way these interactions influence single cell computations. Combining weakly coupled oscillator methods with cable theoretic arguments, we derive phase-locking states for multiple oscillating dendritic compartments. We characterize how the phase-locking properties depend on key parameters of the oscillating dendrite: the electrotonic properties of the (active) dendritic segment, and the intrinsic properties of the dendritic oscillators. As a direct consequence, we show how input to the dendrites can modulate phase-locking behavior and hence global dendritic coherence. In turn, dendritic coherence is able to gate the integration and propagation of synaptic signals to the soma, ultimately leading to an effective control of somatic spike generation. Our results suggest that dendritic oscillations enable the dendritic tree to operate on more global temporal and spatial scales than previously thought; notably that local dendritic activity may be a mechanism for generating on-going whole-cell voltage oscillations.
A central issue in biology is how local processes yield global consequences. This is especially relevant for neurons, since these spatially extended cells process local synaptic inputs to generate global action potential output. The dendritic tree of a neuron, which receives most of the inputs, expresses ion channels that can generate nonlinear dynamics. A prominent phenomenon resulting from such ion channels is the voltage oscillation. The distribution of the active membrane channels throughout the cell is often highly non-uniform, which can turn the dendritic tree into a network of sparsely spaced local oscillators. Here we analyze whether local dendritic oscillators can produce cell-wide voltage oscillations. Our mathematical theory shows that even when the dendritic oscillators are only weakly coupled, they lock their phases and give rise to global oscillations. We show how the biophysical properties of the dendrites determine the global locking and how it can be controlled by synaptic inputs. As a consequence of global locking, even individual synaptic inputs can affect the timing of action potentials. In fact, dendrites locking in synchrony can lead to sustained firing of the cell. We show that dendritic trees can be bistable, with dendrites locking in either synchrony or asynchrony, which may provide a novel mechanism for single-cell-based memory.
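The phase-locking mechanism described above can be caricatured in a few lines. This is a minimal sketch, not the paper's cable-theoretic derivation: two phase oscillators with a small frequency detuning, coupled through an assumed sinusoidal interaction function (Kuramoto-style), converge to a locked phase difference.

```python
import numpy as np

# Toy sketch of two weakly coupled phase oscillators locking their phases,
# a Kuramoto-style caricature of coupled dendritic oscillators. The
# sinusoidal coupling and all parameter values are illustrative assumptions.
def phase_difference(coupling=0.5, detuning=0.1, dt=1e-3, t_end=200.0):
    phi1, phi2 = 0.0, 2.0                      # initial phases (rad)
    w1, w2 = 1.0, 1.0 + detuning               # natural frequencies (rad/s)
    for _ in range(int(t_end / dt)):
        d1 = w1 + coupling * np.sin(phi2 - phi1)
        d2 = w2 + coupling * np.sin(phi1 - phi2)
        phi1 += dt * d1
        phi2 += dt * d2
    return (phi2 - phi1) % (2 * np.pi)

# For coupling strong enough relative to the detuning, the phase difference
# converges to the stable fixed point sin(dphi) = detuning / (2 * coupling).
dphi = phase_difference()
dphi_pred = np.arcsin(0.1 / (2 * 0.5))
```

Weakening the coupling below half the detuning destroys the fixed point and the pair drifts, which is the simplest analogue of the locked/unlocked distinction analyzed in the paper.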
A wide variety of neurons encode temporal information via phase-locked spikes. In the avian auditory brainstem, neurons in the cochlear nucleus magnocellularis (NM) send phase-locked synaptic inputs to coincidence detector neurons in the nucleus laminaris (NL) that mediate sound localization. Previous modeling studies suggested that converging phase-locked synaptic inputs may give rise to a periodic oscillation in the membrane potential of their target neuron. Recent physiological recordings in vivo revealed that owl NL neurons changed their spike rates almost linearly with the amplitude of this oscillatory potential. The oscillatory potential was termed the sound analog potential, because of its resemblance to the waveform of the stimulus tone. The amplitude of the sound analog potential recorded in NL varied systematically with the interaural time difference (ITD), which is one of the most important cues for sound localization. In order to investigate the mechanisms underlying ITD computation in the NM-NL circuit, we provide detailed theoretical descriptions of how phase-locked inputs form oscillating membrane potentials. We derive analytical expressions that relate presynaptic, synaptic, and postsynaptic factors to the signal and noise components of the oscillation in both the synaptic conductance and the membrane potential. Numerical simulations demonstrate the validity of the theoretical formulations over the entire frequency range tested (1–8 kHz), as well as potential effects of higher harmonics on NL neurons with low best frequencies (<2 kHz).
phase-locking; sound localization; auditory brainstem; periodic signals; oscillation; owl
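The formation of an oscillatory "sound analog" conductance from converging phase-locked inputs can be illustrated numerically. This sketch is not the paper's analytical treatment: spike phases are drawn from an assumed von Mises distribution, and the fiber count, rates, and synaptic time constant are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch: many phase-locked presynaptic spike trains converging on one
# neuron. Within-cycle spike phases follow a von Mises distribution, so the
# summed synaptic conductance oscillates at the stimulus frequency. All
# parameters (rates, kappa, synaptic decay) are illustrative assumptions.
f0, kappa = 4000.0, 2.0            # stimulus frequency (Hz), locking strength
n_inputs, rate = 100, 500.0        # fibers, spikes/s per fiber
t_end, dt, tau = 0.05, 1e-5, 1e-4  # duration, step, synaptic decay (s)
n = int(t_end / dt)

n_spk = rng.poisson(rate * t_end * n_inputs)
cycles = rng.integers(0, int(f0 * t_end), n_spk)        # which cycle
phases = rng.vonmises(0.0, kappa, n_spk) % (2 * np.pi)  # phase in cycle
spk_t = (cycles + phases / (2 * np.pi)) / f0

counts = np.bincount((spk_t / dt).astype(int), minlength=n)[:n]
kernel = np.exp(-np.arange(0, 5 * tau, dt) / tau)       # synaptic kernel
g = np.convolve(counts, kernel)[:n]                     # summed conductance

spec = np.abs(np.fft.rfft(g - g.mean()))
peak_freq = np.fft.rfftfreq(n, dt)[np.argmax(spec)]     # dominant frequency
```

The spectral peak of the summed conductance sits at the stimulus frequency; its height relative to the broadband noise floor is the signal-to-noise decomposition the paper derives analytically.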
The ability of spiking neurons to synchronize their activity in a network depends on the response behavior of these neurons, as quantified by the phase response curve (PRC), and on coupling properties. The PRC characterizes the effects of transient inputs on spike timing and can be measured experimentally. Here we use the adaptive exponential integrate-and-fire (aEIF) neuron model to determine how subthreshold and spike-triggered slow adaptation currents shape the PRC. Based on that, we predict how synchrony and phase-locked states of coupled neurons change in the presence of synaptic delays and unequal coupling strengths. We find that increased subthreshold adaptation currents cause a transition of the PRC from only phase advances to phase advances and delays in response to excitatory perturbations. Increased spike-triggered adaptation currents, on the other hand, predominantly skew the PRC to the right. Both adaptation-induced changes of the PRC are modulated by spike frequency, being more prominent at lower frequencies. Applying phase reduction theory, we show that subthreshold adaptation stabilizes synchrony for pairs of coupled excitatory neurons, while spike-triggered adaptation causes locking with a small phase difference, as long as synaptic heterogeneities are negligible. For inhibitory pairs synchrony is stable and robust against conduction delays, and adaptation can mediate bistability of in-phase and anti-phase locking. We further demonstrate that stable synchrony and bistable in/anti-phase locking of pairs carry over to synchronization and clustering of larger networks. The effects of adaptation in aEIF neurons on PRCs and network dynamics qualitatively reflect those of biophysical adaptation currents in detailed Hodgkin-Huxley-based neurons, which underscores the utility of the aEIF model for investigating the dynamical behavior of networks.
Our results suggest that neuronal spike frequency adaptation can serve as a mechanism for synchronizing low-frequency oscillations in local excitatory networks, but indicate that inhibition rather than excitation generates coherent rhythms at higher frequencies.
Synchronization of neuronal spiking in the brain is related to cognitive functions, such as perception, attention, and memory. It is therefore important to determine which properties of neurons influence their collective behavior in a network, and to understand how. A prominent feature of many cortical neurons is spike frequency adaptation, which is caused by slow transmembrane currents. We investigated how these adaptation currents affect the synchronization tendency of coupled model neurons. Using the efficient adaptive exponential integrate-and-fire (aEIF) model and a biophysically detailed neuron model for validation, we found that increased adaptation currents promote synchronization of coupled excitatory neurons at lower spike frequencies, as long as the conduction delays between the neurons are negligible. Inhibitory neurons, on the other hand, synchronize in the presence of conduction delays, with or without adaptation currents. Our results emphasize the utility of the aEIF model for computational studies of neuronal network dynamics. We conclude that adaptation currents provide a mechanism to generate low-frequency oscillations in local populations of excitatory neurons, while faster rhythms seem to be caused by inhibition rather than excitation.
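The aEIF model referenced above is compact enough to sketch directly. The following is an illustrative forward-Euler implementation with invented parameter values (not those of the study); it shows the core effect of spike-triggered adaptation, namely that interspike intervals lengthen as the adaptation current accumulates.

```python
import numpy as np

# Minimal adaptive exponential integrate-and-fire (aEIF) sketch showing
# spike-triggered adaptation: each spike increments the adaptation current
# w by b, so interspike intervals lengthen over the train. All parameter
# values are illustrative assumptions.
def aeif_spike_times(b=40.0, I=600.0, t_end=0.5, dt=1e-5):
    C, gL, EL, VT, DT = 200.0, 10.0, -70.0, -50.0, 2.0   # pF, nS, mV, mV, mV
    a, tau_w, Vr = 2.0, 0.1, -58.0                       # nS, s, mV
    V, w, t, spikes = EL, 0.0, 0.0, []
    while t < t_end:
        # membrane and adaptation dynamics (V in mV, w and I in pA)
        dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV * 1e3          # pA/pF = mV/ms; dt is in seconds
        w += dt * dw
        if V >= 0.0:                # spike: reset voltage, increment adaptation
            V, w = Vr, w + b
            spikes.append(t)
        t += dt
    return np.array(spikes)

spk = aeif_spike_times()
isis = np.diff(spk)                 # interspike intervals grow as w builds up
```

Setting `b=0` and increasing `a` instead would isolate the subthreshold adaptation pathway whose distinct effect on the PRC the abstract describes.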
Cerebellar Purkinje cells display complex intrinsic dynamics. They fire spontaneously, exhibit bistability, and via mutual network interactions are involved in the generation of high frequency oscillations and travelling waves of activity. To probe the dynamical properties of Purkinje cells we measured their phase response curves (PRCs). PRCs quantify the change in spike phase caused by a stimulus as a function of its temporal position within the interspike interval, and are widely used to predict neuronal responses to more complex stimulus patterns. Significant variability in the interspike interval during spontaneous firing can lead to PRCs with a low signal-to-noise ratio, requiring averaging over thousands of trials. We show using electrophysiological experiments and simulations that the PRC calculated in the traditional way by sampling the interspike interval with brief current pulses is biased. We introduce a corrected approach for calculating PRCs which eliminates this bias. Using our new approach, we show that Purkinje cell PRCs change qualitatively depending on the firing frequency of the cell. At high firing rates, Purkinje cells exhibit single-peaked, or monophasic PRCs. Surprisingly, at low firing rates, Purkinje cell PRCs are largely independent of phase, resembling PRCs of ideal non-leaky integrate-and-fire neurons. These results indicate that Purkinje cells can act as perfect integrators at low firing rates, and that the integration mode of Purkinje cells depends on their firing rate.
By observing how brief current pulses injected at different times between spikes change the phase of spiking of a neuron (and thus obtaining the so-called phase response curve), it should be possible to predict a full spike train in response to more complex stimulation patterns. When we applied this traditional protocol to obtain phase response curves in cerebellar Purkinje cells in the presence of noise, we observed a triangular region devoid of data points near the end of the spiking cycle. This “Bermuda Triangle” revealed a flaw in the classical method for constructing phase response curves. We developed a new approach to eliminate this flaw and used it to construct phase response curves of Purkinje cells over a range of spiking rates. Surprisingly, at low firing rates, phase changes were independent of the phase of the injected current pulses, implying that the Purkinje cell is a perfect integrator under these conditions. This mechanism has not yet been described in other cell types and may be crucial for the information processing capabilities of these neurons.
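The perfect-integrator result above has a simple textbook analogue that can be sketched, though it is not the authors' corrected estimation procedure: measure a PRC by injecting one brief pulse per trial at varying phases of an integrate-and-fire cycle. For a non-leaky integrator the phase advance is independent of pulse timing (a flat PRC), while a leaky neuron advances more for late pulses. Units and parameters here are arbitrary.

```python
import numpy as np

# Sketch: measuring a phase response curve (PRC) by injecting a brief
# depolarizing kick at different phases of an integrate-and-fire cycle.
# For a non-leaky (perfect) integrator the advance is phase-independent,
# the flat PRC that the low-rate Purkinje-cell data resemble.
def prc(leak, I=1.0, thresh=1.0, pulse=0.05, dt=1e-4):
    def period(pulse_time=None):
        v, step = 0.0, 0
        pulse_step = -1 if pulse_time is None else int(round(pulse_time / dt))
        while v < thresh:
            v += dt * (I - leak * v)
            if step == pulse_step:
                v += pulse                  # brief pulse as a voltage kick
            step += 1
        return step * dt
    T0 = period()                           # unperturbed period
    phases = np.linspace(0.05, 0.9, 10)
    shifts = np.array([(T0 - period(p * T0)) / T0 for p in phases])
    return phases, shifts

_, prc_perfect = prc(leak=0.0)   # non-leaky: flat (phase-independent) PRC
_, prc_leaky = prc(leak=0.8)     # leaky: advance grows with pulse phase
```

This noise-free setting sidesteps the sampling bias the paper corrects for; with spontaneous interspike-interval variability, late-phase pulses are systematically undersampled, which is the "Bermuda Triangle" described in the summary.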
In somatosensory cortex, stimulus amplitude is represented at a relatively coarse temporal resolution, while stimulus frequency is represented by precisely timed action potentials.
Our ability to perceive and discriminate textures relies on the transduction and processing of complex, high-frequency vibrations elicited in the fingertip as it is scanned across a surface. How naturalistic vibrations, and by extension texture, are encoded in the responses of neurons in primary somatosensory cortex (S1) is unknown. Combining single unit recordings in awake macaques and perceptual judgments obtained from human subjects, we show that vibratory amplitude is encoded in the strength of the response evoked in S1 neurons. In contrast, the frequency composition of the vibrations, up to 800 Hz, is not encoded in neuronal firing rates, but rather in the phase-locked responses of a subpopulation of neurons. Moreover, analysis of perceptual judgments suggests that spike timing not only conveys stimulus information but also shapes tactile perception. We conclude that information about the amplitude and frequency of natural vibrations is multiplexed at different time scales in S1, and encoded in the rate and temporal patterning of the response, respectively.
When we slide our fingertip across a textured surface, small, complex, and high-frequency vibrations are elicited in the skin and our nervous system extracts information about texture from these vibrations. In this study, we investigate how texture-like vibrations are processed in primary somatosensory cortex (S1). First, we show that the time-varying amplitude of skin vibrations is encoded in the time-varying response rates of a subpopulation of S1 neurons. Second, we show that this same subpopulation of S1 neurons produces responses whose timing closely matches that of the vibrations: The frequency composition of the spiking patterns matches that of the stimulus, even for complex vibrations. We demonstrate that this temporal precision is behaviorally relevant by showing that the tactile perception of vibration is better predicted from neuronal responses when spike timing is taken into consideration than when it is not. The activity of S1 neurons is thus multiplexed at different time scales: Stimulus amplitude, which changes relatively slowly, is represented at a relatively coarse temporal resolution, while stimulus frequency is represented by precisely timed action potentials.
The nature of the neural codes for pitch and loudness, two basic auditory attributes, has been a key question in neuroscience for over a century. A currently widespread view is that sound intensity (subjectively, loudness) is encoded in spike rates, whereas sound frequency (subjectively, pitch) is encoded in precise spike timing. Here, using information-theoretic analyses, we show that the spike rates of a population of virtual neural units with frequency-tuning and spike-count correlation characteristics similar to those measured in the primary auditory cortex of primates contain sufficient statistical information to account for the smallest frequency-discrimination thresholds measured in human listeners. The same population, and the same spike-rate code, can also account for the intensity-discrimination thresholds of humans. These results demonstrate the viability of a unified rate-based cortical population code for both sound frequency (pitch) and sound intensity (loudness), and thus suggest a resolution to a long-standing puzzle in auditory neuroscience.
A widely held view among auditory scientists is that the neural code for sound intensity (or loudness) involves temporally coarse spike-rate information, whereas the code for sound frequency (or pitch) requires more fine-grained and precise spike timing information. One problem with this view is that neurons in auditory cortex do not produce precisely time-locked responses to higher frequencies within the pitch range, suggesting that a transformation to a rate code must occur. However, because cortical neurons exhibit relatively broad tuning to frequency and correlated spike counts, it is unclear whether a cortical population code based on spike rates alone can support the remarkably precise pitch-discrimination ability of humans. Here we show that a relatively small population of virtual neurons with frequency-tuning and spike-count correlation characteristics consistent with those of actual neurons in the primary auditory cortex of primates can account for both the smallest frequency- and intensity-discrimination thresholds measured behaviorally in humans. These results suggest a resolution to a long-standing puzzle in auditory neuroscience.
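The kind of population-rate-code argument above is often quantified with linear Fisher information, J = f'(s)ᵀ C⁻¹ f'(s), whose inverse square root bounds the discrimination threshold of an optimal linear decoder. The sketch below is a generic stand-in, not the authors' analysis: Gaussian tuning curves, Poisson-like variances, and a uniform correlation coefficient are all invented for illustration.

```python
import numpy as np

# Sketch: linear Fisher information of a correlated, frequency-tuned
# population. Tuning shape, gain, and the uniform correlation c are
# illustrative assumptions, not measured values.
n = 200
prefs = np.linspace(0.0, 4.0, n)             # preferred stimuli (a.u.)
sigma_tc, gain, s0 = 0.5, 30.0, 2.0          # tuning width, peak rate, stimulus

f = gain * np.exp(-(s0 - prefs) ** 2 / (2 * sigma_tc ** 2))  # mean rates
fprime = f * (prefs - s0) / sigma_tc ** 2                    # tuning derivative

# spike-count covariance: Poisson-like variances, uniform correlation c
c = 0.1
sd = np.sqrt(f + 1e-9)
C = c * np.outer(sd, sd)
np.fill_diagonal(C, sd ** 2)

J = fprime @ np.linalg.solve(C, fprime)      # linear Fisher information
threshold = 1.0 / np.sqrt(J)                 # discrimination threshold (a.u.)

C_ind = np.diag(sd ** 2)                     # same cells, correlations removed
J_ind = fprime @ np.linalg.solve(C_ind, fprime)
```

Comparing `J` with `J_ind` shows why the spike-count correlations emphasized in the abstract matter: the correlation structure changes how much information the same set of tuning curves conveys.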
While sensory neurons carry behaviorally relevant information in responses that often extend over hundreds of milliseconds, the key units of neural information likely consist of much shorter and temporally precise spike patterns. The mechanisms and temporal reference frames by which sensory networks partition responses into these shorter units of information remain unknown. One hypothesis holds that slow oscillations provide a network-intrinsic reference for temporally partitioning spike trains without exploiting the millisecond-precise alignment of spikes to sensory stimuli. We tested this hypothesis on neural responses recorded in visual and auditory cortices of macaque monkeys in response to natural stimuli. Comparing different schemes for response partitioning revealed that theta band oscillations provide a temporal reference that permits extracting significantly more information than can be obtained from spike counts, and sometimes almost as much information as obtained by partitioning spike trains using precisely stimulus-locked time bins. We further tested the robustness of these partitioning schemes to temporal uncertainty in the decoding process and to noise in the sensory input. This revealed that partitioning using an oscillatory reference provides greater robustness than partitioning using precisely stimulus-locked time bins. Overall, these results provide a computational proof of concept for the hypothesis that slow rhythmic network activity may serve as an internal reference frame for information coding in sensory cortices, and they foster the notion that slow oscillations serve as key elements in the computations underlying perception.
Neurons in sensory cortices encode objects in our sensory environment by varying the timing and number of action potentials that they emit. Brain networks that ‘decode’ this information need to partition those spike trains into their individual informative units. Experimenters achieve such partitioning by exploiting their knowledge about the millisecond-precise timing of individual spikes relative to externally presented sensory stimuli. The brain, however, does not have access to this information and has to partition and decode spike trains using intrinsically available temporal reference frames. We show that slow (4–8 Hz) oscillatory network activity can provide such an intrinsic temporal reference. Specifically, we analyzed neural responses recorded in primary auditory and visual cortices. This revealed that the oscillatory reference frame performs nearly as well as the precise stimulus-locked reference frame and renders neural encoding robust to sensory noise and to the temporal uncertainty that naturally occurs during decoding. These findings provide a computational proof of concept that slow oscillatory network activity may serve the crucial function of a temporal reference frame for sensory coding.
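The benefit of phase-based partitioning can be demonstrated with a toy decoder. This sketch is not the authors' analysis: two hypothetical stimuli are invented so that they evoke the same mean spike count but concentrate spikes at different theta phases, making a pure count code blind to the distinction while phase-binned counts separate it.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch: partitioning spikes by the phase of a slow oscillation. Stimulus
# statistics (rates, von Mises concentration, preferred phases) are
# invented for illustration.
n_trials, n_bins = 200, 4

def trial(stim):
    mu = 0.5 if stim == 0 else np.pi + 0.5           # preferred theta phase
    n_spk = rng.poisson(30)                          # same mean count for both
    ph = rng.vonmises(mu, 3.0, n_spk) % (2 * np.pi)
    counts, _ = np.histogram(ph, bins=n_bins, range=(0, 2 * np.pi))
    return n_spk, counts

labels = rng.integers(0, 2, n_trials)
trials = [trial(s) for s in labels]
totals = np.array([c for c, _ in trials], float)[:, None]   # count code
binned = np.array([b for _, b in trials], float)            # phase-binned code

def nearest_mean_accuracy(feats):
    m0, m1 = feats[labels == 0].mean(0), feats[labels == 1].mean(0)
    pred = (np.linalg.norm(feats - m1, axis=1)
            < np.linalg.norm(feats - m0, axis=1)).astype(int)
    return float((pred == labels).mean())

acc_count = nearest_mean_accuracy(totals)   # ~chance
acc_phase = nearest_mean_accuracy(binned)   # near perfect
```

Jittering the bin boundaries degrades the phase-binned decoder only gradually, which is the robustness-to-temporal-uncertainty point made in the summary.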
Animals repeat rewarded behaviors, but the physiological basis of reward-based learning has only been partially elucidated. On one hand, experimental evidence shows that the neuromodulator dopamine carries information about rewards and affects synaptic plasticity. On the other hand, the theory of reinforcement learning provides a framework for reward-based learning. Recent models of reward-modulated spike-timing-dependent plasticity have made first steps towards bridging the gap between the two approaches, but faced two problems. First, reinforcement learning is typically formulated in a discrete framework, ill-adapted to the description of natural situations. Second, biologically plausible models of reward-modulated spike-timing-dependent plasticity require precise calculation of the reward prediction error, yet it remains to be shown how this can be computed by neurons. Here we propose a solution to these problems by extending the continuous temporal difference (TD) learning of Doya (2000) to the case of spiking neurons in an actor-critic network operating in continuous time, and with continuous state and action representations. In our model, the critic learns to predict expected future rewards in real time. Its activity, together with actual rewards, conditions the delivery of a neuromodulatory TD signal to itself and to the actor, which is responsible for action choice. In simulations, we show that such an architecture can solve a Morris water-maze-like navigation task, in a number of trials consistent with reported animal performance. We also use our model to solve the acrobot and the cartpole problems, two complex motor control tasks. Our model provides a plausible way of computing reward prediction error in the brain. Moreover, the analytically derived learning rule is consistent with experimental evidence for dopamine-modulated spike-timing-dependent plasticity.
As every dog owner knows, animals repeat behaviors that earn them rewards. But what is the brain machinery that underlies this reward-based learning? Experimental research points to plasticity of the synaptic connections between neurons, with an important role played by the neuromodulator dopamine, but the exact way synaptic activity and neuromodulation interact during learning is not precisely understood. Here we propose a model explaining how reward signals might interplay with synaptic plasticity, and use the model to solve a simulated maze navigation task. Our model extends an idea from the theory of reinforcement learning: one group of neurons forms an “actor,” responsible for choosing the direction of motion of the animal. Another group of neurons, the “critic,” predicts the rewards the actor will gain and uses the mismatch between actual and expected reward to teach the synapses feeding both groups. Our learning agent learns to reliably navigate its maze to find the reward. Remarkably, the synaptic learning rule that we derive from theoretical considerations is similar to previous rules based on experimental evidence.
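The critic's continuous-time TD error, delta(t) = r(t) - V(t)/tau + dV/dt in the Doya (2000) formulation the abstract builds on, can be sketched on a toy task. This is not the spiking actor-critic network of the paper: the critic here is a plain linear function of hypothetical Gaussian place-cell-like features on a 1-D track, with reward at one end, and all parameters are illustrative.

```python
import numpy as np

centers = np.linspace(0.0, 1.0, 21)

def features(x):
    # Gaussian place-cell-like basis for the critic (illustrative)
    return np.exp(-(x - centers) ** 2 / (2 * 0.05 ** 2))

w = np.zeros_like(centers)                # critic weights
tau_r, dt, speed, eta = 1.0, 0.01, 0.2, 0.05

for _ in range(300):                      # episodes: run left to right
    x = 0.0
    phi_prev = features(x)
    V_prev = w @ phi_prev
    while True:
        x = min(x + speed * dt, 1.0)
        done = x >= 1.0
        phi = features(x)
        V = 0.0 if done else w @ phi      # terminal value clamped to zero
        r = 1.0 / dt if done else 0.0     # impulse reward at the goal
        # continuous-time TD error (Doya 2000): delta = r - V/tau + dV/dt
        delta = r - V_prev / tau_r + (V - V_prev) / dt
        w += eta * dt * delta * phi_prev  # semi-gradient step on the critic
        if done:
            break
        V_prev, phi_prev = V, phi

V_near = w @ features(0.9)                # learned value close to the reward
V_far = w @ features(0.1)                 # learned value far from the reward
```

After training, the value estimate rises toward the rewarded end of the track; in the full model this same delta signal is broadcast as a neuromodulatory teaching signal to both critic and actor synapses.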
In active networks, excitatory and inhibitory synaptic inputs generate membrane voltage fluctuations that drive spike activity in a probabilistic manner. Despite this, some cells in vivo show a strong propensity to precisely lock to the local field potential and maintain a specific spike-phase relationship relative to other cells. In recordings from rat medial entorhinal cortical stellate cells, we measured spike phase-locking in response to sinusoidal “test” inputs in the presence of different forms of background membrane voltage fluctuations, generated via dynamic clamp. We find that stellate cells show strong and robust spike phase-locking to theta (4–12 Hz) inputs. This response occurs under a wide variety of background membrane voltage fluctuation conditions, including a substantial increase in overall membrane conductance. Furthermore, the IH current present in stellate cells is critical to the enhanced spike phase-locking response at theta. Finally, we show that correlations between inhibitory and excitatory conductance fluctuations, which can arise through feedback and feedforward inhibition, can substantially enhance the spike phase-locking response. The enhancement in locking results from a selective reduction in the size of low-frequency membrane voltage fluctuations, caused by the cancellation of correlated inhibitory and excitatory current fluctuations. Hence, our results demonstrate that stellate cells have a strong preference for spike phase-locking to theta band inputs and that the absolute magnitude of locking to theta can be modulated by the properties of background membrane voltage fluctuations.
synaptic correlations; high conductance; theta; IH; voltage fluctuations; balanced excitation and inhibition; spike phase-locking
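Phase-locking of the kind measured above is conventionally quantified by the vector strength, r = |mean(exp(i·phase))|, which approaches 1 for perfectly locked spikes and 0 for spikes uniform over the cycle. The sketch below uses synthetic spike trains as stand-ins for the dynamic-clamp recordings; the jitter value and theta frequency are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Sketch: quantifying spike phase-locking to a sinusoidal theta input with
# the vector strength. The synthetic spike trains and the 2 ms jitter are
# illustrative assumptions.
def vector_strength(spike_times, f):
    phases = 2 * np.pi * f * np.asarray(spike_times)
    return float(np.abs(np.mean(np.exp(1j * phases))))

f_theta = 8.0
cycle_starts = np.arange(100) / f_theta                 # one spike per cycle
locked = cycle_starts + rng.normal(0.0, 2e-3, 100)      # ~2 ms jitter
unlocked = rng.uniform(0.0, 100 / f_theta, 100)         # no phase preference

r_locked = vector_strength(locked, f_theta)
r_unlocked = vector_strength(unlocked, f_theta)
```

With 2 ms jitter on a 125 ms theta cycle, the phase scatter is tiny and r stays near 1, while uniformly timed spikes yield r near 1/sqrt(N).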
We used phase resetting methods to predict firing patterns of rat subthalamic nucleus (STN) neurons when their rhythmic firing was densely perturbed by noise. We applied sequences of contiguous brief (0.5–2 ms) current pulses with amplitudes drawn from a Gaussian distribution (10–100 pA standard deviation) to autonomously firing STN neurons in slices. Current noise sequences increased the variability of spike times with little or no effect on the average firing rate. We measured the infinitesimal phase resetting curve (PRC) for each neuron using a noise-based method. A phase model consisting of only a firing rate and PRC was very accurate at predicting spike timing, accounting for more than 80% of spike time variance and reliably reproducing the spike-to-spike pattern of irregular firing. An approximation for the evolution of phase was used to predict the effect of firing rate and noise parameters on spike timing variability. It quantitatively predicted changes in variability of interspike intervals with variation in noise amplitude, pulse duration and firing rate over the normal range of STN spontaneous rates. When constant current was used to drive the cells to higher rates, the PRC was altered in size and shape and accurate predictions of the effects of noise relied on incorporating these changes into the prediction. Application of rate-neutral changes in conductance showed that changes in PRC shape arise from conductance changes known to accompany rate increases in STN neurons, rather than the rate increases themselves. Our results show that firing patterns of densely perturbed oscillators cannot readily be distinguished from those of neurons randomly excited to fire from the rest state. The spike timing of repetitively firing neurons may be quantitatively predicted from the input and their PRCs, even when they are so densely perturbed that they no longer fire rhythmically.
Most neurons receive thousands of synaptic inputs per second. Each of these may be individually weak but collectively they shape the temporal pattern of firing by the postsynaptic neuron. If the postsynaptic neuron fires repetitively, its synaptic inputs need not directly trigger action potentials, but may instead control the timing of action potentials that would occur anyway. The phase resetting curve encapsulates the influence of an input on the timing of the next action potential, depending on its time of arrival. We measured the phase resetting curves of neurons in the subthalamic nucleus and used them to accurately predict the timing of action potentials in a phase model subjected to complex input patterns. A simple approximation to the phase model accurately predicted the changes in firing pattern evoked by dense patterns of noise pulses varying in amplitude and pulse duration, and by changes in firing rate. We also showed that the phase resetting curve changes systematically with changes in total neuron conductance, and doing so predicts corresponding changes in firing pattern. Our results indicate that the phase model may accurately represent the temporal integration of complex patterns of input to repetitively firing neurons.
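The phase-model machinery described above reduces to a one-line update: between inputs the phase advances at the baseline firing rate, and each brief pulse shifts it by amp × Z(phase), where Z is the infinitesimal PRC. The PRC shape and pulse train in this sketch are illustrative, not the measured STN curves.

```python
import numpy as np

# Sketch of phase-model spike prediction: phase (in cycles) advances at the
# baseline rate; each pulse shifts it by amp * Z(phase); spikes occur at
# integer phase crossings. PRC shape and parameters are illustrative.
def phase_model_spikes(pulse_times, amp=0.02, rate=20.0, dt=1e-4, t_end=1.0):
    Z = lambda phi: 1.0 - np.cos(2 * np.pi * phi)    # type-I-like PRC
    pulse_steps = {int(round(pt / dt)) for pt in pulse_times}
    phi, spikes = 0.0, []
    for step in range(int(t_end / dt)):
        phi += dt * rate                              # baseline advance
        if step in pulse_steps:
            phi += amp * Z(phi % 1.0)                 # pulse-induced shift
        if phi >= 1.0:                                # integer crossing: spike
            spikes.append(step * dt)
            phi -= 1.0
    return np.array(spikes)

free = phase_model_spikes([])           # unperturbed: one spike per 50 ms
pushed = phase_model_spikes([0.030])    # one excitatory pulse at 30 ms
```

A single weak pulse advances every subsequent spike by amp × Z evaluated at the pulse phase, divided by the firing rate; dense noise sequences are handled by simply supplying many pulse times, which is how the paper's predictions are generated from the measured PRC.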
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
It is well-known that neurons communicate with short electric pulses, called action potentials or spikes. But how can spiking networks implement complex computations? Attempts to relate spiking network activity to results of deterministic computation steps, like the output bits of a processor in a digital computer, conflict with findings from cognitive science and neuroscience, the latter indicating that the neural spike output in identical experiments changes from trial to trial, i.e., that neurons are “unreliable”. Therefore, it has recently been proposed that neural activity should rather be regarded as samples from an underlying probability distribution over many variables which, e.g., represent a model of the external world incorporating prior knowledge, memories, and sensory input. This hypothesis assumes that networks of stochastically spiking neurons are able to emulate powerful algorithms for reasoning in the face of uncertainty, i.e., to carry out probabilistic inference. In this work we propose a detailed neural network model that indeed fulfills these computational requirements, and we relate the spiking dynamics of the network to concrete probabilistic computations. Our model suggests that neural systems are suitable to carry out probabilistic inference by using stochastic, rather than deterministic, computing elements.
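Sampling-based inference with binary units can be illustrated in miniature. Note the hedge: the paper argues that spiking dynamics corresponds to a *non-reversible* Markov chain rather than Gibbs sampling; the sketch below uses plain Gibbs only to show the general idea of approximating marginals by sampling, with an invented weight matrix and biases.

```python
import numpy as np

rng = np.random.default_rng(5)

# Generic sketch: Gibbs sampling from a small Boltzmann distribution
# p(z) proportional to exp(z^T W z / 2 + b^T z), with invented W and b.
W = np.array([[0.0, 1.0, -0.5],
              [1.0, 0.0, 0.5],
              [-0.5, 0.5, 0.0]])
b = np.array([-0.2, 0.1, 0.0])

def gibbs_marginals(n_sweeps=20000, burn=2000):
    z = rng.integers(0, 2, 3).astype(float)
    acc = np.zeros(3)
    for sweep in range(n_sweeps):
        for k in range(3):
            # conditional p(z_k = 1 | rest) is logistic in the local field
            # (diagonal of W is zero, so z_k itself does not contribute)
            p_on = 1.0 / (1.0 + np.exp(-(W[k] @ z + b[k])))
            z[k] = float(rng.random() < p_on)
        if sweep >= burn:
            acc += z
    return acc / (n_sweeps - burn)

# exact marginals by enumerating all 2^3 states
states = np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)],
                  dtype=float)
logp = 0.5 * np.einsum('si,ij,sj->s', states, W, states) + states @ b
p = np.exp(logp - logp.max())
p /= p.sum()
exact = p @ states

est = gibbs_marginals()                   # sampled estimate of the marginals
```

The sampled marginals converge to the enumerated ones; the paper's contribution is a chain whose update dynamics, unlike the synchronous-resampling step here, respects the temporal structure of spiking activity.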
Local field potential (LFP) oscillations are often accompanied by synchronization of activity within a widespread cerebral area. Thus, the LFP and neuronal coherence appear to be the result of a common mechanism that underlies neuronal assembly formation. We used the olfactory bulb as a model to investigate: (1) the extent to which unitary dynamics and LFP oscillations can be correlated and (2) the precision with which a model of the hypothesized underlying mechanisms can accurately explain the experimental data. For this purpose, we analyzed simultaneous recordings of mitral cell (MC) activity and LFPs in anesthetized and freely breathing rats in response to odorant stimulation. Spike trains were found to be phase-locked to the gamma oscillation at specific firing rates and to form odor-specific temporal patterns. The use of a conductance-based MC model driven by an approximately balanced excitatory-inhibitory input conductance and a relatively small inhibitory conductance that oscillated at the gamma frequency allowed us to provide one explanation of the experimental data via a mode-locking mechanism. This work sheds light on the way network and intrinsic MC properties participate in the locking of MCs to the gamma oscillation in a realistic physiological context and may result in a particular time-locked assembly. Finally, we discuss how a self-synchronization process with such entrainment properties can explain, under experimental conditions: (1) why the gamma bursts emerge transiently with a maximal amplitude position relative to the stimulus time course; (2) why the oscillations are prominent at a specific gamma frequency; and (3) why the oscillation amplitude depends on specific stimulus properties. We also discuss information processing and functional consequences derived from this mechanism.
Olfactory function relies on a chain of neural relays that extends from the periphery to the central nervous system and implies neural activity with various timescales. A central question in neuroscience is how information is encoded by the neural activity. In the mammalian olfactory bulb, local neural activity oscillations in the 40–80 Hz range (gamma) may influence the timing of individual neuron activities such that olfactory information may be encoded in this way. In this study, we first characterize in vivo the detailed activity of individual neurons relative to the oscillation and find that, depending on their state, neurons can exhibit periodic activity patterns. We also find, at least qualitatively, a relation between this activity and a particular odor. This is reminiscent of general physical phenomena—the entrainment by an oscillation—and to verify this hypothesis, in a second phase, we build a biologically realistic model mimicking these in vivo conditions. Our model confirms quantitatively this hypothesis and reveals that entrainment is maximal in the gamma range. Taken together, our results suggest that the neuronal activity may be specifically formatted in time during the gamma oscillation in such a way that it could, at this stage, encode the odor.
The response of a neuron to repeated somatic injections of a fluctuating current in vitro can be a reliable and precisely timed sequence of action potentials. The set of responses obtained across trials can also be interpreted as the response of an ensemble of similar neurons receiving the same input, with the precise spike times representing synchronous volleys that would be effective in driving postsynaptic neurons. To study the reproducibility of the output spike times under different conditions that might occur in vivo, we somatically injected aperiodic current waveforms into cortical neurons in vitro and systematically varied the amplitude and DC offset of the fluctuations. As the amplitude of the fluctuations was increased, reliability increased and the spike times remained stable over a wide range of values. However, at specific values called bifurcation points, large shifts in the spike times were obtained in response to small changes in the stimulus, resulting in multiple spike patterns that were revealed using an unsupervised classification method. Increasing the DC offset, which mimicked an overall increase in network background activity, also revealed bifurcation points and increased the reliability. Furthermore, the spike times shifted earlier with increasing offset. Although the reliability was reduced at bifurcation points, a theoretical analysis showed that the information about the stimulus time course was increased because each of the spike time patterns contained different information about the input.
Neurons respond with precise spike times to fluctuating current injections, leading to peaks in ensemble firing rate. The structure of these peaks, or spike events, provides a compact description of the neural response. We explore the consequences of precise spike times for neural coding in vivo, by investigating the spike event structure of virtual cell assemblies constructed in vitro. We incorporate diversity of electrophysiological response properties by varying the amplitude and offset of a common stimulus waveform injected in vitro. Across multiple trials, spike trains produce precise events in response to upswings in the stimulus, suggesting that such upswings in in vivo assemblies may effectively drive postsynaptic targets and transmit information about the stimulus. In simulations and in vitro, we identified bifurcations in the event structure as the amplitude or the offset was varied. Near bifurcation points, the neural response showed heightened sensitivity to intrinsic neural noise, leading to multiple competing response patterns, and enriching the representation of stimulus features by the ensemble output. The presence of bifurcations could therefore allow an ideal observer to extract more information about the stimulus. Our results suggest that event structure bifurcations may provide a mechanism for boosting information transmission in vivo.
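The central observation of these two studies, that stronger stimulus fluctuations yield more repeatable spike times across trials, can be reproduced in a toy setting: a leaky integrate-and-fire neuron driven by a frozen-noise stimulus common to all trials plus independent intrinsic noise. All parameter values are illustrative, not fitted to the recordings.

```python
import numpy as np

def lif_trials(stim, n_trials, noise_sd, rng, dt=0.1, tau=10.0, vth=1.0):
    """Leaky integrate-and-fire neuron driven by a frozen stimulus (common
    to all trials) plus independent intrinsic noise; returns spike rasters."""
    rasters = np.zeros((n_trials, len(stim)))
    for tr in range(n_trials):
        v = 0.0
        for i, s in enumerate(stim):
            noise = noise_sd * rng.standard_normal() / np.sqrt(dt)
            v += dt * (-v + s + noise) / tau
            if v >= vth:
                rasters[tr, i] = 1.0
                v = 0.0
    return rasters

def reliability(rasters, dt=0.1, sigma_ms=2.0):
    """Schreiber-style reliability: mean pairwise correlation of
    Gaussian-smoothed spike trains (1 = identical spike times)."""
    tk = np.arange(-5 * sigma_ms, 5 * sigma_ms + dt, dt)
    ker = np.exp(-tk ** 2 / (2 * sigma_ms ** 2))
    sm = np.array([np.convolve(r, ker, mode="same") for r in rasters])
    sm /= np.linalg.norm(sm, axis=1, keepdims=True) + 1e-12
    c = sm @ sm.T
    return c[np.triu_indices(len(sm), k=1)].mean()

rng = np.random.default_rng(1)
fluct = np.repeat(rng.standard_normal(200), 50)   # frozen 1-s fluctuating input
r_weak = reliability(lif_trials(1.2 + 0.2 * fluct, 20, 1.0, rng))
r_strong = reliability(lif_trials(1.2 + 2.0 * fluct, 20, 1.0, rng))
# larger stimulus fluctuations dominate the intrinsic noise, so spike
# times repeat more precisely across trials
print(r_weak, r_strong)
```

Sweeping the amplitude finely around intermediate values would be the natural way to look for the bifurcation points described above, where distinct spike patterns compete.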
Human perception, cognition, and action are supported by a complex network of interconnected brain regions. There is an increasing interest in measuring and characterizing these networks as a function of time and frequency, and inter-areal phase locking is often used to reveal these networks. This measure assesses the consistency of phase angles between the electrophysiological activity in two areas at a specific time and frequency. Non-invasively, the signals from which phase locking is computed can be measured with magnetoencephalography (MEG) and electroencephalography (EEG). However, due to the lack of spatial specificity of reconstructed source signals in MEG and EEG, inter-areal phase locking may be confounded by false positives resulting from crosstalk. Traditional phase locking estimates assume that no phase locking exists when the distribution of phase angles is uniform. However, this conjecture is not true when crosstalk is present. We propose a novel method to improve the reliability of the phase-locking measure by sampling phase angles from a baseline, such as from a prestimulus period or from resting-state data, and by contrasting this distribution against one observed during the time period of interest.
MEG; phase locking; oscillation; cross-talk; circular statistics
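The rationale for baseline contrasting can be sketched numerically. This is a toy illustration only, not the authors' estimator: "crosstalk" is simulated simply as a shared, non-uniform distribution of phase differences present in both baseline and task periods, and the contrast is taken as a plain difference of phase-locking values.

```python
import numpy as np

def plv(dphi):
    """Classic phase-locking value of phase differences across trials:
    1 when every trial shows the same phase difference, ~0 when uniform."""
    return np.abs(np.mean(np.exp(1j * np.asarray(dphi))))

rng = np.random.default_rng(2)
n_trials = 500

# Crosstalk biases phase differences toward a fixed lag even when there is
# no genuine interaction, so the classic PLV (tested against uniformity)
# reports spurious locking in both baseline and task periods.
dphi_baseline = 0.0 + 1.2 * rng.standard_normal(n_trials)
dphi_task = 0.0 + 1.2 * rng.standard_normal(n_trials)

print(plv(dphi_baseline))   # clearly above zero despite no true coupling

# Contrasting the task period against a baseline distribution, instead of
# against uniformity, cancels the crosstalk-induced component.
contrast = plv(dphi_task) - plv(dphi_baseline)
print(contrast)             # near zero: no task-specific coupling
```

Any genuinely task-related increase in coupling would survive the subtraction, while leakage common to both periods does not.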
Previous studies have shown that neurons within the vestibular nuclei (VN) can faithfully encode the time course of sensory input through changes in firing rate in vivo. However, studies performed in vitro have shown that these same VN neurons often display nonlinear synchronization (i.e. phase locking) of their spiking activity to the local maxima of sensory input, thereby severely limiting their capacity to faithfully encode that input through changes in firing rate. We investigated this apparent discrepancy by studying the effects of in vivo conditions on VN neuron activity in vitro using a simple, physiologically based model of cellular dynamics. We found that membrane potential oscillations were evoked in response to both step and zap current injection for a wide range of channel conductance values. These oscillations gave rise to a resonance in the spiking activity that caused synchronization to sinusoidal current injection at frequencies below 25 Hz. We hypothesized that the apparent discrepancy between VN response dynamics measured under in vitro conditions (i.e., consistent with our modeling results) and those measured under in vivo conditions could be explained by an increase in trial-to-trial variability under in vivo vs. in vitro conditions. Accordingly, we mimicked more physiologically realistic conditions in our model by introducing a noise current to match the levels of resting discharge variability seen in vivo, as quantified by the coefficient of variation (CV). While low noise intensities corresponding to CV values in the range 0.04–0.24 eliminated synchronization only for low-frequency (<8 Hz) but not high-frequency (>12 Hz) stimulation, higher noise intensities corresponding to CV values in the range 0.5–0.7 almost completely eliminated synchronization at all frequencies. Our results thus predict that, under natural (i.e. in vivo) conditions, the vestibular system uses increased variability to promote fidelity of encoding by single neurons. This prediction can be tested experimentally in vitro.
The vestibular system senses the motion of the head in space and is vital for gaze stability, posture control, and the computation of spatial orientation during everyday life. The activities of single vestibular neurons recorded in the brains of awake behaving animals show that they can accurately transmit information about the time course of head motion, which is necessary for several behaviors such as the vestibulo-ocular reflex required for gaze stabilization. In contrast, this is not the case when the same neurons are recorded in isolation and sensory stimulation is mimicked experimentally. We investigated the cause for this discrepancy by studying how a mathematical model of vestibular neuron activity responds to mimics of sensory stimulation under different conditions. We found that the differences in the activities of vestibular neurons recorded in awake behaving animals and in isolation can be explained by the addition of synaptic noise, which in turn, increases the variability of action potential firing that is seen in more natural conditions. Our modeling results make a clear prediction that can be tested experimentally.
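The two key ingredients of this argument, phase locking of spikes to sinusoidal input and its destruction by noise that raises discharge variability (CV), can be sketched with a generic leaky integrate-and-fire neuron. The parameters below are illustrative; this is not the VN model from the paper.

```python
import numpy as np

def lif_sine(freq_hz, noise_sd, rng, dur=5000.0, dt=0.05, tau=5.0,
             i0=0.9, amp=0.3):
    """Leaky integrate-and-fire neuron driven by a subthreshold bias plus a
    sinusoid and white noise; returns spike times in ms."""
    v, spikes = 0.0, []
    for i in range(int(dur / dt)):
        t = i * dt
        drive = i0 + amp * np.sin(2 * np.pi * freq_hz * t / 1000.0)
        v += dt * (-v + drive) / tau + noise_sd * np.sqrt(dt) * rng.standard_normal()
        if v >= 1.0:
            spikes.append(t)
            v = 0.0
    return np.array(spikes)

def cv(spikes):
    """Coefficient of variation of the interspike intervals."""
    isi = np.diff(spikes)
    return isi.std() / isi.mean()

def vector_strength(spikes, freq_hz):
    """Degree of phase locking to the sinusoid (1 = perfect locking)."""
    phases = 2 * np.pi * freq_hz * spikes / 1000.0
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(3)
quiet = lif_sine(5.0, 0.0, rng)   # deterministic: spikes ride the peaks
noisy = lif_sine(5.0, 0.5, rng)   # added noise raises ISI variability
# noise increases the CV of the discharge and weakens synchronization
print(cv(noisy), vector_strength(quiet, 5.0), vector_strength(noisy, 5.0))
```

Repeating this across stimulus frequencies would trace out the frequency range over which a given noise level abolishes synchronization, the quantity the modeling study maps against CV.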
Oscillatory activity in neuronal networks correlates with different behavioral states throughout the nervous system, and the frequency-response characteristics of individual neurons are believed to be critical for network oscillations. Recent in vivo studies suggest that neurons experience periods of high membrane conductance, and that action potentials are often driven by membrane-potential fluctuations in the living animal. To investigate the frequency-response characteristics of CA1 pyramidal neurons in the presence of high conductance and voltage fluctuations, we performed dynamic-clamp experiments in rat hippocampal brain slices. We drove neurons with noisy stimuli that included a sinusoidal component ranging, in different trials, from 0.1 to 500 Hz. In subsequent data analysis, we determined action potential phase-locking profiles with respect to background conductance, average firing rate, and frequency of the sinusoidal component. We found that background conductance and firing rate qualitatively change the phase-locking profiles of CA1 pyramidal neurons vs. frequency. In particular, higher average spiking rates promoted band-pass profiles, and the high-conductance state promoted phase-locking at frequencies well above what would be predicted from changes in the membrane time constant. Mechanistically, spike-rate adaptation and frequency resonance in the spike-generating mechanism are implicated in shaping the different phase-locking profiles. Our results demonstrate that CA1 pyramidal cells can actively change their synchronization properties in response to global changes in activity associated with different behavioral states.
During an inspiration the output of hypoglossal (XII) motoneurons (HMs) in vitro is characterized by synchronous oscillatory firing in the 20 to 40 Hz range. In order to maintain synchronicity it is important that the cells fire with high reliability and precision. It is not known whether the intrinsic properties of HMs are tuned to maintain synchronicity when stimulated with time-varying inputs. We intracellularly recorded from HMs in an in vitro brainstem slice preparation from juvenile mice. Cells were held at or near spike threshold and were stimulated with steady or swept (ZAP) sine wave current functions (10 s duration; 0-40 Hz range). Peri-stimulus time histograms (PSTHs) were constructed from spike times based on threshold crossings. Synaptic transmission was suppressed by including blockers of GABAergic, glycinergic and glutamatergic neurotransmission in the bath solution. Cells responded to sine wave stimulation with bursts of action potentials at low (<3-5 Hz) sine wave frequency while they phase-locked 1:1 to the stimulus at intermediate frequencies (3-25 Hz). Beyond the 1:1 frequency range cells were able to phase-lock to sub-harmonics (1:2, 1:3 or 1:4) of the input frequency. The 1:1 phase-locking range increased with increasing stimulus amplitude and membrane depolarization. Reliability and spike timing precision was highest when the cells phase-locked 1:1 to the stimulus.
Our findings suggest that the coding of time-varying inspiratory synaptic inputs by individual HMs is most reliable and precise at frequencies that are generally lower than the frequency of the synchronous inspiratory oscillatory activity recorded from the XII nerve.
respiration; motoneuron; patch-clamp
Transduction of graded synaptic input into trains of all-or-none action
potentials (spikes) is a crucial step in neural coding. Hodgkin identified three
classes of neurons with qualitatively different analog-to-digital transduction
properties. Despite widespread use of this classification scheme, a
generalizable explanation of its biophysical basis has not been described. We
recorded from spinal sensory neurons representing each class and reproduced
their transduction properties in a minimal model. With phase plane and
bifurcation analysis, each class of excitability was shown to derive from
distinct spike-initiating dynamics. Excitability could be converted between all
three classes by varying single parameters; moreover, several parameters, when
varied one at a time, had functionally equivalent effects on excitability. From
this, we conclude that the spike-initiating dynamics associated with each of
Hodgkin's classes represent different outcomes in a nonlinear
competition between oppositely directed, kinetically mismatched currents. Class
1 excitability occurs through a saddle node on invariant circle bifurcation when
net current at perithreshold potentials is inward (depolarizing) at steady
state. Class 2 excitability occurs through a Hopf bifurcation when, despite net
current being outward (hyperpolarizing) at steady state, spike initiation occurs
because inward current activates faster than outward current. Class 3
excitability occurs through a quasi-separatrix crossing when fast-activating
inward current overpowers slow-activating outward current during a stimulus
transient, although slow-activating outward current dominates during constant
stimulation. Experiments confirmed that different classes of spinal lamina I
neurons express the subthreshold currents predicted by our simulations and,
further, that those currents are necessary for the excitability in each cell
class. Thus, our results demonstrate that all three classes of excitability
arise from a continuum in the direction and magnitude of subthreshold currents.
Through detailed analysis of the spike-initiating process, we have explained a
fundamental link between biophysical properties and qualitative differences in
how neurons encode sensory input.
Information is transmitted through the nervous system in the form of action
potentials or spikes. Contrary to popular belief, a spike is not generated
instantaneously when membrane potential crosses some preordained threshold. In
fact, different neurons employ different rules to determine when and why they
spike. These different rules translate into diverse spiking patterns that have
been observed experimentally and replicated time and again in computational
models. In this study, our aim was not simply to replicate different spiking
patterns; instead, we sought to provide deeper insight into the connection
between biophysics and neural coding by relating each to the process of spike
initiation. We show that Hodgkin's three classes of excitability result
from a nonlinear competition between oppositely directed, kinetically mismatched
currents; the outcome of that competition is manifested as dynamically distinct
spike-initiating mechanisms. Our results highlight the benefits of forward
engineering minimal models capable of reproducing phenomena of interest and then
dissecting those models in order to identify general explanations of how those
phenomena arise. Furthermore, understanding nonlinear dynamical processes such
as spike initiation is crucial for definitively explaining how biophysical
properties impact neural coding.
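The signature of class 1 excitability described above, a saddle-node on invariant circle (SNIC) bifurcation, is easy to verify in a minimal model: the firing rate rises continuously from zero as the input crosses the bifurcation. The quadratic integrate-and-fire neuron below is the normal form of a SNIC, used here as a generic sketch rather than the lamina I neuron model from the study.

```python
import numpy as np

def qif_rate(i_ext, dt=0.01, dur=2000.0, v_reset=-2.0, v_spike=10.0):
    """Quadratic integrate-and-fire neuron, dv/dt = v**2 + I (time in ms).
    For I < 0 two fixed points exist (rest and threshold); they collide in
    a saddle-node on invariant circle bifurcation at I = 0, beyond which
    the neuron fires at a rate growing continuously from zero (~sqrt(I))."""
    v, n_spikes = v_reset, 0
    for _ in range(int(dur / dt)):
        v += dt * (v * v + i_ext)
        if v >= v_spike:
            n_spikes += 1
            v = v_reset
    return 1000.0 * n_spikes / dur   # spikes per second

rates = {i: qif_rate(i) for i in (-0.01, 0.0001, 0.0004, 0.0016)}
# silent below the bifurcation; arbitrarily low rates just above it --
# the class 1 hallmark (a class 2 neuron would jump to a nonzero rate)
print(rates)
```

A Hopf bifurcation (class 2), by contrast, produces a discontinuous frequency-current curve, and a quasi-separatrix crossing (class 3) produces spiking only at stimulus transients; neither arises in this one-variable normal form.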
In the hippocampus and the neocortex, the coupling between local field potential (LFP) oscillations and the spiking of single neurons can be highly precise, across neuronal populations and cell types. Spike phase (i.e., the spike time with respect to a reference oscillation) is known to carry reliable information, both with phase-locking behavior and with more complex phase relationships, such as phase precession. How this precision is achieved by neuronal populations, whose membrane properties and total input may be quite heterogeneous, is nevertheless unknown. In this note, we investigate a simple mechanism for learning precise LFP-to-spike coupling in feed-forward networks – the reliable, periodic modulation of presynaptic firing rates during oscillations, coupled with spike-timing dependent plasticity. When oscillations are within the biological range (2–150 Hz), firing rates of the inputs change on a timescale highly relevant to spike-timing dependent plasticity (STDP). Through analytic and computational methods, we find points of stable phase-locking for a neuron with plastic input synapses. These points correspond to precise phase-locking behavior in the feed-forward network. The location of these points depends on the oscillation frequency of the inputs, the STDP time constants, and the balance of potentiation and de-potentiation in the STDP rule. For a given input oscillation, the balance of potentiation and de-potentiation in the STDP rule is the critical parameter that determines the phase at which an output neuron will learn to spike. These findings are robust to changes in intrinsic post-synaptic properties. Finally, we discuss implications of this mechanism for stable learning of spike-timing in the hippocampus.
spike-timing dependent plasticity; oscillations; phase-locking; stable learning; stability of neuronal plasticity; place fields
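The mechanism, periodically modulated presynaptic rates read through an STDP window, can be sketched numerically. The kernel below is a generic pair-based STDP rule, the input density is sinusoidally modulated over the oscillation cycle, and the depression bias is one hypothetical choice of the potentiation/de-potentiation balance; none of these values come from the paper.

```python
import numpy as np

def stdp(dt_ms, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP kernel, dt_ms = t_post - t_pre: pre-before-post
    potentiates, post-before-pre depresses; a_minus > a_plus gives a
    depression-dominated balance."""
    dt_ms = np.asarray(dt_ms, dtype=float)
    return np.where(dt_ms > 0,
                    a_plus * np.exp(-np.abs(dt_ms) / tau),
                    -a_minus * np.exp(-np.abs(dt_ms) / tau))

def expected_dw(phase_out, freq_hz=8.0, n_pre=50000, seed=4):
    """Monte-Carlo estimate of the mean weight change per presynaptic spike
    for one output spike at a given oscillation phase, with presynaptic
    spike density proportional to 1 + cos(phase)."""
    rng = np.random.default_rng(seed)
    period = 1000.0 / freq_hz                      # ms
    ph = rng.uniform(0.0, 2 * np.pi, 3 * n_pre)    # rejection-sample phases
    ph = ph[rng.random(ph.size) < (1 + np.cos(ph)) / 2][:n_pre]
    t_pre = ph / (2 * np.pi) * period
    t_post = phase_out / (2 * np.pi) * period
    # include the neighbouring cycles so the STDP window is fully covered
    dts = np.concatenate([t_post - (t_pre + k * period) for k in (-1, 0, 1)])
    return stdp(dts).sum() / n_pre

# shortly after the peak of the input density the synapse still potentiates;
# near the trough the depression bias wins -- the zero crossing in between
# is a candidate phase at which the output neuron learns to fire
print(expected_dw(0.5), expected_dw(np.pi))
```

Shifting the potentiation/de-potentiation balance moves the zero crossing, which is the dependence the note identifies as the critical parameter for the learned phase.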
Interaural time difference (ITD), or the difference in timing of a sound wave arriving at the two ears, is a fundamental cue for sound localization. A wide variety of animals have specialized neural circuits dedicated to the computation of ITDs. In the avian auditory brainstem, ITDs are encoded as the spike rates in the coincidence detector neurons of the nucleus laminaris (NL). NL neurons compare the binaural phase-locked inputs from the axons of ipsi- and contralateral nucleus magnocellularis (NM) neurons. Intracellular recordings from the barn owl's NL in vivo showed that tonal stimuli induce oscillations in the membrane potential. Since this oscillatory potential resembled the stimulus sound waveform, it was named the sound analog potential (Funabiki et al., 2011). Previous modeling studies suggested that a convergence of phase-locked spikes from NM leads to an oscillatory membrane potential in NL, but how presynaptic, synaptic, and postsynaptic factors affect the formation of the sound analog potential remains to be investigated. In the accompanying paper, we derive analytical relations between these parameters and the signal and noise components of the oscillation. In this paper, we focus on the effects of the number of presynaptic NM fibers, the mean firing rate of these fibers, their average degree of phase-locking, and the synaptic time scale. Theoretical analyses and numerical simulations show that, provided the total synaptic input is kept constant, changes in the number and spike rate of NM fibers alter the ITD-independent noise whereas the degree of phase-locking is linearly converted to the ITD-dependent signal component of the sound analog potential. The synaptic time constant affects the signal more prominently than the noise, making faster synaptic input more suitable for effective ITD computation.
phase-locking; sound localization; auditory brainstem; periodic signals; oscillation; owl
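The linear conversion of the degree of phase locking into the signal component can be checked in a stripped-down simulation: phase-locked Poisson "NM fibers" summed through an exponential synaptic kernel. This is a toy stand-in for the biophysical model, with arbitrary parameter values; locking is imposed as a mixture of perfectly locked and uniform spike phases so that the input vector strength is set exactly.

```python
import numpy as np

def analog_amplitude(n_fibers, rate_hz, vs, freq_hz, rng,
                     dur_s=0.5, tau_s=2e-4, dt=2e-5):
    """Per-fiber oscillation amplitude of a toy 'sound analog potential':
    each fiber fires Poisson spikes whose phases are perfectly locked with
    probability vs and uniform otherwise (so input vector strength = vs);
    spikes are summed and filtered with an exponential synaptic kernel."""
    n = int(dur_s / dt)
    period = 1.0 / freq_hz
    n_cycles = int(dur_s * freq_hz)
    total = np.zeros(n)
    for _ in range(n_fibers):
        n_spk = rng.poisson(rate_hz * dur_s)
        cycle = rng.integers(0, n_cycles, n_spk) * period
        locked = rng.random(n_spk) < vs
        in_cycle = np.where(locked, 0.0, rng.uniform(0.0, period, n_spk))
        idx = np.minimum(((cycle + in_cycle) / dt).astype(int), n - 1)
        np.add.at(total, idx, 1.0)
    ker = np.exp(-np.arange(0.0, 5 * tau_s, dt) / tau_s)
    v = np.convolve(total, ker)[:n]
    t = np.arange(n) * dt
    return 2 * np.abs(np.mean(v * np.exp(-2j * np.pi * freq_hz * t))) / n_fibers

rng = np.random.default_rng(5)
amps = [analog_amplitude(100, 400, vs, 2000.0, rng) for vs in (0.3, 0.6)]
# doubling the degree of phase locking roughly doubles the signal component
print(amps)
```

Varying the fiber count or rate at fixed total input would instead change the incoherent (ITD-independent) fluctuations, the other relationship derived in the paper.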
Understanding how neural and behavioral timescales interact to influence cortical activity and stimulus coding is an important issue in sensory neuroscience. In air-breathing animals, voluntary changes in respiratory frequency alter the temporal patterning of olfactory input. In the olfactory bulb, these behavioral timescales are reflected in the temporal properties of mitral/tufted (M/T) cell spike trains. As the odor information contained in these spike trains is relayed from the bulb to the cortex, interactions between presynaptic spike timing and short-term synaptic plasticity dictate how stimulus features are represented in cortical spike trains. Here we demonstrate how the timescales associated with respiratory frequency, spike timing and short-term synaptic plasticity interact to shape cortical responses. Specifically, we quantified the timescales of short-term synaptic facilitation and depression at excitatory synapses between bulbar M/T cells and cortical neurons in slices of mouse olfactory cortex. We then used these results to generate simulated M/T population synaptic currents that were injected into real cortical neurons. M/T population inputs were modulated at frequencies consistent with passive respiration or active sniffing. We show how the differential recruitment of short-term plasticity at breathing versus sniffing frequencies alters cortical spike responses. For inputs at sniffing frequencies, cortical neurons linearly encoded increases in presynaptic firing rates with increases in phase-locked firing rates. In contrast, at passive breathing frequencies, cortical responses saturated with changes in presynaptic rate. Our results suggest that changes in respiratory behavior can gate the transfer of stimulus information between the olfactory bulb and cortex.
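The differential-recruitment argument can be sketched with a standard short-term plasticity model. The scheme below is Tsodyks-Markram-style facilitation/depression with illustrative time constants, not the values fitted to the M/T-to-cortex synapses in the study.

```python
import numpy as np

def stp_amplitudes(spike_times_ms, U=0.1, tau_rec=300.0, tau_fac=200.0):
    """Tsodyks-Markram-style short-term plasticity: u (utilization) carries
    facilitation and decays to zero between spikes; x (resources) carries
    depression and recovers toward one. Returns the relative synaptic
    amplitude u*x at each presynaptic spike."""
    u, x, prev = 0.0, 1.0, None
    amps = []
    for t in spike_times_ms:
        if prev is not None:
            u *= np.exp(-(t - prev) / tau_fac)                   # facilitation decays
            x = 1.0 - (1.0 - x) * np.exp(-(t - prev) / tau_rec)  # resources recover
        u += U * (1.0 - u)   # facilitation jump on each spike
        amps.append(u * x)   # release amplitude
        x *= 1.0 - u         # resource depletion
        prev = t
    return np.array(amps)

breathing = stp_amplitudes(np.arange(0.0, 2500.0, 500.0))  # ~2 Hz input
sniffing = stp_amplitudes(np.arange(0.0, 625.0, 125.0))    # ~8 Hz input
# at sniffing rates successive inputs facilitate; at breathing rates the
# synapse has relaxed back to baseline before each spike arrives
print(breathing, sniffing)
```

With these (assumed) time constants, only inter-spike intervals short relative to tau_fac engage facilitation, which is one way a frequency difference between breathing and sniffing could gate transmission.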
The inferior olivary nucleus provides one of the two main inputs to the cerebellum: the so-called climbing fibers. Activation of climbing fibers is generally believed to be related to timing of motor commands and/or motor learning. Climbing fiber spikes lead to large all-or-none action potentials in cerebellar Purkinje cells, overriding any other ongoing activity and silencing these cells for a brief period of time afterwards. Empirical evidence shows that the climbing fiber can transmit a short burst of spikes as a result of an olivary cell somatic spike, potentially increasing the information being transferred to the cerebellum per climbing fiber activation. Previously reported results from in vitro studies suggested that the information encoded in the climbing fiber burst is related to the occurrence of the spike relative to the ongoing sub-threshold membrane potential oscillation of the olivary cell, i.e. that the phase of the oscillation is reflected in the size of the climbing fiber burst. We used a detailed three-compartmental model of an inferior olivary cell to further investigate the possible factors determining the size of the climbing fiber burst. Our findings suggest that the phase-dependency of the burst size is present but limited and that charge flow between soma and dendrite is a major determinant of the climbing fiber burst. From our findings it follows that phenomena such as cell ensemble synchrony can have a big effect on the climbing fiber burst size through dendrodendritic gap-junctional coupling between olivary cells.
The inferior olive is a nucleus in the brain stem with neurons that exhibit continuous sub-threshold activity and are electrically coupled by gap junctions. It is implicated in execution and learning of motor skills and it is often assumed that it provides a teacher signal to the cerebellum. Models based on this theory generally require a continuously updated quantitative value to be sent to the cerebellum, yet the inferior olive fires spikes at a low rate of approximately 1 Hz, making reconciliation of model and biological system problematic. However, it has also been shown that olivary cells can generate an axonal burst of spikes for every somatically recorded action potential, theoretically rendering them capable of transmitting more information per event. Using a detailed neuronal model of an inferior olive cell, we examined what factors may underlie the axonal burst size. We found that leak currents between dendrite and soma and electrically coupled dendrites are major determinants. From our findings and current literature it follows that the inferior olive may be capable of adapting the speed at which motor tasks in the cerebellum are learned, depending on the synchrony of sub-threshold activity in clusters of electrically coupled cells.
It is generally assumed that axons use action potentials (APs) to transmit information fast and reliably to synapses. Yet, the reliability of transmission along fibers below 0.5 μm diameter, such as cortical and cerebellar axons, is unknown. Using detailed models of rodent cortical and squid axons and stochastic simulations, we show how conduction along such thin axons is affected by the probabilistic nature of voltage-gated ion channels (channel noise). We identify four distinct effects that corrupt propagating spike trains in thin axons: spikes were added, deleted, jittered, or split into groups depending upon the temporal pattern of spikes. Additional APs may appear spontaneously; however, APs in general seldom fail (<1%). Spike timing is jittered on the order of milliseconds over distances of millimeters, as conduction velocity fluctuates in two ways. First, variability in the number of Na channels opening in the early rising phase of the AP causes propagation speed to fluctuate gradually. Second, a novel mode of AP propagation (stochastic microsaltatory conduction), where the AP leaps ahead toward spontaneously formed clusters of open Na channels, produces random discrete jumps in spike time reliability. The combined effect of these two mechanisms depends on the pattern of spikes. Our results show that axonal variability is a general problem and should be taken into account when considering both neural coding and the reliability of synaptic transmission in densely connected cortical networks, where small synapses are typically innervated by thin axons. In contrast, we find that thicker axons above 0.5 μm diameter are reliable.
Neurons in cerebral cortex achieve wiring densities of 4 km per mm3 by using unmyelinated axons of 0.3 μm average diameter as wires. Many axons (e.g., pain fibers) are thinner. Although, as in computer chips, wire miniaturization economizes on space and energy, it increases the noise introduced by thermodynamic fluctuations in a neuron's “protein transistors,” voltage-gated ion channels. We investigated how well the relatively small number of ion channels found in the membranes of tiny axons propagates the brain's universal signal—the action potential. We built a stochastic model that incorporates the random behavior of individual ion channels and found noise effects much larger than previously assumed, because standard stochastic approximation techniques (Langevin) break down when single channels can produce whole-cell responses. Channel noise destroys information encoded in the timing of action potentials, by randomly varying the speed of conduction, and produces a novel mode of transmission, stochastic microsaltatory conduction. Ion channel populations retain memory of previous activity in the distribution of channel states, causing action potential reliability to vary with context. The effects and general relationships identified here will govern other cell-signaling systems that rely on inherently noisy protein switches to propagate signals, either for intracellular communication (Ca++/cAMP waves) or in nanotechnology.
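The core scaling argument, that fewer channels mean larger relative fluctuations, can be checked with a minimal two-state channel population. The rate constants are illustrative; this is not the full stochastic axon model used in the study.

```python
import numpy as np

def open_fraction(n_channels, rng, alpha=0.5, beta=0.5, dt=0.01,
                  n_steps=20000):
    """Population of independent two-state (closed <-> open) channels,
    simulated with per-step binomial transitions (alpha, beta in 1/ms).
    Returns the time series of the fraction of open channels."""
    n_open = n_channels // 2          # start at the equilibrium fraction
    frac = np.empty(n_steps)
    for i in range(n_steps):
        n_open += rng.binomial(n_channels - n_open, alpha * dt)
        n_open -= rng.binomial(n_open, beta * dt)
        frac[i] = n_open / n_channels
    return frac

rng = np.random.default_rng(6)
thin = open_fraction(100, rng)      # channel count of a thin-axon patch
thick = open_fraction(10000, rng)   # channel count of a thick-axon patch
# same mean open fraction, but relative fluctuations scale ~ 1/sqrt(N):
# single-channel events matter in thin axons and average out in thick ones
print(thin.mean(), thin.std(), thick.std())
```

Because the transitions here are simulated channel by channel rather than with a Langevin (diffusion) approximation, the same sketch also shows the regime in which that approximation is questionable: when a single opening shifts the open fraction by a non-negligible amount.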
High-frequency oscillations (HFOs) are an emerging biomarker for epileptic tissue. Yet the mechanism by which HFOs are produced is unknown, and their rarity makes them difficult to study. Our objective was to examine the occurrence of HFOs in relation to action potentials (APs) and the effect of microstimulation in the tetanus toxin (TT) model of epilepsy, a non-lesional model with a short latency to spontaneous seizures.
Rats were injected with TT into the dorsal hippocampus and implanted with a 16-channel (8 × 2) multielectrode array, one row each in CA3 and CA1. After the onset of spontaneous seizures (3-9 days), we began recording APs and local field potentials, which were analyzed for the occurrence of interictal spikes and HFOs. Recordings were made during microstimulation of each electrode using customized, open-source software.
Population bursts of APs during interictal spikes were phase-locked with HFOs, which were observable almost exclusively with high-amplitude interictal spikes. Further, HFOs could reliably be produced by microstimulation of the hippocampus, providing evidence that these oscillations can be controlled temporally by external means.
We show for the first time the occurrence of HFOs in the TT epilepsy model, an attractive preparation for their experimental investigation and, importantly, one with a different etiology than status epilepticus models, providing further evidence of the generality of HFOs. The ability to provoke HFOs with microstimulation may prove useful for better understanding HFOs by directly evoking them in the lab, and for designing high-throughput techniques for pre-surgical localization of the epileptic focus.
Oscillations; microelectrode; stimulation; interictal spike; electrocorticography; animal model