Brief synaptic inhibition can overwhelm a nearly coincident suprathreshold excitatory input to preclude spike generation. Surprisingly, a brief inhibitory event that occurs in a favorable time window preceding an otherwise subthreshold excitation can facilitate spiking. Such postinhibitory facilitation (PIF) requires that the inhibition have a short (decay) time constant τinh. The timescale ranges of τinh and of the window (width and timing) for PIF depend on the rates of neuronal subthreshold dynamics. The mechanism for PIF is general, involving reduction by hyperpolarization of some excitability-suppressing factor that is partially recruited at rest. Here we illustrate and analyze PIF, experimentally and theoretically, using brain stem auditory neurons and a conductance-based five-variable model. In this auditory case, PIF timescales are in the sub- to few-millisecond range and the primary mechanistic factor is a low-threshold potassium conductance gKLT. Competing dynamic influences create the favorable time window: hyperpolarization that moves V away from threshold and hyper-excitability resulting from reduced gKLT. A two-variable reduced model that retains the dynamics only of V and gKLT displays a similar time window. We analyze this model in the phase plane; its geometry has generic features. Further generalizing, we show that PIF behavior may occur even in a very reduced model with linear subthreshold dynamics, by using an integrate-and-fire model with an accommodating voltage-dependent threshold. Our analyses of PIF provide insights into fast inhibition’s facilitatory effects in longer trains. Periodic subthreshold excitatory inputs can lead to firing, even one for one, if brief inhibitory inputs are interleaved within a range of favorable phase lags. The temporal specificity of inhibition’s facilitating effect could play a role in temporal processing and in sensitivity to inhibitory and excitatory temporal patterning, in the auditory and other neural systems.
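The accommodating-threshold idea in the abstract above can be sketched in a few lines. Below is a minimal, illustrative leaky integrate-and-fire simulation (all parameter values are hypothetical choices, not the paper's fitted model): the spike threshold relaxes slowly toward a voltage-dependent target, so a brief hyperpolarization lowers the threshold; if an otherwise subthreshold EPSC arrives after V has recovered but before the threshold has, the cell fires.

```python
# Minimal sketch of postinhibitory facilitation (PIF) in a leaky
# integrate-and-fire model with an accommodating voltage-dependent
# threshold. All parameters are illustrative, not fitted values.

def simulate(inh_onset=None, T=15.0, dt=0.01):
    tau_m, tau_th = 1.0, 5.0                    # membrane and threshold time constants (ms)
    theta0, k = 1.0, 0.5                        # resting threshold; accommodation strength
    exc_on, exc_dur, exc_amp = 8.0, 1.0, 1.503  # EPSC that is subthreshold on its own
    inh_dur, inh_amp = 1.0, -5.0                # brief IPSC
    v, theta = 0.0, theta0
    t = 0.0
    while t < T:
        i = exc_amp if exc_on <= t < exc_on + exc_dur else 0.0
        if inh_onset is not None and inh_onset <= t < inh_onset + inh_dur:
            i += inh_amp
        v += dt * (-v + i) / tau_m
        # Threshold relaxes toward theta0 + k*v: depolarization raises it,
        # hyperpolarization lowers it (the "excitability-suppressing factor").
        theta += dt * (theta0 + k * v - theta) / tau_th
        if v >= theta:
            return True    # spike
        t += dt
    return False

print(simulate())      # excitation alone: no spike
print(simulate(4.0))   # IPSC ending ~3 ms before the EPSC: spike (PIF)
print(simulate(7.0))   # IPSC immediately before the EPSC: no spike
```

The three runs illustrate the favorable window: excitation alone fails; inhibition ending a few milliseconds earlier enables a spike because V recovers (fast τm) while the threshold is still lowered (slow τth); inhibition immediately before excitation still blocks firing because V has not yet recovered.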
Voltage-dependent membrane conductances support specific neurophysiological properties. To investigate the mechanisms of coincidence detection, we activated gerbil medial superior olivary (MSO) neurons with dynamic current-clamp stimuli in vitro. Spike-triggered reverse-correlation analysis for injected current was used to evaluate the integration of subthreshold noisy signals. Consistent with previous reports, partial blockade of low-threshold potassium channels (IKLT) reduced coincidence detection by slowing the rise of current needed on average to evoke a spike. However, two factors pointed toward the involvement of a second mechanism. First, the reverse-correlation currents revealed that spike generation was associated with a preceding hyperpolarization. Second, rebound action potentials were 45% larger than depolarization-evoked spikes in the presence of an IKLT antagonist. These observations suggest that the sodium current (INa) was substantially inactivated at rest. To test this idea, INa was enhanced by increasing the extracellular sodium concentration. This manipulation reduced coincidence detection, as reflected by slower spike-triggering current, and diminished the hyperpolarization phase in the reverse-correlation currents. As expected, a small outward bias current decreased the pre-spike hyperpolarization phase, and TTX blockade of INa nearly eliminated the hyperpolarization phase in the reverse-correlation current. A computer model including Hodgkin-Huxley type conductances for spike generation and for IKLT showed reduction in coincidence detection when IKLT was reduced or when INa was increased. We hypothesize that desirable synaptic signals first remove some inactivation of INa and reduce activation of IKLT to create a brief temporal window for coincidence detection of subthreshold noisy signals.
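The spike-triggered reverse correlation used throughout these studies is simply the average of the stimulus segment that precedes each spike. A minimal sketch with toy data (not recordings from the study):

```python
# Spike-triggered average (reverse correlation) of injected current:
# average the stimulus segment preceding each spike. Toy data only.

def spike_triggered_average(current, spike_indices, window):
    """Mean of current[i - window : i] across all spikes that occur
    at least `window` samples after the start of the record."""
    segments = [current[i - window:i] for i in spike_indices if i >= window]
    n = len(segments)
    return [sum(seg[j] for seg in segments) / n for j in range(window)]

# Embed the same 3-sample ramp before each "spike"; the STA recovers it.
current = [0.0] * 100
for spike in (30, 60, 90):
    current[spike - 3:spike] = [1.0, 2.0, 3.0]

print(spike_triggered_average(current, [30, 60, 90], window=3))  # [1.0, 2.0, 3.0]
```

With real data the STA reveals the mean current waveform that drives the cell to threshold, such as the pre-spike hyperpolarization phase described above.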
Many auditory neurons possess low-threshold potassium currents (IKLT) that enhance their responsiveness to rapid and coincident inputs. We present recordings from gerbil medial superior olivary (MSO) neurons in vitro and modeling results that illustrate how IKLT improves the detection of brief signals, of weak signals in noise, and of the coincidence of signals (as needed for sound localization). We quantify the enhancing effect of IKLT on temporal processing with several measures: signal-to-noise ratio (SNR), reverse correlation or spike-triggered averaging of input currents, and inter-aural time difference (ITD) tuning curves. To characterize how IKLT, which activates below spike threshold, influences a neuron’s voltage rise toward threshold, i.e., how it filters the inputs, we focus first on the response to weak and noisy signals. Cells and models were stimulated with a computer-generated steady barrage of random inputs, mimicking weak synaptic conductance transients (the “noise”), together with a larger but still subthreshold postsynaptic conductance, EPSG (the “signal”). Reduction of IKLT decreased the SNR, mainly due to an increase in spontaneous firing (more “false positives”). The spike-triggered reverse correlation indicated that IKLT shortened the integration time for spike generation. IKLT also heightened the model’s timing selectivity for coincidence detection of simulated binaural inputs. Further, ITD tuning is shifted in favor of a slope code rather than a place code by precise and rapid inhibition onto MSO cells (Brand et al. 2002). In several ways, low-threshold outward currents are seen to shape integration of weak and strong signals in auditory neurons.
We report a facilitatory role of inhibitory synaptic input that can enhance a neuron’s firing rate, in contrast to the conventional belief that inhibition suppresses firing. We study this phenomenon using the Hodgkin-Huxley model of spike generation with random Poisson trains of subthreshold excitatory and inhibitory inputs. Enhancement occurs when, by chance, brief inhibition leads excitation with a favorable timing and counterintuitively induces a reduction of the spike threshold. The basic mechanism is also illustrated with the phase-plane analysis of a two-variable model.
Neurons possess multiple voltage-dependent conductances specific for their function. To investigate how low-threshold outward currents improve the detection of small signals in a noisy background, we recorded from gerbil medial superior olivary (MSO) neurons in vitro. MSO neurons responded phasically, firing a single spike in response to a step current injection. When bathed in dendrotoxin (DTX), most cells switched to tonic firing, suggesting that low-threshold potassium currents (IKLT) participated in shaping these phasic responses. Neurons were stimulated with a computer-generated steady barrage of random inputs, mimicking weak synaptic conductance transients (the “noise”), together with a larger but still subthreshold postsynaptic conductance, EPSG (the “signal”). DTX reduced the signal-to-noise ratio (SNR), defined as the ratio of the probability of firing in response to the EPSG to the probability of firing spontaneously in response to noise. The reduction was mainly attributable to the increase of spontaneous firing in DTX. The spike-triggered reverse correlation indicated that, for spike generation, the neuron with IKLT required faster inward current transients. This narrow temporal integration window contributed to superior phase locking of firing to periodic stimuli before application of DTX. A computer model including Hodgkin-Huxley type conductances for spike generation and for IKLT (Rathouz and Trussell, 1998) showed similar response statistics. The dynamic low-threshold outward current increased SNR and the temporal precision of integration of weak subthreshold signals in auditory neurons by suppressing false positives.
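The SNR measure defined above can be sketched directly: the probability of firing in a short window after each signal presentation, divided by the expected probability of a spontaneous spike in an equal-length window of noise alone. The numbers below are toy values, not data:

```python
# Sketch of the SNR measure: P(spike | signal window) divided by the
# expected spontaneous-spike probability per window. Toy values only.

def snr(spike_times, signal_times, window, total_time):
    in_win = lambda s: any(t <= s < t + window for t in signal_times)
    hits = sum(any(t <= s < t + window for s in spike_times) for t in signal_times)
    p_signal = hits / len(signal_times)
    spont = [s for s in spike_times if not in_win(s)]        # spikes driven by noise alone
    noise_time = total_time - window * len(signal_times)
    p_noise = len(spont) * window / noise_time               # expected spikes per window
    return p_signal / p_noise

# Three signal presentations; two evoke a spike; one spontaneous spike.
print(snr([10.5, 20.7, 5.0], [10.0, 20.0, 30.0], window=2.0, total_time=40.0))  # ≈ 11.33
```

Blocking IKLT raises the spontaneous rate (the denominator), which is exactly how DTX lowered the SNR in the experiments.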
medial superior olive; signal-to-noise ratio; phase locking; computer model; potassium conductance; slice recordings
Subthreshold voltage- and time-dependent conductances can subserve different roles in signal integration and action potential generation. Here, we use minimal models to demonstrate how a non-inactivating low-threshold outward current (IKLT) can enhance the precision of small-signal integration. Our integrate-and-fire models have only a few biophysical parameters, enabling a parametric study of IKLT's effects. IKLT increases the signal-to-noise ratio (SNR) for firing when a subthreshold “signal” EPSP is delivered in the presence of weak random input. The increased SNR is due to the suppression of spontaneous firing in response to random input. Accordingly, SNR grows as the EPSP amplitude increases. SNR also grows as the unitary synaptic current's time constant increases, leading to more effective suppression of spontaneous activity. Spike-triggered reverse correlation of the injected current indicates that, to reach spike threshold, a cell with IKLT requires a briefer time course of injected current. Consistent with this narrowed integration time window, IKLT enhances phase locking, measured as vector strength, to a weak noisy and periodically modulated stimulus. Thus subthreshold negative feedback mediated by IKLT enhances temporal processing. An alternative suppression mechanism is voltage- and time-dependent inactivation of a low-threshold inward current. This feature in an integrate-and-fire model also shows SNR enhancement, in comparison with a case when the inward current is non-inactivating. Small-signal detection can be significantly improved in noisy neuronal systems by subthreshold negative feedback, serving to suppress false positives.
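Vector strength, the phase-locking measure used above, is the length of the mean resultant vector when each spike is projected onto the unit circle at its stimulus phase (the classic Goldberg and Brown measure): 1 means perfect locking, 0 means no locking. A self-contained sketch:

```python
# Vector strength for phase locking: place each spike on the unit circle
# at its stimulus phase and take the length of the mean vector.
import math

def vector_strength(spike_times, period):
    phases = [2 * math.pi * (t % period) / period for t in spike_times]
    n = len(phases)
    c = sum(math.cos(p) for p in phases) / n
    s = sum(math.sin(p) for p in phases) / n
    return math.hypot(c, s)

print(vector_strength([0, 10, 20, 30], period=10))  # → 1.0: perfectly locked
print(vector_strength([0, 5, 10, 15], period=10))   # → ~0: antiphase spikes cancel
```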
When an ambiguous stimulus is viewed for a prolonged time, perception alternates between the different possible interpretations of the stimulus. The alternations seem haphazard, but closer inspection of their dynamics reveals systematic properties in many bistable phenomena. Parametric manipulations result in gradual changes in the fraction of time a given interpretation dominates perception, often over the entire possible range of zero to one. The mean dominance durations of the competing interpretations can also vary over wide ranges (from less than a second to dozens of seconds or more), but finding systematic relations in how they vary has proven difficult. Following the pioneering work of W. J. M. Levelt (1968) in binocular rivalry, previous studies have sought to formulate a relation in terms of the effect of physical parameters of the stimulus, such as image contrast in binocular rivalry. However, the link between external parameters and “stimulus strength” is not as obvious for other bistable phenomena. Here we show that systematic relations readily emerge when the mean dominance durations are examined instead as a function of “percept strength,” as measured by the fraction of dominance time, and provide theoretical rationale for this observation. For three different bistable phenomena, plotting the mean dominance durations of the two percepts against the fraction of dominance time resulted in complementary curves with near-perfect symmetry around equi-dominance (the point where each percept dominates half the time). As a consequence, the alternation rate reaches a maximum at equi-dominance. We next show that the observed behavior arises naturally in simple double-well energy models and in neural competition models with cross-inhibition and input normalization. Finally, we discuss the possibility that bistable perceptual switches reflect a perceptual “exploratory” strategy, akin to foraging behavior, which leads naturally to maximal alternation rate at equi-dominance if perceptual switches come with a cost.
perceptual organization; motion—2D; computational modeling
dynamics; firing; latency; bistability; A-type current
Neurons in the medial superior olive (MSO) process sound localization cues through binaural coincidence detection, in which excitatory synaptic inputs from each ear are segregated onto different branches of a bipolar dendritic structure and sum at the soma and axon with submillisecond time resolution. Although synaptic timing and dynamics critically shape this remarkable computation, synaptic interactions with intrinsic ion channels have received less attention. Using paired somatic and dendritic patch-clamp recordings in gerbil brainstem slices together with compartmental modeling, we show that activation of Kv1 channels by dendritic EPSPs accelerates membrane repolarization in a voltage-dependent manner and actively improves the time resolution of synaptic integration. We demonstrate that a somatically biased gradient of Kv1 channels underlies the degree of compensation for passive cable filtering during propagation of EPSPs in dendrites. Thus both the spatial distribution and properties of Kv1 channels play a key role in preserving binaural synaptic timing.
The neural mechanisms underlying the temporal control of behavior are largely unknown. Here we recorded from the medial agranular cortex in rats trained to respond on a temporal production procedure for probabilistically available food reward. Due to variability in estimating the time of food availability, robust responding typically bracketed the expected duration, starting some time before and ending some time after the signaled delay. This response period provided an analytic “steady-state” window during which the subject actively timed their behavior. Remarkably, during these response periods, a variety of firing patterns were seen which could be broadly described as ramps, peaks, and dips, with different slopes, directions, and times at which maxima or minima occur. Regularized linear discriminant analysis indicated that these patterns provided sufficiently reliable information to discriminate the elapsed duration of responding within these response periods. Modeling this across neuron variability showed that the utilization of ramps, dips, and peaks with different slopes and minimal/maximal rates at different times led to a substantial improvement in temporal prediction errors, suggesting that heterogeneity in the neural representation of elapsed time may facilitate temporally controlled behavior.
interval timing; pre-motor cortex; internal clock; peak procedure; discriminant analysis
In neurons of the medial superior olive (MSO), voltage-gated ion channels control the submillisecond time resolution of binaural coincidence detection, but little is known about their interplay during trains of synaptic activity that would be experienced during auditory stimuli. Here, using modeling and patch-clamp recordings from MSO principal neurons in gerbil brainstem slices, we examined interactions between two major currents controlling subthreshold synaptic integration: a low voltage-activated potassium current (IK-LVA) and a hyperpolarization-activated cation current (Ih). Both Ih and IK-LVA contributed strongly to the resting membrane conductance, and during trains of simulated EPSPs exhibited cumulative deactivation and inactivation, respectively. In current-clamp recordings, regular and irregular trains of simulated EPSCs increased input resistance up to 60%, effects that accumulated and decayed (post-train) over hundreds of milliseconds. Surprisingly, the mean voltage and peaks of EPSPs increased by only a few millivolts during trains. Using a model of an MSO cell we demonstrated that the nearly uniform response during modest depolarizing stimuli relied on changes in Ih and IK-LVA, such that their sum remained nearly constant over time. Experiments and modeling showed that for simplified binaural stimuli (EPSC pairs in a noisy background) spike probability gradually increased in parallel with the increasing input resistance. Nevertheless, the interplay between Ih and IK-LVA helps to maintain a nearly uniform shape of individual synaptic responses, and we show that the time resolution of synaptic coincidence detection can be maintained during trains if EPSC size gradually decreases (as in synaptic depression), counteracting slow increases in excitability.
Auditory; Temporal coding; Localization; Potassium channels; Hearing; Brainstem; Binaural
During coordinated eye–hand movements, saccade reaction times (SRTs) and reach reaction times (RRTs) are correlated in humans and monkeys. Reaction times (RTs) measure the degree of movement preparation and can correlate with movement speed and accuracy. However, RTs can also reflect effector nonspecific influences, such as motivation and arousal. We use a combination of behavioral psychophysics and computational modeling to identify plausible mechanisms for correlations in SRTs and RRTs. To disambiguate nonspecific mechanisms from mechanisms specific to movement coordination, we introduce a dual-task paradigm in which a reach and a saccade are cued with a stimulus onset asynchrony (SOA). We then develop several variants of integrate-to-threshold models of RT, which postulate that responses are initiated when the neural activity encoding effector-specific movement preparation reaches a threshold. The integrator models formalize hypotheses about RT correlations and make predictions for how each RT should vary with SOA. To test these hypotheses, we trained three monkeys to perform the eye–hand SOA task and analyzed their SRTs and RRTs. In all three subjects, RT correlations decreased with increasing SOA duration. Additionally, mean SRT decreased with decreasing SOA, revealing facilitation of saccades with simultaneous reaches, as predicted by the model. These results are not consistent with the predictions of the models with common modulation or common input but are compatible with the predictions of a model with mutual excitation between two effector-specific integrators. We propose that RT correlations are not simply attributable to motivation and arousal and are a signature of coordination.
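The mutual-excitation hypothesis above can be sketched with two coupled integrate-to-threshold accumulators, one per effector, each excited by the other's activity while both are still integrating. All parameters and the coupling form below are illustrative assumptions, not the paper's fitted model; the sketch reproduces the two qualitative predictions: RT correlations shrink as SOA grows, and simultaneous cues yield faster RTs.

```python
# Two effector-specific accumulators with mutual excitation while both
# are integrating. Illustrative parameters, not fitted values.
import random

def trial(soa, rng, mu=0.05, sigma=0.1, c=0.04, thresh=1.0):
    x1 = x2 = 0.0          # saccade and reach accumulators
    rt1 = rt2 = None
    t = 0
    while rt1 is None or rt2 is None:
        t += 1
        if rt1 is None:    # coupling active only while the other still integrates
            x1 += mu + (c * x2 if t > soa and rt2 is None else 0.0) + sigma * rng.gauss(0, 1)
            if x1 >= thresh:
                rt1 = t
        if t > soa and rt2 is None:
            x2 += mu + (c * x1 if rt1 is None else 0.0) + sigma * rng.gauss(0, 1)
            if x2 >= thresh:
                rt2 = t - soa
    return rt1, rt2

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

results = {}
for soa in (0, 200):
    rng = random.Random(soa + 1)
    rts = [trial(soa, rng) for _ in range(2000)]
    r = pearson([a for a, _ in rts], [b for _, b in rts])
    results[soa] = (r, sum(b for _, b in rts) / len(rts))
    print(f"SOA={soa}: RT correlation {r:.2f}, mean RT2 {results[soa][1]:.1f}")
```

With SOA = 0 the accumulators overlap in time, so the coupling transmits each one's fluctuations to the other (correlated, faster RTs); with a long SOA the first accumulator has finished before the second starts, so correlations vanish.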
Biological systems are characterized by a high number of interacting components. Determining the role of each component is difficult, addressed here in the context of biological oscillations. Rhythmic behavior can result from the interplay of positive feedback that promotes bistability between high and low activity, and slow negative feedback that switches the system between the high and low activity states. Many biological oscillators include two types of negative feedback processes: divisive (decreases the gain of the positive feedback loop) and subtractive (increases the input threshold) that both contribute to slowly move the system between the high- and low-activity states. Can we determine the relative contribution of each type of negative feedback process to the rhythmic activity? Does one dominate? Do they control the active and silent phase equally? To answer these questions we use a neural network model with excitatory coupling, regulated by synaptic depression (divisive) and cellular adaptation (subtractive feedback). We first attempt to apply standard experimental methodologies: either passive observation to correlate the variations of a variable of interest to system behavior, or deletion of a component to establish whether a component is critical for the system. We find that these two strategies can lead to contradictory conclusions, and at best their interpretive power is limited. We instead develop a computational measure of the contribution of a process, by evaluating the sensitivity of the active (high activity) and silent (low activity) phase durations to the time constant of the process. The measure shows that both processes control the active phase, in proportion to their speed and relative weight. However, only the subtractive process plays a major role in setting the duration of the silent phase. This computational method can be used to analyze the role of negative feedback processes in a wide range of biological rhythms.
As modern experimental techniques uncover new components in biological systems and describe their mutual interactions, the problem of determining the contribution of each component becomes critical. The many feedback loops created by these interactions can lead to oscillatory behavior. Examples of oscillations in biology include the cell cycle, circadian rhythms, the electrical activity of excitable cells, and predator-prey systems. While we understand how negative feedback loops can cause oscillations, when multiple feedback loops are present it becomes difficult to identify the dominant mechanism(s), if any. We address the problem of establishing the relative contribution of a feedback process using a biological oscillator model for which oscillations are controlled by two types of slow negative feedback. To determine which is the dominant process, we first use standard experimental methodologies: either passive observation to correlate a variable's behavior to system activity, or deletion of a component to establish whether that component is critical for the system. We find that these methods have limited applicability to the determination of the dominant process. We then develop a new quantitative measure of the contribution of each process to the oscillations. This computational method can be extended to a wide variety of oscillatory systems.
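One way to formalize the contribution measure described in the two abstracts above is as the normalized (logarithmic) sensitivity of a phase duration to the time constant of a feedback process, estimated by finite differences. The callables below are toy stand-ins for a full oscillator simulation, used only to illustrate the measure:

```python
# Normalized sensitivity of a phase duration P to a process time constant
# tau: d(log P)/d(log tau), via central finite differences. The test
# functions are toy stand-ins for an oscillator simulation.

def log_sensitivity(duration, tau, rel_step=1e-5):
    """d(log P)/d(log tau) for a callable duration(tau)."""
    hi, lo = tau * (1 + rel_step), tau * (1 - rel_step)
    return tau * (duration(hi) - duration(lo)) / (duration(tau) * (hi - lo))

# If a phase duration scales linearly with tau, its sensitivity is 1;
# if it barely depends on tau, its sensitivity is near 0.
print(log_sensitivity(lambda tau: 3.0 * tau, 50.0))          # → ~1.0 (dominant)
print(log_sensitivity(lambda tau: 10.0 + 0.01 * tau, 50.0))  # → ~0.05 (weak)
```

In the papers' terms, a process whose time constant strongly rescales the silent-phase duration (sensitivity near 1) dominates that phase, while a sensitivity near 0 marks a process that merely rides along.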
We investigate analytically a firing rate model for a two-population network based on mutual inhibition and slow negative feedback in the form of spike frequency adaptation. Both neuronal populations receive external constant input whose strength determines the system’s dynamical state—a steady state of identical activity levels or periodic oscillations or a winner-take-all state of bistability. We prove that oscillations appear in the system through supercritical Hopf bifurcations and that they are antiphase. The period of oscillations depends on the input strength in a nonmonotonic fashion, and we show that the increasing branch of the period versus input curve corresponds to a release mechanism and the decreasing branch to an escape mechanism. In the limiting case of infinitely slow feedback we characterize the conditions for release, escape, and occurrence of the winner-take-all behavior. Some extensions of the model are also discussed.
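The architecture analyzed above can be simulated in a few lines: two rate units with mutual inhibition and slow subtractive adaptation. The parameter values below are illustrative choices placed in the oscillatory (release) regime, not the values analyzed in the paper; the sketch shows the antiphase alternation.

```python
# Two-population rate model with mutual inhibition and slow adaptation.
# Illustrative parameters in the oscillatory regime, not the paper's.
import math

def f(x, theta=0.2, k=0.05):            # sigmoidal firing-rate function
    return 1.0 / (1.0 + math.exp(-(x - theta) / k))

def simulate(I=0.5, beta=1.0, gamma=0.6, tau_a=50.0, T=600.0, dt=0.05):
    u1, u2, a1, a2 = 0.8, 0.2, 0.0, 0.0   # slightly asymmetric start
    trace = []
    t = 0.0
    while t < T:
        du1 = -u1 + f(I - beta * u2 - gamma * a1)   # cross-inhibition + adaptation
        du2 = -u2 + f(I - beta * u1 - gamma * a2)
        a1 += dt * (u1 - a1) / tau_a                # slow adaptation tracks activity
        a2 += dt * (u2 - a2) / tau_a
        u1 += dt * du1
        u2 += dt * du2
        trace.append((t, u1, u2))
        t += dt
    return trace

trace = simulate()
# Count dominance switches (sign changes of u1 - u2) after the transient.
diffs = [u1 - u2 for t, u1, u2 in trace if t > 100.0]
switches = sum(1 for d0, d1 in zip(diffs, diffs[1:]) if d0 * d1 < 0)
print(f"{switches} dominance switches")   # antiphase oscillation
```

The dominant unit's adaptation grows until its activity can no longer suppress the rival, producing the release-type switching whose period depends on the input strength, as analyzed in the paper.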
Hopf bifurcation; antiphase oscillations; slow negative feedback; winner-take-all; release and escape; binocular rivalry; central pattern generators
Perceptual bistability occurs when a physical stimulus gives rise to two distinct interpretations that alternate irregularly. Noise and adaptation processes are two possible mechanisms for switching in neuronal competition models that describe the alternating behaviors. Either of these processes, if strong enough, could alone cause the alternations in dominance. We examined their relative roles in producing alternations by studying models in which, by smoothly varying the parameters, one can change the rhythmogenesis mechanism from adaptation-driven to noise-driven. In consideration of the experimental constraints on the statistics of the alternations (mean and shape of the dominance duration distribution and correlations between successive durations), we ask whether we can rule out one of the mechanisms. We conclude that in order to comply with the observed mean of the dominance durations and their coefficient of variation, the models must operate within a balance between the noise and adaptation strength—both mechanisms are involved in producing alternations, in such a way that the system operates near the boundary between being adaptation-driven and noise-driven.
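The alternation statistics that constrain these models, the mean dominance duration and its coefficient of variation (CV), are straightforward to compute from a series of switch times. A small sketch with toy switch times, not data:

```python
# Mean dominance duration and coefficient of variation (CV) from a
# series of perceptual switch times. Toy values, not data.
import statistics

def dominance_stats(switch_times):
    durations = [b - a for a, b in zip(switch_times, switch_times[1:])]
    mean = statistics.mean(durations)
    cv = statistics.stdev(durations) / mean
    return mean, cv

mean, cv = dominance_stats([0.0, 2.1, 4.0, 7.3, 9.2, 12.0])
print(f"mean = {mean:.2f} s, CV = {cv:.2f}")
```

In the models, a purely adaptation-driven oscillator yields a CV near zero while a purely noise-driven one yields a CV near one; the experimentally observed intermediate CV is what forces the balance between the two mechanisms.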
Bistability; Competition; Mutual inhibition; Attractor; Oscillator; Noise
In order to localize sounds in the environment, the auditory system detects and encodes differences in signals between each ear. The exquisite sensitivity of auditory brain stem neurons to the differences in rise time of the excitation signals from the two ears allows for neuronal encoding of microsecond interaural time differences.
Low-frequency sound localization depends on the neural computation of interaural time differences (ITD) and relies on neurons in the auditory brain stem that integrate synaptic inputs delivered by the ipsi- and contralateral auditory pathways that start at the two ears. The first auditory neurons that respond selectively to ITD are found in the medial superior olivary nucleus (MSO). We identified a new mechanism for ITD coding using a brain slice preparation that preserves the binaural inputs to the MSO. There was an internal latency difference for the two excitatory pathways that would, if left uncompensated, position the ITD response function too far outside the physiological range to be useful for estimating ITD. We demonstrate, and support using a biophysically based computational model, that a bilateral asymmetry in excitatory postsynaptic potential (EPSP) slopes provides a robust compensatory delay mechanism due to differential activation of low-threshold potassium conductance on these inputs and permits MSO neurons to encode physiological ITDs. We suggest, more generally, that the dependence of spike probability on rate of depolarization, as in these auditory neurons, provides a mechanism for temporal order discrimination between EPSPs.
Animals can locate the source of a sound by detecting microsecond differences in the arrival time of sound at the two ears. Neurons encoding these interaural time differences (ITDs) receive an excitatory synaptic input from each ear. They can perform a microsecond computation with excitatory synapses that have millisecond time scale because they are extremely sensitive to the input's “rise time,” the time taken to reach the peak of the synaptic input. Current theories assume that the biophysical properties of the two inputs are identical. We challenge this assumption by showing that the rise times of excitatory synaptic potentials driven by the ipsilateral ear are faster than those driven by the contralateral ear. Further, we present a computational model demonstrating that this disparity in rise times, together with the neurons' sensitivity to excitation's rise time, can endow ITD-encoding with microsecond resolution in the biologically relevant range. Our analysis also resolves a timing mismatch. The difference between contralateral and ipsilateral latencies is substantially larger than the relevant ITD range. We show how the rise time disparity compensates for this mismatch. Generalizing, we suggest that phasic-firing neurons—those that respond to rapidly, but not to slowly, changing stimuli—are selective to the temporal ordering of brief inputs. In a coincidence-detection computation the neuron will respond more robustly when a faster input leads a slower one, even if the inputs are brief and have similar amplitudes.
Fundamental properties of phasic-firing neurons are usually characterized in a noise-free condition. In the absence of noise, phasic neurons exhibit Class 3 excitability, which is a lack of repetitive firing to steady current injections. For time-varying inputs, phasic neurons are band-pass filters or slope detectors, because they do not respond to inputs containing exclusively low frequencies or shallow slopes. However, we show that in noisy conditions, response properties of phasic neuron models are distinctly altered. Noise enables a phasic model to encode low-frequency inputs that are outside of the response range of the associated deterministic model. Interestingly, this seemingly stochastic-resonance (SR)-like effect differs significantly from the classical SR behavior of spiking systems in both the signal-to-noise ratio and the temporal response pattern. Instead of being most sensitive to the peak of a subthreshold signal, as is typical in a classical SR system, phasic models are most sensitive to the signal's rising and falling phases where the slopes are steep. This finding is consistent with the fact that there is not an absolute input threshold in terms of amplitude; rather, a response threshold is more properly defined as a stimulus slope/frequency. We call the encoding of low-frequency signals with noise by phasic models a slope-based SR, because noise can lower or diminish the slope threshold for ramp stimuli. We demonstrate here similar behaviors in three mechanistic models with Class 3 excitability in the presence of slowly varying noise, and we suggest that the slope-based SR is a fundamental behavior associated with general phasic properties rather than with a particular biological mechanism.
Principal brain cells, called neurons, show a tremendous amount of diversity in their responses to driving stimuli. A widely present but understudied class of neurons prefers to respond to high-frequency inputs and neglect slow variations; these cells are called phasic neurons. Although phasic neurons do not normally respond to slow signals, we show that noise, a ubiquitous neural input, can enable them to respond to distinct features of slow signals. We emphasize the fact that, in the presence of noise, they are still sensitive to the change in stimulus, rather than to the constant part of the slow inputs, just as they are for fast inputs without noise. This feature distinguishes the response of phasic neurons from those of other neurons, which show more sensitivity to the amplitude of their inputs. We believe that our study has significantly broadened the understanding about the information-processing ability and functional roles of phasic neurons.
The interplay between inhibition and excitation is at the core of cortical network activity. In many cortices, including auditory cortex (ACx), interactions between excitatory and inhibitory neurons generate synchronous network gamma oscillations (30–70 Hz). Here, we show that differences in the connection patterns and synaptic properties of excitatory-inhibitory microcircuits permit the spatial extent of network inputs to modulate the magnitude of gamma oscillations. Simultaneous multiple whole-cell recordings from connected fast-spiking (FS) interneurons and pyramidal cells (PCs) in L2/3 of mouse ACx slices revealed that for intersomatic distances <50 µm, most inhibitory connections occurred in reciprocally connected (RC) pairs; at greater distances, inhibitory connections were equally likely in RC and non-reciprocally connected (nRC) pairs. Furthermore, GABAB-mediated inhibition in RC pairs was weaker than in nRC pairs. Simulations with a network model that incorporated these features showed strong, gamma-band oscillations only when the network inputs were confined to a small area. These findings suggest a novel mechanism by which oscillatory activity can be modulated by adjusting the spatial distribution of afferent input.
Auditory cortex; Connectivity; Inhibition; GABAB receptor; Gamma; Network
When a stimulus supports two distinct interpretations, perception alternates in an irregular manner between them. What causes the bistable perceptual switches remains an open question. Most existing models assume that switches arise from a slow fatiguing process, such as adaptation or synaptic depression. We develop a new, attractor-based framework in which alternations are induced by noise and are absent without it. Our model goes beyond previous energy-based conceptualizations of perceptual bistability by constructing a neurally plausible attractor model that is implemented in both firing rate mean-field and spiking cell-based networks. The model accounts for known properties of bistable perceptual phenomena, most notably the increase in alternation rate with stimulation strength observed in binocular rivalry. Furthermore, it makes a novel prediction about the effect of changing stimulus strength on the activity levels of the dominant and suppressed neural populations, a prediction that could be tested with functional MRI or electrophysiological recordings. The neural architecture derived from the energy-based model readily generalizes to several competing populations, providing a natural extension for multistability phenomena.
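The energy-based picture underlying the attractor model above has a familiar minimal form: an overdamped particle in a double-well landscape, where switches between the wells (the two percepts) occur only because of noise. The parameters below are illustrative, not the paper's network model:

```python
# Noise-driven alternations in a double-well energy landscape,
# E(x) = x^4/4 - x^2/2, with wells at x = ±1. Illustrative parameters.
import math
import random

def alternations(sigma=0.5, dt=0.01, steps=200_000, seed=0):
    rng = random.Random(seed)
    x, well, switches = -1.0, -1, 0
    for _ in range(steps):
        # Langevin step: dx = -E'(x) dt + sigma dW
        x += (x - x**3) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        if well < 0 and x > 0.8:       # crossed into the right well
            well, switches = 1, switches + 1
        elif well > 0 and x < -0.8:    # crossed back into the left well
            well, switches = -1, switches + 1
    return switches

print(alternations(), "alternations with strong noise")
print(alternations(sigma=0.05), "alternations with weak noise")  # → 0: no switching
```

This is the defining signature of the noise-driven regime: without sufficient noise the system stays in one well indefinitely, in contrast to adaptation-driven models, which alternate even deterministically.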