Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on firing activity. A network-level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depends on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to inputs at this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem.
Our brain's ability to perform cognitive processes, such as object identification, problem solving, and decision making, comes from the specific connections between neurons. Neurons carry information as spikes that are transmitted to other neurons via connections with different strengths and propagation delays. Experimentally observed learning rules can modify the strengths of connections between neurons based on the timing of their spikes. The learning that occurs in neuronal networks due to these rules is thought to be vital to creating the structures necessary for different cognitive processes as well as for memory. The spiking rate of populations of neurons has been observed to oscillate at particular frequencies in various brain regions, and there is evidence that these oscillations play a role in cognition. Here, we use analytical and numerical methods to investigate the changes to the network structure caused by a specific learning rule during oscillatory neural activity. We find the conditions under which connections with propagation delays that resonate with the oscillations are strengthened relative to the other connections. We demonstrate that networks learn to respond with stronger oscillations to inputs at the frequency they were presented with during learning. We discuss the possible application of these results to specific areas of the brain.
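The timing-dependent weight update at the core of the model above can be sketched as follows. This is a minimal pair-based additive STDP rule with illustrative parameter values, not the paper's specific implementation; the point is that a connection whose total delay makes presynaptic spikes arrive just before postsynaptic ones is potentiated on every oscillation cycle.

```python
import numpy as np

# Additive (weight-independent) STDP: pre-before-post potentiates,
# post-before-pre depresses, with exponential timing dependence.
# All parameter values below are illustrative assumptions.
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation/depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # STDP time constants (ms)
W_MAX = 1.0

def stdp_dw(dt_ms):
    """Weight change for one spike pair; dt_ms = t_post - t_pre (ms)."""
    if dt_ms > 0:
        return A_PLUS * np.exp(-dt_ms / TAU_PLUS)    # pre leads: potentiate
    return -A_MINUS * np.exp(dt_ms / TAU_MINUS)      # post leads: depress

def update_weight(w, dt_ms):
    # Additive rule: the update does not depend on w; hard bounds keep
    # the weight in [0, W_MAX], which drives weights toward the extremes.
    return float(np.clip(w + stdp_dw(dt_ms), 0.0, W_MAX))

# A connection whose delay places pre spikes 5 ms before post spikes
# is potentiated on every cycle of the oscillatory drive.
w = 0.5
for _ in range(10):
    w = update_weight(w, 5.0)
```

Under oscillatory drive, the sign of `dt_ms` for a given connection is set jointly by the input frequency and the connection's axonal/dendritic delay, which is how frequency-dependent delay selection arises.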
The cortical amygdala receives direct olfactory inputs and is thought to participate in processing and learning of biologically relevant olfactory cues. As for other brain structures implicated in learning, the principal neurons of the anterior cortical nucleus (ACo) exhibit intrinsic subthreshold membrane potential oscillations in the θ-frequency range. Here we show that nearly 50% of ACo layer II neurons also display electrical resonance, consisting of selective responsiveness to stimuli of a preferential frequency (2–6 Hz). Their impedance profile resembles an electrical band-pass filter with a peak at the preferred frequency, in contrast to the low-pass filter properties of other neurons. Most ACo resonant neurons displayed frequency preference along the whole subthreshold voltage range. We used pharmacological tools to identify the voltage-dependent conductances implicated in resonance. A hyperpolarization-activated cationic current dependent on HCN channels underlies resonance at resting and hyperpolarized potentials; notably, this current also participates in resonance at depolarized subthreshold voltages. KV7/KCNQ K+ channels also contribute to resonant behavior at depolarized potentials, but not in all resonant cells. Moreover, resonance was strongly attenuated after blockade of voltage-dependent persistent Na+ channels, suggesting an amplifying role. Remarkably, resonant neurons presented a higher firing probability for stimuli of the preferred frequency. To fully understand the mechanisms underlying resonance in these neurons, we developed a comprehensive conductance-based model including the aforementioned and leak conductances, as well as Hodgkin-Huxley-type channels. The model reproduces the resonant impedance profile and our pharmacological results, allowing a quantitative evaluation of the contribution of each conductance to resonance.
It also replicates selective spiking at the resonant frequency and allows a prediction of the temperature-dependent shift in resonance frequency. Our results provide a complete characterization of the resonant behavior of olfactory amygdala neurons and shed light on a putative mechanism for network activity coordination in the intact brain.
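The band-pass impedance profile described above can be illustrated with a standard simplification: a membrane linearized around rest with one slow restorative current standing in for the HCN conductance. All parameters below are assumptions for illustration, not fits to ACo data; the slow current opposes voltage deviations at low frequencies, converting the low-pass RC membrane into a resonant band-pass filter.

```python
import numpy as np

# Linearized membrane with a slow restorative (h-like) current.
# Units: C in nF, conductances in uS, time in ms, so |Z| is in MOhm.
# Parameter values are illustrative assumptions.
C = 0.2        # membrane capacitance (nF)
G_L = 0.010    # leak conductance (uS)
G_H = 0.030    # slow restorative conductance (uS)
TAU_H = 200.0  # activation time constant of the slow current (ms)

def impedance_mohm(f_hz):
    """Impedance magnitude |Z(f)| in MOhm for the linearized membrane."""
    w = 2.0 * np.pi * f_hz / 1000.0  # angular frequency in rad/ms
    admittance = G_L + 1j * w * C + G_H / (1.0 + 1j * w * TAU_H)
    return float(np.abs(1.0 / admittance))

freqs = np.arange(0.5, 50.0, 0.5)
z = np.array([impedance_mohm(f) for f in freqs])
f_res = float(freqs[np.argmax(z)])   # preferred (resonant) frequency
```

With these values the impedance peaks in the low-θ range and falls off toward both DC and high frequencies; setting `G_H = 0` recovers the purely low-pass RC profile of the non-resonant cells.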
Neurons in area V2 and V4 exhibit stimulus specific tuning to single stimuli, and respond at intermediate firing rates when presented with two differentially preferred stimuli (‘pair response’). Selective attention to one of the two stimuli causes the neuron’s firing rate to shift from the intermediate pair response towards the response to the attended stimulus as if it were presented alone. Attention to single stimuli reduces the response threshold of the neuron and increases spike synchronization at gamma frequencies. The intrinsic and network mechanisms underlying these phenomena were investigated in a multi-compartmental biophysical model of a reconstructed cat V4 neuron. Differential stimulus preference was generated through a greater ratio of excitatory to inhibitory synapses projecting from one of two input V2 populations. Feedforward inhibition and synaptic depression dynamics were critical to generating the intermediate pair response. Neuronal gain effects were simulated using gamma frequency range correlations in the feedforward excitatory and inhibitory inputs to the V4 neuron. For single preferred stimulus presentations, correlations within the inhibitory population out of phase with correlations within the excitatory input significantly reduced the response threshold of the V4 neuron. The pair response to simultaneously active preferred and non-preferred V2 populations could also undergo an increase or decrease in gain via the same mechanism, where correlations in feedforward inhibition are out of phase with gamma band correlations within the excitatory input corresponding to the attended stimulus. The results of this model predict that top-down attention may bias the V4 neuron’s response using an inhibitory correlation phase shift mechanism.
selective attention; V4; gain modulation; gamma band synchrony; out of phase inhibition
Local field potential (LFP) oscillations are often accompanied by synchronization of activity within a widespread cerebral area. Thus, the LFP and neuronal coherence appear to be the result of a common mechanism that underlies neuronal assembly formation. We used the olfactory bulb as a model to investigate: (1) the extent to which unitary dynamics and LFP oscillations can be correlated and (2) how precisely a model of the hypothesized underlying mechanisms can explain the experimental data. For this purpose, we analyzed simultaneous recordings of mitral cell (MC) activity and LFPs in anesthetized and freely breathing rats in response to odorant stimulation. Spike trains were found to be phase-locked to the gamma oscillation at specific firing rates and to form odor-specific temporal patterns. The use of a conductance-based MC model driven by an approximately balanced excitatory-inhibitory input conductance and a relatively small inhibitory conductance that oscillated at the gamma frequency allowed us to provide one explanation of the experimental data via a mode-locking mechanism. This work sheds light on the way network and intrinsic MC properties participate in the locking of MCs to the gamma oscillation in a realistic physiological context and may give rise to a particular time-locked assembly. Finally, we discuss how a self-synchronization process with such entrainment properties can explain, under experimental conditions: (1) why the gamma bursts emerge transiently with a maximal amplitude position relative to the stimulus time course; (2) why the oscillations are prominent at a specific gamma frequency; and (3) why the oscillation amplitude depends on specific stimulus properties. We also discuss information processing and functional consequences derived from this mechanism.
Olfactory function relies on a chain of neural relays that extends from the periphery to the central nervous system and involves neural activity on various timescales. A central question in neuroscience is how information is encoded by the neural activity. In the mammalian olfactory bulb, local neural activity oscillations in the 40–80 Hz range (gamma) may influence the timing of individual neuron activities such that olfactory information may be encoded in this way. In this study, we first characterize in vivo the detailed activity of individual neurons relative to the oscillation and find that, depending on their state, neurons can exhibit periodic activity patterns. We also find, at least qualitatively, a relation between this activity and a particular odor. This is reminiscent of a general physical phenomenon, entrainment by an oscillation, and to verify this hypothesis we build, in a second step, a biologically realistic model mimicking these in vivo conditions. Our model quantitatively confirms this hypothesis and reveals that entrainment is maximal in the gamma range. Taken together, our results suggest that the neuronal activity may be specifically formatted in time during the gamma oscillation in such a way that it could, at this stage, encode the odor.
High-frequency oscillations (above 30 Hz) have been observed in sensory and higher-order brain areas, and are believed to constitute a general hallmark of functional neuronal activation. Fast inhibition in interneuronal networks has been suggested as a general mechanism for the generation of high-frequency oscillations. Certain classes of interneurons exhibit subthreshold oscillations, but the effect of this intrinsic neuronal property on the population rhythm is not completely understood. We study the influence of intrinsic damped subthreshold oscillations in the emergence of collective high-frequency oscillations, and elucidate the dynamical mechanisms that underlie this phenomenon. We simulate neuronal networks composed of either Integrate-and-Fire (IF) or Generalized Integrate-and-Fire (GIF) neurons. The IF model displays purely passive subthreshold dynamics, while the GIF model exhibits subthreshold damped oscillations. Individual neurons receive inhibitory synaptic currents mediated by spiking activity in their neighbors as well as noisy synaptic bombardment, and fire irregularly at a rate lower than the population frequency. We identify three factors that affect the influence of single-neuron properties on synchronization mediated by inhibition: i) the firing rate response to the noisy background input, ii) the membrane potential distribution, and iii) the shape of Inhibitory Post-Synaptic Potentials (IPSPs). For hyperpolarizing inhibition, the GIF IPSP profile (factor iii) exhibits post-inhibitory rebound, which induces a coherent spike-mediated depolarization across cells that greatly facilitates synchronous oscillations. This effect dominates the network dynamics, hence GIF networks display stronger oscillations than IF networks. However, the restorative current in the GIF neuron lowers firing rates and narrows the membrane potential distribution (factors i and ii, respectively), which tend to decrease synchrony.
If inhibition is shunting instead of hyperpolarizing, post-inhibitory rebound is not elicited and factors i) and ii) dominate, yielding lower synchrony in GIF networks than in IF networks.
Neurons in the brain engage in collective oscillations at different frequencies. Gamma and high-gamma oscillations (30–100 Hz and higher) have been associated with cognitive functions, and are altered in psychiatric disorders such as schizophrenia and autism. Our understanding of how high-frequency oscillations are orchestrated in the brain is still limited, but it is necessary for the development of effective clinical approaches to the treatment of these disorders. Some neuron types exhibit dynamical properties that can favour synchronization. The theory of weakly coupled oscillators showed how the phase response of individual neurons can predict the patterns of phase relationships that are observed at the network level. However, neurons in vivo do not behave like regular oscillators, but fire irregularly in a regime dominated by fluctuations. Hence, which intrinsic dynamical properties matter for synchronization, and in which regime, is still an open question. Here, we show how single-cell damped subthreshold oscillations enhance synchrony in interneuronal networks by introducing a depolarizing component, mediated by post-inhibitory rebound, that is correlated among neurons due to common inhibitory input.
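The single-cell distinction the two abstracts above build on can be shown in a few lines. This is an illustrative sketch, not the paper's network model: after a small perturbation, an IF-type membrane relaxes monotonically back to baseline, while a GIF-type membrane, which carries an extra restorative variable `w`, rings back below baseline, i.e. a damped subthreshold oscillation. That rebound is the depolarizing component that can be shared across inhibition-coupled cells.

```python
import numpy as np

# Impulse response of a passive (IF-like) vs. resonant (GIF-like)
# subthreshold membrane. Parameters are illustrative assumptions.
DT = 0.1                   # Euler step (ms)
G_L, C = 0.1, 1.0          # leak conductance and capacitance
G_1, TAU_1 = 0.3, 10.0     # restorative coupling and its time constant

def impulse_response(gif, t_max=200.0, v0=1.0):
    """Relaxation of V from an initial 1 mV deflection, no input."""
    n = int(t_max / DT)
    v, w = v0, 0.0
    trace = np.empty(n)
    for i in range(n):
        dv = (-G_L * v - (G_1 * w if gif else 0.0)) / C
        dw = (v - w) / TAU_1
        v += DT * dv
        w += DT * dw
        trace[i] = v
    return trace

v_if = impulse_response(gif=False)   # monotonic decay, never undershoots
v_gif = impulse_response(gif=True)   # damped oscillation: rebounds below 0
```

For these parameters the GIF eigenvalues are complex, so the trace undershoots baseline before settling; the IF trace stays on one side of baseline throughout.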
This article presents a model of grid cell firing based on the intrinsic persistent firing shown experimentally in neurons of entorhinal cortex. In this model, the mechanism of persistent firing allows individual neurons to hold a stable baseline firing frequency. Depolarizing input from speed modulated head direction cells transiently shifts the frequency of firing from baseline, resulting in a shift in spiking phase in proportion to the integral of velocity. The convergence of input from different persistent firing neurons causes spiking in a grid cell only when the persistent firing neurons are within similar phase ranges. This model effectively simulates the two-dimensional firing of grid cells in open field environments, as well as the properties of theta phase precession. This model provides an alternate implementation of oscillatory interference models. The persistent firing could also interact on a circuit level with rhythmic inhibition and neurons showing membrane potential oscillations to code position with spiking phase. These mechanisms could operate in parallel with computation of position from visual angle and distance of stimuli. In addition to simulating two-dimensional grid patterns, models of phase interference can account for context-dependent firing in other tasks. In network simulations of entorhinal cortex, hippocampus and postsubiculum, the reset of phase effectively replicates context-dependent firing by entorhinal and hippocampal neurons during performance of a continuous spatial alternation task, a delayed spatial alternation task with running in a wheel during the delay period, and a hairpin maze task.
grid cells; place cells; persistent spiking; membrane potential oscillations; theta rhythm; neuromodulation; stellate cells; spatial navigation; entorhinal cortex
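The path-integration step in the model above can be reduced to a generic velocity-controlled oscillator; the sketch below is that abstraction with illustrative parameters, not the persistent-firing implementation itself. Depolarizing input proportional to running speed along a preferred direction transiently shifts the oscillator's frequency away from baseline, so its phase offset relative to baseline integrates velocity and therefore encodes distance travelled along that direction.

```python
import numpy as np

# Velocity-controlled oscillator: phase offset = integral of the
# speed-driven frequency shift. Parameter values are illustrative.
F0 = 8.0       # baseline oscillation frequency (Hz)
BETA = 0.05    # frequency shift per unit speed (cycles per cm)
DT = 0.001     # integration step (s)

def phase_offset(speed, heading, pref_dir, t_max):
    """Phase (rad) relative to baseline after t_max s of straight
    running at `speed` (cm/s) in direction `heading` (rad)."""
    phi = 0.0
    for _ in range(round(t_max / DT)):
        v_along = speed * np.cos(heading - pref_dir)  # velocity component
        freq = F0 + BETA * v_along                    # shifted frequency
        phi += 2.0 * np.pi * (freq - F0) * DT         # offset vs. baseline
    return phi

# Covering 20 cm along the preferred direction advances the phase by
# BETA * 20 = 1 cycle, regardless of running speed.
slow = phase_offset(10.0, 0.0, 0.0, t_max=2.0)
fast = phase_offset(20.0, 0.0, 0.0, t_max=1.0)
```

Grid-like firing then follows when several such oscillators with preferred directions 60 degrees apart converge on a cell that spikes only where their phases align.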
Intrinsic plasticity (IP) is a ubiquitous activity-dependent process regulating neuronal excitability and a cellular correlate of behavioral learning and neuronal homeostasis. Because IP is induced rapidly and maintained long-term, it likely represents a major determinant of adaptive collective neuronal dynamics. However, assessing the exact impact of IP has remained elusive. Indeed, it is extremely difficult to disentangle the complex non-linear interaction between IP effects, by which conductance changes alter neuronal activity, and IP rules, whereby activity modifies conductance via signaling pathways. Moreover, the mechanisms underlying the two major IP effects on firing rate, threshold and gain modulation, remain unknown. Here, using extensive simulations and sensitivity analysis of Hodgkin-Huxley models, we show that threshold and gain modulation are accounted for by maximal conductance plasticity of conductances that lie in two separate domains of the parameter space, corresponding to sub- and supra-threshold conductances (i.e., those activating below or above the spike onset threshold potential). Analyzing equivalent integrate-and-fire models, we provide formal expressions of sensitivities to conductance parameters, unraveling previously undescribed mechanisms governing IP effects. Our results generalize to the IP of other conductance parameters and allow strong inference for calcium-gated conductances, yielding a general picture that accounts for a large repertoire of experimental observations. The expressions we provide can be combined with IP rules in rate or spiking models, offering a general framework to systematically assess the computational consequences of IP of pharmacologically identified conductances with both fine-grained description and mathematical tractability. We provide an example of such an IP loop model addressing the important issue of the homeostatic regulation of spontaneous discharge.
Because we do not formulate any assumptions on modification rules, the present theory is also relevant to other neural processes involving excitability changes, such as neuromodulation, development, aging and neural disorders.
Over the past decades, experimental and theoretical studies of the cellular basis of learning and memory have mainly focused on synaptic plasticity, the experience-dependent modification of synapses. However, behavioral learning has also been correlated with experience-dependent changes of non-synaptic voltage-dependent ion channels. This intrinsic plasticity changes the neuron's propensity to fire action potentials in response to synaptic inputs. Thus a fundamental problem is to relate changes of the neuron input-output function with voltage-gated conductance modifications. Using a sensitivity analysis in biophysically realistic models, we depict a generic dichotomy between two classes of voltage-dependent ion channels. These two classes modify the threshold and the slope of the neuron input-output relation, allowing neurons to regulate the range of inputs they respond to and the gain of that response, respectively. We further provide analytical descriptions that enlighten the dynamical mechanisms underlying these effects and propose a concise and realistic framework for assessing the computational impact of intrinsic plasticity in neuron network models. Our results account for a large repertoire of empirical observations and may enlighten functional changes that characterize development, aging and several neural diseases, which also involve changes in voltage-dependent ion channels.
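The threshold/gain dichotomy above can be made concrete in the simplest possible setting, a leaky integrate-and-fire neuron with illustrative parameters (this is a didactic reduction, not the authors' Hodgkin-Huxley analysis). A subthreshold conductance such as the leak sets the rheobase and thus shifts the input-output (f-I) curve sideways, while the slope at high input is fixed by the capacitance and the spike/reset mechanism, so changing the leak moves the threshold with little effect on the asymptotic gain.

```python
import numpy as np

# LIF f-I curve: threshold (rheobase) vs. gain (slope) dependence on
# the leak conductance. Units: C in nF, g in uS, V in mV, I in nA,
# rates in kHz. Parameter values are illustrative assumptions.
C = 1.0
E_L, V_TH, V_RESET = -70.0, -50.0, -70.0

def rheobase(g_l):
    """Minimal constant current that elicits spiking."""
    return g_l * (V_TH - E_L)

def firing_rate(i_in, g_l):
    """Steady-state LIF rate (kHz) for constant input i_in (nA)."""
    if i_in <= rheobase(g_l):
        return 0.0
    tau = C / g_l
    v_inf = E_L + i_in / g_l
    t_spike = tau * np.log((v_inf - V_RESET) / (v_inf - V_TH))
    return 1.0 / t_spike

def gain(g_l, i_in=20.0, di=1.0):
    """Local f-I slope, evaluated at a high input current."""
    return (firing_rate(i_in + di, g_l) - firing_rate(i_in, g_l)) / di
```

Doubling `g_l` doubles the rheobase (a pure threshold shift) but changes the high-input gain by only about one percent, which is the LIF caricature of the sub- vs. suprathreshold conductance dichotomy.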
Somatostatin-expressing, low threshold-spiking (LTS) cells and fast-spiking (FS) cells are two common subtypes of inhibitory neocortical interneuron. Excitatory synapses from regular-spiking (RS) pyramidal neurons to LTS cells strongly facilitate when activated repetitively, whereas RS-to-FS synapses depress. This suggests that LTS neurons may be especially relevant at high rate regimes and protect cortical circuits against over-excitation and seizures. However, the inhibitory synapses from LTS cells usually depress, which may reduce their effectiveness at high rates. We ask: by which mechanisms and at what firing rates do LTS neurons control the activity of cortical circuits responding to thalamic input, and how is control by LTS neurons different from that of FS neurons? We study rate models of circuits that include RS cells and LTS and FS inhibitory cells with short-term synaptic plasticity. LTS neurons shift the RS firing-rate vs. current curve to the right at high rates and reduce its slope at low rates; the LTS effect is delayed and prolonged. FS neurons always shift the curve to the right and affect RS firing transiently. In an RS-LTS-FS network, FS neurons reach a quiescent state if they receive weak input, LTS neurons are quiescent if RS neurons receive weak input, and both FS and RS populations are active if they both receive large inputs. In general, FS neurons tend to follow the spiking of RS neurons much more closely than LTS neurons. A novel type of facilitation-induced slow oscillations is observed above the LTS firing threshold with a frequency determined by the time scale of recovery from facilitation. To conclude, contrary to earlier proposals, LTS neurons affect the transient and steady state responses of cortical circuits over a range of firing rates, not only during the high rate regime; LTS neurons protect against over-activation about as well as FS neurons.
The brain consists of circuits of neurons that signal to one another via synapses. There are two classes of neurons: excitatory cells, which cause other neurons to become more active, and inhibitory neurons, which cause other neurons to become less active. It is thought that the activity of excitatory neurons is kept in check largely by inhibitory neurons; when such an inhibitory “brake” fails, a seizure can result. Inhibitory neurons of the low-threshold spiking (LTS) subtype can potentially fulfill this braking, or anticonvulsant, role because the synaptic input to these neurons facilitates, i.e., those neurons are active when excitatory neurons are strongly active. Using a computational model we show that, because the synaptic output of LTS neurons onto excitatory neurons depresses (decreases with activity), the ability of LTS neurons to prevent strong cortical activity and seizures is not qualitatively larger than that of inhibitory neurons of another subtype, the fast-spiking (FS) cells. Furthermore, short-term (∼one second) changes in the strength of synapses to and from LTS interneurons allow them to shape the behavior of cortical circuits even at modest rates of activity, and an RS-LTS-FS circuit is capable of producing slow oscillations, on the time scale of these short-term changes.
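The short-term synaptic dynamics at the heart of the circuit above can be sketched with the standard Tsodyks-Markram formalism; the parameter values are illustrative assumptions, not fits to RS-LTS or RS-FS synapses. A facilitating synapse grows stronger across a spike train, so LTS cells are recruited preferentially when presynaptic activity is sustained, while a depressing synapse weakens with repetitive activation.

```python
import numpy as np

# Tsodyks-Markram short-term plasticity: u is release probability
# (facilitates), x is the available resource fraction (depresses).
# Per-spike efficacy is u * x. Parameters are illustrative.
def efficacies(n_spikes, isi, u0, tau_f, tau_d):
    """Per-spike efficacies for a regular presynaptic train (ISI in ms)."""
    u, x = u0, 1.0
    out = []
    for _ in range(n_spikes):
        u = u + u0 * (1.0 - u)              # spike: release prob. jumps
        out.append(u * x)                    # efficacy of this spike
        x = x * (1.0 - u)                    # resources consumed
        u = u0 + (u - u0) * np.exp(-isi / tau_f)   # relax between spikes
        x = 1.0 + (x - 1.0) * np.exp(-isi / tau_d)
    return np.array(out)

# Facilitating (RS->LTS-like) vs. depressing (RS->FS-like) synapse,
# both probed with a 40 Hz train of ten spikes.
facil = efficacies(10, isi=25.0, u0=0.1, tau_f=500.0, tau_d=50.0)
depr = efficacies(10, isi=25.0, u0=0.6, tau_f=20.0, tau_d=500.0)
```

The opposite trends of the two efficacy sequences are what give LTS and FS interneurons their different operating ranges: FS inhibition dominates transients, LTS inhibition builds up during sustained activity.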
The ability of spiking neurons to synchronize their activity in a network depends on the response behavior of these neurons as quantified by the phase response curve (PRC) and on coupling properties. The PRC characterizes the effects of transient inputs on spike timing and can be measured experimentally. Here we use the adaptive exponential integrate-and-fire (aEIF) neuron model to determine how subthreshold and spike-triggered slow adaptation currents shape the PRC. Based on that, we predict how synchrony and phase-locked states of coupled neurons change in the presence of synaptic delays and unequal coupling strengths. We find that increased subthreshold adaptation currents cause a transition of the PRC from only phase advances to phase advances and delays in response to excitatory perturbations. Increased spike-triggered adaptation currents on the other hand predominantly skew the PRC to the right. Both adaptation induced changes of the PRC are modulated by spike frequency, being more prominent at lower frequencies. Applying phase reduction theory, we show that subthreshold adaptation stabilizes synchrony for pairs of coupled excitatory neurons, while spike-triggered adaptation causes locking with a small phase difference, as long as synaptic heterogeneities are negligible. For inhibitory pairs synchrony is stable and robust against conduction delays, and adaptation can mediate bistability of in-phase and anti-phase locking. We further demonstrate that stable synchrony and bistable in/anti-phase locking of pairs carry over to synchronization and clustering of larger networks. The effects of adaptation in aEIF neurons on PRCs and network dynamics qualitatively reflect those of biophysical adaptation currents in detailed Hodgkin-Huxley-based neurons, which underscores the utility of the aEIF model for investigating the dynamical behavior of networks.
Our results suggest neuronal spike frequency adaptation as a mechanism synchronizing low frequency oscillations in local excitatory networks, but indicate that inhibition rather than excitation generates coherent rhythms at higher frequencies.
Synchronization of neuronal spiking in the brain is related to cognitive functions, such as perception, attention, and memory. It is therefore important to determine which properties of neurons influence their collective behavior in a network and to understand how. A prominent feature of many cortical neurons is spike frequency adaptation, which is caused by slow transmembrane currents. We investigated how these adaptation currents affect the synchronization tendency of coupled model neurons. Using the efficient adaptive exponential integrate-and-fire (aEIF) model and a biophysically detailed neuron model for validation, we found that increased adaptation currents promote synchronization of coupled excitatory neurons at lower spike frequencies, as long as the conduction delays between the neurons are negligible. Inhibitory neurons on the other hand synchronize in the presence of conduction delays, with or without adaptation currents. Our results emphasize the utility of the aEIF model for computational studies of neuronal network dynamics. We conclude that adaptation currents provide a mechanism to generate low frequency oscillations in local populations of excitatory neurons, while faster rhythms seem to be caused by inhibition rather than excitation.
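The aEIF model referred to above is fully specified by two differential equations and a reset rule; the sketch below simulates it with standard equations but illustrative parameter values (assumptions, not those used in the study). It shows the model's most familiar signature, spike-frequency adaptation: the adaptation current `w` is incremented by `b` at each spike and driven by `a*(V - E_L)` between spikes, so interspike intervals lengthen during a constant current step.

```python
import numpy as np

# aEIF neuron (Brette-Gerstner form), forward-Euler integration.
# Units: pF, nS, mV, pA, ms. Parameter values are illustrative.
C, G_L = 200.0, 10.0
E_L, V_T, D_T = -70.0, -50.0, 2.0
V_RESET, V_SPIKE = -58.0, 0.0
A, B, TAU_W = 2.0, 60.0, 200.0   # subthresh. nS, spike-triggered pA, ms
DT = 0.05

def spike_times(i_ext, t_max=1000.0):
    v, w, t, spikes = E_L, 0.0, 0.0, []
    while t < t_max:
        dv = (-G_L * (v - E_L) + G_L * D_T * np.exp((v - V_T) / D_T)
              - w + i_ext) / C
        dw = (A * (v - E_L) - w) / TAU_W
        v += DT * dv
        w += DT * dw
        if v >= V_SPIKE:            # spike: reset V, increment adaptation
            spikes.append(t)
            v, w = V_RESET, w + B
        t += DT
    return np.array(spikes)

isis = np.diff(spike_times(i_ext=500.0))   # response to a 500 pA step
```

The same two adaptation parameters, `a` and `b`, are the knobs whose effects on the PRC and on network synchrony the study dissects.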
Understanding the mechanisms underlying distributed pattern formation in brain networks and its content-driven dynamical segmentation is an area of intense study. We investigate a theoretical mechanism for selective activation of diverse neural populations that is based on dynamically shifting cellular resonances in functionally or structurally coupled networks. We specifically show that sub-threshold neuronal depolarization from synaptic coupling or external input can shift neurons into and out of resonance with specific bands of existing extracellular oscillations, and this can act as a dynamic readout mechanism during information storage and retrieval. We find that this mechanism is robust and suggest it as a general coding strategy that can be applied to any network with oscillatory nodes.
Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel is perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, and II).
The spatio-temporal activity pattern generated by a recurrent neuronal network can provide a rich dynamical basis which allows readout neurons to generate a variety of responses by tuning the synaptic weights of their inputs. The repertoire of possible responses and the response reliability become maximal if the spike trains of individual neurons are uncorrelated. Spike-train correlations in cortical networks can indeed be very small, even for neighboring neurons. This seems to be at odds with the finding that neighboring neurons receive a considerable fraction of inputs from identical presynaptic sources constituting an inevitable source of correlation. In this article, we show that inhibitory feedback, abundant in biological neuronal networks, actively suppresses correlations. The mechanism is generic: It does not depend on the details of the network nodes and decorrelates networks composed of excitatory and inhibitory neurons as well as purely inhibitory networks. For the case of the leaky integrate-and-fire model, we derive the correlation structure analytically. The new toolbox of formal linearization and a basis transformation exposing the feedback component is applicable to a range of biological systems. We confirm our analytical results by direct simulations.
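The one-dimensional negative-feedback argument for homogeneous inhibitory networks can be illustrated with a linear toy model (illustrative parameters, not the paper's derivation): the compound activity obeys a noisy linear equation in which recurrent inhibition feeds the population rate back with negative sign, and opening that loop, i.e. the feedforward manipulation, inflates the population-rate fluctuations.

```python
import numpy as np

# Linearized compound dynamics: tau * dr/dt = -(1 + gain) * r + noise.
# gain > 0 models recurrent inhibitory feedback; gain = 0 is the
# opened (feedforward) loop. Parameters are illustrative.
rng = np.random.default_rng(0)
TAU, G, DT, N = 10.0, 5.0, 0.1, 200_000   # ms, feedback gain, ms, steps
noise = rng.normal(0.0, 1.0, N)           # same drive for both systems

def simulate(gain):
    r, trace = 0.0, np.empty(N)
    for i in range(N):
        r += DT * (-(1.0 + gain) * r + noise[i]) / TAU
        trace[i] = r
    return trace

var_feedback = simulate(G).var()        # closed loop (intact network)
var_feedforward = simulate(0.0).var()   # feedback channel opened
```

In this linear setting the feedback suppresses the variance of the compound activity by a factor of roughly `1 + G`, the toy-model analogue of the orders-of-magnitude amplification seen when feedback correlations are neglected in the full network.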
Spatiotemporal pattern formation in neuronal networks depends on the interplay between cellular and network synchronization properties. The neuronal phase response curve (PRC) is an experimentally obtainable measure that characterizes the cellular response to small perturbations, and can serve as an indicator of cellular propensity for synchronization. Two broad classes of PRCs have been identified for neurons: Type I, in which small excitatory perturbations induce only advances in firing, and Type II, in which small excitatory perturbations can induce both advances and delays in firing. Interestingly, neuronal PRCs are usually attenuated with increased spiking frequency, and Type II PRCs typically exhibit a greater attenuation of the phase delay region than of the phase advance region. We found that this phenomenon arises from an interplay between the time constants of active ionic currents and the interspike interval. As a result, excitatory networks consisting of neurons with Type I PRCs responded very differently to frequency modulation compared to excitatory networks composed of neurons with Type II PRCs. Specifically, increased frequency induced a sharp decrease in synchrony of networks of Type II neurons, while frequency increases only minimally affected synchrony in networks of Type I neurons. These results are demonstrated in networks in which both types of neurons were modeled generically with the Morris-Lecar model, as well as in networks consisting of Hodgkin-Huxley-based model cortical pyramidal cells in which simulated effects of acetylcholine changed PRC type. These results are robust to different network structures, synaptic strengths and modes of driving neuronal activity, and they indicate that Type I and Type II excitatory networks may display two distinct modes of processing information.
Synchronization of the firing of neurons in the brain is related to many cognitive functions, such as recognizing faces, discriminating odors, and coordinating movement. It is therefore important to understand what properties of neuronal networks promote synchrony of neural firing. One measure that is often used to determine the contribution of individual neurons to network synchrony is called the phase response curve (PRC). PRCs describe how the timing of neuronal firing changes depending on when input, such as a synaptic signal, is received by the neuron. A characteristic of PRCs that has previously not been well understood is that they change dramatically as the neuron's firing frequency is modulated. This effect carries potential significance, since cognitive functions are often associated with specific frequencies of network activity in the brain. We showed computationally that the frequency dependence of PRCs can be explained by the relative timing of ionic membrane currents with respect to the time between spike firings. Our simulations also showed that the frequency dependence of neuronal PRCs leads to frequency-dependent changes in network synchronization that can be different for different neuron types. These results further our understanding of how synchronization is generated in the brain to support various cognitive functions.
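The measurement procedure behind a PRC can be sketched on the simplest tonically firing model, a leaky integrate-and-fire neuron with illustrative parameters (a didactic stand-in, not the Morris-Lecar or pyramidal-cell models used in the study). A small depolarizing kick is delivered at each phase of the cycle and the resulting advance of the next spike is recorded; this minimal neuron is Type I, so excitatory kicks only ever advance firing.

```python
import numpy as np

# Direct PRC measurement on an LIF oscillator (dimensionless voltage,
# threshold 1, reset 0). Parameter values are illustrative.
TAU_M, V_TH, V_RESET = 20.0, 1.0, 0.0   # ms, thresh., reset
I_DRIVE = 1.5                            # suprathreshold drive
DT = 0.001                               # ms

def time_to_spike(v0):
    """Integrate V' = (-V + I) / tau from v0 to threshold crossing."""
    v, t = v0, 0.0
    while v < V_TH:
        v += DT * (-v + I_DRIVE) / TAU_M
        t += DT
    return t

T0 = time_to_spike(V_RESET)              # unperturbed period

def prc(phase, kick=0.02):
    """Spike advance caused by a small voltage kick at phase in [0,1)."""
    t_kick = phase * T0
    # membrane state at the kick time (exact solution of the LIF ODE)
    v = I_DRIVE + (V_RESET - I_DRIVE) * np.exp(-t_kick / TAU_M)
    return T0 - (t_kick + time_to_spike(v + kick))

phases = np.linspace(0.05, 0.95, 10)
curve = np.array([prc(p) for p in phases])
```

Replacing the LIF with a model carrying slow active currents is what introduces delay regions and the frequency-dependent attenuation the study analyzes; the measurement loop itself stays the same.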
Oscillatory interference models propose a mechanism by which the spatial firing pattern of grid cells can arise from the interaction of multiple oscillators that shift in relative phase. These models produce aspects of the physiological data such as the phase precession dynamics observed in grid cells. However, existing oscillatory interference models did not predict the in-field DC shifts in the membrane potential of grid cells that have been observed during intracellular recordings in navigating animals. Here, we demonstrate that DC shifts can be generated in an oscillatory interference model when half-wave rectified oscillatory inputs are summed by a leaky integrate-and-fire neuron with a long membrane decay constant (100 ms). The non-linear mean of the half-wave rectified input signal is reproduced in the grid cell's membrane potential trace, producing the DC shift within the field. For shorter values of the decay constant, integration is more effective if the input signal, comprising input from 6 head direction selective populations, is temporally spread during in-field epochs; this requires that the head direction selective populations act as velocity controlled oscillators with baseline oscillations that are phase offset from one another. The resulting simulated membrane potential matches several properties of the empirical intracellular recordings, including: in-field DC shifts, theta-band oscillations, phase precession of both membrane potential oscillations and grid cell spiking activity relative to network theta, and a stronger correlation between DC-shift amplitude and firing rate than between theta-band oscillation amplitude and firing rate. This work serves to demonstrate that oscillatory interference models can account for the DC shifts in the membrane potential observed during intracellular recordings of grid cells without the need to appeal to attractor dynamics.
grid cells; theta phase precession; oscillatory interference model; leaky-integrate-and-fire neuron; oscillations
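The summation mechanism described in this abstract, where a long membrane decay constant lets the non-zero mean of a half-wave rectified oscillation appear as a sustained depolarization, can be sketched with a passive leaky integrator. The 8 Hz theta frequency and unit input amplitude below are illustrative choices, not the study's fitted values.

```python
import numpy as np

def leaky_integrate(inputs, dt=1.0, tau_m=100.0):
    """Passive membrane (no spiking): dv/dt = -v/tau_m + input; tau_m in ms."""
    v = np.zeros(len(inputs))
    for i in range(1, len(inputs)):
        v[i] = v[i - 1] + dt * (-v[i - 1] / tau_m + inputs[i])
    return v

dt = 1.0                                   # ms
t = np.arange(0.0, 2000.0, dt)             # 2 s of input
theta = np.sin(2 * np.pi * 8e-3 * t)       # 8 Hz theta-band oscillation

v_rect = leaky_integrate(np.maximum(theta, 0.0), dt)  # half-wave rectified
v_raw = leaky_integrate(theta, dt)                    # zero-mean control

# The rectified input's non-linear mean (1/pi for a unit sine) is integrated
# into a sustained DC shift; the zero-mean input produces none.
```

With tau_m = 100 ms the membrane averages over several theta cycles, so the DC component dominates while a residual theta-band ripple rides on top, qualitatively matching the intracellular picture described above.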
In the hippocampus and the neocortex, the coupling between local field potential (LFP) oscillations and the spiking of single neurons can be highly precise, across neuronal populations and cell types. Spike phase (i.e., the spike time with respect to a reference oscillation) is known to carry reliable information, both with phase-locking behavior and with more complex phase relationships, such as phase precession. How this precision is achieved by neuronal populations, whose membrane properties and total input may be quite heterogeneous, is nevertheless unknown. In this note, we investigate a simple mechanism for learning precise LFP-to-spike coupling in feed-forward networks – the reliable, periodic modulation of presynaptic firing rates during oscillations, coupled with spike-timing dependent plasticity. When oscillations are within the biological range (2–150 Hz), firing rates of the inputs change on a timescale highly relevant to spike-timing dependent plasticity (STDP). Through analytic and computational methods, we find points of stable phase-locking for a neuron with plastic input synapses. These points correspond to precise phase-locking behavior in the feed-forward network. The location of these points depends on the oscillation frequency of the inputs, the STDP time constants, and the balance of potentiation and de-potentiation in the STDP rule. For a given input oscillation, the balance of potentiation and de-potentiation in the STDP rule is the critical parameter that determines the phase at which an output neuron will learn to spike. These findings are robust to changes in intrinsic post-synaptic properties. Finally, we discuss implications of this mechanism for stable learning of spike-timing in the hippocampus.
spike-timing dependent plasticity; oscillations; phase-locking; stable learning; stability of neuronal plasticity; place fields
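The role of the potentiation/depression balance can be sketched analytically. For a synapse whose presynaptic spikes lead the postsynaptic spike by a fixed lag on every cycle of a period-T oscillation, summing an additive exponential STDP window over cycles gives the net weight drift; the lag at which that drift changes sign is the analogue of the stable phase discussed above. The window shape and every constant below are assumptions for illustration, not the paper's values.

```python
import numpy as np

TAU = 20.0  # ms, shared time constant for potentiation and depression

def stdp(dt_ms, a_plus=0.010, a_minus=0.012):
    """Additive exponential STDP window for post-minus-pre differences."""
    mag = np.exp(-np.abs(dt_ms) / TAU)
    return np.where(dt_ms > 0, a_plus * mag, -a_minus * mag)

def drift(lag, a_minus, T=125.0, n=50):
    """Net weight change per cycle for a pre spike `lag` ms before the
    post spike, summed over 2n+1 neighbouring oscillation cycles."""
    ks = np.arange(-n, n + 1)
    return float(stdp(lag + ks * T, a_minus=a_minus).sum())

def zero_crossing(a_minus):
    """Smallest lag (1 ms grid) at which net depression takes over."""
    return next(l for l in np.arange(1.0, 124.0) if drift(l, a_minus) < 0)

z_weak = zero_crossing(0.012)    # mild depression bias
z_strong = zero_crossing(0.020)  # stronger depression bias
# A stronger depression term shrinks the range of pre-before-post lags
# that still potentiates, shifting where the output neuron learns to spike.
```

This mirrors the abstract's claim that, for a given input oscillation, the balance of potentiation and de-potentiation is the critical parameter setting the learned phase.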
The presence of voltage fluctuations arising from synaptic activity is a critical component in models of gain control, neuronal output gating, and spike rate coding. The degree to which individual neuronal input-output functions are modulated by voltage fluctuations, however, is not well established across different cortical areas. Additionally, the extent and mechanisms of input-output modulation through fluctuations have been explored largely in simplified models of spike generation, and with limited consideration for the role of non-linear and voltage-dependent membrane properties. To address these issues, we studied fluctuation-based modulation of input-output responses in medial entorhinal cortical (MEC) stellate cells of rats, which express strong sub-threshold non-linear membrane properties. Using in vitro recordings, dynamic clamp and modeling, we show that the modulation of input-output responses by random voltage fluctuations in stellate cells is significantly limited. In stellate cells, a voltage-dependent increase in membrane resistance at sub-threshold voltages mediated by Na+ conductance activation limits the ability of fluctuations to elicit spikes. Similarly, in exponential leaky integrate-and-fire models using a shallow voltage-dependence for the exponential term that matches stellate cell membrane properties, a low degree of fluctuation-based modulation of input-output responses can be attained. These results demonstrate that fluctuation-based modulation of input-output responses is not a universal feature of neurons and can be significantly limited by subthreshold voltage-gated conductances.
The membrane voltage of neurons in vivo is dominated by noisy “background” fluctuations generated by network-based synaptic activity from nearby cells. It has been speculated that membrane voltage fluctuations in neurons play an important role in scaling the relationship between input amplitude and spike rate response. For this to be true, neuronal spike input-output behavior must be sensitive to physiological membrane voltage fluctuations. Using a combination of single cell recordings and modeling, we investigated the mechanisms through which voltage fluctuations modulate neuronal input-output responses. We find that neurons that express an increase in membrane input resistance with depolarization show low levels of noise-mediated modulation of input-output responses due, in part, to voltage trajectories that suppress the likelihood of generating a spike in response to random current input fluctuations. Hence, non-linear membrane properties arising from certain types of voltage-gated conductances limit noise-based modulation of neuronal input-output responses.
Fundamental properties of phasic firing neurons are usually characterized in a noise-free condition. In the absence of noise, phasic neurons exhibit Class 3 excitability, which is a lack of repetitive firing to steady current injections. For time-varying inputs, phasic neurons are band-pass filters or slope detectors, because they do not respond to inputs containing exclusively low frequencies or shallow slopes. However, we show that in noisy conditions, response properties of phasic neuron models are distinctly altered. Noise enables a phasic model to encode low-frequency inputs that are outside of the response range of the associated deterministic model. Interestingly, this seemingly stochastic-resonance (SR) like effect differs significantly from the classical SR behavior of spiking systems in both the signal-to-noise ratio and the temporal response pattern. Instead of being most sensitive to the peak of a subthreshold signal, as is typical in a classical SR system, phasic models are most sensitive to the signal's rising and falling phases where the slopes are steep. This finding is consistent with the fact that there is not an absolute input threshold in terms of amplitude; rather, a response threshold is more properly defined as a stimulus slope/frequency. We call the encoding of low-frequency signals with noise by phasic models a slope-based SR, because noise can lower or diminish the slope threshold for ramp stimuli. We demonstrate here similar behaviors in three mechanistic models with Class 3 excitability in the presence of slowly varying noise, and we suggest that the slope-based SR is a fundamental behavior associated with general phasic properties rather than with a particular biological mechanism.
Principal brain cells, called neurons, show a tremendous amount of diversity in their responses to driving stimuli. A widely present but understudied class of neurons prefers to respond to high-frequency inputs and neglect slow variations; these cells are called phasic neurons. Although phasic neurons do not normally respond to slow signals, we show that noise, a ubiquitous neural input, can enable them to respond to distinct features of slow signals. We emphasize the fact that, in the presence of noise, they are still sensitive to the change in stimulus, rather than to the constant part of the slow inputs, just as they are for fast inputs without noise. This feature distinguishes the response of phasic neurons from those of other neurons, which show more sensitivity to the amplitude of their inputs. We believe that our study has significantly broadened our understanding of the information-processing ability and functional roles of phasic neurons.
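The deterministic baseline of Class 3 excitability, spiking on steep slopes but not on slow ramps, can be caricatured with an integrate-and-fire unit whose threshold slowly tracks the membrane voltage. This toy is not one of the three mechanistic models in the study, and all time constants and amplitudes are illustrative.

```python
import numpy as np

def phasic_spikes(ramp_ms, amp=2.0, dt=0.1, tau_m=10.0, tau_th=40.0):
    """Integrate-and-fire unit with a voltage-tracking threshold.
    Current ramps from 0 to `amp` over `ramp_ms`, then holds for 200 ms.
    Returns the number of spikes fired."""
    t = np.arange(0.0, ramp_ms + 200.0, dt)
    I = np.minimum(amp * t / ramp_ms, amp)
    v, theta, n = 0.0, 1.0, 0
    for i in range(len(t)):
        v += dt * (I[i] - v) / tau_m
        theta += dt * (1.0 + v - theta) / tau_th  # threshold adapts to v
        if v >= theta:
            n += 1
            v = 0.0
    return n

fast = phasic_spikes(5.0)    # steep ramp: transient spiking
slow = phasic_spikes(500.0)  # same final amplitude, shallow slope: silence
```

Because the threshold eventually settles above the voltage, a steady current elicits no repetitive firing (Class 3); only inputs that outrun the threshold's adaptation produce spikes. That slope sensitivity is what noise exploits in the slope-based SR described above.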
We propose a model of the primary auditory cortex (A1), in which each iso-frequency column is represented by a recurrent neural network with short-term synaptic depression. Such networks can emit Population Spikes, in which most of the neurons fire synchronously for a short time period. Different columns are interconnected in a way that reflects the tonotopic map in A1, and population spikes can propagate along the map from one column to the next, in a temporally precise manner that depends on the specific input presented to the network. The network, therefore, processes incoming sounds by precise sequences of population spikes that are embedded in a continuous asynchronous activity, with both of these response components carrying information about the inputs and interacting with each other. With these basic characteristics, the model can account for a wide range of experimental findings. We reproduce neuronal frequency tuning curves, whose width depends on the strength of the intracortical inhibitory and excitatory connections. Non-simultaneous two-tone stimuli show forward masking depending on their temporal separation, as well as on the duration of the first stimulus. The model also exhibits non-linear suppressive interactions between sub-threshold tones and broad-band noise inputs, similar to the hypersensitive locking suppression recently demonstrated in auditory cortex. We derive several predictions from the model. In particular, we predict that spontaneous activity in primary auditory cortex gates the temporally locked responses of A1 neurons to auditory stimuli. Spontaneous activity could, therefore, be a mechanism for rapid and reversible modulation of cortical processing.
auditory processing; neural networks; synaptic depression; synchronization
Despite the high prevalence of anxiety accompanying chronic pain, the mechanisms underlying pain-related anxiety are largely unknown. With its well-documented role in pain and emotion processing, the amygdala may act as a key player in the pathogenesis of neuropathic pain-related anxiety. Pain-related plasticity and sensitization of neurons in the CeA (central nucleus of the amygdala) have been shown in several models of chronic pain. In addition, the firing pattern of spiking neurons can powerfully affect the functional output of a brain nucleus, and GABAergic neurons are crucial in the modulation of neuronal excitability. In this study, we first investigated whether pain-related plasticity (e.g. alteration of neuronal firing patterns) and sensitization of CeA neurons contribute to nerve injury-evoked anxiety in neuropathic rats. Furthermore, we explored whether GABAergic disinhibition is responsible for regulating the firing patterns and intrinsic excitability of CeA neurons, as well as for pain-related anxiety, in neuropathic rats.
We discovered that spinal nerve ligation (SNL) produced neuropathic pain-related anxiety-like behaviors in rats, which could be specifically inhibited by intra-CeA administration of the anti-anxiety drug diazepam. Moreover, we found potentiated plasticity and sensitization of CeA neurons in SNL rats exhibiting anxiety, including: 1) increased burst firing and early-adapting firing patterns; 2) increased spike frequency and intrinsic excitability; and 3) increased amplitude of both the after-depolarized potential (ADP) and sub-threshold membrane potential oscillations. In addition, we observed a remarkable reduction of GABAergic inhibition in CeA neurons in SNL rats exhibiting anxiety, which proved important for the altered firing patterns and hyperexcitability of CeA neurons and thereby contributed greatly to the development of neuropathic pain-related anxiety. Accordingly, activation of GABAergic inhibition by intra-CeA administration of muscimol, a selective GABAA receptor agonist, inhibited SNL-induced anxiety-like behaviors in neuropathic rats. By contrast, suppression of GABAergic inhibition by intra-CeA administration of bicuculline, a selective GABAA receptor antagonist, produced anxiety-like behavior in normal rats.
This study suggests that a reduction of GABAergic inhibition may be responsible for the potentiated plasticity and sensitization of CeA neurons, which likely underlie the enhanced output of the amygdala and neuropathic pain-related anxiety in SNL rats.
Electronic supplementary material
The online version of this article (doi:10.1186/s13041-014-0072-z) contains supplementary material, which is available to authorized users.
Anxiety; Neuropathic pain; Firing pattern; CeA; GABA
In the olfactory bulb, lateral inhibition mediated by granule cells has been suggested to modulate the timing of mitral cell firing, thereby shaping the representation of input odorants. Current experimental techniques, however, do not enable a clear study of how the mitral-granule cell network sculpts odor inputs to represent odor information spatially and temporally. To address this critical step in the neural basis of odor recognition, we built a biophysical network model of mitral and granule cells, corresponding to 1/100th of the real system in the rat, and used direct experimental imaging data of glomeruli activated by various odors. The model allows the systematic investigation and generation of testable hypotheses of the functional mechanisms underlying odor representation in the olfactory bulb circuit. Specifically, we demonstrate that lateral inhibition emerges within the olfactory bulb network through recurrent dendrodendritic synapses when constrained by a range of balanced excitatory and inhibitory conductances. We find that the spatio-temporal dynamics of lateral inhibition plays a critical role in building the glomerular-related cell clusters observed in experiments, through the modulation of synaptic weights during odor training. Lateral inhibition also mediates the development of sparse and synchronized spiking patterns of mitral cells related to odor inputs within the network, with the frequency of these synchronized spiking patterns also modulated by the sniff cycle.
In this paper we address the role of lateral inhibition in a neuronal network. It is an essential and widespread mechanism of neural processing that has been demonstrated in many brain systems. Revealing how and to what extent it can modulate input signals and give rise to some form of perception would require network-wide recordings of individual cells during in vivo behavioral experiments. While this problem has been intensely investigated, recording experimentally from a large enough set of cells to decipher the emergent properties and behavior of the network is beyond current methods, leaving the underlying computational and functional roles of lateral inhibition still poorly understood. We addressed this problem using a large-scale model of the olfactory bulb. The model demonstrates how lateral inhibition modulates the evolving dynamics of the olfactory bulb network, generating mitral and granule cell responses that account for critical experimental findings. It also suggests how odor identity can be represented by a combination of temporal and spatial patterns of mitral cell activity, with both feedforward excitation and lateral inhibition via dendrodendritic synapses as the underlying mechanisms facilitating network self-organization and the emergence of synchronized oscillations.
Synchronized oscillation is very commonly observed in many neuronal systems and might play an important role in the response properties of the system. We have studied how spontaneous oscillatory activity affects the responsiveness of a neuronal network, using a neural network model of the visual cortex built from Hodgkin-Huxley type excitatory (E-) and inhibitory (I-) neurons. When the isotropic local E-I and I-E synaptic connections were sufficiently strong, the network commonly generated gamma-frequency oscillatory firing patterns in response to random feed-forward (FF) input spikes. This spontaneous oscillatory network activity injects a periodic local current that can amplify a weak synaptic input and enhance the network's responsiveness. When E-E connections were added, we found that the strength of the oscillation can be modulated by varying the FF input strength without any changes in single-neuron properties or interneuron connectivity. The response modulation is proportional to the oscillation strength, which leads to self-regulation such that the cortical network selectively amplifies various FF inputs according to their strength, without requiring any adaptation mechanism. We show that this selective cortical amplification is controlled by E-E cell interactions. We also found that this response amplification is spatially localized, which suggests that the responsiveness modulation may also be spatially selective. This suggests a generalized mechanism by which neural oscillatory activity can enhance the selectivity of a neural network to FF inputs.
In the nervous system, information is delivered and processed digitally via voltage spikes transmitted between cells. A neural system is characterized by its input/output spike signal patterns. Generally, a network of neurons shows a very different response pattern than that of a single neuron. In some cases, a neural network generates interesting population activities, such as synchronized oscillations, which are thought to modulate the response properties of the network. However, the exact role of these neural oscillations is unknown. We investigated the relationship between oscillatory activity and response modulation in neural networks using computational simulations. We found that the response of the system is significantly modified by the oscillations in the network. In particular, the responsiveness to weak inputs is remarkably enhanced. This suggests that the oscillation can differentially amplify sensory information depending on the input signal conditions. We conclude that a neural network can dynamically modify its response properties through the selective amplification of sensory signals due to oscillatory activity, which may explain some experimental observations and help us to better understand neural information processing.
Recordings from area V4 of monkeys have revealed that when the focus of attention is on a visual stimulus within the receptive field of a cortical neuron, two distinct changes can occur: The firing rate of the neuron can change and there can be an increase in the coherence between spikes and the local field potential (LFP) in the gamma-frequency range (30–50 Hz). The hypothesis explored here is that these observed effects of attention could be a consequence of changes in the synchrony of local interneuron networks. We performed computer simulations of a Hodgkin-Huxley type neuron driven by a constant depolarizing current, I, representing visual stimulation and a modulatory inhibitory input representing the effects of attention via local interneuron networks. We observed that the neuron's firing rate and the coherence of its output spike train with the synaptic inputs were modulated by the degree of synchrony of the inhibitory inputs. When inhibitory synchrony increased, the coherence of spiking model neurons with the synaptic input increased, but the firing rate either increased or remained the same. The mean number of synchronous inhibitory inputs was a key determinant of the shape of the firing rate versus current (f–I) curves. For a large number of inhibitory inputs (~50), the f–I curve saturated for large I and an increase in input synchrony resulted in a shift of sensitivity—the model neuron responded to weaker inputs I. For a small number (~10), the f–I curves were non-saturating and an increase in input synchrony led to an increase in the gain of the response—the firing rate in response to the same input was multiplied by an approximately constant factor. The firing rate modulation with inhibitory synchrony was highest when the input network oscillated in the gamma frequency range.
Thus, the observed changes in firing rate and coherence of neurons in the visual cortex could be controlled by top-down inputs that regulated the coherence in the activity of a local inhibitory network discharging at gamma frequencies.
Selective attention; Synchrony; Noise; Gamma oscillation; Gain modulation; Computer model
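A toy version of this "windows of opportunity" effect can be sketched with a current-based integrate-and-fire neuron that receives the same mean inhibition either as a constant barrage (asynchronous) or packed into brief periodic pulses (synchronous). All values are illustrative and expressed in units of the membrane time constant; this is not the Hodgkin-Huxley model used in the study.

```python
import numpy as np

def spike_count(inhib, I=1.8, dt=1e-3, t_max=40.0, v_th=1.0):
    """Leaky integrate-and-fire neuron, dv/dt = I - v - s(t), with reset.
    `inhib(t)` returns the instantaneous inhibitory current s(t)."""
    v, n = 0.0, 0
    for t in np.arange(0.0, t_max, dt):
        v += dt * (I - v - inhib(t))
        if v >= v_th:
            v, n = 0.0, n + 1
    return n

T, w, mean_inh = 2.0, 0.2, 1.0
async_inh = lambda t: mean_inh                                  # constant
sync_inh = lambda t: mean_inh * T / w if (t % T) < w else 0.0   # pulsed

n_async = spike_count(async_inh)  # constant inhibition keeps v subthreshold
n_sync = spike_count(sync_inh)    # same mean, but gaps let the neuron fire
```

With constant inhibition the net drive stays subthreshold, while the identical mean inhibition delivered in pulses leaves disinhibited gaps in which the neuron fires: input synchrony shifts the cell's sensitivity toward weaker drive, as in the saturating f–I regime described above.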
Computational studies as well as in vivo and in vitro results have shown that many cortical neurons fire in a highly irregular manner and at low average firing rates. These patterns seem to persist even when highly rhythmic signals are recorded by local field potential electrodes or other methods that quantify the summed behavior of a local population. Models of the 30–80 Hz gamma rhythm in which network oscillations arise through ‘stochastic synchrony’ capture the variability observed in the spike output of single cells while preserving network-level organization. We extend upon these results by constructing model networks constrained by experimental measurements and using them to probe the effect of biophysical parameters on network-level activity. We find in simulations that gamma-frequency oscillations are enabled by a high level of incoherent synaptic conductance input, similar to the barrage of noisy synaptic input that cortical neurons have been shown to receive in vivo. This incoherent synaptic input increases the emergent network frequency by shortening the time scale of the membrane in excitatory neurons and by reducing the temporal separation between excitation and inhibition due to decreased spike latency in inhibitory neurons. These mechanisms are demonstrated in simulations and in vitro current-clamp and dynamic-clamp experiments. Simulation results further indicate that the membrane potential noise amplitude has a large impact on network frequency and that the balance between excitatory and inhibitory currents controls network stability and sensitivity to external inputs.
The gamma rhythm is a prominent, 30–80-Hz EEG signal that is associated with cognition. Several classes of computational models have been posited to explain the gamma rhythm mechanistically. We study a particular class in which the gamma rhythm arises from delayed negative feedback. Our study is unique in that we calibrate the model from direct measurements. We also test the model's most critical predictions directly in experiments that take advantage of cutting-edge computer technologies able to simulate ion channels in real time. Our major findings are that a large amount of “background” synaptic input to neurons is necessary to promote the gamma rhythm; that inhibitory neurons are specially tuned to keep the gamma rhythm stable; that noise has a strong effect on network frequency; and that incoming sensory input can be represented with sensitivity that depends on the strength of excitatory-excitatory synapses and the number of neurons receiving the input. Overall, our results support the hypothesis that the gamma rhythm reflects the presence of delayed feedback that controls overall cortical activity on a cycle-by-cycle basis. Furthermore, its frequency range mainly reflects the timescale of synaptic inhibition, the degree of background activity, and noise levels in the network.
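The membrane time-scale shortening invoked above follows directly from treating background synaptic bombardment as added conductance: tau_eff = C / (g_leak + g_syn). A minimal worked example (conductance values are illustrative, not the calibrated measurements in the study):

```python
# Effective membrane time constant under a barrage of synaptic conductance.
C = 1.0       # membrane capacitance, uF/cm^2
g_leak = 0.1  # leak conductance, mS/cm^2  ->  resting tau = C/g_leak = 10 ms

for g_syn in (0.0, 0.1, 0.3):          # increasing background conductance
    tau_eff = C / (g_leak + g_syn)     # ms
    print(f"g_syn = {g_syn:.1f}  ->  tau_eff = {tau_eff:.1f} ms")
```

Quadrupling the total conductance cuts the integration window to a quarter of its resting value, pushing excitatory neurons, and hence the emergent network rhythm, to faster time scales, consistent with the mechanism the abstract describes.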
It has been proposed that synchronized neural assemblies in the antennal lobe of insects encode the identity of olfactory stimuli. In response to an odor, some projection neurons exhibit synchronous firing, phase-locked to the oscillations of the field potential, whereas others do not. Experimental data indicate that neural synchronization and field oscillations are induced by fast GABAA-type inhibition, but it remains unclear how desynchronization occurs. We hypothesize that slow inhibition plays a key role in desynchronizing projection neurons. Because synaptic noise is believed to be the dominant factor that limits neuronal reliability, we consider a computational model of the antennal lobe in which a population of oscillatory neurons interact through unreliable GABAA and GABAB inhibitory synapses. From theoretical analysis and extensive computer simulations, we show that transmission failures at slow GABAB synapses make the neural response unpredictable. Depending on the balance between GABAA and GABAB inputs, particular neurons may either synchronize or desynchronize. These findings suggest a wiring scheme that triggers stimulus-specific synchronized assemblies. Inhibitory connections are set by Hebbian learning and selectively activated by stimulus patterns to form a spiking associative memory whose storage capacity is comparable to that of classical binary-coded models. We conclude that fast inhibition acts in concert with slow inhibition to reformat the glomerular input into odor-specific synchronized neural assemblies.
A fundamental question in computational neuroscience is to understand how interactions between neurons underlie sensory coding and information storage. In the first relay of the insect olfactory system, odorant stimuli trigger synchronized activities in neuron populations. Synchronized assemblies may arise as a consequence of inhibitory coupling, because they are disrupted when inhibition is pharmacologically blocked. Using computational modelling, we studied the role of inhibitory, noisy interactions in producing stimulus-specific synchrony. So far, experimental data and modelling studies indicate that fast inhibition induces neural synchrony, but it remains unclear how desynchronization occurs. From theoretical analysis and computer simulations, we found that slow inhibition plays a key role in desynchronizing neurons. Depending on the balance between fast and slow inhibitory inputs, particular neurons may either synchronize or desynchronize. The complementary roles of the two synaptic time scales in the formation of neural assemblies suggest a wiring scheme that produces stimulus-specific inhibitory interactions and endows inhibitory sub-circuits with properties of binary memories.
Functional magnetic resonance imaging (fMRI), with blood oxygenation level-dependent (BOLD) contrast, is a widely used technique for studying the human brain. However, it is an indirect measure of underlying neuronal activity and the processes that link this activity to BOLD signals are still a topic of much debate. In order to relate findings from fMRI research to other measures of neuronal activity it is vital to understand the underlying neurovascular coupling mechanism. Currently, there is no consensus on the relative roles of synaptic and spiking activity in the generation of the BOLD response. Here we designed a modelling framework to investigate different neurovascular coupling mechanisms. We use Electroencephalographic (EEG) and fMRI data from a visual stimulation task together with biophysically informed mathematical models describing how neuronal activity generates the BOLD signals. These models allow us to non-invasively infer the degree of local synaptic and spiking activity in the healthy human brain. In addition, we use Bayesian model comparison to decide between neurovascular coupling mechanisms. We show that the BOLD signal is dependent upon both the synaptic and spiking activity but that the relative contributions of these two inputs are dependent upon the underlying neuronal firing rate. When the underlying neuronal firing is low then the BOLD response is best explained by synaptic activity. However, when the neuronal firing rate is high then both synaptic and spiking activity are required to explain the BOLD signal.
Functional magnetic resonance imaging (fMRI), with blood oxygenation level-dependent (BOLD) contrast, is a widely used technique for studying the human brain. However, the relationship between neuronal activity and blood flow, the basis of fMRI, is still under much debate. A growing body of evidence from animal studies suggests that fMRI signals are more closely coupled to synaptic input activity than to the spiking output of a neuronal population. However, data from neurosurgical patients does not seem to support this view, and this hypothesis has not yet been tested in the healthy human brain. Here we design a powerful and efficient modelling framework that can be used to non-invasively compare different biologically plausible hypotheses of neurovascular coupling. We use this framework to explore the contribution of these two aspects of neuronal activity (synaptic and spiking) to the generation of hemodynamic signals in human visual cortex, with electroencephalographic (EEG)-fMRI data. Our results provide preliminary evidence that, depending on the frequency of the visual stimulus and the underlying firing rate, fMRI relates more closely to synaptic activity (low frequencies) or to both synaptic and spiking activities (high frequencies).
A typical Go/No-Go decision is suggested to be implemented in the brain via the activation of the direct or indirect pathway in the basal ganglia. Medium spiny neurons (MSNs) in the striatum, receiving input from cortex and projecting to the direct and indirect pathways, express D1 and D2 type dopamine receptors, respectively. Recently, it has become clear that the two types of MSNs markedly differ in their mutual and recurrent connectivities as well as in the feedforward inhibition they receive from fast-spiking interneurons (FSIs). Therefore, to understand striatal function in action selection, it is of key importance to identify the role of the distinct connectivities within and between the two types of MSNs on the balance of their activity. Here, we used both a reduced firing rate model and numerical simulations of a spiking network model of the striatum to analyze the dynamic balance of spiking activities in D1 and D2 MSNs. We show that the asymmetric connectivity of the two types of MSNs turns the striatum into a threshold device, indicating the state of cortical input rates and correlations by the relative activity rates of D1 and D2 MSNs. Next, we describe how this striatal threshold can be effectively modulated by the activity of fast spiking interneurons, by the dopamine level, and by the activity of the GPe via pallidostriatal backprojections. We show that multiple mechanisms exist in the basal ganglia for biasing striatal output in favour of either the `Go' or the `No-Go' pathway. This new understanding of striatal network dynamics provides novel insights into the putative role of the striatum in various behavioral deficits in patients with Parkinson's disease, including increased reaction times, L-Dopa-induced dyskinesia, and deep brain stimulation-induced impulsivity.
The basal ganglia (BG) play a crucial role in a variety of cognitive and motor functions. BG dysfunction leads to brain disorders such as Parkinson's disease. At the main input stage of the BG, the striatum, two competing pathways originate. Neurons projecting on these pathways express either D1 or D2 type dopamine receptors. Because the activity of D1 or D2 neurons facilitates Go- or No-Go-type decisions, it is important to study the balance of D1 and D2 neuron activity. Contrary to the common assumption thus far, recent data show an asymmetry in the striatal circuit, with D1 neurons receiving higher inhibition from D2 neurons and fast-spiking neurons. Here, we studied the functional implications of this asymmetric connectivity between D1 and D2 neurons. Our analysis and simulations show that the asymmetric connectivity between these neurons gives rise to a decision transition threshold (DTT); as a consequence, D1 (D2) neurons have higher firing rates at lower (higher) average cortical firing rates. Importantly, the DTT can be modulated by input correlations, local connectivity, feedforward inhibition and dopamine. Our results suggest that abnormal changes in the DTT could be a plausible mechanism underlying the cognitive and motor deficits associated with brain diseases involving BG malfunction.
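The decision-transition-threshold behaviour described above can be sketched with a reduced two-population rate model in which D1 receives stronger cortical drive but also stronger inhibition from D2. All gains, weights, and the sigmoidal rate function below are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

def f(x, rmax=50.0, x0=10.0, s=2.0):
    """Sigmoidal rate function (parameters illustrative)."""
    return rmax / (1.0 + np.exp(-(x - x0) / s))

def steady_rates(c, g1=1.4, g2=1.0, j12=0.6, j21=0.2, eta=0.2, n=2000):
    """Damped fixed-point iteration of the two-population rate model.
    D1 gets stronger cortical drive (g1 > g2) but stronger inhibition
    from D2 (j12 > j21): the asymmetry described in the text."""
    r1 = r2 = 0.0
    for _ in range(n):
        r1 += eta * (f(g1 * c - j12 * r2) - r1)
        r2 += eta * (f(g2 * c - j21 * r1) - r2)
    return r1, r2

lo = steady_rates(5.0)    # weak cortical input: D1 dominates ('Go')
hi = steady_rates(20.0)   # strong cortical input: D2 dominates ('No-Go')
```

At weak drive the recurrent inhibition is barely engaged and D1's stronger cortical gain wins; at strong drive D2's weakly inhibited activity shuts D1 down. Moving any weight or gain shifts the crossover input, which is the sense in which the threshold is modulable.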