Neurons in areas V2 and V4 exhibit stimulus-specific tuning to single stimuli, and respond at intermediate firing rates when presented with two differentially preferred stimuli (the ‘pair response’). Selective attention to one of the two stimuli shifts the neuron’s firing rate from the intermediate pair response towards the response evoked by the attended stimulus presented alone. Attention to single stimuli reduces the response threshold of the neuron and increases spike synchronization at gamma frequencies. The intrinsic and network mechanisms underlying these phenomena were investigated in a multi-compartmental biophysical model of a reconstructed cat V4 neuron. Differential stimulus preference was generated through a greater ratio of excitatory to inhibitory synapses projecting from one of two input V2 populations. Feedforward inhibition and synaptic depression dynamics were critical to generating the intermediate pair response. Neuronal gain effects were simulated using gamma-frequency-range correlations in the feedforward excitatory and inhibitory inputs to the V4 neuron. For single preferred stimulus presentations, correlations within the inhibitory population out of phase with correlations within the excitatory input significantly reduced the response threshold of the V4 neuron. The pair response to simultaneously active preferred and non-preferred V2 populations could likewise undergo an increase or decrease in gain via the same mechanism, in which correlations in feedforward inhibition are out of phase with gamma-band correlations within the excitatory input corresponding to the attended stimulus. The results of this model predict that top-down attention may bias the V4 neuron’s response using an inhibitory correlation phase-shift mechanism.
selective attention; V4; gain modulation; gamma band synchrony; out of phase inhibition
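The phase-shift mechanism above can be caricatured far more simply than in the multi-compartmental model: a single current-based leaky integrate-and-fire unit driven by gamma-modulated excitation and inhibition. All parameters below are illustrative choices, not values from the study; the point is only that moving the inhibitory modulation out of phase with the excitatory modulation converts a subthreshold drive into a spiking one.

```python
import numpy as np

def firing_rate(phi, f=40.0, dt=0.05, t_end=2000.0, tau=10.0, v_th=1.0):
    # Current-based LIF driven by gamma-modulated excitatory and inhibitory
    # input; phi is the phase of the inhibitory modulation relative to the
    # excitatory one. All values are illustrative, in arbitrary units.
    omega = 2*np.pi*f/1000.0  # rad/ms
    t = np.arange(0.0, t_end, dt)
    exc = 1.2*(1.0 + 0.8*np.sin(omega*t))
    inh = 0.5*(1.0 + 0.8*np.sin(omega*t + phi))
    v, spikes = 0.0, 0
    for i in range(len(t)):
        v += dt*(-v + exc[i] - inh[i])/tau
        if v >= v_th:
            v = 0.0  # reset
            spikes += 1
    return spikes/(t_end/1000.0)  # Hz

rate_inphase = firing_rate(0.0)      # inhibition tracks excitation
rate_antiphase = firing_rate(np.pi)  # out-of-phase inhibition
```

With inhibition in phase, the two modulations largely cancel and the mean drive stays subthreshold; with anti-phase inhibition, the effective gamma-band drive is large enough to carry the membrane over threshold on the depolarizing phase of each cycle, i.e. the response threshold is effectively lowered.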
Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on their firing activity. A network-level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depends on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to inputs at this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem.
Our brain's ability to perform cognitive processes, such as object identification, problem solving, and decision making, comes from the specific connections between neurons. Neurons carry information as spikes that are transmitted to other neurons via connections with different strengths and propagation delays. Experimentally observed learning rules can modify the strengths of connections between neurons based on the timing of their spikes. The learning that occurs in neuronal networks due to these rules is thought to be vital to creating the structures necessary for different cognitive processes as well as for memory. The spiking rate of populations of neurons has been observed to oscillate at particular frequencies in various brain regions, and there is evidence that these oscillations play a role in cognition. Here, we use analytical and numerical methods to investigate the changes to network structure caused by a specific learning rule during oscillatory neural activity. We find the conditions under which connections with propagation delays that resonate with the oscillations are strengthened relative to the other connections. We demonstrate that networks learn to respond with stronger oscillations to inputs at the frequency they were presented with during learning. We discuss the possible application of these results to specific areas of the brain.
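The delay-selection effect described above can be sketched with a standard additive-STDP estimate: the expected drift of a weight is the STDP window integrated against the spike-pair correlation, which for oscillatory input is cosine-modulated and shifted by the connection's axonal delay. This is a generic textbook-style calculation with illustrative parameters, not the analytical model developed in the paper.

```python
import numpy as np

def stdp_window(s, a_plus=1.0, a_minus=1.0, tau=20.0):
    # Additive STDP window; s = t_post - t_pre in ms.
    # Causal pairs (s > 0) potentiate, anti-causal pairs depress.
    return np.where(s > 0, a_plus*np.exp(-s/tau), -a_minus*np.exp(s/tau))

def drift(delay_ms, freq_hz, ds=0.1, span=200.0):
    # Expected weight drift for a recurrent connection with axonal delay
    # `delay_ms` when pre- and postsynaptic rates oscillate at `freq_hz`:
    # the spike-pair correlation is cosine-modulated, shifted by the delay.
    s = np.arange(-span, span, ds)
    corr = np.cos(2*np.pi*freq_hz*(s - delay_ms)/1000.0)
    return float(np.sum(stdp_window(s)*corr)*ds)

delays = np.arange(0.0, 25.0, 0.5)          # candidate axonal delays (ms)
drifts = [drift(d, 40.0) for d in delays]   # 40 Hz (gamma-band) input
best = float(delays[int(np.argmax(drifts))])  # delay favored by STDP
```

With this symmetric window the drift is approximately proportional to the sine of (oscillation frequency times delay), so connections whose delay sits near a quarter of the oscillation period (about 6 ms at 40 Hz) are maximally potentiated, while delays near three quarters of a period are depressed.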
The cortical amygdala receives direct olfactory inputs and is thought to participate in processing and learning of biologically relevant olfactory cues. As in other brain structures implicated in learning, the principal neurons of the anterior cortical nucleus (ACo) exhibit intrinsic subthreshold membrane potential oscillations in the θ-frequency range. Here we show that nearly 50% of ACo layer II neurons also display electrical resonance, consisting of selective responsiveness to stimuli of a preferential frequency (2–6 Hz). Their impedance profile resembles an electrical band-pass filter with a peak at the preferred frequency, in contrast to the low-pass filter properties of other neurons. Most ACo resonant neurons displayed frequency preference across the whole subthreshold voltage range. We used pharmacological tools to identify the voltage-dependent conductances implicated in resonance. A hyperpolarization-activated cationic current depending on HCN channels underlies resonance at resting and hyperpolarized potentials; notably, this current also participates in resonance at depolarized subthreshold voltages. KV7/KCNQ K+ channels also contribute to resonant behavior at depolarized potentials, but not in all resonant cells. Moreover, resonance was strongly attenuated after blockade of voltage-dependent persistent Na+ channels, suggesting an amplifying role. Remarkably, resonant neurons presented a higher firing probability for stimuli of the preferred frequency. To fully understand the mechanisms underlying resonance in these neurons, we developed a comprehensive conductance-based model including the aforementioned and leak conductances, as well as Hodgkin and Huxley-type channels. The model reproduces the resonant impedance profile and our pharmacological results, allowing a quantitative evaluation of the contribution of each conductance to resonance.
It also replicates selective spiking at the resonant frequency and allows a prediction of the temperature-dependent shift in resonance frequency. Our results provide a complete characterization of the resonant behavior of olfactory amygdala neurons and shed light on a putative mechanism for network activity coordination in the intact brain.
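The band-pass behavior reported above can be reproduced in a minimal linearized membrane model: a passive RC compartment plus one slow restorative current (an Ih/KCNQ-like variable). The parameters below are illustrative, chosen only so that the impedance peak falls in the 2–6 Hz range; they are not fitted to the ACo data or to the full conductance-based model.

```python
import numpy as np

def impedance(freq_hz, c_m=1.0, g_leak=0.05, g_res=0.2, tau_w=250.0):
    # Linearized membrane: passive RC plus one slow restorative current w,
    #   c_m dV/dt = -g_leak*V - g_res*w + I,   tau_w dw/dt = V - w.
    # Frequencies in Hz; time constants in ms; arbitrary conductance units.
    omega = 2*np.pi*freq_hz/1000.0  # rad/ms
    z = 1.0/(g_leak + 1j*omega*c_m + g_res/(1.0 + 1j*omega*tau_w))
    return np.abs(z)

freqs = np.linspace(0.5, 20.0, 400)
z_res = impedance(freqs)                 # band-pass: peak at preferred freq
f_res = float(freqs[int(np.argmax(z_res))])
z_passive = impedance(freqs, g_res=0.0)  # no slow current: low-pass profile
```

Removing the slow current (g_res = 0) recovers the low-pass profile of non-resonant cells. In this linear picture the persistent Na+ amplification described above would correspond to an additional regenerative conductance that deepens and sharpens the impedance peak.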
Local field potential (LFP) oscillations are often accompanied by synchronization of activity within a widespread cerebral area. Thus, the LFP and neuronal coherence appear to be the result of a common mechanism that underlies neuronal assembly formation. We used the olfactory bulb as a model to investigate: (1) the extent to which unitary dynamics and LFP oscillations can be correlated and (2) how precisely a model of the hypothesized underlying mechanisms can explain the experimental data. For this purpose, we analyzed simultaneous recordings of mitral cell (MC) activity and LFPs in anesthetized and freely breathing rats in response to odorant stimulation. Spike trains were found to be phase-locked to the gamma oscillation at specific firing rates and to form odor-specific temporal patterns. The use of a conductance-based MC model driven by an approximately balanced excitatory-inhibitory input conductance and a relatively small inhibitory conductance that oscillated at the gamma frequency allowed us to provide one explanation of the experimental data via a mode-locking mechanism. This work sheds light on the way network and intrinsic MC properties participate in the locking of MCs to the gamma oscillation in a realistic physiological context and may result in a particular time-locked assembly. Finally, we discuss how a self-synchronization process with such entrainment properties can explain, under experimental conditions: (1) why the gamma bursts emerge transiently with a maximal amplitude position relative to the stimulus time course; (2) why the oscillations are prominent at a specific gamma frequency; and (3) why the oscillation amplitude depends on specific stimulus properties. We also discuss information processing and functional consequences derived from this mechanism.
Olfactory function relies on a chain of neural relays that extends from the periphery to the central nervous system and involves neural activity on various timescales. A central question in neuroscience is how information is encoded by the neural activity. In the mammalian olfactory bulb, local neural activity oscillations in the 40–80 Hz range (gamma) may influence the timing of individual neurons' activity, such that olfactory information may be encoded in this timing. In this study, we first characterize in vivo the detailed activity of individual neurons relative to the oscillation and find that, depending on their state, neurons can exhibit periodic activity patterns. We also find, at least qualitatively, a relation between this activity and a particular odor. This is reminiscent of a general physical phenomenon, entrainment by an oscillation, and to verify this hypothesis we build, in a second phase, a biologically realistic model mimicking these in vivo conditions. Our model confirms this hypothesis quantitatively and reveals that entrainment is maximal in the gamma range. Taken together, our results suggest that neuronal activity may be specifically formatted in time during the gamma oscillation in such a way that it could, at this stage, encode the odor.
High-frequency oscillations (above 30 Hz) have been observed in sensory and higher-order brain areas, and are believed to constitute a general hallmark of functional neuronal activation. Fast inhibition in interneuronal networks has been suggested as a general mechanism for the generation of high-frequency oscillations. Certain classes of interneurons exhibit subthreshold oscillations, but the effect of this intrinsic neuronal property on the population rhythm is not completely understood. We study the influence of intrinsic damped subthreshold oscillations on the emergence of collective high-frequency oscillations, and elucidate the dynamical mechanisms that underlie this phenomenon. We simulate neuronal networks composed of either Integrate-and-Fire (IF) or Generalized Integrate-and-Fire (GIF) neurons. The IF model displays purely passive subthreshold dynamics, while the GIF model exhibits subthreshold damped oscillations. Individual neurons receive inhibitory synaptic currents mediated by spiking activity in their neighbors as well as noisy synaptic bombardment, and fire irregularly at a rate lower than the population frequency. We identify three factors that affect the influence of single-neuron properties on synchronization mediated by inhibition: i) the firing rate response to the noisy background input, ii) the membrane potential distribution, and iii) the shape of Inhibitory Post-Synaptic Potentials (IPSPs). For hyperpolarizing inhibition, the GIF IPSP profile (factor iii) exhibits post-inhibitory rebound, which induces a coherent spike-mediated depolarization across cells that greatly facilitates synchronous oscillations. This effect dominates the network dynamics, hence GIF networks display stronger oscillations than IF networks. However, the restorative current in the GIF neuron lowers firing rates and narrows the membrane potential distribution (factors i and ii, respectively), which tend to decrease synchrony.
If inhibition is shunting instead of hyperpolarizing, post-inhibitory rebound is not elicited and factors i and ii dominate, yielding lower synchrony in GIF networks than in IF networks.
Neurons in the brain engage in collective oscillations at different frequencies. Gamma and high-gamma oscillations (30–100 Hz and higher) have been associated with cognitive functions, and are altered in psychiatric disorders such as schizophrenia and autism. Our understanding of how high-frequency oscillations are orchestrated in the brain is still limited, but it is necessary for the development of effective clinical approaches to the treatment of these disorders. Some neuron types exhibit dynamical properties that can favour synchronization. The theory of weakly coupled oscillators showed how the phase response of individual neurons can predict the patterns of phase relationships that are observed at the network level. However, neurons in vivo do not behave like regular oscillators, but fire irregularly in a regime dominated by fluctuations. Hence, which intrinsic dynamical properties matter for synchronization, and in which regime, is still an open question. Here, we show how single-cell damped subthreshold oscillations enhance synchrony in interneuronal networks by introducing a depolarizing component, mediated by post-inhibitory rebound, that is correlated among neurons due to common inhibitory input.
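The post-inhibitory rebound that distinguishes the GIF from the IF neuron can be seen in a two-variable linear sketch: adding a slow restorative current turns the IPSP response from a monotonic decay into a damped oscillation that overshoots rest. Parameter values here are illustrative, not those of the study.

```python
import numpy as np

def ipsp_response(resonant, dt=0.1, t_end=300.0, tau=10.0,
                  g_w=2.0, tau_w=20.0):
    # Subthreshold membrane response to a brief hyperpolarizing pulse.
    # IF: passive leak only. GIF: adds a slow restorative variable w,
    # producing damped subthreshold oscillations.
    t = np.arange(0.0, t_end, dt)
    v = np.zeros_like(t)
    w = 0.0
    for i in range(1, len(t)):
        stim = -1.0 if t[i-1] < 5.0 else 0.0  # 5 ms hyperpolarizing pulse
        coupling = g_w*w if resonant else 0.0
        v[i] = v[i-1] + dt*(-v[i-1] - coupling + stim)/tau
        if resonant:
            w += dt*(v[i-1] - w)/tau_w
    return v

v_if = ipsp_response(False)
v_gif = ipsp_response(True)
rebound_gif = float(v_gif.max())  # overshoot above rest after the IPSP
rebound_if = float(v_if.max())    # passive recovery: no overshoot
```

The positive overshoot after a shared IPSP is the depolarizing component that, when correlated across cells, promotes synchronous firing; a purely passive membrane produces no such component, only a monotonic return to rest.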
Intrinsic plasticity (IP) is a ubiquitous activity-dependent process regulating neuronal excitability and a cellular correlate of behavioral learning and neuronal homeostasis. Because IP is induced rapidly and maintained long-term, it likely represents a major determinant of adaptive collective neuronal dynamics. However, assessing the exact impact of IP has remained elusive. Indeed, it is extremely difficult to disentangle the complex non-linear interaction between IP effects, by which conductance changes alter neuronal activity, and IP rules, whereby activity modifies conductance via signaling pathways. Moreover, the mechanisms of the two major IP effects on firing rate, threshold and gain modulation, remain unknown. Here, using extensive simulations and sensitivity analysis of Hodgkin-Huxley models, we show that threshold and gain modulation are accounted for by maximal conductance plasticity of conductances that lie in two separate domains of the parameter space, corresponding to sub- and supra-threshold conductances (i.e. activating below or above the spike onset threshold potential). Analyzing equivalent integrate-and-fire models, we provide formal expressions of sensitivities to conductance parameters, unraveling previously uncharacterized mechanisms governing IP effects. Our results generalize to the IP of other conductance parameters and allow strong inference for calcium-gated conductances, yielding a general picture that accounts for a large repertoire of experimental observations. The expressions we provide can be combined with IP rules in rate or spiking models, offering a general framework to systematically assess the computational consequences of IP of pharmacologically identified conductances with both fine-grained description and mathematical tractability. We provide an example of such an IP loop model addressing the important issue of the homeostatic regulation of spontaneous discharge.
Because we make no assumptions about the modification rules, the present theory is also relevant to other neural processes involving excitability changes, such as neuromodulation, development, aging and neural disorders.
Over the past decades, experimental and theoretical studies of the cellular basis of learning and memory have mainly focused on synaptic plasticity, the experience-dependent modification of synapses. However, behavioral learning has also been correlated with experience-dependent changes of non-synaptic voltage-dependent ion channels. This intrinsic plasticity changes the neuron's propensity to fire action potentials in response to synaptic inputs. Thus a fundamental problem is to relate changes of the neuron's input-output function to voltage-gated conductance modifications. Using a sensitivity analysis in biophysically realistic models, we reveal a generic dichotomy between two classes of voltage-dependent ion channels. These two classes modify the threshold and the slope of the neuron's input-output relation, allowing neurons to regulate the range of inputs they respond to and the gain of that response, respectively. We further provide analytical descriptions that illuminate the dynamical mechanisms underlying these effects and propose a concise and realistic framework for assessing the computational impact of intrinsic plasticity in neuron network models. Our results account for a large repertoire of empirical observations and may help explain functional changes that characterize development, aging and several neural diseases, which also involve changes in voltage-dependent ion channels.
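The threshold-versus-gain dichotomy can be caricatured in an integrate-and-fire setting, far cruder than the Hodgkin-Huxley and equivalent integrate-and-fire analyses above and with made-up parameters: changing a subthreshold conductance (here, the leak) shifts the rheobase of the f-I curve, while a spike-triggered (supra-threshold) adaptation conductance leaves the rheobase unchanged and reduces the slope.

```python
import numpy as np

def firing_rate(i_inj, g_leak=0.1, b=0.0, tau_w=100.0, v_th=1.0,
                dt=0.05, t_end=2000.0):
    # LIF (C = 1, arbitrary units) with an optional spike-triggered
    # adaptation current w: each spike increments w by b; w decays
    # with time constant tau_w (ms).
    v, w, n = 0.0, 0.0, 0
    for _ in range(int(t_end/dt)):
        v += dt*(i_inj - g_leak*v - w)
        w += dt*(-w/tau_w)
        if v >= v_th:
            v = 0.0
            w += b
            n += 1
    return n/(t_end/1000.0)  # Hz

currents = np.linspace(0.06, 0.51, 10)
f_base = np.array([firing_rate(i) for i in currents])
f_leak = np.array([firing_rate(i, g_leak=0.2) for i in currents])  # subthreshold change
f_adapt = np.array([firing_rate(i, b=0.02) for i in currents])     # spike-triggered change
```

Doubling the leak doubles the rheobase current, a rightward threshold shift; adding spike-triggered adaptation leaves the rheobase untouched (it only acts after spikes) but strongly reduces the firing rate at high drive, a gain change.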
Correlations in spike-train ensembles can seriously impair the encoding of
information by their spatio-temporal structure. An inevitable source of
correlation in finite neural networks is common presynaptic input to pairs of
neurons. Recent studies demonstrate that spike correlations in recurrent neural
networks are considerably smaller than expected based on the amount of shared
presynaptic input. Here, we explain this observation by means of a linear
network model and simulations of networks of leaky integrate-and-fire neurons.
We show that inhibitory feedback efficiently suppresses pairwise correlations
and, hence, population-rate fluctuations, thereby assigning inhibitory neurons
the new role of active decorrelation. We quantify this decorrelation by
comparing the responses of the intact recurrent network (feedback system) and
systems where the statistics of the feedback channel is perturbed (feedforward
system). Manipulations of the feedback statistics can lead to a significant
increase in the power and coherence of the population response. In particular,
neglecting correlations within the ensemble of feedback channels or between the
external stimulus and the feedback amplifies population-rate fluctuations by
orders of magnitude. The fluctuation suppression in homogeneous inhibitory
networks is explained by a negative feedback loop in the one-dimensional
dynamics of the compound activity. Similarly, a change of coordinates exposes an
effective negative feedback loop in the compound dynamics of stable
excitatory-inhibitory networks. The suppression of input correlations in finite
networks is explained by the population averaged correlations in the linear
network model: In purely inhibitory networks, shared-input correlations are
canceled by negative spike-train correlations. In excitatory-inhibitory
networks, spike-train correlations are typically positive. Here, the suppression
of input correlations is not a result of the mere existence of correlations
between excitatory (E) and inhibitory (I) neurons, but a consequence of a
particular structure of correlations among the three possible pairings (EE, EI, and II).
The spatio-temporal activity pattern generated by a recurrent neuronal network
can provide a rich dynamical basis which allows readout neurons to generate a
variety of responses by tuning the synaptic weights of their inputs. The
repertoire of possible responses and the response reliability become maximal if
the spike trains of individual neurons are uncorrelated. Spike-train
correlations in cortical networks can indeed be very small, even for neighboring
neurons. This seems to be at odds with the finding that neighboring neurons
receive a considerable fraction of inputs from identical presynaptic sources
constituting an inevitable source of correlation. In this article, we show that
inhibitory feedback, abundant in biological neuronal networks, actively
suppresses correlations. The mechanism is generic: It does not depend on the
details of the network nodes and decorrelates networks composed of excitatory
and inhibitory neurons as well as purely inhibitory networks. For the case of
the leaky integrate-and-fire model, we derive the correlation structure
analytically. The new toolbox of formal linearization and a basis transformation
exposing the feedback component is applicable to a range of biological systems.
We confirm our analytical results by direct simulations.
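The negative-feedback explanation of fluctuation suppression can be demonstrated in one dimension, in the spirit of the compound-activity argument above: compare the variance of a population signal that inhibits itself (closed loop) with one that instead receives a statistically identical but independent inhibitory signal (open loop). Parameters are arbitrary illustrative values, not those of the leaky integrate-and-fire simulations.

```python
import numpy as np

def population_variance(closed_loop, g=5.0, tau=10.0, dt=0.1,
                        t_end=5000.0, seed=7):
    # One-dimensional compound-activity model. Closed loop: the population
    # signal x inhibits itself. Open loop: x instead receives an independent
    # signal y with identical statistics (y runs in its own closed loop).
    rng = np.random.default_rng(seed)
    steps = int(t_end/dt)
    noise = rng.standard_normal((steps, 2))*np.sqrt(dt)
    x, y = 0.0, 0.0
    trace = np.empty(steps)
    for i in range(steps):
        y += dt*(-(1.0 + g)*y)/tau + noise[i, 0]/tau
        inhibition = x if closed_loop else y
        x += dt*(-x - g*inhibition)/tau + noise[i, 1]/tau
        trace[i] = x
    return float(trace[steps//10:].var())

var_closed = population_variance(True)
var_open = population_variance(False)  # same input statistics, loop cut
```

The open-loop variance exceeds the closed-loop variance by a large factor: only in the closed loop are the inhibitory fluctuations anti-correlated with the signal's own fluctuations, which is the essence of active decorrelation by inhibitory feedback.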
This article presents a model of grid cell firing based on the intrinsic persistent firing shown experimentally in neurons of entorhinal cortex. In this model, the mechanism of persistent firing allows individual neurons to hold a stable baseline firing frequency. Depolarizing input from speed-modulated head direction cells transiently shifts the frequency of firing from baseline, resulting in a shift in spiking phase in proportion to the integral of velocity. The convergence of input from different persistent firing neurons causes spiking in a grid cell only when the persistent firing neurons are within similar phase ranges. This model effectively simulates the two-dimensional firing of grid cells in open field environments, as well as the properties of theta phase precession. This model provides an alternate implementation of oscillatory interference models. The persistent firing could also interact on a circuit level with rhythmic inhibition and neurons showing membrane potential oscillations to code position with spiking phase. These mechanisms could operate in parallel with computation of position from visual angle and distance of stimuli. In addition to simulating two-dimensional grid patterns, models of phase interference can account for context-dependent firing in other tasks. In network simulations of entorhinal cortex, hippocampus and postsubiculum, the reset of phase effectively replicates context-dependent firing by entorhinal and hippocampal neurons during performance of a continuous spatial alternation task, a delayed spatial alternation task with running in a wheel during the delay period, and a hairpin maze task.
grid cells; place cells; persistent spiking; membrane potential oscillations; theta rhythm; neuromodulation; stellate cells; spatial navigation; entorhinal cortex
Oscillatory interference models propose a mechanism by which the spatial firing pattern of grid cells can arise from the interaction of multiple oscillators that shift in relative phase. These models produce aspects of the physiological data such as the phase precession dynamics observed in grid cells. However, existing oscillatory interference models did not predict the in-field DC shifts in the membrane potential of grid cells that have been observed during intracellular recordings in navigating animals. Here, we demonstrate that DC shifts can be generated in an oscillatory interference model when half-wave rectified oscillatory inputs are summed by a leaky integrate-and-fire neuron with a long membrane decay constant (100 ms). The non-linear mean of the half-wave rectified input signal is reproduced in the grid cell's membrane potential trace, producing the DC shift within the field. For shorter values of the decay constant, integration is more effective if the input signal, comprising input from 6 head direction selective populations, is temporally spread during in-field epochs; this requires that the head direction selective populations act as velocity-controlled oscillators with baseline oscillations that are phase-offset from one another. The resulting simulated membrane potential matches several properties of the empirical intracellular recordings, including: in-field DC shifts; theta-band oscillations; phase precession of both membrane potential oscillations and grid cell spiking activity relative to network theta; and a stronger correlation between DC-shift amplitude and firing rate than between theta-band oscillation amplitude and firing rate. This work serves to demonstrate that oscillatory interference models can account for the DC shifts in the membrane potential observed during intracellular recordings of grid cells without the need to appeal to attractor dynamics.
grid cells; theta phase precession; oscillatory interference model; leaky-integrate-and-fire neuron; oscillations
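The core of the DC-shift mechanism above is elementary and can be checked directly: a half-wave rectified sinusoid has a nonzero mean (amplitude over pi), and a leaky integrator with a long decay constant passes that mean as a sustained depolarization while attenuating the oscillatory component. The sketch below uses arbitrary units and a single rectified theta-frequency input rather than the six head-direction-driven oscillators of the full model.

```python
import numpy as np

def membrane_response(freq_hz=8.0, tau=100.0, dt=0.1, t_end=2000.0):
    # Leaky integration (long decay constant tau, R = 1, arbitrary units)
    # of a half-wave rectified theta-frequency oscillatory input.
    t = np.arange(0.0, t_end, dt)
    drive = np.maximum(0.0, np.sin(2*np.pi*freq_hz*t/1000.0))
    v = np.zeros_like(t)
    for i in range(1, len(t)):
        v[i] = v[i-1] + dt*(-v[i-1] + drive[i-1])/tau
    return t, v

t, v = membrane_response()
steady = v[t > 500.0]                        # discard the initial transient
dc_shift = float(steady.mean())              # rectified mean -> DC shift
ripple = float(steady.max() - steady.min())  # residual theta-band oscillation
```

The steady-state mean of the membrane potential equals the mean of the rectified input (about 0.318 for unit amplitude), reproducing an in-field DC shift, while the residual ripple corresponds to the theta-band membrane potential oscillation riding on top of it.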
Somatostatin-expressing, low threshold-spiking (LTS) cells and fast-spiking (FS) cells are two common subtypes of inhibitory neocortical interneuron. Excitatory synapses from regular-spiking (RS) pyramidal neurons to LTS cells strongly facilitate when activated repetitively, whereas RS-to-FS synapses depress. This suggests that LTS neurons may be especially relevant at high rate regimes and protect cortical circuits against over-excitation and seizures. However, the inhibitory synapses from LTS cells usually depress, which may reduce their effectiveness at high rates. We ask: by which mechanisms and at what firing rates do LTS neurons control the activity of cortical circuits responding to thalamic input, and how is control by LTS neurons different from that of FS neurons? We study rate models of circuits that include RS cells and LTS and FS inhibitory cells with short-term synaptic plasticity. LTS neurons shift the RS firing-rate vs. current curve to the right at high rates and reduce its slope at low rates; the LTS effect is delayed and prolonged. FS neurons always shift the curve to the right and affect RS firing transiently. In an RS-LTS-FS network, FS neurons reach a quiescent state if they receive weak input, LTS neurons are quiescent if RS neurons receive weak input, and both FS and RS populations are active if they both receive large inputs. In general, FS neurons tend to follow the spiking of RS neurons much more closely than LTS neurons. A novel type of facilitation-induced slow oscillation is observed above the LTS firing threshold, with a frequency determined by the time scale of recovery from facilitation. To conclude, contrary to earlier proposals, LTS neurons affect the transient and steady state responses of cortical circuits over a range of firing rates, not only during the high rate regime; LTS neurons protect against over-activation about as well as FS neurons.
The brain consists of circuits of neurons that signal to one another via synapses. There are two classes of neurons: excitatory cells, which cause other neurons to become more active, and inhibitory neurons, which cause other neurons to become less active. It is thought that the activity of excitatory neurons is kept in check largely by inhibitory neurons; when such an inhibitory “brake” fails, a seizure can result. Inhibitory neurons of the low-threshold spiking (LTS) subtype can potentially fulfill this braking, or anticonvulsant, role because the synaptic input to these neurons facilitates, i.e., those neurons are active when excitatory neurons are strongly active. Using a computational model we show that, because the synaptic output of LTS neurons onto excitatory neurons depresses (decreases with activity), the ability of LTS neurons to prevent strong cortical activity and seizures is not qualitatively larger than that of inhibitory neurons of another subtype, the fast-spiking (FS) cells. Furthermore, short-term (∼one second) changes in the strength of synapses to and from LTS interneurons allow them to shape the behavior of cortical circuits even at modest rates of activity, and an RS-LTS-FS circuit is capable of producing slow oscillations, on the time scale of these short-term changes.
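The opposing rate dependence of facilitating RS-to-LTS and depressing RS-to-FS synapses can be sketched with steady-state expressions of a Tsodyks-Markram-style short-term plasticity model. The parameter values are generic illustrations, not fits to the circuits modeled above.

```python
import numpy as np

def steady_efficacy(rate_hz, u_base, tau_f, tau_d):
    # Mean-field steady state of a Tsodyks-Markram-style synapse driven at
    # `rate_hz`: release probability u facilitates toward 1 with time
    # constant tau_f; resources x deplete per spike and recover with tau_d.
    # Per-spike efficacy is proportional to u*x.
    r = rate_hz/1000.0  # spikes per ms
    u_ss = u_base*(1.0 + r*tau_f)/(1.0 + u_base*r*tau_f)
    x_ss = 1.0/(1.0 + u_ss*r*tau_d)
    return u_ss*x_ss

rates = np.arange(1.0, 60.0, 1.0)  # presynaptic firing rate (Hz)
# RS-to-LTS-like: low initial release probability, strong facilitation
fac = np.array([steady_efficacy(r, u_base=0.05, tau_f=500.0, tau_d=100.0)
                for r in rates])
# RS-to-FS-like: high initial release probability, depression only
dep = np.array([steady_efficacy(r, u_base=0.5, tau_f=0.0, tau_d=500.0)
                for r in rates])
```

The per-spike efficacy of the facilitating synapse rises severalfold as the presynaptic rate grows, before resource depletion takes over, whereas the depressing synapse weakens monotonically; this rate dependence, with the slow facilitation time constant, is what makes LTS-mediated effects delayed, prolonged, and most relevant at higher rates.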
Network oscillations typically span a limited range of frequencies. In pacemaker-driven networks, including many Central Pattern Generators (CPGs), this frequency range is determined by the properties of bursting pacemaker neurons and their synaptic connections; thus, factors that affect the burst frequency of pacemaker neurons should play a role in determining the network frequency. We examine the role of membrane resonance of pacemaker neurons in setting the network frequency in the crab pyloric CPG. The pyloric oscillations (frequency ~1 Hz) are generated by a group of pacemaker neurons: the Anterior Burster (AB) and the Pyloric Dilator (PD). We examine the impedance profiles of the AB and PD neurons in response to sinusoidal current injections of varying frequency and find that both neuron types exhibit membrane resonance, i.e. demonstrate maximal impedance at a given preferred frequency. The membrane resonance frequencies of the AB and PD neurons fall within the range of the pyloric network oscillation frequency. Experiments with pharmacological blockers and computational modeling show that both the calcium current ICa and the hyperpolarization-activated inward current Ih are important in producing the membrane resonance in these neurons. We then demonstrate that both the membrane resonance frequency of the PD neuron and its supra-threshold bursting frequency can be shifted in the same direction by either DC current injection or by using the dynamic clamp technique to inject artificial conductances for Ih or ICa. Together, these results suggest that the membrane resonance of pacemaker neurons can be strongly correlated with the CPG oscillation frequency.
Oscillation; central pattern generator; resonance; stomatogastric; model; Ih
Spatiotemporal pattern formation in neuronal networks depends on the interplay between cellular and network synchronization properties. The neuronal phase response curve (PRC) is an experimentally obtainable measure that characterizes the cellular response to small perturbations, and can serve as an indicator of cellular propensity for synchronization. Two broad classes of PRCs have been identified for neurons: Type I, in which small excitatory perturbations induce only advances in firing, and Type II, in which small excitatory perturbations can induce both advances and delays in firing. Interestingly, neuronal PRCs are usually attenuated with increased spiking frequency, and Type II PRCs typically exhibit a greater attenuation of the phase delay region than of the phase advance region. We found that this phenomenon arises from an interplay between the time constants of active ionic currents and the interspike interval. As a result, excitatory networks consisting of neurons with Type I PRCs responded very differently to frequency modulation compared to excitatory networks composed of neurons with Type II PRCs. Specifically, increased frequency induced a sharp decrease in synchrony of networks of Type II neurons, while frequency increases only minimally affected synchrony in networks of Type I neurons. These results are demonstrated in networks in which both types of neurons were modeled generically with the Morris-Lecar model, as well as in networks consisting of Hodgkin-Huxley-based model cortical pyramidal cells in which simulated effects of acetylcholine changed PRC type. These results are robust to different network structures, synaptic strengths and modes of driving neuronal activity, and they indicate that Type I and Type II excitatory networks may display two distinct modes of processing information.
Synchronization of the firing of neurons in the brain is related to many cognitive functions, such as recognizing faces, discriminating odors, and coordinating movement. It is therefore important to understand what properties of neuronal networks promote synchrony of neural firing. One measure that is often used to determine the contribution of individual neurons to network synchrony is called the phase response curve (PRC). PRCs describe how the timing of neuronal firing changes depending on when input, such as a synaptic signal, is received by the neuron. A characteristic of PRCs that has previously not been well understood is that they change dramatically as the neuron's firing frequency is modulated. This effect carries potential significance, since cognitive functions are often associated with specific frequencies of network activity in the brain. We showed computationally that the frequency dependence of PRCs can be explained by the relative timing of ionic membrane currents with respect to the time between spike firings. Our simulations also showed that the frequency dependence of neuronal PRCs leads to frequency-dependent changes in network synchronization that can be different for different neuron types. These results further our understanding of how synchronization is generated in the brain to support various cognitive functions.
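A phase response curve can be measured numerically exactly as described above: deliver a small perturbation at different phases of the unperturbed firing cycle and record the shift of the next spike. The sketch below does this for a leaky integrate-and-fire neuron, a Type I example with illustrative parameters, not for the Morris-Lecar or Hodgkin-Huxley models used in the study.

```python
import numpy as np

def lif_spike_time(i_inj=1.5, tau=10.0, v_th=1.0, dt=0.001,
                   kick=0.01, kick_phase=None):
    # Integrate an LIF from reset (v = 0) to threshold; optionally deliver
    # a small depolarizing kick at a given phase of the unperturbed cycle.
    t_period = tau*np.log(i_inj/(i_inj - v_th))  # unperturbed period (ms)
    t, v = 0.0, 0.0
    kicked = kick_phase is None
    while v < v_th:
        if not kicked and t >= kick_phase*t_period:
            v += kick
            kicked = True
        v += dt*(-v + i_inj)/tau
        t += dt
    return t

t0 = lif_spike_time()  # unperturbed period
phases = np.linspace(0.05, 0.95, 10)
prc = np.array([(t0 - lif_spike_time(kick_phase=p))/t0 for p in phases])
```

The resulting PRC is everywhere positive (excitatory input only advances the next spike, more so late in the cycle), which is the Type I signature; a Type II model would additionally show a region of phase delays for perturbations arriving early in the cycle.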
The ability of spiking neurons to synchronize their activity in a network depends on the response behavior of these neurons, as quantified by the phase response curve (PRC), and on coupling properties. The PRC characterizes the effects of transient inputs on spike timing and can be measured experimentally. Here we use the adaptive exponential integrate-and-fire (aEIF) neuron model to determine how subthreshold and spike-triggered slow adaptation currents shape the PRC. Based on that, we predict how synchrony and phase-locked states of coupled neurons change in the presence of synaptic delays and unequal coupling strengths. We find that increased subthreshold adaptation currents cause a transition of the PRC from only phase advances to phase advances and delays in response to excitatory perturbations. Increased spike-triggered adaptation currents, on the other hand, predominantly skew the PRC to the right. Both adaptation-induced changes of the PRC are modulated by spike frequency, being more prominent at lower frequencies. Applying phase reduction theory, we show that subthreshold adaptation stabilizes synchrony for pairs of coupled excitatory neurons, while spike-triggered adaptation causes locking with a small phase difference, as long as synaptic heterogeneities are negligible. For inhibitory pairs, synchrony is stable and robust against conduction delays, and adaptation can mediate bistability of in-phase and anti-phase locking. We further demonstrate that stable synchrony and bistable in/anti-phase locking of pairs carry over to synchronization and clustering of larger networks. The effects of adaptation in aEIF neurons on PRCs and network dynamics qualitatively reflect those of biophysical adaptation currents in detailed Hodgkin-Huxley-based neurons, which underscores the utility of the aEIF model for investigating the dynamical behavior of networks. Our results suggest neuronal spike frequency adaptation as a mechanism for synchronizing low-frequency oscillations in local excitatory networks, but indicate that inhibition rather than excitation generates coherent rhythms at higher frequencies.
Synchronization of neuronal spiking in the brain is related to cognitive functions, such as perception, attention, and memory. It is therefore important to determine which properties of neurons influence their collective behavior in a network, and to understand how. A prominent feature of many cortical neurons is spike frequency adaptation, which is caused by slow transmembrane currents. We investigated how these adaptation currents affect the synchronization tendency of coupled model neurons. Using the efficient adaptive exponential integrate-and-fire (aEIF) model and a biophysically detailed neuron model for validation, we found that increased adaptation currents promote synchronization of coupled excitatory neurons at lower spike frequencies, as long as the conduction delays between the neurons are negligible. Inhibitory neurons, on the other hand, synchronize in the presence of conduction delays, with or without adaptation currents. Our results emphasize the utility of the aEIF model for computational studies of neuronal network dynamics. We conclude that adaptation currents provide a mechanism to generate low-frequency oscillations in local populations of excitatory neurons, while faster rhythms seem to be caused by inhibition rather than excitation.
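The aEIF model discussed above is compact enough to sketch directly. The parameter values below are generic illustrations, not those fitted in the study; the spike-triggered increment b produces the spike frequency adaptation the summary describes.

```python
# Sketch: adaptive exponential integrate-and-fire (aEIF) neuron with a
# subthreshold adaptation conductance "a" and spike-triggered increment "b".
# Parameter values are illustrative, not taken from the paper.
import numpy as np

def aeif_spike_times(i_ext, a=2.0, b=40.0, t_max=500.0, dt=0.05):
    C, gL, EL, VT, dT = 200.0, 10.0, -70.0, -50.0, 2.0   # pF, nS, mV
    tau_w, v_reset, v_peak = 100.0, -58.0, 0.0           # ms, mV, mV
    v, w, spikes = EL, 0.0, []
    for step in range(int(t_max / dt)):
        dv = (-gL * (v - EL) + gL * dT * np.exp((v - VT) / dT) - w + i_ext) / C
        dw = (a * (v - EL) - w) / tau_w
        v += dt * dv
        w += dt * dw
        if v >= v_peak:            # spike: reset v, increment adaptation w
            v = v_reset
            w += b
            spikes.append(step * dt)
    return spikes

isis = np.diff(aeif_spike_times(i_ext=500.0))   # i_ext in pA
# Spike-triggered adaptation progressively lengthens interspike intervals
assert len(isis) >= 2 and isis[-1] > isis[0]
```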
Accurate timing of action potentials is required for neurons in auditory brainstem nuclei to encode the frequency and phase of incoming sound stimuli. Many such neurons express “high threshold” Kv3-family channels that are required for firing at high rates (>∼200 Hz). Kv3 channels are expressed in gradients along the medial-lateral tonotopic axis of the nuclei. Numerical simulations of auditory brainstem neurons were used to calculate the input-output relations of ensembles of 1–50 neurons, stimulated at rates between 100 and 1500 Hz. Individual neurons with different levels of potassium currents differ in their ability to follow specific rates of stimulation, but all perform poorly when the stimulus rate is greater than the maximal firing rate of the neurons. The temporal accuracy of the combined synaptic output of an ensemble is, however, enhanced by the presence of gradients in Kv3 channel levels over that measured when neurons express uniform levels of channels. Surprisingly, at high rates of stimulation, temporal accuracy is also enhanced by the occurrence of random spontaneous activity, such as is normally observed in the absence of sound stimulation. For any pattern of stimulation, however, the greatest accuracy is observed when, in the presence of spontaneous activity, the levels of potassium conductance in all of the neurons are adjusted to those found in the subset of neurons that respond better than their neighbors. This optimization of response by adjusting the K+ conductance occurs for stimulus patterns containing either single or multiple frequencies in the phase-locking range. The findings suggest that gradients of channel expression are required for normal auditory processing, and that changes in the levels of potassium currents across the nuclei, by mechanisms such as protein phosphorylation and rapid changes in channel synthesis, adapt the nuclei to the ongoing auditory environment.
In order to detect the nature and location of a sound stimulus, neurons in the central auditory system have to fire at very high rates with extreme temporal precision. Specifically, they have to be able to follow changes in an auditory stimulus at rates of up to 2000 Hz or more, and to lock their action potentials to the stimuli with a precision of only a few microseconds. An individual neuron, however, cannot fire at such high rates, and the intrinsic electrical properties of neurons, such as the relative refractory period that follows each action potential, severely limit the accuracy of timing at high rates. The intrinsic excitability of neurons is governed by the potassium channels that they express. Gradients of these channels have been found in auditory brainstem nuclei, such that each neuron typically has a different number of channels than its neighbors. In this study, computational models based on measurements in auditory neurons demonstrate that, in the presence of random spontaneous activity such as is normally observed in auditory neurons, rapid adjustments of the levels of potassium current within neurons along the gradient are required to allow the ensemble to transmit accurate timing information. The findings suggest that regulation of potassium channels within gradients is an integral component of auditory processing.
Fundamental properties of phasic firing neurons are usually characterized in a noise-free condition. In the absence of noise, phasic neurons exhibit Class 3 excitability, i.e., a lack of repetitive firing in response to steady current injections. For time-varying inputs, phasic neurons are band-pass filters or slope detectors, because they do not respond to inputs containing exclusively low frequencies or shallow slopes. However, we show that in noisy conditions, the response properties of phasic neuron models are distinctly altered. Noise enables a phasic model to encode low-frequency inputs that are outside the response range of the associated deterministic model. Interestingly, this seemingly stochastic-resonance (SR) like effect differs significantly from the classical SR behavior of spiking systems in both the signal-to-noise ratio and the temporal response pattern. Instead of being most sensitive to the peak of a subthreshold signal, as is typical in a classical SR system, phasic models are most sensitive to the signal's rising and falling phases, where the slopes are steep. This finding is consistent with the fact that there is no absolute input threshold in terms of amplitude; rather, a response threshold is more properly defined as a stimulus slope/frequency. We call the encoding of low-frequency signals with noise by phasic models a slope-based SR, because noise can lower or diminish the slope threshold for ramp stimuli. We demonstrate similar behaviors in three mechanistic models with Class 3 excitability in the presence of slowly varying noise, and we suggest that slope-based SR is a fundamental behavior associated with general phasic properties rather than with a particular biological mechanism.
Principal brain cells, called neurons, show a tremendous amount of diversity in their responses to driving stimuli. A widely present but understudied class of neurons prefers to respond to high-frequency inputs and neglect slow variations; these cells are called phasic neurons. Although phasic neurons do not normally respond to slow signals, we show that noise, a ubiquitous neural input, can enable them to respond to distinct features of slow signals. We emphasize the fact that, in the presence of noise, they are still sensitive to the change in stimulus, rather than to the constant part of the slow inputs, just as they are for fast inputs without noise. This feature distinguishes the response of phasic neurons from those of other neurons, which show more sensitivity to the amplitude of their inputs. We believe that our study has significantly broadened the understanding about the information-processing ability and functional roles of phasic neurons.
The responses of neurons in sensory cortex depend on the summation of excitatory and inhibitory synaptic inputs. How the excitatory and inhibitory inputs scale with the stimulus depends on the network architecture, which ranges from the lateral inhibitory configuration, where excitatory inputs are more narrowly tuned than inhibitory inputs, to the co-tuned configuration, where both are tuned equally. The underlying circuitry that gives rise to lateral inhibition and co-tuning remains unclear. Using large-scale network simulations with experimentally determined connectivity patterns, together with rate-model simulations, we show that the spatial extent of the input determined the configuration: there was a smooth transition from lateral inhibition with narrow input to co-tuning with broad input. The transition from lateral inhibition to co-tuning was accompanied by shifts in overall gain (reduced), output firing pattern (from tonic to phasic), and rate-level functions (from non-monotonic to monotonically increasing). The results suggest that a single cortical network architecture could account for the extended range of experimentally observed response types between the extremes of lateral inhibitory and co-tuned configurations.
The cerebral cortex contains a network of electrically active cells (neurons) interconnected by synapses, which may be excitatory (tending to increase activity) or inhibitory. Network activity, i.e., the ensemble of activity patterns of the individual cells, is driven by input from the sense organs, and creates an internal representation of features of the outside world. In auditory cortex, sound frequency (pitch) is encoded by the physical location of activity in the network. Thus, connections among cells at various distances may blur or sharpen the frequency representation. Recent work in living animals has yielded conflicting results: sharpening of responses via lateral inhibition in some cases, versus balanced excitation and inhibition (co-tuning) in others. It was previously unknown whether a single cortical network architecture could account for this spectrum of findings. Here, computer simulations based on experimental data reveal that this is indeed the case. Varying input to the network causes smooth transitions between lateral inhibition and co-tuning, accompanied by changes in the strength and timing of the responses. Diverse input-dependent response patterns in a single network may be a general mechanism enabling the brain to process a wide range of sensory information under various conditions.
Neurons in the ventral cochlear nucleus (VCN) that respond primarily at the onset of a pure tone stimulus show diversity in terms of peri-stimulus time histograms (PSTHs), rate-level functions, frequency tuning, and their responses to broad-band noise. A number of different mechanisms have been proposed as contributing to the onset characteristic, e.g. coincidence detection, depolarisation block, and low-threshold potassium currents. We show that a simple point neuron receiving convergent inputs from high-spontaneous-rate auditory nerve (AN) fibers, with no special currents and no peri-stimulatory shifts in firing threshold, is sufficient to produce much of the diversity seen experimentally. Three sub-classes of onset PSTHs, onset-ideal (OI), onset-chopper (OC) and onset-locker (OL), are reproduced by variations in innervation patterns and dendritic filtering. The factors shaping responses were explored by systematically varying key parameters. An OI response in this model requires a narrow range of AN input best frequencies (BF), which only produce supra-threshold depolarizations during the stimulus onset. For OC and OL responses, receptive fields were wider. Considerable low-pass filtering of AN inputs away from BF results in an OL response, whilst relatively unfiltered inputs produce an OC response. Rate-level functions in response to pure tones can be sloping or plateau-shaped. These can also be reproduced in the model by manipulation of the AN innervation. The model supports the coincidence detection hypothesis, and suggests that differences in excitatory innervation and dendritic filtering are important factors to consider when accounting for the variation in response characteristics seen in VCN onset units.
Onset; Stellate; Cochlear nucleus; Point-neuron; PSTH; Rate-level functions
Despite the high prevalence of anxiety accompanying chronic pain, the mechanisms underlying pain-related anxiety are largely unknown. With its well-documented role in pain and emotion processing, the amygdala may act as a key player in the pathogenesis of neuropathic pain-related anxiety. Pain-related plasticity and sensitization of neurons in the central nucleus of the amygdala (CeA) have been shown in several models of chronic pain. In addition, the firing pattern of spiking neurons can powerfully affect the functional output of a brain nucleus, and GABAergic neurons are crucial in the modulation of neuronal excitability. In this study, we first investigated whether pain-related plasticity (e.g., alteration of neuronal firing patterns) and sensitization of CeA neurons contribute to nerve injury-evoked anxiety in neuropathic rats. We then explored whether GABAergic disinhibition is responsible for regulating the firing patterns and intrinsic excitability of CeA neurons, as well as for pain-related anxiety, in neuropathic rats.
We discovered that spinal nerve ligation (SNL) produced neuropathic pain-related anxiety-like behaviors in rats, which could be specifically inhibited by intra-CeA administration of the anti-anxiety drug diazepam. Moreover, we found potentiated plasticity and sensitization of CeA neurons in SNL-induced anxiety rats, including: 1) increased burst firing and early-adapting firing patterns; 2) increased spike frequency and intrinsic excitability; 3) increased amplitude of both the after-depolarization potential (ADP) and sub-threshold membrane potential oscillations. In addition, we observed a remarkable reduction of GABAergic inhibition in CeA neurons of SNL-induced anxiety rats, which proved important for the altered firing patterns and hyperexcitability of CeA neurons, thereby contributing substantially to the development of neuropathic pain-related anxiety. Accordingly, enhancing GABAergic inhibition by intra-CeA administration of muscimol, a selective GABAA receptor agonist, inhibited SNL-induced anxiety-like behaviors in neuropathic rats. By contrast, suppressing GABAergic inhibition by intra-CeA administration of bicuculline, a selective GABAA receptor antagonist, produced anxiety-like behavior in normal rats.
This study suggests that reduction of GABAergic inhibition may be responsible for potentiated plasticity and sensitization of CeA neurons, which likely underlie the enhanced output of amygdala and neuropathic pain-related anxiety in SNL rats.
Electronic supplementary material
The online version of this article (doi:10.1186/s13041-014-0072-z) contains supplementary material, which is available to authorized users.
Anxiety; Neuropathic pain; Firing pattern; CeA; GABA
In the olfactory bulb, lateral inhibition mediated by granule cells has been suggested to modulate the timing of mitral cell firing, thereby shaping the representation of input odorants. Current experimental techniques, however, do not enable a clear study of how the mitral-granule cell network sculpts odor inputs to represent odor information spatially and temporally. To address this critical step in the neural basis of odor recognition, we built a biophysical network model of mitral and granule cells, corresponding to 1/100th of the real system in the rat, and used direct experimental imaging data of glomeruli activated by various odors. The model allows the systematic investigation and generation of testable hypotheses of the functional mechanisms underlying odor representation in the olfactory bulb circuit. Specifically, we demonstrate that lateral inhibition emerges within the olfactory bulb network through recurrent dendrodendritic synapses when constrained by a range of balanced excitatory and inhibitory conductances. We find that the spatio-temporal dynamics of lateral inhibition plays a critical role in building the glomerular-related cell clusters observed in experiments, through the modulation of synaptic weights during odor training. Lateral inhibition also mediates the development of sparse and synchronized spiking patterns of mitral cells related to odor inputs within the network, with the frequency of these synchronized spiking patterns also modulated by the sniff cycle.
In this paper we address the role of lateral inhibition in a neuronal network. It is an essential and widespread mechanism of neural processing that has been demonstrated in many brain systems. Revealing how, and to what extent, it can modulate input signals and give rise to some form of perception would require network-wide recordings of individual cells during in vivo behavioral experiments. While this problem has been intensely investigated, recording from a sufficient set of cells to decipher the emergent properties and behavior of the network remains beyond current experimental methods, leaving the underlying computational and functional roles of lateral inhibition still poorly understood. We addressed this problem using a large-scale model of the olfactory bulb. The model demonstrates how lateral inhibition modulates the evolving dynamics of the olfactory bulb network, generating mitral and granule cell responses that account for critical experimental findings. It also suggests how odor identity can be represented by a combination of temporal and spatial patterns of mitral cell activity, with both feedforward excitation and lateral inhibition via dendrodendritic synapses as the underlying mechanisms facilitating network self-organization and the emergence of synchronized oscillations.
We propose a model of the primary auditory cortex (A1), in which each iso-frequency column is represented by a recurrent neural network with short-term synaptic depression. Such networks can emit Population Spikes, in which most of the neurons fire synchronously for a short time period. Different columns are interconnected in a way that reflects the tonotopic map in A1, and population spikes can propagate along the map from one column to the next, in a temporally precise manner that depends on the specific input presented to the network. The network, therefore, processes incoming sounds by precise sequences of population spikes that are embedded in a continuous asynchronous activity, with both of these response components carrying information about the inputs and interacting with each other. With these basic characteristics, the model can account for a wide range of experimental findings. We reproduce neuronal frequency tuning curves, whose width depends on the strength of the intracortical inhibitory and excitatory connections. Non-simultaneous two-tone stimuli show forward masking depending on their temporal separation, as well as on the duration of the first stimulus. The model also exhibits non-linear suppressive interactions between sub-threshold tones and broad-band noise inputs, similar to the hypersensitive locking suppression recently demonstrated in auditory cortex. We derive several predictions from the model. In particular, we predict that spontaneous activity in primary auditory cortex gates the temporally locked responses of A1 neurons to auditory stimuli. Spontaneous activity could, therefore, be a mechanism for rapid and reversible modulation of cortical processing.
auditory processing; neural networks; synaptic depression; synchronization
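The forward-masking behavior described above rests on short-term synaptic depression. A minimal resource-depletion sketch (with illustrative Tsodyks-Markram-style parameters, not those of the model) captures the basic effect:

```python
# Sketch: short-term synaptic depression as a depleting resource variable.
# U and tau_rec are illustrative parameters, not the paper's values.
import math

def depressed_psps(spike_times, U=0.5, tau_rec=800.0):
    """Relative PSP amplitude for each spike in a train (times in ms)."""
    x, last_t, amps = 1.0, None, []
    for t in spike_times:
        if last_t is not None:
            # resources recover exponentially between spikes
            x = 1.0 - (1.0 - x) * math.exp(-(t - last_t) / tau_rec)
        amps.append(U * x)
        x -= U * x            # each spike consumes a fraction U of resources
        last_t = t
    return amps

train = [0.0, 50.0, 100.0, 150.0]       # 20 Hz train
amps = depressed_psps(train)
# Successive PSPs shrink: the hallmark of depression behind forward masking
assert all(a2 < a1 for a1, a2 in zip(amps, amps[1:]))
```

A longer gap between the two tones lets x recover toward 1, which is why masking in the model weakens with temporal separation.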
In the hippocampus and the neocortex, the coupling between local field potential (LFP) oscillations and the spiking of single neurons can be highly precise, across neuronal populations and cell types. Spike phase (i.e., the spike time with respect to a reference oscillation) is known to carry reliable information, both with phase-locking behavior and with more complex phase relationships, such as phase precession. How this precision is achieved by neuronal populations, whose membrane properties and total input may be quite heterogeneous, is nevertheless unknown. In this note, we investigate a simple mechanism for learning precise LFP-to-spike coupling in feed-forward networks – the reliable, periodic modulation of presynaptic firing rates during oscillations, coupled with spike-timing dependent plasticity. When oscillations are within the biological range (2–150 Hz), firing rates of the inputs change on a timescale highly relevant to spike-timing dependent plasticity (STDP). Through analytic and computational methods, we find points of stable phase-locking for a neuron with plastic input synapses. These points correspond to precise phase-locking behavior in the feed-forward network. The location of these points depends on the oscillation frequency of the inputs, the STDP time constants, and the balance of potentiation and de-potentiation in the STDP rule. For a given input oscillation, the balance of potentiation and de-potentiation in the STDP rule is the critical parameter that determines the phase at which an output neuron will learn to spike. These findings are robust to changes in intrinsic post-synaptic properties. Finally, we discuss implications of this mechanism for stable learning of spike-timing in the hippocampus.
spike-timing dependent plasticity; oscillations; phase-locking; stable learning; stability of neuronal plasticity; place fields
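The additive STDP mechanism analyzed above can be illustrated with a small sketch. The window amplitudes and 20 ms time constants below are generic illustrations, not the paper's values; potentiation slightly outweighs depression so that a stable locking phase emerges, as the note describes.

```python
# Sketch: additive STDP window and the net weight drift it produces when
# pre- and postsynaptic spikes are locked to a common oscillation.
# (Illustrative amplitudes/time constants, not the paper's parameters.)
import numpy as np

def stdp(dt_ms, a_plus=1.0, a_minus=0.55, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a single spike-time difference dt = t_post - t_pre."""
    return np.where(dt_ms > 0,
                    a_plus * np.exp(-dt_ms / tau_plus),     # pre before post
                    -a_minus * np.exp(dt_ms / tau_minus))   # post before pre

# Pre spikes at phase 0 of a 10 Hz oscillation; post spike at phase phi.
period = 100.0
phis = np.linspace(0.05, 0.95, 19)
# The post spike pairs with the pre spike in its own cycle (dt > 0) and
# with the pre spike of the next cycle (dt < 0); sum both contributions.
drift = [stdp(p * period) + stdp((p - 1) * period) for p in phis]
# Early phases potentiate, late phases depress: drift crosses zero at a
# stable locking phase set by the potentiation/depression balance.
assert drift[0] > 0 and drift[-1] < 0
```

Shifting the balance a_plus/a_minus moves the zero crossing, which is the sense in which the balance of potentiation and depression determines the learned spike phase.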
Computational modeling and experimentation in a model system for network dynamics reveal how network phase relationships are temperature-compensated in terms of their underlying synaptic and intrinsic membrane currents.
Most animal species are cold-blooded, and their neuronal circuits must maintain function despite environmental temperature fluctuations. The central pattern generating circuits that produce rhythmic motor patterns depend on the orderly activation of circuit neurons. We describe the effects of temperature on the pyloric rhythm of the stomatogastric ganglion of the crab, Cancer borealis. The pyloric rhythm is a triphasic motor pattern in which the Pyloric Dilator (PD), Lateral Pyloric (LP), and Pyloric (PY) neurons fire in a repeating sequence. While the frequency of the pyloric rhythm increased about 4-fold (Q10∼2.3) as the temperature was shifted from 7°C to 23°C, the phase relationships of the PD, LP, and PY neurons showed almost perfect temperature compensation. The Q10's of the input conductance, synaptic currents, transient outward current (IA), and the hyperpolarization-activated inward current (Ih), all of which help determine the phase of LP neuron activity, ranged from 1.8 to 4. We studied the effects of temperature in >1,000 computational models (with different sets of maximal conductances) of a bursting neuron and the LP neuron. Many bursting models failed to monotonically increase in frequency as temperature increased. Temperature compensation of LP neuron phase was facilitated when model neurons' currents had Q10's close to 2. Together, these data indicate that although diverse sets of maximal conductances may be found in identified neurons across animals, there may be strong evolutionary pressure to restrict the Q10's of the processes that contribute to temperature compensation of neuronal circuits.
The neural circuits that produce behaviors such as walking, chewing, and swimming must be both robust and flexible to changing internal and environmental demands. How then do cold-blooded animals cope with temperature fluctuations when the underlying processes that give rise to circuit performance are themselves temperature-dependent? We exploit the crab stomatogastric ganglion to understand the extent to which circuit features are robust to temperature perturbations. We subjected these circuits to temperature ranges they normally encounter in the wild. Interestingly, while the frequency of activity in the network increased 4-fold over these temperature ranges, the relative timing between neurons in the network—termed phase relationships—remained constant. To understand how temperature compensation of phase might occur, we characterized the temperature dependence (Q10's) of synapses and membrane currents. We used computational models to show that the experimentally measured Q10's can promote phase maintenance. We also showed that many model bursting neurons fail to burst over the entire temperature range and that phase maintenance is promoted by closely restricting the model neurons' Q10's. These results imply that although ion channel numbers can vary between individuals, there may be strong evolutionary pressure that restricts the temperature dependence of the processes that contribute to temperature compensation of neuronal circuits.
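The Q10 scaling at the center of this analysis is simple to state. The sketch below (illustrative numbers, not the measured currents) shows both the roughly 4-fold frequency increase at Q10 ≈ 2.3 and why matched Q10's leave phase unchanged:

```python
# Sketch: Q10 temperature scaling. A process with rate r at T_ref speeds up
# by a factor Q10**((T - T_ref)/10). Values below are illustrative.
def q10_scale(rate_ref, q10, temp, temp_ref=7.0):
    return rate_ref * q10 ** ((temp - temp_ref) / 10.0)

# Cycle frequency with Q10 ~ 2.3 over the 7 C -> 23 C range used in the study:
f_cold = 1.0                            # Hz at 7 C (illustrative)
f_warm = q10_scale(f_cold, 2.3, 23.0)   # roughly a 4-fold increase
assert 3.5 < f_warm < 4.0

# Phase = latency / period. If the latency-setting process and the cycle
# period share the same Q10, phase is temperature-invariant:
latency_cold, period_cold = 0.25, 1.0   # s (illustrative)
speedup = q10_scale(1.0, 2.0, 23.0)
phase_cold = latency_cold / period_cold
phase_warm = (latency_cold / speedup) / (period_cold / speedup)
assert abs(phase_warm - phase_cold) < 1e-12
```

This is the intuition behind the finding that compensation of LP neuron phase is facilitated when the model currents' Q10's cluster near a common value.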
Oscillatory activity in neuronal networks correlates with different behavioral states throughout the nervous system, and the frequency-response characteristics of individual neurons are believed to be critical for network oscillations. Recent in vivo studies suggest that neurons experience periods of high membrane conductance, and that action potentials are often driven by membrane-potential fluctuations in the living animal. To investigate the frequency-response characteristics of CA1 pyramidal neurons in the presence of high conductance and voltage fluctuations, we performed dynamic-clamp experiments in rat hippocampal brain slices. We drove neurons with noisy stimuli that included a sinusoidal component ranging, in different trials, from 0.1 to 500 Hz. In subsequent data analysis, we determined action potential phase-locking profiles with respect to background conductance, average firing rate, and frequency of the sinusoidal component. We found that background conductance and firing rate qualitatively change the phase-locking profiles of CA1 pyramidal neurons vs. frequency. In particular, higher average spiking rates promoted band-pass profiles, and the high-conductance state promoted phase-locking at frequencies well above what would be predicted from changes in the membrane time constant. Mechanistically, spike-rate adaptation and frequency resonance in the spike-generating mechanism are implicated in shaping the different phase-locking profiles. Our results demonstrate that CA1 pyramidal cells can actively change their synchronization properties in response to global changes in activity associated with different behavioral states.
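Phase-locking of spikes to a sinusoidal stimulus component, as measured above, is commonly quantified by vector strength. A minimal sketch on synthetic spike trains (the measure is standard, but the parameters are illustrative and this is not the paper's analysis pipeline):

```python
# Sketch: vector strength as a measure of spike phase-locking to a
# sinusoidal drive. Frequencies and jitter below are illustrative.
import numpy as np

def vector_strength(spike_times, freq_hz):
    """1 = perfect locking to the oscillation, 0 = no phase preference."""
    phases = 2 * np.pi * freq_hz * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

f = 8.0                                      # theta-band sinusoid (Hz)
locked = np.arange(20) / f + 0.01            # one spike per cycle, fixed phase
jittered = locked + np.random.default_rng(0).normal(0, 0.02, 20)
assert vector_strength(locked, f) > 0.999
assert vector_strength(jittered, f) < vector_strength(locked, f)
```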
How oscillatory brain rhythms alone, or in combination, influence cortical information processing to support learning has yet to be fully established. Local field potential and multi-unit neuronal activity recordings were made from 64-electrode arrays in the inferotemporal cortex of conscious sheep during and after visual discrimination learning of face or object pairs. A neural network model has been developed to simulate and aid functional interpretation of learning-evoked changes.
Following learning, the amplitude of theta (4-8 Hz), but not gamma (30-70 Hz), oscillations was increased, as was the ratio of theta to gamma. Over 75% of electrodes showed significant coupling between theta phase and gamma amplitude (theta-nested gamma). The strength of this coupling was also increased following learning, and this was not simply a consequence of increased theta amplitude. Discrimination performance was significantly correlated with the changes in theta and theta-gamma coupling. Neuronal activity was phase-locked with theta, but learning had no effect on firing rates or on the magnitude or latencies of visual evoked potentials during stimuli. The neural network model showed that a combination of fast and slow inhibitory interneurons could generate theta-nested gamma. Increasing N-methyl-D-aspartate receptor sensitivity in the model produced changes similar to those observed in inferotemporal cortex after learning. The model showed that these changes could potentiate the firing of downstream neurons through a temporal desynchronization of excitatory neuron output, without increasing the firing frequencies of those neurons. This desynchronization effect was confirmed in IT neuronal activity following learning, and its magnitude was correlated with discrimination performance.
Face discrimination learning produces significant increases in both theta amplitude and the strength of theta-gamma coupling in the inferotemporal cortex, and these increases are correlated with behavioral performance. A network model that reproduces these changes suggests that a key function of such learning-evoked alterations in theta and theta-nested gamma activity may be increased temporal desynchronization of neuronal firing, leading to optimal timing of inputs to downstream neural networks and potentiating their responses. In this way, learning can produce potentiation in neural networks simply by altering the temporal pattern of their inputs.
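Theta-nested gamma of the kind reported above is commonly quantified as phase-amplitude coupling. A minimal binned estimate on synthetic data (all signal parameters are illustrative, and this is not the recording analysis used in the study):

```python
# Sketch: detecting theta-nested gamma as a dependence of gamma amplitude
# on theta phase, using a simple binned estimate on synthetic data.
import numpy as np

fs, dur = 1000.0, 10.0
t = np.arange(0, dur, 1 / fs)
theta_phase = 2 * np.pi * 6.0 * t                  # 6 Hz theta phase
gamma_amp = 1.0 + 0.8 * np.cos(theta_phase)        # gamma strongest at theta peak
gamma = gamma_amp * np.sin(2 * np.pi * 40.0 * t)   # 40 Hz gamma bursts

# Bin theta phase into 8 bins and average gamma magnitude within each bin
bins = np.digitize(theta_phase % (2 * np.pi), np.linspace(0, 2 * np.pi, 9)) - 1
mean_amp = np.array([np.abs(gamma[bins == b]).mean() for b in range(8)])
# Coupling: gamma amplitude varies systematically across theta phase bins
assert mean_amp.max() / mean_amp.min() > 2.0
```

In practice the phase and amplitude traces would come from band-pass filtered LFP (e.g. via a Hilbert transform) rather than being constructed directly.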
Synchronized oscillations are commonly observed in many neuronal systems and may play an important role in the response properties of the system. We have studied how spontaneous oscillatory activity affects the responsiveness of a neuronal network, using a neural network model of the visual cortex built from Hodgkin-Huxley type excitatory (E-) and inhibitory (I-) neurons. When the isotropic local E-I and I-E synaptic connections were sufficiently strong, the network commonly generated gamma frequency oscillatory firing patterns in response to random feed-forward (FF) input spikes. This spontaneous oscillatory network activity injects a periodic local current that can amplify a weak synaptic input and enhance the network's responsiveness. When E-E connections were added, we found that the strength of oscillation could be modulated by varying the FF input strength, without any changes in single-neuron properties or interneuron connectivity. The response modulation is proportional to the oscillation strength, which leads to self-regulation such that the cortical network selectively amplifies various FF inputs according to their strength, without requiring any adaptation mechanism. We show that this selective cortical amplification is controlled by E-E cell interactions. We also found that this response amplification is spatially localized, suggesting that the responsiveness modulation may also be spatially selective. This points to a generalized mechanism by which neural oscillatory activity can enhance the selectivity of a neural network to FF inputs.
In the nervous system, information is delivered and processed digitally via voltage spikes transmitted between cells. A neural system is characterized by its input/output spike signal patterns. Generally, a network of neurons shows a very different response pattern than a single neuron does. In some cases, a neural network generates interesting population activities, such as synchronized oscillations, which are thought to modulate the response properties of the network. However, the exact role of these neural oscillations is unknown. We investigated the relationship between oscillatory activity and response modulation in neural networks using computational simulation modeling. We found that the response of the system is significantly modified by the oscillations in the network. In particular, the responsiveness to weak inputs is remarkably enhanced. This suggests that the oscillation can differentially amplify sensory information depending on the input signal conditions. We conclude that a neural network can dynamically modify its response properties through the selective amplification of sensory signals due to oscillatory activity, which may explain some experimental observations and help us to better understand neural