Neurons in areas V2 and V4 exhibit stimulus-specific tuning to single stimuli, and respond at intermediate firing rates when presented with two differentially preferred stimuli (‘pair response’). Selective attention to one of the two stimuli causes the neuron’s firing rate to shift from the intermediate pair response towards the response to the attended stimulus as if it were presented alone. Attention to single stimuli reduces the response threshold of the neuron and increases spike synchronization at gamma frequencies. The intrinsic and network mechanisms underlying these phenomena were investigated in a multi-compartmental biophysical model of a reconstructed cat V4 neuron. Differential stimulus preference was generated through a greater ratio of excitatory to inhibitory synapses projecting from one of two input V2 populations. Feedforward inhibition and synaptic depression dynamics were critical to generating the intermediate pair response. Neuronal gain effects were simulated using gamma-frequency-range correlations in the feedforward excitatory and inhibitory inputs to the V4 neuron. For single preferred stimulus presentations, correlations within the inhibitory population out of phase with correlations within the excitatory input significantly reduced the response threshold of the V4 neuron. The pair response to simultaneously active preferred and non-preferred V2 populations could also undergo an increase or decrease in gain via the same mechanism, where correlations in feedforward inhibition are out of phase with gamma band correlations within the excitatory input corresponding to the attended stimulus. The results of this model predict that top-down attention may bias the V4 neuron’s response using an inhibitory correlation phase-shift mechanism.
selective attention; V4; gain modulation; gamma band synchrony; out of phase inhibition
It has previously been shown that by using spike-timing-dependent plasticity (STDP), neurons can adapt to the beginning of a repeating spatio-temporal firing pattern in their input. In the present work, we demonstrate that this mechanism can be extended to train recognizers for longer spatio-temporal input signals. Using a number of neurons that are mutually connected by plastic synapses and subject to a global winner-takes-all mechanism, chains of neurons can form where each neuron is selective to a different segment of a repeating input pattern, and the neurons are feed-forwardly connected in such a way that both the correct input segment and the firing of the previous neurons are required in order to activate the next neuron in the chain. This is akin to a simple class of finite state automata. We show that nearest-neighbor STDP (where only the pre-synaptic spike most recent to a post-synaptic one is considered) leads to “nearest-neighbor” chains where connections only form between subsequent states in a chain (similar to classic “synfire chains”). In contrast, “all-to-all spike-timing-dependent plasticity” (where all pre- and post-synaptic spike pairs matter) leads to multiple connections that can span several temporal stages in the chain; these connections respect the temporal order of the neurons. It is also demonstrated that previously learnt individual chains can be “stitched together” by repeatedly presenting them in a fixed order. This way longer sequence recognizers can be formed, and potentially also nested structures. Robustness of recognition with respect to speed variations in the input patterns is shown to depend on rise-times of post-synaptic potentials and the membrane noise. It is argued that the memory capacity of the model is high, but could theoretically be increased using sparse codes.
sequence learning; synfire chains; spiking neurons; spike-timing-dependent plasticity; neural automata
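The two plasticity variants contrasted above can be sketched in a few lines. The window parameters and spike trains below are illustrative assumptions, not values from the study, and the nearest-neighbor rule is simplified to consider only the pre-synaptic spike closest to each post-synaptic spike:

```python
import math

# Illustrative additive STDP window (parameter values are assumptions)
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # window time constant (ms)

def stdp_window(dt):
    """Weight change for one (pre, post) spike pair; dt = t_post - t_pre."""
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)    # pre before post -> potentiate
    if dt < 0:
        return -A_MINUS * math.exp(dt / TAU)   # post before pre -> depress
    return 0.0

def dw_all_to_all(pre, post):
    """All-to-all STDP: sum the window over every pre/post spike pairing."""
    return sum(stdp_window(tp - tq) for tq in pre for tp in post)

def dw_nearest_neighbor(pre, post):
    """Simplified nearest-neighbor STDP: only the pre spike closest in time
    to each post spike contributes."""
    total = 0.0
    for tp in post:
        nearest = min(pre, key=lambda tq: abs(tp - tq))
        total += stdp_window(tp - nearest)
    return total

pre_spikes = [0.0, 50.0, 100.0]
post_spikes = [5.0, 55.0, 105.0]   # post consistently lags pre by 5 ms
```

For a consistently causal pre-then-post pattern, both rules potentiate, but the all-to-all sum also accumulates contributions from distant spike pairs spanning several repetitions, mirroring the longer-range connections described above.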
Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on their firing activity. A network-level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depends on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem.
Our brain's ability to perform cognitive processes, such as object identification, problem solving, and decision making, comes from the specific connections between neurons. The neurons carry information as spikes that are transmitted to other neurons via connections with different strengths and propagation delays. Experimentally observed learning rules can modify the strengths of connections between neurons based on the timing of their spikes. The learning that occurs in neuronal networks due to these rules is thought to be vital to creating the structures necessary for different cognitive processes as well as for memory. The spiking rate of populations of neurons has been observed to oscillate at particular frequencies in various brain regions, and there is evidence that these oscillations play a role in cognition. Here, we use analytical and numerical methods to investigate the changes to the network structure caused by a specific learning rule during oscillatory neural activity. We find the conditions under which connections with propagation delays that resonate with the oscillations are strengthened relative to the other connections. We demonstrate that networks learn to oscillate more strongly to oscillations at the frequency they were presented with during learning. We discuss the possible application of these results to specific areas of the brain.
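A rough numerical sketch of the delay-selection effect (not the analytical model of the study) integrates an additive STDP window against the spike-pair lag density induced by common oscillatory rate modulation; all parameter values, and the heuristic lag-density form, are assumptions:

```python
import math

A_P, A_M, TAU_P, TAU_M = 0.01, 0.01, 20.0, 20.0   # STDP window (ms); assumed values
F = 0.01                                          # oscillation frequency in cycles/ms (10 Hz)

def window(lag):
    """Additive STDP window; lag = t_post - t_pre as seen at the synapse (ms)."""
    if lag > 0:
        return A_P * math.exp(-lag / TAU_P)
    if lag < 0:
        return -A_M * math.exp(lag / TAU_M)
    return 0.0

def drift(delay):
    """Expected weight drift for a given axonal delay (ms): integrate the
    window against the lag density 1 + 0.5*cos(2*pi*F*(lag + delay)) that
    common oscillatory rate modulation induces (up to normalization)."""
    total = 0.0
    for i in range(-3000, 3001):                  # lags from -300 to +300 ms
        lag = i * 0.1
        total += window(lag) * (1.0 + 0.5 * math.cos(2 * math.pi * F * (lag + delay))) * 0.1
    return total

delays = list(range(0, 101, 5))                   # 0..100 ms, one oscillation period
drifts = [drift(d) for d in delays]
best_delay = delays[drifts.index(max(drifts))]    # the delay this rule would select
```

Connections whose delays place pre-synaptic spikes at the favorable phase of the oscillation drift upward in weight while others drift down, which is the essence of the resonance described above.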
Neurons in the ventral cochlear nucleus (VCN) that respond primarily at the onset of a pure tone stimulus show diversity in terms of peri-stimulus-time-histograms (PSTHs), rate-level functions, frequency tuning, and also their responses to broad band noise. A number of different mechanisms have been proposed as contributing to the onset characteristic: e.g. coincidence detection, depolarisation block, and low-threshold potassium currents. We show that a simple point neuron receiving convergent inputs from high-spontaneous-rate auditory nerve (AN) fibers, with no special currents and no peri-stimulatory shifts in firing threshold, is sufficient to produce much of the diversity seen experimentally. Three sub-classes of onset PSTHs, onset-ideal (OI), onset-chopper (OC), and onset-locker (OL), are reproduced by variations in innervation patterns and dendritic filtering. The factors shaping responses were explored by systematically varying key parameters. An OI response in this model requires a narrow range of AN input best frequencies (BF) which only produce supra-threshold depolarizations during the stimulus onset. For OC and OL responses, receptive fields were wider. Considerable low-pass filtering of AN inputs away from BF results in an OL response, whilst relatively unfiltered inputs produce an OC response. Rate-level functions in response to pure tones can be sloping or plateau-shaped. These can also be reproduced in the model by manipulating the AN innervation. The model supports the coincidence detection hypothesis, and suggests that differences in excitatory innervation and dendritic filtering are important factors to consider when accounting for the variation in response characteristics seen in VCN onset units.
Onset; Stellate; Cochlear nucleus; Point-neuron; PSTH; Rate-level functions
Catfish detect and identify invisible prey by sensing their ultra-weak electric fields with electroreceptors. Any neuron that deals with small-amplitude input has to overcome sensitivity limitations arising from inherent threshold non-linearities in spike-generation mechanisms. Many sensory cells solve this issue with stochastic resonance, in which a moderate amount of intrinsic noise causes irregular spontaneous spiking activity with a probability that is modulated by the input signal. Here we show that catfish electroreceptors have adopted a fundamentally different strategy. Using a reverse correlation technique in which we take spike interval durations into account, we show that the electroreceptors generate a supra-threshold bias current that results in quasi-periodically produced spikes. In this regime stimuli modulate the interval between successive spikes rather than the instantaneous probability for a spike. This alternative for stochastic resonance combines threshold-free sensitivity for weak stimuli with similar sensitivity for excitations and inhibitions based on single interspike intervals.
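The regime described, a supra-threshold bias producing quasi-periodic spikes whose intervals rather than probabilities are modulated by the stimulus, can be caricatured with a perfect integrator; all numbers below are illustrative assumptions:

```python
def spike_times(stim, bias=1.0, theta=20.0, dt=0.01, t_max=200.0):
    """Perfect integrator with a supra-threshold bias current: it fires
    quasi-periodically even for stim = 0, and a weak stimulus modulates
    the interspike interval rather than the spike probability."""
    v, t, spikes = 0.0, 0.0, []
    while t < t_max:
        v += (bias + stim) * dt
        if v >= theta:
            spikes.append(t)
            v = 0.0
        t += dt
    return spikes

def mean_isi(spikes):
    return (spikes[-1] - spikes[0]) / (len(spikes) - 1)

isi_rest = mean_isi(spike_times(0.0))
isi_exc = mean_isi(spike_times(+0.05))   # weak excitation shortens intervals
isi_inh = mean_isi(spike_times(-0.05))   # weak inhibition lengthens intervals
```

Because the baseline drive is already supra-threshold, excitations and inhibitions of equal size shift the interval in opposite directions with comparable sensitivity, with no threshold non-linearity to overcome.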
In simulating realistic neuronal circuitry composed of diverse types of neurons, we need an elemental spiking neuron model that is capable of not only quantitatively reproducing the spike times of biological neurons given in vivo-like fluctuating inputs, but also qualitatively representing a variety of firing responses to transient current inputs. Simple models based on leaky integrate-and-fire mechanisms have demonstrated that they can be fitted closely to biological neurons. In particular, the multi-timescale adaptive threshold (MAT) model reproduces and predicts precise spike times of regular-spiking, intrinsic-bursting, and fast-spiking neurons under any fluctuating current; however, this model is incapable of reproducing such specific firing responses as inhibitory rebound spiking and resonate spiking. In this paper, we augment the MAT model by adding a voltage dependency term to the adaptive threshold so that the model can exhibit the full variety of firing responses to various transient current pulses while maintaining the high adaptability inherent in the original MAT model. Furthermore, with this addition, our model is actually able to better predict spike times. Despite the augmentation, the model has only four free parameters and, owing to its linearity, is implementable in an efficient algorithm for large-scale simulation, making it well suited as an elemental neuron model for simulating realistic neuronal circuitry.
spiking neuron model; predicting spike times; reproducing firing patterns; leaky integrate-and-fire model; adaptive threshold; MAT model; voltage dependency; threshold variability
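The core MAT mechanism, a non-resetting leaky membrane with a threshold that jumps after each spike and decays on multiple timescales, can be sketched as below. This omits the voltage-dependency augmentation described above, and all parameter values are illustrative assumptions:

```python
import math

# Minimal sketch of the multi-timescale adaptive threshold (MAT) idea:
# the membrane potential is a leaky integrator that is NOT reset at spikes;
# instead the threshold jumps by alpha_j after each spike and decays back
# with time constant tau_j. Parameter values here are assumptions.
TAU_M = 5.0                  # membrane time constant (ms)
OMEGA = 10.0                 # resting threshold (mV)
ALPHAS = (15.0, 3.0)         # fast and slow threshold jumps (mV)
TAUS = (10.0, 200.0)         # threshold decay time constants (ms)
REFRAC = 2.0                 # absolute refractory period (ms)

def mat_spikes(current, t_max=500.0, dt=0.1):
    v, t, spikes = 0.0, 0.0, []
    while t < t_max:
        v += dt * (-v / TAU_M + current)
        theta = OMEGA + sum(a * math.exp(-(t - ts) / tau)
                            for ts in spikes
                            for a, tau in zip(ALPHAS, TAUS))
        if v >= theta and (not spikes or t - spikes[-1] > REFRAC):
            spikes.append(t)
        t += dt
    return spikes

spikes = mat_spikes(current=5.0)
isis = [b - a for a, b in zip(spikes, spikes[1:])]
```

The slowly decaying threshold component accumulates over successive spikes, so the interspike intervals lengthen under constant current, reproducing spike-frequency adaptation with a purely linear membrane.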
Correlations in spike-train ensembles can seriously impair the encoding of
information by their spatio-temporal structure. An inevitable source of
correlation in finite neural networks is common presynaptic input to pairs of
neurons. Recent studies demonstrate that spike correlations in recurrent neural
networks are considerably smaller than expected based on the amount of shared
presynaptic input. Here, we explain this observation by means of a linear
network model and simulations of networks of leaky integrate-and-fire neurons.
We show that inhibitory feedback efficiently suppresses pairwise correlations
and, hence, population-rate fluctuations, thereby assigning inhibitory neurons
the new role of active decorrelation. We quantify this decorrelation by
comparing the responses of the intact recurrent network (feedback system) and
systems where the statistics of the feedback channel is perturbed (feedforward
system). Manipulations of the feedback statistics can lead to a significant
increase in the power and coherence of the population response. In particular,
neglecting correlations within the ensemble of feedback channels or between the
external stimulus and the feedback amplifies population-rate fluctuations by
orders of magnitude. The fluctuation suppression in homogeneous inhibitory
networks is explained by a negative feedback loop in the one-dimensional
dynamics of the compound activity. Similarly, a change of coordinates exposes an
effective negative feedback loop in the compound dynamics of stable
excitatory-inhibitory networks. The suppression of input correlations in finite
networks is explained by the population averaged correlations in the linear
network model: In purely inhibitory networks, shared-input correlations are
canceled by negative spike-train correlations. In excitatory-inhibitory
networks, spike-train correlations are typically positive. Here, the suppression
of input correlations is not a result of the mere existence of correlations
between excitatory (E) and inhibitory (I) neurons, but a consequence of a
particular structure of correlations among the three possible pairings (EE, EI, and II).
The spatio-temporal activity pattern generated by a recurrent neuronal network
can provide a rich dynamical basis which allows readout neurons to generate a
variety of responses by tuning the synaptic weights of their inputs. The
repertoire of possible responses and the response reliability become maximal if
the spike trains of individual neurons are uncorrelated. Spike-train
correlations in cortical networks can indeed be very small, even for neighboring
neurons. This seems to be at odds with the finding that neighboring neurons
receive a considerable fraction of inputs from identical presynaptic sources
constituting an inevitable source of correlation. In this article, we show that
inhibitory feedback, abundant in biological neuronal networks, actively
suppresses correlations. The mechanism is generic: It does not depend on the
details of the network nodes and decorrelates networks composed of excitatory
and inhibitory neurons as well as purely inhibitory networks. For the case of
the leaky integrate-and-fire model, we derive the correlation structure
analytically. The new toolbox of formal linearization and a basis transformation
exposing the feedback component is applicable to a range of biological systems.
We confirm our analytical results by direct simulations.
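A one-dimensional caricature of the negative-feedback argument (not the derivation in the study itself) models the population activity as a leaky AR(1) process driven by shared noise, with and without an inhibitory feedback term; the coefficients are illustrative assumptions:

```python
import random

def population_variance(feedback_gain, steps=20000, seed=1):
    """Variance of a leaky, noise-driven scalar 'population activity'
    with an optional negative feedback term (a cartoon of inhibition)."""
    rng = random.Random(seed)
    x, total, total_sq = 0.0, 0.0, 0.0
    for _ in range(steps):
        noise = rng.gauss(0.0, 1.0)                # shared input fluctuation
        x = 0.9 * x - feedback_gain * x + noise    # leaky dynamics + feedback
        total += x
        total_sq += x * x
    mean = total / steps
    return total_sq / steps - mean * mean

var_feedforward = population_variance(0.0)   # feedback channel cut
var_feedback = population_variance(0.8)      # strong negative feedback
```

The feedback term shrinks the effective autoregressive coefficient, so the same shared noise produces far smaller population-rate fluctuations, the one-dimensional analogue of the suppression described above.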
Spatiotemporal patterning of neuronal activity is considered to be an important feature of cognitive processing in the brain as well as pathological brain states, such as seizures. Here, we investigate complex interactions between intrinsic properties of neurons and network structure in the generation of network spatiotemporal patterning in the context of seizure-like synchrony. We show that membrane excitability properties have differential effects on network activity patterning for different network topologies. We consider excitatory networks consisting of neurons with excitability properties varying between type I and type II that exhibit significantly different spike frequency responses to external current stimulation, especially at firing threshold. We find that networks with type II-like neurons show higher synchronization and bursting capacity across a range of network topologies than corresponding networks with type I-like neurons. These differences in activity patterning are persistent across different network sizes, connectivity strengths, magnitudes of random external input, and the addition of inhibitory interneurons to the network, making them highly likely to be relevant to brain function. Furthermore, we show that heterogeneous networks of mixed cell types show emergent dynamical patterns even for very low mixing ratios. Specifically, the addition of a small percentage of type II-like cells into a network of type I-like cells can markedly change the patterning of network activity. These findings suggest that cellular and network mechanisms can go hand in hand in generating seizure-like discharges, and that a single ictogenic mechanism alone may not be responsible for seizure generation.
network structure; spatiotemporal pattern formation; synchrony; network dynamics; ictogenesis; cellular excitability
The spike activity of cells in some cortical areas has been found to be correlated with reaction times and behavioral responses during two-choice decision tasks. These experimental findings have motivated the study of biologically plausible winner-take-all network models, in which strong recurrent excitation and feedback inhibition allow the network to form a categorical choice upon stimulation. Choice formation corresponds in these models to the transition from the spontaneous state of the network to a state where neurons selective for one of the choices fire at a high rate and inhibit the activity of the other neurons. This transition has been traditionally induced by an increase in the external input that destabilizes the spontaneous state of the network and forces its relaxation to a decision state. Here we explore a different mechanism by which the system can undergo such transitions while keeping the spontaneous state stable, based on an escape induced by finite-size noise from the spontaneous state. This decision mechanism naturally arises for low stimulus strengths and leads to exponentially distributed decision times when the amount of noise in the system is small. Furthermore, we show using numerical simulations that, in this regime, mean decision times depend exponentially on the amplitude of the noise. The escape mechanism thus provides a dynamical basis for the wide range and variability of decision times observed experimentally.
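The noise-induced escape mechanism can be sketched in one dimension, with a stable "spontaneous" fixed point at x = 0 and a barrier at |x| = 1 whose crossing commits the system to a choice. The double-well-style drift and the noise level below are illustrative assumptions, not the network model:

```python
import random

def decision_time(rng, sigma=0.5, dt=0.01, t_max=1000.0):
    """Euler-Maruyama simulation of dx = (-x + x^3) dt + sigma dW:
    x = 0 is stable, and noise occasionally drives x over the barrier."""
    x, t = 0.0, 0.0
    while t < t_max:
        x += (-x + x ** 3) * dt + sigma * dt ** 0.5 * rng.gauss(0.0, 1.0)
        if abs(x) > 1.5:              # well past the barrier: decision made
            return t
        t += dt
    return None                       # no decision within t_max (rare)

rng = random.Random(7)
times = [decision_time(rng) for _ in range(200)]
times = [t for t in times if t is not None]
mean_t = sum(times) / len(times)
median_t = sorted(times)[len(times) // 2]
```

Because escape is a rare-event process, the decision times are broadly spread and right-skewed, approaching an exponential distribution for small noise, consistent with the variability discussed above.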
Synchronized oscillations are commonly observed in many neuronal systems and
might play an important role in the systems' response properties. We have
studied how the spontaneous oscillatory activity affects the responsiveness of a
neuronal network, using a neural network model of the visual cortex built from
Hodgkin-Huxley type excitatory (E-) and inhibitory (I-) neurons. When the
isotropic local E-I and I-E synaptic connections were sufficiently strong, the
network commonly generated gamma frequency oscillatory firing patterns in
response to random feed-forward (FF) input spikes. This spontaneous oscillatory
network activity injects a periodic local current that could amplify a weak
synaptic input and enhance the network's responsiveness. When E-E
connections were added, we found that the strength of oscillation can be
modulated by varying the FF input strength without any changes in single neuron
properties or interneuron connectivity. The response modulation is proportional
to the oscillation strength, which leads to self-regulation such that the
cortical network selectively amplifies various FF inputs according to their
strength, without requiring any adaptation mechanism. We show that this
selective cortical amplification is controlled by E-E cell interactions. We also
found that this response amplification is spatially localized, which suggests
that the responsiveness modulation may also be spatially selective. This
suggests a generalized mechanism by which neural oscillatory activity can
enhance the selectivity of a neural network to FF inputs.
In the nervous system, information is delivered and processed digitally via
voltage spikes transmitted between cells. A neural system is characterized by
its input/output spike signal patterns. Generally, a network of neurons shows a
very different response pattern than that of a single neuron. In some cases, a
neural network generates interesting population activities, such as synchronized
oscillations, which are thought to modulate the response properties of the
network. However, the exact role of these neural oscillations is unknown. We
investigated the relationship between the oscillatory activity and the response
modulation in neural networks using computational simulation modeling. We found
that the response of the system is significantly modified by the oscillations in
the network. In particular, the responsiveness to weak inputs is remarkably
enhanced. This suggests that the oscillation can differentially amplify sensory
information depending on the input signal conditions. We conclude that a neural
network can dynamically modify its response properties by the selective
amplification of sensory signals due to oscillation activity, which may explain
some experimental observations and help us to better understand neural information processing.
In this work we study the detection of weak stimuli by spiking (integrate-and-fire) neurons in the presence of a certain level of noisy background neural activity. Our study focuses on the realistic assumption that the synapses in the network exhibit activity-dependent processes, such as short-term synaptic depression and facilitation. Employing mean-field techniques as well as numerical simulations, we found that there are two possible noise levels that optimize signal transmission. This new finding contrasts with the classical theory of stochastic resonance, which predicts only one optimal level of noise. We found that the complex interplay between the adaptive neuron threshold and the activity-dependent synaptic mechanisms is responsible for this new phenomenology. Our main results are confirmed by employing a more realistic FitzHugh-Nagumo neuron model, which displays threshold variability, as well as by considering more realistic stochastic synaptic models and realistic signals such as Poissonian spike trains.
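For contrast, the classical single-optimum stochastic resonance effect that this two-optima result departs from can be sketched with a bare threshold detector: detection of a subthreshold periodic signal is poor for very small and very large noise and best at an intermediate level. All parameter values are illustrative assumptions:

```python
import math, random

def detection_score(noise_std, seed=3, steps=20000, dt=0.001):
    """Correlation between a subthreshold sinusoid and the threshold
    crossings of signal + noise (a crude signal-transmission measure)."""
    rng = random.Random(seed)
    theta, amp, freq = 1.0, 0.5, 5.0      # threshold; subthreshold signal (amp < theta)
    xs, ys = [], []
    for i in range(steps):
        s = amp * math.sin(2 * math.pi * freq * i * dt)
        xs.append(s)
        ys.append(1.0 if s + rng.gauss(0.0, noise_std) > theta else 0.0)
    n = float(steps)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy) if sx > 0 and sy > 0 else 0.0

scores = {s: detection_score(s) for s in (0.05, 0.4, 5.0)}   # low, medium, high noise
```

With negligible noise the detector never crosses threshold; with excessive noise the output barely depends on the signal; only intermediate noise yields good transmission, a single optimum.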
Fundamental properties of phasic firing neurons are usually characterized in a noise-free condition. In the absence of noise, phasic neurons exhibit Class 3 excitability, which is a lack of repetitive firing to steady current injections. For time-varying inputs, phasic neurons are band-pass filters or slope detectors, because they do not respond to inputs containing exclusively low frequencies or shallow slopes. However, we show that in noisy conditions, response properties of phasic neuron models are distinctly altered. Noise enables a phasic model to encode low-frequency inputs that are outside of the response range of the associated deterministic model. Interestingly, this seemingly stochastic-resonance (SR) like effect differs significantly from the classical SR behavior of spiking systems in both the signal-to-noise ratio and the temporal response pattern. Instead of being most sensitive to the peak of a subthreshold signal, as is typical in a classical SR system, phasic models are most sensitive to the signal's rising and falling phases where the slopes are steep. This finding is consistent with the fact that there is not an absolute input threshold in terms of amplitude; rather, a response threshold is more properly defined as a stimulus slope/frequency. We call the encoding of low-frequency signals with noise by phasic models a slope-based SR, because noise can lower or diminish the slope threshold for ramp stimuli. We demonstrate here similar behaviors in three mechanistic models with Class 3 excitability in the presence of slow-varying noise and we suggest that the slope-based SR is a fundamental behavior associated with general phasic properties rather than with a particular biological mechanism.
Principal brain cells, called neurons, show a tremendous amount of diversity in their responses to driving stimuli. A widely present but understudied class of neurons prefers to respond to high-frequency inputs and neglect slow variations; these cells are called phasic neurons. Although phasic neurons do not normally respond to slow signals, we show that noise, a ubiquitous neural input, can enable them to respond to distinct features of slow signals. We emphasize the fact that, in the presence of noise, they are still sensitive to the change in stimulus, rather than to the constant part of the slow inputs, just as they are for fast inputs without noise. This feature distinguishes the response of phasic neurons from those of other neurons, which show more sensitivity to the amplitude of their inputs. We believe that our study has significantly broadened the understanding about the information-processing ability and functional roles of phasic neurons.
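A cartoon of the slope-based effect replaces the Class 3 models of the study with a unit that fires only when the input's local slope, plus noise, exceeds a slope threshold; all numbers are illustrative assumptions:

```python
import math, random

SLOPE_THETA = 4.0            # slope threshold; the signal's peak slope is pi < 4

def phasic_spike_times(noise_std, seed=11, steps=50000, dt=0.001):
    """Cartoon phasic unit: spikes whenever the noisy local slope of a
    slow 0.5 Hz sinusoid exceeds SLOPE_THETA."""
    rng = random.Random(seed)
    spikes, prev = [], 0.0
    for i in range(1, steps):
        s = math.sin(2 * math.pi * 0.5 * i * dt)   # slow, low-frequency signal
        slope = (s - prev) / dt                    # local slope estimate
        prev = s
        if slope + rng.gauss(0.0, noise_std) > SLOPE_THETA:
            spikes.append(i * dt)
    return spikes

quiet = phasic_spike_times(0.0)   # deterministic: slope never reaches threshold
noisy = phasic_spike_times(1.0)   # noise lowers the effective slope threshold
rising = sum(1 for t in noisy if math.cos(2 * math.pi * 0.5 * t) > 0.0)
falling = len(noisy) - rising
```

Without noise the slow signal evokes no spikes at all; with noise, spikes appear but cluster on the steep rising phase rather than at the signal's peak, which is the signature that distinguishes slope-based stochastic resonance from the classical, amplitude-based kind.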
Many theories of neural network function assume linear summation. This is in apparent conflict with several known forms of non-linearity in real neurons. Furthermore, key network properties depend on the summation parameters, which are themselves subject to modulation and plasticity in real neurons. We tested summation responses as measured by spiking activity in small groups of CA1 pyramidal neurons using permutations of inputs delivered on an electrode array. We used calcium dye recordings as a readout of the summed spiking response of cell assemblies in the network. Each group consisted of 2–10 cells, and the calcium signal from each cell correlated with individual action potentials. We find that the responses of these small cell groups sum linearly, despite previously reported dendritic non-linearities and the thresholded responses of individual cells. This linear summation persisted when input strengths were reduced. Blockade of inhibition shifted responses up toward saturation, but did not alter the slope of the linear region of summation. Long-term potentiation of synapses in the slice also preserved the linear fit, with an increase in absolute response. However, in this case the summation gain decreased, suggesting a homeostatic process for preserving overall network excitability. Overall, our results suggest that cell groups in the CA3-CA1 network robustly follow a consistent set of linear summation and gain-control rules, notwithstanding the intrinsic non-linearities of individual neurons. Cell-group responses remain linear, with well-defined transformations following inhibitory modulation and plasticity. Our measures of these transformations provide useful parameters to apply to neural network analyses involving modulation and plasticity.
linear summation; network computation; robustness; input–output transformation
Contemporary theory of spiking neuronal networks is based on the linear response of the integrate-and-fire neuron model derived in the diffusion limit. We find that for non-zero synaptic weights, the response to transient inputs differs qualitatively from this approximation. The response is instantaneous rather than exhibiting low-pass characteristics, non-linearly dependent on the input amplitude, asymmetric for excitation and inhibition, and is promoted by a characteristic level of synaptic background noise. We show that at threshold the probability density of the potential drops to zero within the range of one synaptic weight and explain how this shapes the response. The novel mechanism is exhibited on the network level and is a generic property of pulse-coupled networks of threshold units.
Our work demonstrates a fast-firing response of nerve cells that has remained unconsidered in network analysis, because it is inaccessible to the otherwise successful linear response theory. For the sake of analytic tractability, this theory assumes infinitesimally weak synaptic coupling. However, realistic synaptic impulses cause a measurable deflection of the membrane potential. Here we quantify the effect of this pulse-coupling on the firing rate and the membrane-potential distribution. We demonstrate how the postsynaptic potentials give rise to a fast, non-linear rate transient present for excitatory, but not for inhibitory, inputs. It is particularly pronounced in the presence of a characteristic level of synaptic background noise. We show that feed-forward inhibition enhances the fast response on the network level. This enables a mode of information processing based on short-lived activity transients. Moreover, the non-linear neural response appears on a time scale that critically interacts with spike-timing dependent synaptic plasticity rules. Our results are derived for biologically realistic synaptic amplitudes, but also extend earlier work based on Gaussian white noise. The novel theoretical framework is generically applicable to any threshold unit governed by a stochastic differential equation driven by finite jumps. Therefore, our results are relevant for a wide range of biological, physical, and technical systems.
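The fast response to a single synaptic impulse of finite size can be illustrated with a jump-driven leaky integrate-and-fire simulation: trials whose membrane potential happens to lie within one synaptic weight of threshold fire immediately when an extra probe EPSP arrives, giving an instantaneous rise in firing probability. The setup and all parameter values below are illustrative assumptions:

```python
import random

W, THETA, TAU = 1.0, 15.0, 10.0        # EPSP size, threshold (mV), tau_m (ms)
RATE_E, RATE_I = 2.0, 1.0              # excitatory / inhibitory event rates per ms

def trial(rng, inject, t_inject=50.0, dt=0.1, t_max=51.0):
    """LIF with finite synaptic jumps; returns True if a spike occurs
    in the 1 ms observation window starting at t_inject."""
    v, t, fired = 0.0, 0.0, False
    while t < t_max:
        v -= v / TAU * dt                         # leak
        if rng.random() < RATE_E * dt: v += W     # excitatory jump
        if rng.random() < RATE_I * dt: v -= W     # inhibitory jump
        if inject and abs(t - t_inject) < dt / 2:
            v += W                                # the extra probe EPSP
        if v >= THETA:
            if t >= t_inject:
                fired = True
            v = 0.0                               # reset after a spike
        t += dt
    return fired

N = 2000
rng = random.Random(5)
p_ctrl = sum(trial(rng, False) for _ in range(N)) / N
rng = random.Random(5)                            # same noise, now with the probe
p_kick = sum(trial(rng, True) for _ in range(N)) / N
```

Reusing the seed pairs each probe trial with its control, so the excess p_kick over p_ctrl directly reflects the probability mass sitting within one synaptic weight of threshold, the quantity the theory above ties to the instantaneous response.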
This article presents a model of grid cell firing based on the intrinsic persistent firing shown experimentally in neurons of entorhinal cortex. In this model, the mechanism of persistent firing allows individual neurons to hold a stable baseline firing frequency. Depolarizing input from speed-modulated head direction cells transiently shifts the frequency of firing from baseline, resulting in a shift in spiking phase in proportion to the integral of velocity. The convergence of input from different persistent firing neurons causes spiking in a grid cell only when the persistent firing neurons are within similar phase ranges. This model effectively simulates the two-dimensional firing of grid cells in open field environments, as well as the properties of theta phase precession. This model provides an alternative implementation of oscillatory interference models. The persistent firing could also interact on a circuit level with rhythmic inhibition and neurons showing membrane potential oscillations to code position with spiking phase. These mechanisms could operate in parallel with computation of position from visual angle and distance of stimuli. In addition to simulating two-dimensional grid patterns, models of phase interference can account for context-dependent firing in other tasks. In network simulations of entorhinal cortex, hippocampus and postsubiculum, the reset of phase effectively replicates context-dependent firing by entorhinal and hippocampal neurons during performance of a continuous spatial alternation task, a delayed spatial alternation task with running in a wheel during the delay period, and a hairpin maze task.
grid cells; place cells; persistent spiking; membrane potential oscillations; theta rhythm; neuromodulation; stellate cells; spatial navigation; entorhinal cortex
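The generic interference idea underlying this family of models (not this specific persistent-firing implementation) can be sketched directly: each oscillator's phase integrates velocity along a preferred direction, which for straight-line displacement reduces to a dot product with position, and the grid cell is active only where the converging phases align. The gain and directions are illustrative assumptions:

```python
import math

BETA = 0.05                                      # phase gain, cycles per cm (assumed)
DIRS = (0.0, 2 * math.pi / 3, 4 * math.pi / 3)   # three preferred directions, 120 deg apart

def grid_activity(x, y):
    """Product of thresholded cosines of the path-integrated phases:
    high only where all three oscillators are simultaneously in phase."""
    act = 1.0
    for d in DIRS:
        phase = 2 * math.pi * BETA * (x * math.cos(d) + y * math.sin(d))
        act *= max(0.0, math.cos(phase))
    return act

spacing = 2.0 / BETA    # with these directions, fields repeat every 40 cm
```

Moving by one lattice vector brings every phase back through a whole number of cycles, so the firing fields tile the plane on a triangular lattice, the hallmark grid pattern.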
We constructed a simulated spiking neural network model to investigate the effects of random background stimulation on the dynamics of network activity patterns and tetanus-induced network plasticity. The simulated model was a “leaky integrate-and-fire” (LIF) neural model with spike-timing-dependent plasticity (STDP) and frequency-dependent synaptic depression. Spontaneous and evoked activity patterns were compared with those of living neuronal networks cultured on multi-electrode arrays. To help visualize activity patterns and plasticity in our simulated model, we introduced new population measures called Center of Activity (CA) and Center of Weights (CW) to describe the spatio-temporal dynamics of network-wide firing activity and network-wide synaptic strength, respectively. Without random background stimulation, the network synaptic weights were unstable and often drifted after tetanization. In contrast, with random background stimulation, the network synaptic weights remained close to their values immediately after tetanization. The simulation suggests that the effects of tetanization on network synaptic weights were difficult to control because of ongoing synchronized spontaneous bursts of action potentials, or “barrages.” Random background stimulation helped maintain network synaptic stability after tetanization by reducing the number and thus the influence of spontaneous barrages. We used our simulated network to model the interaction between ongoing neural activity, external stimulation and plasticity, and to guide our choice of sensory-motor mappings for adaptive behavior in hybrid neural-robotic systems or “hybrots.”
Cultured neural network; spike-timing-dependent plasticity (STDP); frequency-dependent depression; multi-electrode array (MEA); spatio-temporal dynamics; tetanization; model; plasticity; cortex; bursting; population coding
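The Center of Activity measure can be written compactly. A minimal sketch, assuming `rates` holds binned firing per electrode and `positions` the electrode coordinates; the Center of Weights is the same computation with synaptic weights in place of firing rates.

```python
import numpy as np

def center_of_activity(rates, positions):
    """Center of Activity (CA): the firing-rate-weighted mean electrode
    position, one 2-D point per time bin.  `rates` is (T, N) spike counts
    per bin for N electrodes; `positions` is (N, 2) electrode coordinates."""
    total = rates.sum(axis=1, keepdims=True)
    total[total == 0] = 1.0       # silent bins map to the origin, not NaN
    return (rates @ positions) / total
```

Plotting the CA trajectory over the course of a barrage then summarizes the spatio-temporal drift of population activity as a single curve.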
Spike timing reliability of neuronal responses depends on the frequency content of the input. We investigate how intrinsic properties of cortical neurons affect spike timing reliability in response to rhythmic inputs of suprathreshold mean. Analyzing reliability of conductance-based cortical model neurons on the basis of a correlation measure, we show two aspects of how ionic conductances influence spike timing reliability. First, they set the preferred frequency for spike timing reliability, which, in accordance with the resonance effect of spike timing reliability, is well approximated by the firing rate of a neuron in response to the DC component in the input. We demonstrate that a slow potassium current can modulate the spike timing frequency preference over a broad range of frequencies. This result is confirmed experimentally by dynamic-clamp recordings from rat prefrontal cortical neurons in vitro. Second, we provide evidence that ionic conductances also influence spike timing beyond changes in preferred frequency. Cells with the same DC firing rate exhibit more reliable spike timing at the preferred frequency and its harmonics if the slow potassium current is larger and its kinetics are faster, whereas a larger persistent sodium current impairs reliability. We predict that potassium channels are an efficient target for neuromodulators that can tune spike timing reliability to a given rhythmic input.
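A correlation-based reliability measure of the kind referred to above can be sketched as follows. The Gaussian filter width and the normalized-inner-product form are assumptions in the spirit of standard correlation measures, not necessarily the exact measure used in the study.

```python
import numpy as np

def reliability(trials, sigma=0.005, dt=0.001, T=1.0):
    """Reliability across repeated trials: convolve each trial's spike
    train with a Gaussian (width `sigma`, truncated at 3 sigma) and
    average the normalized inner products over all trial pairs.
    `trials` is a list of non-empty spike-time lists (seconds)."""
    t = np.arange(0.0, T, dt)
    kern = np.exp(-0.5 * (np.arange(-3 * sigma, 3 * sigma, dt) / sigma) ** 2)
    filtered = []
    for spikes in trials:
        s = np.zeros_like(t)
        s[np.round(np.asarray(spikes) / dt).astype(int)] = 1.0
        filtered.append(np.convolve(s, kern, mode="same"))
    n, R = len(filtered), 0.0
    for i in range(n):
        for j in range(i + 1, n):
            R += filtered[i] @ filtered[j] / (
                np.linalg.norm(filtered[i]) * np.linalg.norm(filtered[j]))
    return 2.0 * R / (n * (n - 1))
```

The measure is 1 for identical spike trains and falls toward 0 as spike times across trials decorrelate beyond the filter width.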
Stimulus properties, attention, and behavioral context influence correlations between the spike times produced by a pair of neurons. However, the biophysical mechanisms that modulate these correlations are poorly understood. With a combined theoretical and experimental approach, we show that the rate of balanced excitatory and inhibitory synaptic input modulates the magnitude and timescale of pairwise spike train correlation. High-rate synaptic inputs promote spike time synchrony rather than long-timescale spike rate correlations, while low-rate synaptic inputs produce the opposite effect. This correlation shaping is due to a combination of enhanced high frequency input transfer and reduced firing rate gain in the high input rate state compared to the low state. Our study extends neural modulation from single neuron responses to population activity, a necessary step in understanding how the dynamics and processing of neural activity change across distinct brain states.
Neurons in sensory, motor, and cognitive regions of the nervous system integrate synaptic input and output trains of action potentials (spikes). A critical feature of neural computation is the ability of neurons to modulate their spike train response to a given input, allowing task context or past history to affect the flow of information in the brain. The mechanisms that modulate the input-output transfer of single neurons have received significant attention. However, neural computation involves the coordinated activity of populations of neurons, and the mechanisms that modulate the correlation between spike trains from pairs of neurons are relatively unexplored. We show that the level of excitatory and inhibitory input that a neuron receives modulates not only the sensitivity of a single neuron's response to input, but also the magnitude and timescale of correlated spiking activity of pairs of neurons receiving a common synaptic drive. Thus, while modulatory synaptic activity has been traditionally studied from a single neuron perspective, it can also shape the coordinated activity of a population of neurons.
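The magnitude-and-timescale analysis described above rests on spike count correlations measured in counting windows of different widths. A minimal sketch of that measure (the window sweep is illustrative, not the authors' exact analysis): synchrony shows up as correlation at millisecond windows, slow rate co-variation only at long windows.

```python
import numpy as np

def count_correlation(spikes_a, spikes_b, T, window):
    """Pearson correlation of the spike counts of two neurons, computed
    in counting windows of width `window` over the interval [0, T].
    Sweeping `window` reveals the timescale of the correlation."""
    edges = np.arange(0.0, T + window, window)
    ca, _ = np.histogram(spikes_a, edges)
    cb, _ = np.histogram(spikes_b, edges)
    return np.corrcoef(ca, cb)[0, 1]
```

Two perfectly synchronous trains give a correlation of 1 at any window width, while trains that co-vary only slowly lose their correlation as the window shrinks below the co-variation timescale.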
Local field potential (LFP) oscillations are often accompanied by synchronization of activity within a widespread cerebral area. Thus, the LFP and neuronal coherence appear to be the result of a common mechanism that underlies neuronal assembly formation. We used the olfactory bulb as a model to investigate: (1) the extent to which unitary dynamics and LFP oscillations can be correlated and (2) how accurately a model of the hypothesized underlying mechanisms can explain the experimental data. For this purpose, we analyzed simultaneous recordings of mitral cell (MC) activity and LFPs in anesthetized and freely breathing rats in response to odorant stimulation. Spike trains were found to be phase-locked to the gamma oscillation at specific firing rates and to form odor-specific temporal patterns. The use of a conductance-based MC model driven by an approximately balanced excitatory-inhibitory input conductance and a relatively small inhibitory conductance that oscillated at the gamma frequency allowed us to provide one explanation of the experimental data via a mode-locking mechanism. This work sheds light on the way network and intrinsic MC properties participate in the locking of MCs to the gamma oscillation in a realistic physiological context, and on how this locking may give rise to a particular time-locked assembly. Finally, we discuss how a self-synchronization process with such entrainment properties can explain, under experimental conditions: (1) why the gamma bursts emerge transiently with a maximal amplitude position relative to the stimulus time course; (2) why the oscillations are prominent at a specific gamma frequency; and (3) why the oscillation amplitude depends on specific stimulus properties. We also discuss information processing and functional consequences derived from this mechanism.
Olfactory function relies on a chain of neural relays that extends from the periphery to the central nervous system and implies neural activity with various timescales. A central question in neuroscience is how information is encoded by neural activity. In the mammalian olfactory bulb, local neural activity oscillations in the 40–80 Hz range (gamma) may influence the timing of individual neuron activities such that olfactory information may be encoded in this way. In this study, we first characterize in vivo the detailed activity of individual neurons relative to the oscillation and find that, depending on their state, neurons can exhibit periodic activity patterns. We also find, at least qualitatively, a relation between this activity and a particular odor. This is reminiscent of a general physical phenomenon—entrainment by an oscillation—and to verify this hypothesis, in a second phase, we build a biologically realistic model mimicking these in vivo conditions. Our model quantitatively confirms this hypothesis and reveals that entrainment is maximal in the gamma range. Taken together, our results suggest that the neuronal activity may be specifically formatted in time during the gamma oscillation in such a way that it could, at this stage, encode the odor.
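The entrainment hypothesis can be illustrated with a toy conductance-driven integrate-and-fire cell. All parameters below (reversal potentials in threshold units, conductances, the 60 Hz inhibitory oscillation) are assumed values chosen to place the cell near 1:1 locking; this is a caricature, not the biologically realistic model of the study.

```python
import numpy as np

def mc_lif(ge=0.8, gi_mean=0.5, gi_osc=0.1, f=60.0, T=2.0, dt=1e-4):
    """Conductance-driven LIF sketch of a mitral cell: steady excitation
    approximately balanced by inhibition whose small gamma-band
    oscillation entrains the spikes (mode locking).  Returns spike
    phases relative to the gamma cycle, after a 0.5 s transient."""
    tau, v_th, v_reset = 0.02, 1.0, 0.0
    Ee, Ei = 4.0, -1.0   # reversal potentials in threshold units (assumed)
    v, spikes = 0.0, []
    for k in range(int(T / dt)):
        t = k * dt
        gi = gi_mean + gi_osc * np.sin(2 * np.pi * f * t)
        v += (-v + ge * (Ee - v) + gi * (Ei - v)) * dt / tau
        if v >= v_th:
            v = v_reset
            if t > 0.5:
                spikes.append(t)
    return (2 * np.pi * f * np.array(spikes)) % (2 * np.pi)
```

When the cell's intrinsic rate lies inside the locking region, the returned phases cluster tightly, so their vector strength (the magnitude of the mean phase vector) approaches 1.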
Vasopressin neurons, responding to input generated by osmotic pressure, use an intrinsic mechanism to shift from slow irregular firing to a distinct phasic pattern, consisting of long bursts and silences lasting tens of seconds. With increased input, bursts lengthen, eventually shifting to continuous firing. The phasic activity remains asynchronous across the cells and is not reflected in the population output signal. Here we have used a computational vasopressin neuron model to investigate the functional significance of the phasic firing pattern. We generated a concise model of the synaptic-input-driven spike firing mechanism that gives a close quantitative match to vasopressin neuron spike activity recorded in vivo, tested against endogenous activity and experimental interventions. The integrate-and-fire-based model provides a simple physiological explanation of the phasic firing mechanism involving an activity-dependent slow depolarising afterpotential (DAP) generated by a calcium-inactivated potassium leak current. This is modulated by the slower, opposing action of activity-dependent dendritic dynorphin release, which inactivates the DAP, the opposing effects generating successive periods of bursting and silence. Model cells are not spontaneously active, but fire in response to random perturbations mimicking synaptic input. We constructed one population of such phasic neurons, and another population of similar cells that lacked the ability to fire phasically. We then studied how these two populations differed in the way that they encoded changes in afferent inputs. By comparison with the non-phasic population, the phasic population responds linearly to increases in tonic synaptic input. Non-phasic cells respond to transient elevations in synaptic input in a way that strongly depends on background activity levels, whereas phasic cells respond in a way that is independent of background levels and shows a similar strong linearization of the response.
These findings show large differences in information coding between the populations, and apparent functional advantages of asynchronous phasic firing.
Vasopressin is a hormone secreted from specialised brain cells into the bloodstream; it acts at the kidneys to control water excretion and thereby helps regulate osmotic pressure. Osmotic pressure is determined by the balance between body salt and water, and its maintenance is essential to the function of all our cells and organs, which depend on a stable fluid volume and extracellular salt concentration. Specialised cells in the brain sense osmotic pressure and generate electrical signals, which the thousands of vasopressin neurons process and respond to by producing and secreting vasopressin. The individual vasopressin cells generate an interesting phasic pattern of electrical activity in response to rises in osmotic pressure – they fire in long bursts, separated by long silences. In our project we are using modelling to simulate this phasic pattern of electrical activity and how it relates to the input signals, trying to understand exactly why vasopressin cells generate this kind of pattern and what advantages it offers to signal processing and the control of vasopressin secretion.
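The burst mechanism described above (an activity-dependent DAP opposed by slower dynorphin-like inactivation, with noise-driven ignition) can be caricatured in an integrate-and-fire cell. Every constant below is an assumed illustrative value, not a fit to vasopressin data, and the hysteretic gate is a deliberate simplification of the DAP inactivation dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T, tau = 1e-3, 90.0, 0.02
I0, sigma = 0.6, 1.5        # tonic drive and synaptic-noise scale (assumed)
tau_C, tau_D = 2.0, 15.0    # "calcium" (DAP) and "dynorphin" time constants
noise = sigma * np.sqrt(dt) * rng.standard_normal(int(T / dt))
v, C, D, gate, spikes = 0.0, 0.0, 0.0, True, []
for k in range(int(T / dt)):
    I_dap = 0.8 * min(C, 2.0) if gate else 0.0   # capped DAP current
    v += (-v + I0 + I_dap) * dt / tau + noise[k]
    C -= C * dt / tau_C
    D -= D * dt / tau_D
    if v >= 1.0:
        v = 0.0
        C += 1.0     # spike-driven "calcium" sustains the DAP (burst)
        D += 0.005   # slower "dynorphin" accumulation opposes it
        spikes.append(k * dt)
    if gate and D > 1.0:        # dynorphin inactivates the DAP ...
        gate = False
    elif not gate and D < 0.5:  # ... until it has decayed away (hysteresis)
        gate = True

counts, _ = np.histogram(spikes, np.arange(0.0, T + 1.0, 1.0))
```

Binned per second, the spike train alternates between multi-second bursts, while the gate is open and the DAP is regenerative, and silences of roughly ten seconds while dynorphin decays, qualitatively reproducing the phasic pattern.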
The sensory areas of the cerebral cortex possess multiple topographic representations of sensory dimensions. A gradient of frequency selectivity (tonotopy) is the dominant organizational feature in the primary auditory cortex (ACX), while other feature-based organizations are less well established. We probed the topographic organization of the mouse auditory cortex at the single cell level using in vivo two-photon Ca2+ imaging. Tonotopy was present on a large scale but was fractured on a fine scale. Intensity tuning, important in level-invariant representation, was observed in individual cells but was not topographically organized. The presence or near-absence of putative sub-threshold responses revealed a dichotomy in topographic organization. Inclusion of sub-threshold responses revealed a topographic clustering of neurons with similar response properties, while such clustering was absent in supra-threshold responses. This dichotomy indicates that groups of nearby neurons with locally shared inputs can perform independent parallel computations in ACX.
Activity-dependent dendritic Ca2+ signals play a critical role in multiple forms of non-linear cellular output and plasticity. In thalamocortical neurons, despite the well-established spatial separation of sensory and cortical inputs onto proximal and distal dendrites respectively, little is known about the spatio-temporal dynamics of intrinsic dendritic Ca2+ signalling during the different state-dependent firing patterns that are characteristic of these neurons. Here we demonstrate that T-type Ca2+ channels are expressed throughout the entire dendritic tree of rat thalamocortical neurons and that they mediate regenerative propagation of low threshold spikes, typical of, but not exclusive to, sleep states, resulting in global dendritic Ca2+ influx. In contrast, actively backpropagating action potentials, typical of wakefulness, result in smaller Ca2+ influxes that can temporally summate to produce dendritic Ca2+ accumulations which are linearly related to firing frequency but spatially confined to proximal dendritic regions. Furthermore, dendritic Ca2+ transients evoked by both action potentials and low threshold spikes are shaped by Ca2+ uptake by sarco/endoplasmic reticulum Ca2+ ATPases, but do not rely upon Ca2+-induced Ca2+ release. Our data demonstrate that thalamocortical neurons are endowed with intrinsic dendritic Ca2+ signalling properties that are spatially and temporally modified in a behavioural state-dependent manner, and suggest that backpropagating action potentials faithfully inform proximal sensory but not distal corticothalamic synapses of neuronal output whereas corticothalamic synapses only “detect” Ca2+ signals associated with low threshold spikes.
thalamocortical; dendrites; T-type calcium channel; low threshold spike; action potential; calcium extrusion
Cortical inhibition plays an important role in shaping neuronal processing. The underlying synaptic mechanisms remain controversial. Here, in vivo whole-cell recordings from neurons in the rat primary auditory cortex revealed that the frequency tuning curve of inhibitory input was broader than that of excitatory input. This results in relatively stronger inhibition in frequency domains flanking the preferred frequencies of the cell and a significant sharpening of the frequency tuning of membrane responses. The less selective inhibition can be attributed to a broader bandwidth and lower threshold of the spike tonal receptive fields of fast-spiking inhibitory neurons than those of nearby excitatory neurons, although both types of neurons receive similar ranges of excitatory input, and are organized into the same tonotopic map. Thus, the balance between excitation and inhibition is only approximate, and intracortical inhibition with high sensitivity and low selectivity can laterally sharpen the frequency tuning of neurons, ensuring their highly selective representation.
The pattern of connections among cortical excitatory cells with overlapping arbors is non-random. In particular, correlations among connections produce clustering – cells in cliques connect to each other with high probability, but with lower probability to cells in other spatially intertwined cliques. In this study, we model initially randomly connected sparse recurrent networks of spiking neurons with random, overlapping inputs, to investigate what functional and structural synaptic plasticity mechanisms sculpt network connections into the patterns measured in vitro. Our Hebbian implementation of structural plasticity causes a removal of connections between uncorrelated excitatory cells, followed by their random replacement. To model a biconditional discrimination task, we stimulate the network via pairs (A + B, C + D, A + D, and C + B) of four inputs (A, B, C, and D). We find networks that produce neurons most responsive to specific paired inputs – a building block of computation and an essential function of cortex – exhibit the excessive clustering of excitatory synaptic connections observed in cortical slices. The same networks produce the best performance in a behavioral readout of the networks’ ability to complete the task. A plasticity mechanism operating on inhibitory connections, long-term potentiation of inhibition, when combined with structural plasticity, indirectly enhances clustering of excitatory cells via excitatory connections. A rate-dependent (triplet) form of spike-timing-dependent plasticity (STDP) between excitatory cells is less effective and basic STDP is detrimental. Clustering also arises in networks stimulated with single stimuli and in networks undergoing raised levels of spontaneous activity when structural plasticity is combined with functional plasticity.
In conclusion, spatially intertwined clusters or cliques of connected excitatory cells can arise via a Hebbian form of structural plasticity operating in initially randomly connected networks.
structural plasticity; connectivity; Hebbian learning; network; simulation; correlations; STDP; inhibitory plasticity
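A toy version of the Hebbian structural plasticity rule (prune connections between weakly correlated cells, replace them at random) shows how cliques can emerge from initially random connectivity. The two-group structure, correlation threshold, and sample counts are illustrative assumptions, not the spiking-network implementation of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, steps = 60, 0.1, 200
group = np.repeat([0, 1], N // 2)        # two stimulus-defined ensembles
W = (rng.random((N, N)) < p) & ~np.eye(N, dtype=bool)  # random sparse start

def activity():
    # Toy activity: cells in a group share a common input, so within-group
    # pairs are correlated and across-group pairs are not.
    shared = rng.standard_normal(2)
    return shared[group] + 0.5 * rng.standard_normal(N)

for _ in range(steps):
    X = np.stack([activity() for _ in range(20)])   # 20 activity samples
    C = np.corrcoef(X.T)
    # Structural plasticity: remove connections between weakly correlated
    # cells, then replace the same number at random positions.
    prune = W & (C < 0.2)
    W &= ~prune
    n_new = int(prune.sum())
    while n_new > 0:
        i, j = rng.integers(N, size=2)
        if i != j and not W[i, j]:
            W[i, j] = True
            n_new -= 1

same = group[:, None] == group[None, :]
within, across = W[same].mean(), W[~same].mean()   # connection probabilities
```

Because across-group connections are repeatedly pruned while within-group connections survive, the random replacements accumulate inside the groups and the final connectivity is strongly clustered.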
Subthreshold signal detection is an important task for animal survival in complex environments, where noise increases both the external signal response and the spontaneous spiking of neurons. How neurons encode such signals is not well understood. Here, we propose that coincidence detection, one of the ways to describe the functionality of a single neural cell, can improve the reliability and the precision of signal detection through detection of presynaptic input synchrony. Using a simplified neuronal network model composed of dozens of integrate-and-fire neurons and a single coincidence-detector neuron, we show how the network reads out the subthreshold noisy signals reliably and precisely. We find suitable parameter pairings (the threshold and the detection time window of the coincidence-detector neuron) that optimize its precision and reliability. Furthermore, it is observed that the refractory period induces an oscillation in the spontaneous firing, but the neuron can inhibit this activity and improve the reliability and precision further. In the case of intermediate intrinsic states of the input neurons, the network responds to the input more efficiently. These results present the critical link between spiking synchrony and noisy signal transfer, which is utilized in coincidence detection, resulting in an enhanced temporally sensitive coding scheme.
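A minimal sketch of the readout: a coincidence-detector unit that fires whenever at least `theta` presynaptic spikes fall within its detection window. The population size, per-bin spike probabilities (a Bernoulli stand-in for the noisy integrate-and-fire inputs), threshold, and window are assumed values, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
M, T, dt = 40, 10.0, 0.001
t = np.arange(0.0, T, dt)
signal_on = (t % 1.0) < 0.05                # brief subthreshold signal each second
p_spike = np.where(signal_on, 0.08, 0.02)   # per-bin spike probability (assumed)
raster = rng.random((M, t.size)) < p_spike  # stand-in for the input population

def coincidence_detect(raster, theta=8, window=0.005):
    """The detector fires wherever at least `theta` presynaptic spikes
    fall inside its detection time window: input synchrony, rather than
    any single cell's rate, drives the readout."""
    w = int(window / dt)
    counts = np.convolve(raster.sum(axis=0), np.ones(w), mode="same")
    return counts >= theta

out = coincidence_detect(raster)
hit_rate = out[signal_on].mean()   # detection during signal epochs
fa_rate = out[~signal_on].mean()   # spurious detection from noise alone
```

Even though no single input cell reliably reports the signal, pooling synchrony across the population separates signal epochs from noise; tightening the window or raising `theta` trades hit rate against false alarms.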