Estimating causal interactions between neurons is important for understanding functional connectivity in neuronal networks. We propose a method called normalized permutation transfer entropy (NPTE) to evaluate the temporal causal interaction between spike trains; it quantifies the fraction of ordinal information in one neuron that is also present in another. The performance of this method is evaluated with spike trains generated by an Izhikevich neuronal model. Results show that the NPTE method can effectively estimate the causal interaction between two neurons independently of data length. Considering both the precision of the estimated time delay and the robustness of the estimated information flow against neuronal firing rate, the NPTE method is superior to other information-theoretic methods, including normalized transfer entropy, symbolic transfer entropy and permutation conditional mutual information. To test the performance of NPTE on simulated, biophysically realistic synapses, a cortical network based on the Izhikevich neuronal model is employed. We find that the NPTE method can exactly characterize mutual interactions and identify spurious causality in a network of three neurons. We conclude that the proposed method yields more reliable comparisons of interactions between different pairs of neurons and is a promising tool for uncovering further details of neural coding.
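The core of any permutation-based transfer entropy is the combination of ordinal symbolization with a plug-in transfer entropy estimate. The sketch below illustrates that core only; it is not the authors' NPTE implementation (in particular, the normalization step and spike-train preprocessing are omitted, and the pattern length m=3 is an illustrative choice).

```python
import numpy as np
from collections import Counter
from itertools import permutations

def ordinal_symbols(x, m=3):
    """Map each length-m window of x to the index of its ordinal pattern."""
    pats = {p: i for i, p in enumerate(permutations(range(m)))}
    return np.array([pats[tuple(np.argsort(x[t:t + m]))]
                     for t in range(len(x) - m + 1)])

def transfer_entropy(sym_x, sym_y):
    """Plug-in transfer entropy (bits) from Y to X over symbol sequences:
    TE = sum p(x1, x0, y0) * log2[ p(x1 | x0, y0) / p(x1 | x0) ]."""
    n = len(sym_x) - 1
    joint3 = Counter(zip(sym_x[1:], sym_x[:-1], sym_y[:-1]))
    joint2 = Counter(zip(sym_x[:-1], sym_y[:-1]))
    pair = Counter(zip(sym_x[1:], sym_x[:-1]))
    single = Counter(sym_x[:-1])
    te = 0.0
    for (x1, x0, y0), c in joint3.items():
        te += (c / n) * np.log2((c / joint2[(x0, y0)]) /
                                (pair[(x1, x0)] / single[x0]))
    return te
```

Applied to two series where Y drives X with a one-step lag, the estimate in the driving direction clearly exceeds the reverse one; a normalized variant would additionally rescale the estimate by an entropy term to reduce the dependence on firing statistics.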
Action potential (AP) patterns of sensory cortex neurons encode a variety of stimulus features, but how can a neuron change the feature to which it responds? Here, we show that in vivo a spike-timing-dependent plasticity (STDP) protocol—consisting of pairing a postsynaptic AP with visually driven presynaptic inputs—modifies a neuron's AP response bidirectionally, depending on the relative AP timing during pairing. Whereas postsynaptic APs repeatedly following presynaptic activation can convert subthreshold into suprathreshold responses, APs repeatedly preceding presynaptic activation reduce AP responses to visual stimulation. These changes were paralleled by a restructuring of the neuron's responses to surround stimulus locations and of the membrane-potential time course. Computational simulations could reproduce the observed subthreshold voltage changes only when presynaptic temporal jitter was included. Together, this shows that STDP rules can modify the output patterns of sensory neurons and that the timing of single APs plays a crucial role in sensory coding and plasticity.
Nerve cells, called neurons, are one of the core components of the brain and form complex networks by connecting to other neurons via long, thin ‘wire-like’ processes called axons. Axons can extend across the brain, enabling neurons to form connections—or synapses—with thousands of others. It is through these complex networks that incoming information from sensory organs, such as the eye, is propagated through the brain and encoded.
The basic unit of communication between neurons is the action potential, often called a ‘spike’, which propagates along the network of axons and, through a chemical process at synapses, communicates with the postsynaptic neurons that the axon is connected to. These action potentials excite the neuron that they arrive at, and this excitatory process can generate a new action potential that then propagates along the axon to excite additional target neurons. In the visual areas of the cortex, neurons respond with action potentials when they ‘recognize’ a particular feature in a scene—a process called tuning. How a neuron becomes tuned to certain features in the world and not to others is unclear, as are the rules that enable a neuron to change what it is tuned to. What is clear, however, is that to understand this process is to understand the basis of sensory perception.
Memory formation and storage are thought to occur at synapses. The efficiency of signal transmission between neurons can increase or decrease over time, a process often referred to as synaptic plasticity. But for these synaptic changes to be transmitted to target neurons, they must alter the number of action potentials. Although it has been shown in vitro that the efficiency of synaptic transmission—that is, the strength of the synapse—can be altered by changing the order in which the pre- and postsynaptic cells are activated (referred to as 'spike-timing-dependent plasticity'), this has never been shown to affect the number of action potentials generated by a single neuron in vivo. It is therefore unknown whether this process is functionally relevant.
Now Pawlak et al. report that spike-timing-dependent plasticity in the visual cortex of anaesthetized rats can change the spiking of neurons in the visual cortex. They used a visual stimulus (a bar flashed up for half a second) to activate a presynaptic cell, and triggered a single action potential in the postsynaptic cell a very short time later. By repeatedly activating the cells in this way, they increased the strength of the synaptic connection between the two neurons. After a small number of these pairing activations, presenting the visual stimulus alone to the presynaptic cell was enough to trigger an action potential (a suprathreshold response) in the postsynaptic neuron—even though this was not the case prior to the pairing.
This study shows that timing rules known to change the strength of synaptic connections—and proposed to underlie learning and memory—have functional relevance in vivo, and that the timing of single action potentials can change the functional status of a cortical neuron.
synaptic plasticity; STDP; visual cortex; circuits; in vivo; spiking patterns; rat
Precise coordination between the spiking activities of multiple neurons is suggested as an indication of coordinated network activity in active cell assemblies. Spike correlation analysis aims to identify such cooperative network activity by detecting excess spike synchrony in simultaneously recorded multiple neural spike sequences. Cooperative activity is expected to organize dynamically during behavior and cognition; therefore, currently available analysis techniques must be extended to enable the simultaneous estimation of multiple time-varying spike interactions between neurons. In particular, new methods must take advantage of the simultaneous observations of multiple neurons by addressing their higher-order dependencies, which cannot be revealed by pairwise analyses alone. In this paper, we develop a method for estimating time-varying spike interactions by means of a state-space analysis. Discretized parallel spike sequences are modeled as multivariate binary processes using a log-linear model that provides a well-defined measure of higher-order spike correlation in an information-geometry framework. We construct a recursive Bayesian filter/smoother for the extraction of spike interaction parameters. This method can simultaneously estimate the dynamic pairwise spike interactions of multiple single neurons, thereby extending the Ising/spin-glass model analysis of multiple neural spike train data to a nonstationary analysis. Furthermore, the method can estimate dynamic higher-order spike interactions. To validate the inclusion of the higher-order terms in the model, we construct an approximation method to assess the goodness-of-fit to spike data. In addition, we formulate a test for the presence of higher-order spike correlations even in nonstationary spike data, e.g., data from awake behaving animals. The utility of the proposed methods is tested using simulated spike data with known underlying correlation dynamics.
Finally, we apply the methods to neural spike data simultaneously recorded from the motor cortex of an awake monkey and demonstrate that the higher-order spike correlation organizes dynamically in relation to a behavioral demand.
Nearly half a century ago, the Canadian psychologist D. O. Hebb postulated the formation of assemblies of tightly connected cells in cortical recurrent networks through changes in synaptic weight (Hebb's learning rule) driven by repetitive sensory stimulation of the network. Consequently, the activation of such an assembly for processing sensory or behavioral information is likely to be expressed by precisely coordinated spiking activities of the participating neurons. However, the available analysis techniques for multiple parallel neural spike data do not allow us to reveal the detailed structure of transiently active assemblies as indicated by their dynamical pairwise and higher-order spike correlations. Here, we construct a state-space model of dynamic spike interactions, and present a recursive Bayesian method that makes it possible to trace multiple neurons exhibiting such precisely coordinated spiking activities in a time-varying manner. We also formulate a hypothesis test of the underlying dynamic spike correlation, which enables us to detect the assemblies activated in association with behavioral events. Therefore, the proposed method can serve as a useful tool to test Hebb's cell assembly hypothesis.
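For intuition about the log-linear parameterization underlying this analysis, the natural parameters for three binary neurons can be written in closed form from the pattern probabilities; the third-order parameter quantifies correlation beyond all pairwise terms. The sketch below is a static, fully observed special case (the paper estimates these parameters dynamically with a Bayesian filter/smoother, which is not reproduced here).

```python
import numpy as np

def loglinear_thetas(p):
    """Closed-form natural parameters of the log-linear model for three
    binary neurons:
        log p(x) = sum_i th_i x_i + sum_{i<j} th_ij x_i x_j
                   + th_123 x1 x2 x3 - psi.
    `p` maps each pattern (x1, x2, x3) to a strictly positive probability."""
    lp = {k: np.log(p[k]) for k in p}
    psi = -lp[0, 0, 0]
    th = {}
    th[1] = lp[1, 0, 0] + psi
    th[2] = lp[0, 1, 0] + psi
    th[3] = lp[0, 0, 1] + psi
    th[1, 2] = lp[1, 1, 0] + psi - th[1] - th[2]
    th[1, 3] = lp[1, 0, 1] + psi - th[1] - th[3]
    th[2, 3] = lp[0, 1, 1] + psi - th[2] - th[3]
    th[1, 2, 3] = (lp[1, 1, 1] + psi - th[1] - th[2] - th[3]
                   - th[1, 2] - th[1, 3] - th[2, 3])
    return th
```

For independent neurons all interaction parameters vanish exactly, which is what makes the log-linear measure a well-defined notion of higher-order correlation.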
Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on the firing activity. A network level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depends on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem.
Our brain's ability to perform cognitive processes, such as object identification, problem solving, and decision making, comes from the specific connections between neurons. The neurons carry information as spikes that are transmitted to other neurons via connections with different strengths and propagation delays. Experimentally observed learning rules can modify the strengths of connections between neurons based on the timing of their spikes. The learning that occurs in neuronal networks due to these rules is thought to be vital to creating the structures necessary for different cognitive processes as well as for memory. The spiking rate of populations of neurons has been observed to oscillate at particular frequencies in various brain regions, and there is evidence that these oscillations play a role in cognition. Here, we use analytical and numerical methods to investigate the changes to the network structure caused by a specific learning rule during oscillatory neural activity. We find the conditions under which connections with propagation delays that resonate with the oscillations are strengthened relative to the other connections. We demonstrate that networks learn to oscillate more strongly to oscillations at the frequency they were presented with during learning. We discuss the possible application of these results to specific areas of the brain.
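The delay-selection principle described above can be illustrated with a toy calculation (not the paper's model): assuming the pre/post spike cross-correlation of a connection with axonal delay d oscillates as cos(2*pi*f*(s - d)), the mean weight drift is the overlap of that correlation with an additive STDP window, and it varies sinusoidally with the delay at period 1/f, so only delays that "resonate" with the oscillation are potentiated. All parameter values here are illustrative assumptions.

```python
import numpy as np

def stdp_window(s, a_plus=1.0, a_minus=1.0, tau=0.02):
    """Additive STDP window: potentiation when the postsynaptic spike
    follows the presynaptic one (s = t_post - t_pre > 0), else depression."""
    return np.where(s > 0, a_plus * np.exp(-s / tau),
                    -a_minus * np.exp(s / tau))

def drift(delay, freq, dt=1e-4, horizon=0.2):
    """Mean weight drift for a connection with axonal delay `delay`,
    assuming a cosine spike cross-correlation at frequency `freq`."""
    s = np.arange(-horizon, horizon, dt)
    corr = np.cos(2 * np.pi * freq * (s - delay))
    return np.sum(stdp_window(s) * corr) * dt
```

Scanning `drift` over delays at a fixed input frequency gives a sinusoid in the delay: some delays are strengthened (positive drift) while out-of-phase delays are weakened, which is the selection effect analyzed in the study.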
Spike-timing-dependent plasticity (STDP) has been observed in many brain areas such as sensory cortices, where it is hypothesized to structure synaptic connections between neurons. Previous studies have demonstrated how STDP can capture spiking information at short timescales using specific input configurations, such as coincident spiking, spike patterns and oscillatory spike trains. However, the corresponding computation in the case of arbitrary input signals is still unclear. This paper provides an overarching picture of the algorithm inherent to STDP, tying together many previous results for commonly used models of pairwise STDP. For a single neuron with plastic excitatory synapses, we show how STDP performs a spectral analysis on the temporal cross-correlograms between its afferent spike trains. The postsynaptic responses and STDP learning window determine kernel functions that specify how the neuron “sees” the input correlations. We thus denote this unsupervised learning scheme as ‘kernel spectral component analysis’ (kSCA). In particular, the whole input correlation structure must be considered since all plastic synapses compete with each other. We find that kSCA is enhanced when weight-dependent STDP induces gradual synaptic competition. For a spiking neuron with a “linear” response and pairwise STDP alone, we find that kSCA resembles principal component analysis (PCA). However, plain STDP does not isolate correlation sources in general, e.g., when they are mixed among the input spike trains. In other words, it does not perform independent component analysis (ICA). Tuning the neuron to a single correlation source can be achieved when STDP is paired with a homeostatic mechanism that reinforces the competition between synaptic inputs. Our results suggest that neuronal networks equipped with STDP can process signals encoded in transient spiking activity at timescales of tens of milliseconds for typical STDP time constants.
Tuning feature extraction of sensory stimuli is an important function for synaptic plasticity models. A widely studied example is the development of orientation preference in the primary visual cortex, which can emerge using moving bars in the visual field. A crucial point is the decomposition of stimuli into basic information tokens, e.g., selecting individual bars even though they are presented in overlapping pairs (vertical and horizontal). Among classical unsupervised learning models, independent component analysis (ICA) is capable of isolating basic tokens, whereas principal component analysis (PCA) cannot. This paper focuses on spike-timing-dependent plasticity (STDP), whose functional implications for neural information processing have been intensively studied both theoretically and experimentally in the last decade. Following recent studies demonstrating that STDP can perform ICA for specific cases, we show how STDP relates to PCA or ICA, and in particular explains the conditions under which it switches between them. Here information at the neuronal level is assumed to be encoded in temporal cross-correlograms of spike trains. We find that a linear spiking neuron equipped with pairwise STDP requires additional mechanisms, such as a homeostatic regulation of its output firing, in order to separate mixed correlation sources and thus perform ICA.
Compelling behavioral evidence suggests that humans can make optimal decisions despite the uncertainty inherent in perceptual or motor tasks. A key question in neuroscience is how populations of spiking neurons can implement such probabilistic computations. In this article, we develop a comprehensive framework for optimal, spike-based sensory integration and working memory in a dynamic environment. We propose that probability distributions are inferred spike-per-spike in recurrently connected networks of integrate-and-fire neurons. As a result, these networks can combine sensory cues optimally, track the state of a time-varying stimulus and memorize accumulated evidence over periods much longer than the time constant of single neurons. Importantly, we propose that population responses and persistent working memory states represent entire probability distributions and not only single stimulus values. These memories are reflected by sustained, asynchronous patterns of activity which make relevant information available to downstream neurons within their short time window of integration. Model neurons act as predictive encoders, only firing spikes which account for new information that has not yet been signaled. Thus, spike times signal deterministically a prediction error, contrary to rate codes in which spike times are considered to be random samples of an underlying firing rate. As a consequence of this coding scheme, a multitude of spike patterns can reliably encode the same information. This results in weakly correlated, Poisson-like spike trains that are sensitive to initial conditions but robust to even high levels of external neural noise. This spike train variability reproduces the one observed in cortical sensory spike trains, but cannot be equated to noise. On the contrary, it is a consequence of optimal spike-based inference. In contrast, we show that rate-based models perform poorly when implemented with stochastically spiking neurons.
Most of our daily actions are subject to uncertainty. Behavioral studies have confirmed that humans handle this uncertainty in a statistically optimal manner. A key question then is what neural mechanisms underlie this optimality, i.e., how neurons can represent and compute with probability distributions. Previous approaches have proposed that probabilities are encoded in the firing rates of neural populations. However, such rate codes appear poorly suited to understanding perception in a constantly changing environment. In particular, it is unclear how probabilistic computations could be implemented by biologically plausible spiking neurons. Here, we propose a network of spiking neurons that can optimally combine uncertain information from different sensory modalities and keep this information available for a long time. This implies that neural memories not only represent the most likely value of a stimulus but rather a whole probability distribution over it. Furthermore, our model suggests that each spike conveys new, essential information. Consequently, the observed variability of neural responses cannot simply be understood as noise but rather as a necessary consequence of optimal sensory integration. Our results therefore question strongly held beliefs about the nature of neural “signal” and “noise”.
Anatomic connections between brain areas affect information flow between neuronal circuits and the synchronization of neuronal activity. However, such structural connectivity does not coincide with effective connectivity (or, more precisely, causal connectivity), related to the elusive question “Which areas cause the present activity of which others?”. Effective connectivity is directed and depends flexibly on contexts and tasks. Here we show that dynamic effective connectivity can emerge from transitions in the collective organization of coherent neural activity. Integrating simulation and semi-analytic approaches, we study mesoscale network motifs of interacting cortical areas, modeled as large random networks of spiking neurons or as simple rate units. Through a causal analysis of time-series of model neural activity, we show that different dynamical states generated by the same structural connectivity motif correspond to distinct effective connectivity motifs. Such effective motifs can display a dominant directionality, due to spontaneous symmetry breaking and effective entrainment between local brain rhythms, although all connections in the considered structural motifs are reciprocal. We show then that transitions between effective connectivity configurations (for instance, a reversal in the direction of inter-areal interactions) can be triggered reliably by brief perturbation inputs, properly timed with respect to an ongoing local oscillation, without the need for plastic synaptic changes. Finally, we analyze how the information encoded in spiking patterns of a local neuronal population is propagated across a fixed structural connectivity motif, demonstrating that changes in the active effective connectivity regulate both the efficiency and the directionality of information transfer. Previous studies stressed the role played by coherent oscillations in establishing efficient communication between distant areas.
Going beyond these early proposals, we advance here that dynamic interactions between brain rhythms provide as well the basis for the self-organized control of this “communication-through-coherence”, making thus possible a fast “on-demand” reconfiguration of global information routing modalities.
The circuits of the brain must perform a daunting amount of functions. But how can “brain states” be flexibly controlled, given that anatomic inter-areal connections can be considered as fixed, on timescales relevant for behavior? We hypothesize that, thanks to the nonlinear interaction between brain rhythms, even a simple circuit involving few brain areas can give rise to a multitude of effective circuits, associated with alternative functions selectable “on demand”. A distinction is usually made between structural connectivity, which describes actual synaptic connections, and effective connectivity, quantifying, beyond correlation, directed inter-areal causal influences. In our study, we measure effective connectivity based on time-series of neural activity generated by model inter-areal circuits. We find that “causality follows dynamics”. We show indeed that different effective networks correspond to different dynamical states associated with the same structural network (in particular, different phase-locking patterns between local neuronal oscillations). We then find that “information follows causality” (and thus, again, dynamics). We demonstrate that different effective networks give rise to alternative modalities of information routing between brain areas wired together in a fixed structural network. In particular, we show that the self-organization of interacting “analog” rate oscillations controls the flow of “digital-like” information encoded in complex spiking patterns.
A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. We focus in this study on the inference of excitatory synaptic links. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus, by conditioning on the global mean activity, we improve the performance of our method. This allows us to focus the analysis on specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections, rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies such as cross-correlation or Granger Causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimation of the excitatory network clustering coefficient, allowing for discrimination between weakly and strongly clustered topologies.
Finally, we demonstrate the applicability of our method to analyses of real recordings of in vitro disinhibited cortical cultures where we suggest that excitatory connections are characterized by an elevated level of clustering compared to a random graph (although not extreme) and can be markedly non-local.
Unraveling the general organizing principles of connectivity in neural circuits is a crucial step towards understanding brain function. However, even the simpler task of assessing the global excitatory connectivity of a culture in vitro, where neurons form self-organized networks in the absence of external stimuli, remains challenging. Neuronal cultures undergo spontaneous switching between episodes of synchronous bursting and quieter inter-burst periods. We introduce here a novel algorithm which aims at inferring the connectivity of neuronal cultures from calcium fluorescence recordings of their network dynamics. To achieve this goal, we develop a suitable generalization of Transfer Entropy, an information-theoretic measure of causal influences between time series. Unlike previous algorithmic approaches to reconstruction, Transfer Entropy is data-driven and does not rely on specific assumptions about neuronal firing statistics or network topology. We generate simulated calcium signals from networks with controlled ground-truth topology and purely excitatory interactions and show that, by restricting the analysis to inter-burst periods, Transfer Entropy robustly achieves a good reconstruction performance for disparate network connectivities. Finally, we apply our method to real data and find evidence of non-random features in cultured networks, such as the existence of highly connected hub excitatory neurons and of an elevated (but not extreme) level of clustering.
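The conditioning step described above — restricting the analysis to inter-burst periods — can be sketched as a simple mask over frames of the fluorescence movie, built from the population-mean signal. The quantile threshold below is an illustrative assumption, not the paper's calibrated choice.

```python
import numpy as np

def interburst_frames(fluor, thresh_quantile=0.75):
    """Boolean mask of frames to keep for connectivity analysis.
    `fluor` has shape (n_neurons, n_frames). Frames whose population-mean
    fluorescence exceeds the quantile threshold are treated as network
    bursts and excluded, leaving only inter-burst dynamics."""
    g = fluor.mean(axis=0)                    # global mean signal per frame
    return g <= np.quantile(g, thresh_quantile)
```

Any pairwise measure (Transfer Entropy, cross-correlation) would then be computed only over the frames selected by this mask, so that the inferred links reflect monosynaptic influences rather than collective synchrony.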
The pattern of connections among cortical excitatory cells with overlapping arbors is non-random. In particular, correlations among connections produce clustering – cells in cliques connect to each other with high probability, but with lower probability to cells in other spatially intertwined cliques. In this study, we model initially randomly connected sparse recurrent networks of spiking neurons with random, overlapping inputs, to investigate what functional and structural synaptic plasticity mechanisms sculpt network connections into the patterns measured in vitro. Our Hebbian implementation of structural plasticity causes a removal of connections between uncorrelated excitatory cells, followed by their random replacement. To model a biconditional discrimination task, we stimulate the network via pairs (A + B, C + D, A + D, and C + B) of four inputs (A, B, C, and D). We find that networks producing neurons most responsive to specific paired inputs – a building block of computation and an essential function of cortex – contain the excessive clustering of excitatory synaptic connections observed in cortical slices. The same networks produce the best performance in a behavioral readout of the networks’ ability to complete the task. A plasticity mechanism operating on inhibitory connections, long-term potentiation of inhibition, when combined with structural plasticity, indirectly enhances clustering of excitatory cells via excitatory connections. A rate-dependent (triplet) form of spike-timing-dependent plasticity (STDP) between excitatory cells is less effective and basic STDP is detrimental. Clustering also arises in networks stimulated with single stimuli and in networks undergoing raised levels of spontaneous activity when structural plasticity is combined with functional plasticity.
In conclusion, spatially intertwined clusters or cliques of connected excitatory cells can arise via a Hebbian form of structural plasticity operating in initially randomly connected networks.
structural plasticity; connectivity; Hebbian learning; network; simulation; correlations; STDP; inhibitory plasticity
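The structural-plasticity rule described above — removal of connections between uncorrelated excitatory cells followed by random replacement — can be sketched as a single update step on a weight matrix. This is a schematic caricature, not the study's implementation: the correlation matrix, threshold, and unit weight for new connections are all illustrative assumptions.

```python
import numpy as np

def structural_step(W, corr, thresh=0.0, rng=None):
    """One sketch step of Hebbian structural plasticity: prune existing
    connections whose pre/post activity correlation falls below `thresh`,
    then create the same number of new connections at random vacant,
    non-self sites (new connections start at a nominal unit weight)."""
    rng = rng or np.random.default_rng()
    W = W.copy()
    prune = (W > 0) & (corr < thresh)
    n_new = int(prune.sum())
    W[prune] = 0.0
    vacant = np.argwhere((W == 0) & ~np.eye(W.shape[0], dtype=bool))
    for i, j in vacant[rng.choice(len(vacant), size=n_new, replace=False)]:
        W[i, j] = 1.0
    return W
```

Note that the step conserves the total number of connections, so clustering emerges from *where* connections survive (correlated cliques) rather than from net growth — which is the mechanism the study investigates.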
Synaptic interactions between neurons of the human cerebral cortex have not been directly studied to date. We recorded the first dataset, to our knowledge, on the synaptic effect of identified human pyramidal cells on various types of postsynaptic neurons and reveal complex events triggered by individual action potentials in the human neocortical network. Brain slices were prepared from nonpathological samples of cortex that had to be removed for the surgical treatment of brain areas beneath association cortices of 58 patients aged 18 to 73 years. Simultaneous triple and quadruple whole-cell patch clamp recordings were performed testing mono- and polysynaptic potentials in target neurons following a single action potential fired by layer 2/3 pyramidal cells, and the temporal structure of events and underlying mechanisms were analyzed. In addition to monosynaptic postsynaptic potentials, individual action potentials in presynaptic pyramidal cells initiated long-lasting (37 ± 17 ms) sequences of events in the network lasting an order of magnitude longer than detected previously in other species. These event series were composed of specifically alternating glutamatergic and GABAergic postsynaptic potentials and required selective spike-to-spike coupling from pyramidal cells to GABAergic interneurons producing concomitant inhibitory as well as excitatory feed-forward action of GABA. Single action potentials of human neurons are sufficient to recruit Hebbian-like neuronal assemblies that are proposed to participate in cognitive processes.
We recorded the first connections, to our knowledge, between human nerve cells and reveal that a subset of interactions is so strong that some presynaptic cells are capable of eliciting action potentials in the postsynaptic target neurons. Interestingly, these strong connections selectively link pyramidal cells using the neurotransmitter glutamate to neurons releasing gamma aminobutyric acid (GABA). Moreover, the GABAergic neurons receiving the strong connections include different types: basket cells, which inhibit several target cell populations, and another type called the chandelier cells, which can be excitatory and target pyramidal cells only. Thus, the activation originating from a single pyramidal cell propagates to synchronously working inhibitory and excitatory GABAergic neurons. Inhibition then arrives at various neuron classes, but excitation finds only pyramidal cells, which in turn can propagate excitation even further in the network of neurons. This chain of events revealed here leads to network activation approximately an order of magnitude longer than detected previously in response to a single action potential in a single neuron. Individual-neuron-activated groups of neurons resemble the so-called functional assemblies that were proposed as building blocks of higher order cognitive representations.
A novel study on connections between human neurons reveals that single spikes in pyramidal cells can activate synchronously timed assemblies through strong connections linking pyramidal cells with inhibitory and excitatory GABAergic neurons.
The information processing abilities of neural circuits arise from their synaptic connection patterns. Understanding the laws governing these connectivity patterns is essential for understanding brain function. The overall distribution of synaptic strengths of local excitatory connections in cortex and hippocampus is long-tailed, exhibiting a small number of synaptic connections of very large efficacy. At the same time, new synaptic connections are constantly being created and individual synaptic connection strengths show substantial fluctuations across time. It remains unclear through what mechanisms these properties of neural circuits arise and how they contribute to learning and memory. In this study we show that fundamental characteristics of excitatory synaptic connections in cortex and hippocampus can be explained as a consequence of self-organization in a recurrent network combining spike-timing-dependent plasticity (STDP), structural plasticity and different forms of homeostatic plasticity. In the network, associative synaptic plasticity in the form of STDP induces a rich-get-richer dynamics among synapses, while homeostatic mechanisms induce competition. Under distinctly different initial conditions, the ensuing self-organization produces long-tailed synaptic strength distributions matching experimental findings. We show that this self-organization can take place with a purely additive STDP mechanism and that multiplicative weight dynamics emerge as a consequence of network interactions. The observed patterns of fluctuation of synaptic strengths, including elimination and generation of synaptic connections and long-term persistence of strong connections, are consistent with the dynamics of dendritic spines found in rat hippocampus. Beyond this, the model predicts an approximately power-law scaling of the lifetimes of newly established synaptic connection strengths during development. 
Our results suggest that the combined action of multiple forms of neuronal plasticity plays an essential role in the formation and maintenance of cortical circuits.
The computations that brain circuits can perform depend on their wiring. While a wiring diagram is still out of reach for major brain structures such as the neocortex and hippocampus, data on the overall distribution of synaptic connection strengths and the temporal fluctuations of individual synapses have recently become available. Specifically, there exists a small population of very strong and stable synaptic connections, which may form the physiological substrate of life-long memories. This population coexists with a large and ever-changing population of much smaller and strongly fluctuating synaptic connections. So far it has remained unclear how these properties of networks in neocortex and hippocampus arise. Here we present a computational model that explains these fundamental properties of neural circuits as a consequence of network self-organization resulting from the combined action of different forms of neuronal plasticity. This self-organization is driven by a rich-get-richer effect induced by an associative synaptic learning mechanism which is kept in check by several homeostatic plasticity mechanisms stabilizing the network. The model highlights the role of self-organization in the formation of brain circuits and parsimoniously explains a range of recent findings about their fundamental properties.
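The rich-get-richer dynamics held in check by homeostatic competition can be caricatured in a few lines. The sketch below is a deliberately minimal toy, not the network model of the study: potentiation probability is simply made proportional to the current weight (standing in for STDP's associative positive feedback), depression strikes synapses at random, and a multiplicative rescaling enforces a fixed total input as the homeostatic constraint. All parameter values are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50                         # number of synapses onto one model neuron
w = np.full(N, 0.2)            # initial weights (hypothetical units)
A_plus, A_minus = 0.01, 0.012  # additive potentiation/depression steps (assumed)
w_max = 1.0
target_sum = w.sum()           # homeostatic target: total input stays constant

for step in range(5000):
    # Rich-get-richer stand-in: stronger synapses are more likely to drive the
    # postsynaptic spike, so they are more likely to see a potentiating pairing.
    i = rng.choice(N, p=w / w.sum())
    w[i] += A_plus                     # potentiate the "winning" synapse
    j = rng.integers(N)
    w[j] -= A_minus                    # uncorrelated pairing: depression
    np.clip(w, 0.0, w_max, out=w)
    w *= target_sum / w.sum()          # multiplicative homeostatic rescaling
```

Under these rules the weight distribution develops a long tail: a handful of synapses saturate near the ceiling while most decay toward zero, with the total input held constant by the homeostatic rescaling.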
Computational studies as well as in vivo and in vitro results have shown that many cortical neurons fire in a highly irregular manner and at low average firing rates. These patterns seem to persist even when highly rhythmic signals are recorded by local field potential electrodes or other methods that quantify the summed behavior of a local population. Models of the 30–80 Hz gamma rhythm in which network oscillations arise through ‘stochastic synchrony’ capture the variability observed in the spike output of single cells while preserving network-level organization. We extend these results by constructing model networks constrained by experimental measurements and using them to probe the effect of biophysical parameters on network-level activity. We find in simulations that gamma-frequency oscillations are enabled by a high level of incoherent synaptic conductance input, similar to the barrage of noisy synaptic input that cortical neurons have been shown to receive in vivo. This incoherent synaptic input increases the emergent network frequency by shortening the time scale of the membrane in excitatory neurons and by reducing the temporal separation between excitation and inhibition due to decreased spike latency in inhibitory neurons. These mechanisms are demonstrated in simulations and in vitro current-clamp and dynamic-clamp experiments. Simulation results further indicate that the membrane potential noise amplitude has a large impact on network frequency and that the balance between excitatory and inhibitory currents controls network stability and sensitivity to external inputs.
The gamma rhythm is a prominent, 30–80-Hz EEG signal that is associated with cognition. Several classes of computational models have been posited to explain the gamma rhythm mechanistically. We study a particular class in which the gamma rhythm arises from delayed negative feedback. Our study is unique in that we calibrate the model from direct measurements. We also test the model's most critical predictions directly in experiments that take advantage of cutting-edge computer technologies able to simulate ion channels in real time. Our major findings are that a large amount of “background” synaptic input to neurons is necessary to promote the gamma rhythm; that inhibitory neurons are specially tuned to keep the gamma rhythm stable; that noise has a strong effect on network frequency; and that incoming sensory input can be represented with sensitivity that depends on the strength of excitatory-excitatory synapses and the number of neurons receiving the input. Overall, our results support the hypothesis that the gamma rhythm reflects the presence of delayed feedback that controls overall cortical activity on a cycle-by-cycle basis. Furthermore, its frequency range mainly reflects the timescale of synaptic inhibition, the degree of background activity, and noise levels in the network.
Somatostatin-expressing, low threshold-spiking (LTS) cells and fast-spiking (FS) cells are two common subtypes of inhibitory neocortical interneuron. Excitatory synapses from regular-spiking (RS) pyramidal neurons to LTS cells strongly facilitate when activated repetitively, whereas RS-to-FS synapses depress. This suggests that LTS neurons may be especially relevant at high rate regimes and protect cortical circuits against over-excitation and seizures. However, the inhibitory synapses from LTS cells usually depress, which may reduce their effectiveness at high rates. We ask: by which mechanisms and at what firing rates do LTS neurons control the activity of cortical circuits responding to thalamic input, and how is control by LTS neurons different from that of FS neurons? We study rate models of circuits that include RS cells and LTS and FS inhibitory cells with short-term synaptic plasticity. LTS neurons shift the RS firing-rate vs. current curve to the right at high rates and reduce its slope at low rates; the LTS effect is delayed and prolonged. FS neurons always shift the curve to the right and affect RS firing transiently. In an RS-LTS-FS network, FS neurons reach a quiescent state if they receive weak input, LTS neurons are quiescent if RS neurons receive weak input, and both FS and RS populations are active if they both receive large inputs. In general, FS neurons tend to follow the spiking of RS neurons much more closely than LTS neurons. A novel type of facilitation-induced slow oscillations is observed above the LTS firing threshold with a frequency determined by the time scale of recovery from facilitation. To conclude, contrary to earlier proposals, LTS neurons affect the transient and steady state responses of cortical circuits over a range of firing rates, not only during the high rate regime; LTS neurons protect against over-activation about as well as FS neurons.
The brain consists of circuits of neurons that signal to one another via synapses. There are two classes of neurons: excitatory cells, which cause other neurons to become more active, and inhibitory neurons, which cause other neurons to become less active. It is thought that the activity of excitatory neurons is kept in check largely by inhibitory neurons; when such an inhibitory “brake” fails, a seizure can result. Inhibitory neurons of the low-threshold spiking (LTS) subtype can potentially fulfill this braking, or anticonvulsant, role because the synaptic input to these neurons facilitates, i.e., those neurons are active when excitatory neurons are strongly active. Using a computational model we show that, because the synaptic output of LTS neurons onto excitatory neurons depresses (decreases with activity), the ability of LTS neurons to prevent strong cortical activity and seizures is not qualitatively larger than that of inhibitory neurons of another subtype, the fast-spiking (FS) cells. Furthermore, short-term (∼one second) changes in the strength of synapses to and from LTS interneurons allow them to shape the behavior of cortical circuits even at modest rates of activity, and an RS-LTS-FS circuit is capable of producing slow oscillations, on the time scale of these short-term changes.
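The opposing short-term dynamics of the RS-to-LTS (facilitating) and RS-to-FS (depressing) synapses described above can be sketched with the Tsodyks–Markram model of short-term plasticity. The parameter values below are illustrative assumptions, not the values fitted in the study:

```python
import numpy as np

def tm_efficacy(spike_times, U, tau_f, tau_d):
    """Tsodyks-Markram short-term plasticity: relative efficacy u*R of each
    spike in a presynaptic train (u facilitates toward 1, R depletes to 0)."""
    u, R = U, 1.0
    last_t, eff = None, []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            u = U + (u - U) * np.exp(-dt / tau_f)      # facilitation decays to U
            R = 1.0 + (R - 1.0) * np.exp(-dt / tau_d)  # resources recover to 1
        u = u + U * (1.0 - u)   # spike-triggered facilitation increment
        eff.append(u * R)       # released fraction sets synaptic efficacy
        R = R * (1.0 - u)       # release consumes a fraction u of resources
        last_t = t
    return np.array(eff)

train = np.arange(0.0, 0.5, 0.025)  # 40 Hz presynaptic train (seconds)
rs_lts = tm_efficacy(train, U=0.05, tau_f=0.5,  tau_d=0.1)  # facilitating
rs_fs  = tm_efficacy(train, U=0.7,  tau_f=0.02, tau_d=0.5)  # depressing
```

Across the train the RS-to-LTS efficacy grows while the RS-to-FS efficacy collapses, which is one way to see why LTS-mediated inhibition is recruited preferentially at high presynaptic rates.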
An activity-dependent long-lasting asynchronous release of GABA from identified fast-spiking inhibitory neurons in the neocortex can impair the reliability and temporal precision of activity in a cortical network.
Networks of specific inhibitory interneurons regulate principal cell firing in several forms of neocortical activity. Fast-spiking (FS) interneurons are potently self-inhibited by GABAergic autaptic transmission, allowing them to precisely control their own firing dynamics and timing. Here we show that in FS interneurons, high-frequency trains of action potentials can generate a delayed and prolonged GABAergic self-inhibition due to sustained asynchronous release at FS-cell autapses. Asynchronous release of GABA is simultaneously recorded in connected pyramidal (P) neurons. Asynchronous and synchronous autaptic release show differential presynaptic Ca2+ sensitivity, suggesting that they rely on different Ca2+ sensors and/or involve distinct pools of vesicles. In addition, asynchronous release is modulated by the endogenous Ca2+ buffer parvalbumin. Functionally, asynchronous release decreases FS-cell spike reliability and reduces the ability of P neurons to integrate incoming stimuli into precise firing. Since each FS cell contacts many P neurons, asynchronous release from a single interneuron may desynchronize a large portion of the local network and disrupt cortical information processing.
In the cerebral cortex (neocortex) of the brain, fast-spiking (FS) inhibitory cells contact many principal pyramidal (P) neurons on their cell bodies, which allows the FS cells to control the generation of action potentials (neuronal output). FS-cell-mediated rhythmic and synchronous inhibition drives coherent network oscillations of large ensembles of P neurons, indicating that FS interneurons are needed for the precise timing of cortical circuits. Interestingly, FS cells are self-innervated by GABAergic autaptic contacts, whose synchronous activation regulates FS-cell precise firing. Here we report that high-frequency firing in FS interneurons results in a massive (>10-fold), delayed, and prolonged (for seconds) increase in inhibitory events, occurring at both autaptic (FS–FS) and synaptic (FS–P) sites. This increased inhibition is due to asynchronous release of GABA from presynaptic FS cells. Delayed and disorganized asynchronous inhibitory responses significantly affected the input–output properties of both FS and P neurons, suggesting that asynchronous release of GABA might promote network desynchronization. FS interneurons can fire at high frequency (>100 Hz) in vitro and in vivo, and are known for their reliable and precise signaling. Our results show an unprecedented action of these cells, by which their tight temporal control of cortical circuits can be broken when they are driven to fire above certain frequencies.
There are two distinct inhibitory GABAergic circuits in the neostriatum. The feedforward circuit consists of a relatively small population of GABAergic interneurons that receives excitatory input from the neocortex and exerts monosynaptic inhibition onto striatal spiny projection neurons. The feedback circuit comprises the numerous spiny projection neurons and their interconnections via local axon collaterals. This network has long been assumed to provide the majority of striatal GABAergic inhibition and to sharpen and shape striatal output through lateral inhibition, producing increased activity in the most strongly excited spiny cells at the expense of their less strongly excited neighbors.
Recent results, mostly from recording experiments of synaptically connected pairs of neurons, have revealed that the two GABAergic circuits differ markedly in terms of the total number of synapses made by each, the strength of the postsynaptic response detected at the soma, the extent of presynaptic convergence and divergence and the net effect of the activation of each circuit on the postsynaptic activity of the spiny neuron. These data have revealed that the feedforward inhibition is powerful and widespread, with spiking in a single interneuron being capable of significantly delaying or even blocking the generation of spikes in a large number of postsynaptic spiny neurons. In contrast, the postsynaptic effects of spiking in a single presynaptic spiny neuron on postsynaptic spiny neurons are weak when measured at the soma, and unable to significantly affect spike timing or generation. Further, reciprocity of synaptic connections between spiny neurons is only rarely observed.
These results suggest that the bulk of the fast inhibition that has the strongest effects on spiny neuron spike timing comes from the feedforward interneuronal system, whereas the axon collateral feedback system acts principally at the dendrites to control local excitability as well as the overall level of activity of the spiny neuron.
Identifying the structure and dynamics of synaptic interactions between neurons is the first step to understanding neural network dynamics. The presence of synaptic connections is traditionally inferred through the use of targeted stimulation and paired recordings or by post-hoc histology. More recently, causal network inference algorithms have been proposed to deduce connectivity directly from electrophysiological signals, such as extracellularly recorded spiking activity. These algorithms, however, have typically not been validated on a neurophysiological data set for which the actual circuitry is known. Recent work has shown that traditional network inference algorithms based on linear models typically fail to identify the correct coupling of a small central pattern generating circuit in the stomatogastric ganglion of the crab Cancer borealis. In this work, we show that point process models of observed spike trains can guide inference of relative connectivity estimates that match the known physiological connectivity of the central pattern generator up to a choice of threshold. We elucidate the necessary steps to derive faithful connectivity estimates from a model that incorporates the spike train nature of the data. We then apply the model to measure changes in the effective connectivity pattern in response to two pharmacological interventions, which affect both intrinsic neural dynamics and synaptic transmission. Our results provide the first successful application of a network inference algorithm to a circuit for which the actual physiological synapses between neurons are known. The point process methodology presented here generalizes well to larger networks and can describe the statistics of neural populations. In general we show that advanced statistical models allow for the characterization of effective network structure, deciphering underlying network dynamics and estimating information-processing capabilities.
To appreciate how neural circuits control behaviors, we must understand two things. First, how the neurons comprising the circuit are connected, and second, how neurons and their connections change after learning or in response to neuromodulators. Neuronal connectivity is difficult to determine experimentally, whereas neuronal activity can often be readily measured. We describe a statistical model to estimate circuit connectivity directly from measured activity patterns. We use the timing relationships between observed spikes to predict synaptic interactions between simultaneously observed neurons. The model estimate provides each predicted connection with a curve that represents how strongly, and at which temporal delays, one circuit element effectively influences another. These curves are analogous to synaptic interactions at the level of the membrane potential of biological neurons and share some of their features, such as being inhibitory or excitatory. We test our method on recordings from the pyloric circuit in the crab stomatogastric ganglion, a small circuit whose connectivity is completely known beforehand, and find that the predicted circuit matches the biological one, a result other techniques failed to achieve. In addition, we show that drug manipulations impacting the circuit are revealed by this technique. These results illustrate the utility of our analysis approach for inferring connections from neural spiking activity.
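As a toy illustration of the general idea (a discrete-time point process GLM, far simpler than the models fitted in the study), one can simulate a pair of neurons with a known synaptic lag and recover that lag from the spike trains alone. All rates, lags, and the fitting schedule below are assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
T, true_lag = 20000, 2                        # time bins; synaptic lag in bins
pre = (rng.random(T) < 0.05).astype(float)    # presynaptic spike train
base, gain = -3.0, 2.0                        # log-odds baseline and coupling
eta = base + gain * np.roll(pre, true_lag)
eta[:true_lag] = base                         # discard wrap-around bins
post = (rng.random(T) < 1.0 / (1.0 + np.exp(-eta))).astype(float)

# Design matrix: intercept plus lagged copies of the presynaptic train.
n_lags = 5
X = np.column_stack([np.roll(pre, k) for k in range(1, n_lags + 1)])
X[:n_lags] = 0.0
X = np.column_stack([np.ones(T), X])

# Fit the discrete-time point process (Bernoulli) GLM by gradient ascent
# on the log-likelihood; each lag weight is an effective-coupling estimate.
w = np.zeros(n_lags + 1)
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += (X.T @ (post - p)) / T               # mean log-likelihood gradient

coupling = w[1:]                              # coupling curve over lags 1..5
```

Here `np.argmax(coupling) + 1` recovers the simulated lag, and the sign and magnitude of the weights play the role of the excitatory or inhibitory "interaction curves" described above.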
Synchronized oscillations are commonly observed in many neuronal systems and may play an important role in the response properties of the system. We have studied how spontaneous oscillatory activity affects the responsiveness of a neuronal network, using a neural network model of the visual cortex built from Hodgkin-Huxley type excitatory (E-) and inhibitory (I-) neurons. When the isotropic local E-I and I-E synaptic connections were sufficiently strong, the network commonly generated gamma-frequency oscillatory firing patterns in response to random feed-forward (FF) input spikes. This spontaneous oscillatory network activity injects a periodic local current that can amplify a weak synaptic input and enhance the network's responsiveness. When E-E connections were added, we found that the strength of oscillation can be modulated by varying the FF input strength without any changes in single-neuron properties or interneuron connectivity. The response modulation is proportional to the oscillation strength, which leads to self-regulation such that the cortical network selectively amplifies various FF inputs according to their strength, without requiring any adaptation mechanism. We show that this selective cortical amplification is controlled by E-E cell interactions. We also found that this response amplification is spatially localized, suggesting that the responsiveness modulation may also be spatially selective. This points to a generalized mechanism by which neural oscillatory activity can enhance the selectivity of a neural network to FF inputs.
In the nervous system, information is delivered and processed digitally via voltage spikes transmitted between cells. A neural system is characterized by its input/output spike signal patterns. Generally, a network of neurons shows a very different response pattern from that of a single neuron. In some cases, a neural network generates interesting population activities, such as synchronized oscillations, which are thought to modulate the response properties of the network. However, the exact role of these neural oscillations is unknown. We investigated the relationship between oscillatory activity and response modulation in neural networks using computational simulation modeling. We found that the response of the system is significantly modified by the oscillations in the network. In particular, the responsiveness to weak inputs is remarkably enhanced. This suggests that the oscillation can differentially amplify sensory information depending on the input signal conditions. We conclude that a neural network can dynamically modify its response properties through the selective amplification of sensory signals due to oscillatory activity, which may explain some experimental observations and help us to better understand neural systems.
The ability of spiking neurons to synchronize their activity in a network depends on the response behavior of these neurons as quantified by the phase response curve (PRC) and on coupling properties. The PRC characterizes the effects of transient inputs on spike timing and can be measured experimentally. Here we use the adaptive exponential integrate-and-fire (aEIF) neuron model to determine how subthreshold and spike-triggered slow adaptation currents shape the PRC. Based on that, we predict how synchrony and phase-locked states of coupled neurons change in the presence of synaptic delays and unequal coupling strengths. We find that increased subthreshold adaptation currents cause a transition of the PRC from only phase advances to phase advances and delays in response to excitatory perturbations. Increased spike-triggered adaptation currents on the other hand predominantly skew the PRC to the right. Both adaptation induced changes of the PRC are modulated by spike frequency, being more prominent at lower frequencies. Applying phase reduction theory, we show that subthreshold adaptation stabilizes synchrony for pairs of coupled excitatory neurons, while spike-triggered adaptation causes locking with a small phase difference, as long as synaptic heterogeneities are negligible. For inhibitory pairs synchrony is stable and robust against conduction delays, and adaptation can mediate bistability of in-phase and anti-phase locking. We further demonstrate that stable synchrony and bistable in/anti-phase locking of pairs carry over to synchronization and clustering of larger networks. The effects of adaptation in aEIF neurons on PRCs and network dynamics qualitatively reflect those of biophysical adaptation currents in detailed Hodgkin-Huxley-based neurons, which underscores the utility of the aEIF model for investigating the dynamical behavior of networks.
Our results suggest neuronal spike frequency adaptation as a mechanism synchronizing low frequency oscillations in local excitatory networks, but indicate that inhibition rather than excitation generates coherent rhythms at higher frequencies.
Synchronization of neuronal spiking in the brain is related to cognitive functions, such as perception, attention, and memory. It is therefore important to determine which properties of neurons influence their collective behavior in a network and to understand how. A prominent feature of many cortical neurons is spike frequency adaptation, which is caused by slow transmembrane currents. We investigated how these adaptation currents affect the synchronization tendency of coupled model neurons. Using the efficient adaptive exponential integrate-and-fire (aEIF) model and a biophysically detailed neuron model for validation, we found that increased adaptation currents promote synchronization of coupled excitatory neurons at lower spike frequencies, as long as the conduction delays between the neurons are negligible. Inhibitory neurons, on the other hand, synchronize in the presence of conduction delays, with or without adaptation currents. Our results emphasize the utility of the aEIF model for computational studies of neuronal network dynamics. We conclude that adaptation currents provide a mechanism to generate low frequency oscillations in local populations of excitatory neurons, while faster rhythms seem to be caused by inhibition rather than excitation.
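A minimal forward-Euler sketch of the aEIF model makes the effect of the spike-triggered adaptation increment easy to see. The parameter values are assumed (in the style of the standard aEIF formulation), not the fitted set used in the study:

```python
import numpy as np

# aEIF parameters (assumed values): capacitance (pF), leak (nS), leak reversal,
# threshold, and slope factor (mV); adaptation time constant (ms), subthreshold
# coupling a (nS), spike-triggered increment b (pA); reset and cutoff (mV).
C, gL, EL, VT, dT = 200.0, 10.0, -70.0, -50.0, 2.0
tau_w, a, b = 200.0, 2.0, 60.0
Vr, Vcut = -58.0, 0.0
dt, I = 0.1, 500.0            # time step (ms) and constant drive (pA)

def simulate(b_jump, T=1000.0):
    V, w = EL, 0.0
    spikes = []
    for k in range(int(T / dt)):
        dV = (-gL*(V - EL) + gL*dT*np.exp((V - VT)/dT) - w + I) / C
        dw = (a*(V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= Vcut:          # spike: reset voltage, jump adaptation current
            V = Vr
            w += b_jump
            spikes.append(k * dt)
    return np.array(spikes)

no_adapt = simulate(b_jump=0.0)
adapt = simulate(b_jump=b)
```

With the spike-triggered increment switched on, the adaptation current accumulates over the spike train, lowering the firing rate and progressively lengthening the interspike intervals, which is the frequency-dependent effect on the PRC discussed above.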
Synchronously spiking neurons have been observed in the cerebral cortex and the hippocampus. In computer models, synchronous spike volleys may be propagated across appropriately connected neuron populations. However, it is unclear how the appropriate synaptic connectivity is set up during development and maintained during adult learning. We performed computer simulations to investigate the influence of temporally asymmetric Hebbian synaptic plasticity on the propagation of spike volleys. In addition to feedforward connections, recurrent connections were included between and within neuron populations and spike transmission delays varied due to axonal, synaptic and dendritic transmission. We found that repeated presentations of input volleys decreased the synaptic conductances of intragroup and feedback connections while synaptic conductances of feedforward connections with short delays became stronger than those of connections with longer delays. These adaptations led to the synchronization of spike volleys as they propagated across neuron populations. These findings suggest that temporally asymmetric Hebbian learning may enhance synchronized spiking within small populations of neurons in cortical and hippocampal areas, and that familiar stimuli may produce synchronized spike volleys that are rapidly propagated across neural tissue.
Local neocortical circuits are characterized by stereotypical physiological and structural features that subserve generic computational operations. These basic computations of the cortical microcircuit emerge through the interplay of neuronal connectivity, cellular intrinsic properties, and synaptic plasticity dynamics. How these interacting mechanisms generate specific computational operations in the cortical circuit remains largely unknown. Here, we identify the neurophysiological basis of both the rate of change and anticipation computations on synaptic inputs in a cortical circuit. Through biophysically realistic computer simulations and neuronal recordings, we show that the rate-of-change computation is operated robustly in cortical networks through the combination of two ubiquitous brain mechanisms: short-term synaptic depression and spike-frequency adaptation. We then show how this rate-of-change circuit can be embedded in a convergently connected network to anticipate temporally incoming synaptic inputs, in quantitative agreement with experimental findings on anticipatory responses to moving stimuli in the primary visual cortex. Given the robustness of the mechanism and the widespread nature of the physiological machinery involved, we suggest that rate-of-change computation and temporal anticipation are principal, hard-wired functions of neural information processing in the cortical microcircuit.
The cerebral cortex is the region of the brain whose intricate connectivity and physiology is thought to subserve most computations required for effective action in mammals. Through biophysically realistic computer simulation and experimental recordings in brain tissue, the authors show how a specific combination of physiological mechanisms often found in neurons of the cortex transforms an input signal into another signal that represents the rate of change of the slower components of the input. This is the first report of a neurobiological implementation of an approximate mathematical derivative in the cortex. Further, such a signal integrates naturally into a neurobiologically simple network that is able to generate a linear prediction of its inputs. Anticipation of information is a primary concern of spatially extended organisms which are subject to neural delays, and it has been demonstrated at various different levels: from the retina to sensori-motor integration. We present here a simple and general mechanism for anticipation that can operate incrementally within local circuits of the cortex, to compensate for time-consuming computations and conduction delays and thus contribute to effective real-time action.
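The paper's rate-of-change mechanism combines short-term synaptic depression with spike-frequency adaptation; the sketch below shows only the depression half, in a rate formulation with assumed constants, to make the transient, derivative-like overshoot visible:

```python
import numpy as np

dt, tau_d, u = 1.0, 200.0, 0.05   # ms; resource recovery time and release prob.
t = np.arange(0, 3000, dt)
# Presynaptic rate (spikes/ms): a step from 10 Hz to 20 Hz at t = 1000 ms.
r = np.where(t < 1000, 10.0, 20.0) / 1000.0

R, y = 1.0, []
for rt in r:
    y.append(rt * R)                           # depressing synaptic drive ~ r*R
    R += dt * ((1.0 - R) / tau_d - u * rt * R) # resource depletion and recovery
y = np.array(y)
```

Right after the step the drive overshoots its new steady-state value, because resources have not yet depleted to match the higher rate; that transient is the raw material from which, with spike-frequency adaptation added, the circuit in the study extracts an approximate temporal derivative.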
Spike synchronization is thought to have a constructive role for feature integration, attention, associative learning, and the formation of bidirectionally connected Hebbian cell assemblies. By contrast, theoretical studies on spike-timing-dependent plasticity (STDP) report an inherently decoupling influence of spike synchronization on synaptic connections of coactivated neurons. For example, bidirectional synaptic connections as found in cortical areas could be reproduced only by assuming realistic models of STDP and rate coding. We resolve this conflict by theoretical analysis and simulation of various simple and realistic STDP models that provide a more complete characterization of conditions when STDP leads to either coupling or decoupling of neurons firing in synchrony. In particular, we show that STDP consistently couples synchronized neurons if key model parameters are matched to physiological data: First, synaptic potentiation must be significantly stronger than synaptic depression for small (positive or negative) time lags between presynaptic and postsynaptic spikes. Second, spike synchronization must be sufficiently imprecise, for example, within a time window of 5–10 ms instead of 1 ms. Third, axonal propagation delays should not be much larger than dendritic delays. Under these assumptions synchronized neurons will be strongly coupled leading to a dominance of bidirectional synaptic connections even for simple STDP models and low mean firing rates at the level of spontaneous activity.
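The interplay of the three conditions can be checked directly by averaging an exponential STDP window over the distribution of spike-time lags implied by imprecise synchrony. The window shape and all numbers below are illustrative assumptions:

```python
import numpy as np

def net_drift(A_plus, A_minus, tau, sigma, delay_shift):
    """Expected weight change for a pair firing in (imprecise) synchrony.
    dt = t_post - t_pre is Gaussian with std sigma (ms), shifted by the
    axonal-minus-dendritic delay difference; all values are assumed."""
    dts = np.linspace(-50.0, 50.0, 20001)
    p = np.exp(-(dts - delay_shift)**2 / (2.0 * sigma**2))
    p /= p.sum()
    # Exponential STDP window: potentiation for pre-before-post lags,
    # depression otherwise.
    W = np.where(dts >= 0, A_plus*np.exp(-dts/tau), -A_minus*np.exp(dts/tau))
    return float(np.sum(p * W))

# Imprecise synchrony (sigma = 5 ms) with potentiation dominating near zero
# lag yields net coupling; precise synchrony combined with a large axonal
# delay (lag distribution shifted to post-before-pre) yields net decoupling.
couple = net_drift(A_plus=1.0, A_minus=0.6, tau=20.0, sigma=5.0, delay_shift=0.0)
decouple = net_drift(A_plus=1.0, A_minus=0.6, tau=20.0, sigma=1.0, delay_shift=-3.0)
```

The sign of the averaged drift flips exactly along the lines the analysis above describes: it stays positive when potentiation outweighs depression near zero lag and the jitter is broad, and turns negative once excess axonal delay pushes the lag distribution into the depression side of the window.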
Stress, pervasive in society, contributes to over half of all workplace accidents each year and over time can contribute to a variety of psychiatric disorders including depression, schizophrenia, and post-traumatic stress disorder. Stress impairs higher cognitive processes that depend on the prefrontal cortex (PFC) and involve maintenance and integration of information over extended periods, including working memory and attention. Substantial evidence has demonstrated a relationship between patterns of PFC neuron spiking activity (action-potential discharge) and components of delayed-response tasks used to probe PFC-dependent cognitive function in rats and monkeys. During delay periods of these tasks, persistent spiking activity is posited to be essential for the maintenance of information for working memory and attention. However, the degree to which stress-induced impairment in PFC-dependent cognition involves changes in task-related spiking rates or the ability of PFC neurons to retain information over time remains unknown. In the current study, spiking activity was recorded from the medial PFC of rats performing a delayed-response task of working memory during acute noise stress (93 dB). Spike history-predicted discharge (SHPD) for PFC neurons was quantified as a measure of the degree to which ongoing neuronal discharge can be predicted by past spiking activity and reflects the degree to which past information is retained by these neurons over time. We found that PFC neuron discharge is predicted by their past spiking patterns for nearly one second. Acute stress impaired SHPD, selectively during delay intervals of the task, and simultaneously impaired task performance. Despite the reduction in delay-related SHPD, stress increased delay-related spiking rates. These findings suggest that neural codes utilizing SHPD within PFC networks likely reflect an additional important neurophysiological mechanism for maintenance of past information over time.
Stress-related impairment of this mechanism is posited to contribute to the cognition-impairing actions of stress.
When faced with stressful situations, normal thought processes can be impaired including the ability to focus attention or make decisions requiring deep thought. These effects can result in accidents at the workplace and in combat, jeopardizing the lives of others. To date, the effect of stress on the way neurons communicate and represent cognitive functions is poorly understood. Differing theories have provided opposing predictions regarding the effects of stress-related chemical changes in the brain on neuronal activity of the prefrontal cortex (PFC). In this study, we show that stress increases the discharge rate of PFC neurons during planning and assessment phases of a task requiring the PFC. Additionally, using a point process model of neuronal activity we show that stress, nonetheless, impairs the ability of PFC neurons to retain representations of past events over time. Together these findings suggest that stress-related impairment of cognitive function may involve deficits in the ability of PFC neurons to retain information about past events beyond changes in neuronal firing rates. We believe that this advancement provides new insight into the neural codes of higher cognitive function that may lead to the development of novel treatments for stress-related diseases and conditions involving PFC-dependent cognitive impairment.
Cognitive impairment is common in epilepsy, particularly in memory function. Interictal spikes are thought to disrupt cognition, but it is difficult to delineate their contribution from general impairments in memory produced by etiology and seizures. We investigated the transient impact of focal interictal spikes on the hippocampus, a structure crucial for learning and memory and yet highly prone to interictal spikes in temporal lobe epilepsy.
Bilateral hippocampal depth electrodes were implanted into fourteen Sprague-Dawley rats, followed by intrahippocampal pilocarpine or saline infusion unilaterally. Rats that developed chronic spikes were trained in a hippocampal-dependent operant behavior task, delayed-match-to-sample. Depth EEG was recorded during 5,562 trials among five rats, and within-subject analyses evaluated the impact of hippocampal spikes on short-term memory operations.
Hippocampal spikes that occurred during memory retrieval strongly impaired performance (p<0.001). However, spikes that occurred during memory encoding or memory maintenance did not affect performance in those trials. Hippocampal spikes also affected response latency, adding approximately 0.48 seconds to the time taken to respond (p<0.001).
We found that focal interictal spike-related interference with cognition extends to structures in the limbic system, a finding that required intrahippocampal recordings to detect. Hippocampal spikes appear most harmful when they occur at moments when hippocampal function is critical, extending human studies showing that cortical spikes are most disruptive during active cortical functioning. The cumulative effects of spikes could therefore impact general cognitive functioning. These results strengthen the argument that suppression of interictal spikes may improve memory and cognitive performance in patients with epilepsy.
Cortical neurons receive barrages of excitatory and inhibitory inputs that are not independent, as network structure and synaptic kinetics impose statistical correlations. Experiments in vitro and in vivo have demonstrated correlations between inhibitory and excitatory synaptic inputs to cortical neurons in which inhibition lags behind excitation. This delay arises in feed-forward inhibition (FFI) circuits and ensures that coincident excitation and inhibition do not preclude neuronal firing. Conversely, inhibition that is too delayed broadens neuronal integration times, thereby diminishing spike-time precision and increasing the firing frequency. This led us to hypothesize that the correlation between excitatory and inhibitory synaptic inputs modulates the information encoded in neural spike trains. We tested this hypothesis by investigating the effect of such correlations on the information rate (IR) of spike trains using a Hodgkin-Huxley model in which both synaptic and membrane conductances are stochastic. We investigated two different synaptic input regimes: balanced synaptic conductances and balanced synaptic currents. Our results show that correlations arising from the synaptic kinetics, τ, and millisecond lags, δ, of inhibition relative to excitation strongly affect the IR of spike trains. In the regime of balanced synaptic currents, for short time lags (δ ~ 1 ms) there is an optimal τ that maximizes the IR of the postsynaptic spike train. Given the short time scales for monosynaptic inhibitory lags and synaptic decay kinetics reported in cortical neurons under physiological conditions, we propose that FFI in cortical circuits is poised to maximize the rate of information transfer between cortical neurons. Our results also provide a possible explanation for how certain drugs and genetic mutations affecting synaptic kinetics can degrade information processing in the brain.
stochastic Hodgkin-Huxley model; synaptic kinetics; input correlation; information; feed-forward inhibition
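The key ingredient of the input model above is a pair of conductances driven by shared presynaptic events, with inhibition delayed by δ and filtered with its own decay constant τ. The following is a minimal sketch of that correlation structure alone (not the full stochastic Hodgkin-Huxley simulation or the information-rate calculation); the event rate, time constants, and lag are assumed values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared presynaptic events drive both conductances, as in an FFI motif.
dt, T = 0.1, 20000            # ms per bin, number of bins (2 s total)
tau_e, tau_i = 2.0, 5.0       # synaptic decay time constants (ms, assumed)
delta = 1.0                   # inhibitory lag relative to excitation (ms)
rate = 2.0                    # presynaptic event rate (events/ms, assumed)

events = rng.poisson(rate * dt, T).astype(float)

def filt(x, tau):
    """Exponential synaptic filter: g' = -g/tau + input."""
    g = np.zeros_like(x)
    a = np.exp(-dt / tau)
    for t in range(1, len(x)):
        g[t] = a * g[t - 1] + x[t]
    return g

g_e = filt(events, tau_e)
lag = int(delta / dt)                      # inhibition sees the same events,
g_i = filt(np.roll(events, lag), tau_i)    # delayed by delta (FFI)

# Cross-correlate g_e(t) with g_i(t+k): the peak recovers the E->I lag.
xc = [np.corrcoef(g_e[:T - k or None], g_i[k:])[0, 1] for k in range(50)]
best = int(np.argmax(xc))
print(f"peak E->I lag: {best * dt:.1f} ms")
```

In the study's setting, conductances like these would then feed a stochastic Hodgkin-Huxley neuron, and the information rate of the resulting spike train would be measured while sweeping τ and δ.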
Adults with sensory impairment, such as reduced hearing acuity, have an impaired ability to recall identifiable words, even when their memory is otherwise normal. We hypothesize that poorer stimulus quality causes weaker activity in neurons responsive to the stimulus and more time to elapse between stimulus onset and identification. The weaker activity and increased delay to stimulus identification reduce the necessary strengthening of connections between neurons active before stimulus presentation and neurons active at the time of stimulus identification. We test our hypothesis with a biologically motivated computational model that performs item recognition, memory formation, and memory retrieval. In our simulations, spiking neurons are distributed into pools representing either items or context, in two separate but connected winner-takes-all (WTA) networks. We include associative Hebbian learning by comparing multiple forms of spike-timing-dependent plasticity (STDP) that strengthen synapses between coactive neurons during stimulus identification. Synaptic strengthening by STDP can be sufficient to reactivate neurons during recall if their activity during a prior stimulus rose strongly and rapidly. We find that a single poor-quality stimulus impairs recall of neighboring stimuli as well as of the weak stimulus itself. We demonstrate that within the WTA paradigm of word recognition, reactivation of separate, connected sets of non-word context cells permits reverse recall. Moreover, only with such coactive context cells does slowing the rate of stimulus presentation increase recall probability. We conclude that significant temporal overlap of neural activity patterns, absent from individual WTA networks, is necessary to match behavioral data for word recall.
plasticity; simulations; short-term plasticity; memory; temporal context model; associative learning
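The abstract above compares multiple forms of STDP; the following sketch shows only the standard pair-based additive form as a point of reference, not the study's specific rules. The amplitudes and time constants are assumed textbook-style values.

```python
import numpy as np

# Pair-based additive STDP (a common textbook form): potentiate when the
# presynaptic spike precedes the postsynaptic one, depress otherwise.
A_plus, A_minus = 0.01, 0.012      # learning amplitudes (assumed)
tau_plus, tau_minus = 20.0, 20.0   # time constants in ms (assumed)

def stdp_dw(pre_times, post_times):
    """Total weight change summed over all pre/post spike pairs."""
    dw = 0.0
    for t_pre in pre_times:
        for t_post in post_times:
            dt = t_post - t_pre
            if dt > 0:     # pre before post -> LTP
                dw += A_plus * np.exp(-dt / tau_plus)
            elif dt < 0:   # post before pre -> LTD
                dw -= A_minus * np.exp(dt / tau_minus)
    return dw

# Pre leading post by 5 ms strengthens the synapse; the reverse weakens it.
print(stdp_dw([10.0], [15.0]))   # positive change
print(stdp_dw([15.0], [10.0]))   # negative change
```

Under a rule like this, stimuli that evoke strong, rapid firing generate many tightly timed pre-before-post pairs and thus large potentiation, which is the mechanism the hypothesis invokes for why degraded stimuli form weaker memories.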