1.  Estimating Temporal Causal Interaction between Spike Trains with Permutation and Transfer Entropy 
PLoS ONE  2013;8(8):e70894.
Estimating the causal interaction between neurons is very important for better understanding functional connectivity in neuronal networks. We propose a method called normalized permutation transfer entropy (NPTE) to evaluate the temporal causal interaction between spike trains; it quantifies the fraction of ordinal information in one neuron that is also present in another. The performance of this method is evaluated with spike trains generated by an Izhikevich neuronal model. Results show that the NPTE method can effectively estimate the causal interaction between two neurons regardless of data length. Considering both the precision of the estimated time delay and the robustness of the estimated information flow against neuronal firing rate, the NPTE method is superior to other information-theoretic methods, including normalized transfer entropy, symbolic transfer entropy, and permutation conditional mutual information. To test the performance of NPTE on simulated, biophysically realistic synapses, an Izhikevich cortical network based on the neuronal model is employed. We find that the NPTE method is able to characterize mutual interactions and exactly identify spurious causality in a network of three neurons. We conclude that the proposed method yields more reliable comparisons of interactions between different pairs of neurons and is a promising tool for uncovering further details of neural coding.
PMCID: PMC3733844  PMID: 23940662
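The core quantity behind estimators like NPTE is plain transfer entropy on binned spike trains. A minimal plug-in estimator for binary sequences with history length 1 can be sketched as follows; this is a generic sketch, not the authors' normalized permutation variant, and the history length of 1 is an assumption for illustration:

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in transfer entropy x -> y (in bits) for two binary
    sequences of equal length, using a single time step of history.
    TE(X->Y) = sum p(y1, y0, x0) * log2[ p(y1|y0, x0) / p(y1|y0) ]."""
    n = len(x) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))          # (y_{t+1}, y_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))         # (y_t, x_t)
    singles_y = Counter(y[:-1])                     # y_t
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_y1_given_yx = c / pairs_yx[(y0, x0)]
        p_y1_given_y = pairs_yy[(y1, y0)] / singles_y[y0]
        te += p_joint * log2(p_y1_given_yx / p_y1_given_y)
    return te
```

For a sequence y that simply copies x with a one-step lag, TE(x→y) approaches the entropy rate of x (about 1 bit for a fair binary source), while TE(y→x) stays near zero, which is the asymmetry that causal-interaction estimates exploit.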
2.  Changing the responses of cortical neurons from sub- to suprathreshold using single spikes in vivo 
eLife  2013;2:e00012.
Action potential (AP) patterns of sensory cortex neurons encode a variety of stimulus features, but how can a neuron change the feature to which it responds? Here, we show that in vivo a spike-timing-dependent plasticity (STDP) protocol, consisting of pairing a postsynaptic AP with visually driven presynaptic inputs, modifies a neuron's AP response in a bidirectional way that depends on the relative AP timing during pairing. Whereas postsynaptic APs repeatedly following presynaptic activation can convert subthreshold into suprathreshold responses, APs repeatedly preceding presynaptic activation reduce AP responses to visual stimulation. These changes were paralleled by a restructuring of the neuron's responses to surround stimulus locations and of its membrane-potential time course. Computational simulations could reproduce the observed subthreshold voltage changes only when presynaptic temporal jitter was included. Together, this shows that STDP rules can modify the output patterns of sensory neurons and that the timing of single APs plays a crucial role in sensory coding and plasticity.
eLife digest
Nerve cells, called neurons, are one of the core components of the brain and form complex networks by connecting to other neurons via long, thin ‘wire-like’ processes called axons. Axons can extend across the brain, enabling neurons to form connections—or synapses—with thousands of others. It is through these complex networks that incoming information from sensory organs, such as the eye, is propagated through the brain and encoded.
The basic unit of communication between neurons is the action potential, often called a ‘spike’, which propagates along the network of axons and, through a chemical process at synapses, communicates with the postsynaptic neurons that the axon is connected to. These action potentials excite the neuron that they arrive at, and this excitatory process can generate a new action potential that then propagates along the axon to excite additional target neurons. In the visual areas of the cortex, neurons respond with action potentials when they ‘recognize’ a particular feature in a scene—a process called tuning. How a neuron becomes tuned to certain features in the world and not to others is unclear, as are the rules that enable a neuron to change what it is tuned to. What is clear, however, is that to understand this process is to understand the basis of sensory perception.
Memory storage and formation are thought to occur at synapses. The efficiency of signal transmission between neurons can increase or decrease over time, and this process is often referred to as synaptic plasticity. But for these synaptic changes to be transmitted to target neurons, the changes must alter the number of action potentials. Although it has been shown in vitro that the efficiency of synaptic transmission—that is, the strength of the synapse—can be altered by changing the order in which the pre- and postsynaptic cells are activated (referred to as ‘spike-timing-dependent plasticity’), this has never been shown to have an effect on the number of action potentials generated in a single neuron in vivo. It is therefore unknown whether this process is functionally relevant.
Now Pawlak et al. report that spike-timing-dependent plasticity in the visual cortex of anaesthetized rats can change the spiking of neurons in the visual cortex. They used a visual stimulus (a bar flashed up for half a second) to activate a presynaptic cell, and triggered a single action potential in the postsynaptic cell a very short time later. By repeatedly activating the cells in this way, they increased the strength of the synaptic connection between the two neurons. After a small number of these pairing activations, presenting the visual stimulus alone to the presynaptic cell was enough to trigger an action potential (a suprathreshold response) in the postsynaptic neuron—even though this was not the case prior to the pairing.
This study shows that timing rules known to change the strength of synaptic connections—and proposed to underlie learning and memory—have functional relevance in vivo, and that the timing of single action potentials can change the functional status of a cortical neuron.
PMCID: PMC3552422  PMID: 23359858
synaptic plasticity; STDP; visual cortex; circuits; in vivo; spiking patterns; rat
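Pair-based STDP of the kind used in pairing protocols like the one above is commonly modeled with two exponential lobes: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise. A sketch of the weight update for a single spike pair; the amplitudes and time constant are illustrative defaults, not values measured in the study:

```python
import math

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair under exponential
    pair-based STDP. delta_t = t_post - t_pre in milliseconds."""
    if delta_t >= 0:
        # pre leads post: potentiation, decaying with the interval
        return a_plus * math.exp(-delta_t / tau)
    # post leads pre: depression, decaying with the interval
    return -a_minus * math.exp(delta_t / tau)
```

Repeating a pairing with a fixed small positive delta_t, as in the in vivo protocol described above, accumulates potentiation; reversing the order accumulates depression.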
3.  State-Space Analysis of Time-Varying Higher-Order Spike Correlation for Multiple Neural Spike Train Data 
PLoS Computational Biology  2012;8(3):e1002385.
Precise spike coordination between the spiking activities of multiple neurons is suggested as an indication of coordinated network activity in active cell assemblies. Spike correlation analysis aims to identify such cooperative network activity by detecting excess spike synchrony in simultaneously recorded multiple neural spike sequences. Cooperative activity is expected to organize dynamically during behavior and cognition; therefore, currently available analysis techniques must be extended to enable the estimation of multiple time-varying spike interactions between neurons simultaneously. In particular, new methods must take advantage of the simultaneous observations of multiple neurons by addressing their higher-order dependencies, which cannot be revealed by pairwise analyses alone. In this paper, we develop a method for estimating time-varying spike interactions by means of a state-space analysis. Discretized parallel spike sequences are modeled as multi-variate binary processes using a log-linear model that provides a well-defined measure of higher-order spike correlation in an information geometry framework. We construct a recursive Bayesian filter/smoother for the extraction of spike interaction parameters. This method can simultaneously estimate the dynamic pairwise spike interactions of multiple single neurons, thereby extending the Ising/spin-glass model analysis of multiple neural spike train data to a nonstationary analysis. Furthermore, the method can estimate dynamic higher-order spike interactions. To validate the inclusion of the higher-order terms in the model, we construct an approximation method to assess the goodness-of-fit to spike data. In addition, we formulate a test method for the presence of higher-order spike correlation even in nonstationary spike data, e.g., data from awake behaving animals. The utility of the proposed methods is tested using simulated spike data with known underlying correlation dynamics. Finally, we apply the methods to neural spike data simultaneously recorded from the motor cortex of an awake monkey and demonstrate that the higher-order spike correlation organizes dynamically in relation to a behavioral demand.
Author Summary
Nearly half a century ago, the Canadian psychologist D. O. Hebb postulated the formation of assemblies of tightly connected cells in cortical recurrent networks because of changes in synaptic weight (Hebb's learning rule) by repetitive sensory stimulation of the network. Consequently, the activation of such an assembly for processing sensory or behavioral information is likely to be expressed by precisely coordinated spiking activities of the participating neurons. However, the available analysis techniques for multiple parallel neural spike data do not allow us to reveal the detailed structure of transiently active assemblies as indicated by their dynamical pairwise and higher-order spike correlations. Here, we construct a state-space model of dynamic spike interactions, and present a recursive Bayesian method that makes it possible to trace multiple neurons exhibiting such precisely coordinated spiking activities in a time-varying manner. We also formulate a hypothesis test of the underlying dynamic spike correlation, which enables us to detect the assemblies activated in association with behavioral events. Therefore, the proposed method can serve as a useful tool to test Hebb's cell assembly hypothesis.
PMCID: PMC3297562  PMID: 22412358
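For two binned binary spike trains, the pairwise interaction parameter of the log-linear model, log p(x1, x2) = θ1·x1 + θ2·x2 + θ12·x1·x2 − ψ, reduces to the log odds ratio of the joint spike-word probabilities. The stationary estimate below is a static sketch of the quantity that the paper's state-space filter tracks over time; the add-one smoothing is an assumption to keep short recordings finite:

```python
from collections import Counter
from math import log

def pairwise_interaction(x1, x2):
    """Stationary pairwise log-linear interaction theta_12 between two
    binned binary spike trains: log[ p11 * p00 / (p10 * p01) ].
    Zero for independent trains, large and positive for excess synchrony."""
    counts = Counter(zip(x1, x2))
    n = len(x1)
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    # add-one (Laplace) smoothing so no joint state has zero probability
    p = {s: (counts[s] + 1) / (n + 4) for s in states}
    return log(p[(1, 1)] * p[(0, 0)] / (p[(1, 0)] * p[(0, 1)]))
```

Independent trains give θ12 ≈ 0, perfectly synchronized trains give a large positive θ12; the state-space method in the paper generalizes this to time-varying and higher-order θ terms with a recursive Bayesian filter.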
4.  Delay Selection by Spike-Timing-Dependent Plasticity in Recurrent Networks of Spiking Neurons Receiving Oscillatory Inputs 
PLoS Computational Biology  2013;9(2):e1002897.
Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on the firing activity. A network level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depended on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem.
Author Summary
Our brain's ability to perform cognitive processes, such as object identification, problem solving, and decision making, comes from the specific connections between neurons. The neurons carry information as spikes that are transmitted to other neurons via connections with different strengths and propagation delays. Experimentally observed learning rules can modify the strengths of connections between neurons based on the timing of their spikes. The learning that occurs in neuronal networks due to these rules is thought to be vital to creating the structures necessary for different cognitive processes as well as for memory. The spiking rate of populations of neurons has been observed to oscillate at particular frequencies in various brain regions, and there is evidence that these oscillations play a role in cognition. Here, we use analytical and numerical methods to investigate the changes to the network structure caused by a specific learning rule during oscillatory neural activity. We find the conditions under which connections with propagation delays that resonate with the oscillations are strengthened relative to the other connections. We demonstrate that networks learn to oscillate more strongly to oscillations at the frequency they were presented with during learning. We discuss the possible application of these results to specific areas of the brain.
PMCID: PMC3567188  PMID: 23408878
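The delay-selection effect described above can be illustrated by integrating a pair-based STDP window against the cross-correlation between delayed presynaptic arrivals and sinusoidally modulated postsynaptic firing. The numerical sketch below is illustrative, not the paper's analytical model; the window shape, modulation depth, and parameter values are assumptions:

```python
import math

def expected_drift(delay_ms, freq_hz, a_plus=1.0, a_minus=1.0, tau=20.0):
    """Expected STDP weight drift for a connection with a given axonal
    delay, when pre and post rates oscillate at freq_hz. Numerically
    integrates an antisymmetric exponential STDP window against a
    delay-shifted sinusoidal cross-correlation."""
    omega = 2 * math.pi * freq_hz / 1000.0   # rad per ms
    ds, drift = 0.1, 0.0
    for k in range(-2000, 2001):
        s = k * ds                           # s = t_post - t_arrival (ms)
        if s >= 0:
            w = a_plus * math.exp(-s / tau)
        else:
            w = -a_minus * math.exp(s / tau)
        # correlation of postsynaptic activity with delayed pre arrivals
        c = 1.0 + 0.5 * math.cos(omega * (s + delay_ms))
        drift += w * c * ds
    return drift
```

Under these assumptions the drift varies sinusoidally with the delay, so connections whose delays sit at the right phase of the input oscillation are selectively potentiated while others are depressed, which is the frequency-dependent selection the study analyzes.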
5.  Spectral Analysis of Input Spike Trains by Spike-Timing-Dependent Plasticity 
PLoS Computational Biology  2012;8(7):e1002584.
Spike-timing-dependent plasticity (STDP) has been observed in many brain areas such as sensory cortices, where it is hypothesized to structure synaptic connections between neurons. Previous studies have demonstrated how STDP can capture spiking information at short timescales using specific input configurations, such as coincident spiking, spike patterns and oscillatory spike trains. However, the corresponding computation in the case of arbitrary input signals is still unclear. This paper provides an overarching picture of the algorithm inherent to STDP, tying together many previous results for commonly used models of pairwise STDP. For a single neuron with plastic excitatory synapses, we show how STDP performs a spectral analysis on the temporal cross-correlograms between its afferent spike trains. The postsynaptic responses and STDP learning window determine kernel functions that specify how the neuron “sees” the input correlations. We thus denote this unsupervised learning scheme as ‘kernel spectral component analysis’ (kSCA). In particular, the whole input correlation structure must be considered since all plastic synapses compete with each other. We find that kSCA is enhanced when weight-dependent STDP induces gradual synaptic competition. For a spiking neuron with a “linear” response and pairwise STDP alone, we find that kSCA resembles principal component analysis (PCA). However, plain STDP does not isolate correlation sources in general, e.g., when they are mixed among the input spike trains. In other words, it does not perform independent component analysis (ICA). Tuning the neuron to a single correlation source can be achieved when STDP is paired with a homeostatic mechanism that reinforces the competition between synaptic inputs. Our results suggest that neuronal networks equipped with STDP can process signals encoded in the transient spiking activity at the timescales of tens of milliseconds for usual STDP.
Author Summary
Tuning feature extraction of sensory stimuli is an important function for synaptic plasticity models. A widely studied example is the development of orientation preference in the primary visual cortex, which can emerge using moving bars in the visual field. A crucial point is the decomposition of stimuli into basic information tokens, e.g., selecting individual bars even though they are presented in overlapping pairs (vertical and horizontal). Among classical unsupervised learning models, independent component analysis (ICA) is capable of isolating basic tokens, whereas principal component analysis (PCA) cannot. This paper focuses on spike-timing-dependent plasticity (STDP), whose functional implications for neural information processing have been intensively studied both theoretically and experimentally in the last decade. Following recent studies demonstrating that STDP can perform ICA for specific cases, we show how STDP relates to PCA or ICA, and in particular explains the conditions under which it switches between them. Here information at the neuronal level is assumed to be encoded in temporal cross-correlograms of spike trains. We find that a linear spiking neuron equipped with pairwise STDP requires additional mechanisms, such as a homeostatic regulation of its output firing, in order to separate mixed correlation sources and thus perform ICA.
PMCID: PMC3390410  PMID: 22792056
6.  Dynamic Effective Connectivity of Inter-Areal Brain Circuits 
PLoS Computational Biology  2012;8(3):e1002438.
Anatomic connections between brain areas affect information flow between neuronal circuits and the synchronization of neuronal activity. However, such structural connectivity does not coincide with effective connectivity (or, more precisely, causal connectivity), related to the elusive question “Which areas cause the present activity of which others?”. Effective connectivity is directed and depends flexibly on contexts and tasks. Here we show that dynamic effective connectivity can emerge from transitions in the collective organization of coherent neural activity. Integrating simulation and semi-analytic approaches, we study mesoscale network motifs of interacting cortical areas, modeled as large random networks of spiking neurons or as simple rate units. Through a causal analysis of time-series of model neural activity, we show that different dynamical states generated by the same structural connectivity motif correspond to distinct effective connectivity motifs. Such effective motifs can display a dominant directionality, due to spontaneous symmetry breaking and effective entrainment between local brain rhythms, although all connections in the considered structural motifs are reciprocal. We then show that transitions between effective connectivity configurations (for instance, a reversal in the direction of inter-areal interactions) can be triggered reliably by brief perturbation inputs, properly timed with respect to an ongoing local oscillation, without the need for plastic synaptic changes. Finally, we analyze how the information encoded in spiking patterns of a local neuronal population is propagated across a fixed structural connectivity motif, demonstrating that changes in the active effective connectivity regulate both the efficiency and the directionality of information transfer. Previous studies stressed the role played by coherent oscillations in establishing efficient communication between distant areas. Going beyond these early proposals, we advance here that dynamic interactions between brain rhythms also provide the basis for the self-organized control of this “communication-through-coherence”, thus making possible a fast “on-demand” reconfiguration of global information routing modalities.
Author Summary
The circuits of the brain must perform a daunting amount of functions. But how can “brain states” be flexibly controlled, given that anatomic inter-areal connections can be considered as fixed, on timescales relevant for behavior? We hypothesize that, thanks to the nonlinear interaction between brain rhythms, even a simple circuit involving a few brain areas can give rise to a multitude of effective circuits, associated with alternative functions selectable “on demand”. A distinction is usually made between structural connectivity, which describes actual synaptic connections, and effective connectivity, quantifying, beyond correlation, directed inter-areal causal influences. In our study, we measure effective connectivity based on time-series of neural activity generated by model inter-areal circuits. We find that “causality follows dynamics”. We show indeed that different effective networks correspond to different dynamical states associated with the same structural network (in particular, different phase-locking patterns between local neuronal oscillations). We then find that “information follows causality” (and thus, again, dynamics). We demonstrate that different effective networks give rise to alternative modalities of information routing between brain areas wired together in a fixed structural network. In particular, we show that the self-organization of interacting “analog” rate oscillations controls the flow of “digital-like” information encoded in complex spiking patterns.
PMCID: PMC3310731  PMID: 22457614
7.  Spike-Based Population Coding and Working Memory 
PLoS Computational Biology  2011;7(2):e1001080.
Compelling behavioral evidence suggests that humans can make optimal decisions despite the uncertainty inherent in perceptual or motor tasks. A key question in neuroscience is how populations of spiking neurons can implement such probabilistic computations. In this article, we develop a comprehensive framework for optimal, spike-based sensory integration and working memory in a dynamic environment. We propose that probability distributions are inferred spike-per-spike in recurrently connected networks of integrate-and-fire neurons. As a result, these networks can combine sensory cues optimally, track the state of a time-varying stimulus and memorize accumulated evidence over periods much longer than the time constant of single neurons. Importantly, we propose that population responses and persistent working memory states represent entire probability distributions and not only single stimulus values. These memories are reflected by sustained, asynchronous patterns of activity which make relevant information available to downstream neurons within their short time window of integration. Model neurons act as predictive encoders, only firing spikes which account for new information that has not yet been signaled. Thus, spike times signal deterministically a prediction error, contrary to rate codes in which spike times are considered to be random samples of an underlying firing rate. As a consequence of this coding scheme, a multitude of spike patterns can reliably encode the same information. This results in weakly correlated, Poisson-like spike trains that are sensitive to initial conditions but robust to even high levels of external neural noise. This spike train variability reproduces the one observed in cortical sensory spike trains, but cannot be equated to noise. On the contrary, it is a consequence of optimal spike-based inference. In contrast, we show that rate-based models perform poorly when implemented with stochastically spiking neurons.
Author Summary
Most of our daily actions are subject to uncertainty. Behavioral studies have confirmed that humans handle this uncertainty in a statistically optimal manner. A key question then is what neural mechanisms underlie this optimality, i.e. how can neurons represent and compute with probability distributions. Previous approaches have proposed that probabilities are encoded in the firing rates of neural populations. However, such rate codes appear poorly suited to understand perception in a constantly changing environment. In particular, it is unclear how probabilistic computations could be implemented by biologically plausible spiking neurons. Here, we propose a network of spiking neurons that can optimally combine uncertain information from different sensory modalities and keep this information available for a long time. This implies that neural memories not only represent the most likely value of a stimulus but rather a whole probability distribution over it. Furthermore, our model suggests that each spike conveys new, essential information. Consequently, the observed variability of neural responses cannot simply be understood as noise but rather as a necessary consequence of optimal sensory integration. Our results therefore question strongly held beliefs about the nature of neural “signal” and “noise”.
PMCID: PMC3040643  PMID: 21379319
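The predictive-encoder idea described above, where a neuron fires only to signal information not yet accounted for by its readout, can be reduced to a single-unit toy model: spike when the tracked signal exceeds a decaying estimate by more than a threshold. This is a minimal sketch in the spirit of the framework, not the paper's recurrent network model, and all parameter values are illustrative:

```python
def predictive_encode(signal, kick=1.0, decay=0.95, threshold=0.5):
    """Minimal spike-based predictive coder: the unit maintains a leaky
    estimate of the input and emits a spike only when spiking would
    reduce the prediction error, i.e., when the error exceeds threshold.
    Returns the binary spike train."""
    estimate, spikes = 0.0, []
    for x in signal:
        estimate *= decay              # leaky decay of the readout
        if x - estimate > threshold:   # spike conveys unsignaled information
            estimate += kick
            spikes.append(1)
        else:
            spikes.append(0)
    return spikes
```

Even for a constant input, the resulting spike train is sparse and its exact timing depends sensitively on the running estimate, echoing the paper's point that spike times are deterministic prediction-error signals rather than random samples of a rate.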
8.  Model-Free Reconstruction of Excitatory Neuronal Connectivity from Calcium Imaging Signals 
PLoS Computational Biology  2012;8(8):e1002653.
A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. We focus in this study on the inference of excitatory synaptic links. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus by conditioning with respect to the global mean activity, we improve the performance of our method. This allows us to focus the analysis on specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections, rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies such as cross-correlation or Granger Causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimation of the excitatory network clustering coefficient, allowing for discrimination between weakly and strongly clustered topologies. Finally, we demonstrate the applicability of our method to analyses of real recordings of in vitro disinhibited cortical cultures, where we suggest that excitatory connections are characterized by an elevated (although not extreme) level of clustering compared to a random graph and can be markedly non-local.
Author Summary
Unraveling the general organizing principles of connectivity in neural circuits is a crucial step towards understanding brain function. However, even the simpler task of assessing the global excitatory connectivity of a culture in vitro, where neurons form self-organized networks in absence of external stimuli, remains challenging. Neuronal cultures undergo spontaneous switching between episodes of synchronous bursting and quieter inter-burst periods. We introduce here a novel algorithm which aims at inferring the connectivity of neuronal cultures from calcium fluorescence recordings of their network dynamics. To achieve this goal, we develop a suitable generalization of Transfer Entropy, an information-theoretic measure of causal influences between time series. Unlike previous algorithmic approaches to reconstruction, Transfer Entropy is data-driven and does not rely on specific assumptions about neuronal firing statistics or network topology. We generate simulated calcium signals from networks with controlled ground-truth topology and purely excitatory interactions and show that, by restricting the analysis to inter-burst periods, Transfer Entropy robustly achieves a good reconstruction performance for disparate network connectivities. Finally, we apply our method to real data and find evidence of non-random features in cultured networks, such as the existence of highly connected hub excitatory neurons and of an elevated (but not extreme) level of clustering.
PMCID: PMC3426566  PMID: 22927808
9.  Excitatory, Inhibitory, and Structural Plasticity Produce Correlated Connectivity in Random Networks Trained to Solve Paired-Stimulus Tasks 
The pattern of connections among cortical excitatory cells with overlapping arbors is non-random. In particular, correlations among connections produce clustering – cells in cliques connect to each other with high probability, but with lower probability to cells in other spatially intertwined cliques. In this study, we model initially randomly connected sparse recurrent networks of spiking neurons with random, overlapping inputs, to investigate what functional and structural synaptic plasticity mechanisms sculpt network connections into the patterns measured in vitro. Our Hebbian implementation of structural plasticity causes a removal of connections between uncorrelated excitatory cells, followed by their random replacement. To model a biconditional discrimination task, we stimulate the network via pairs (A + B, C + D, A + D, and C + B) of four inputs (A, B, C, and D). We find that networks that produce neurons most responsive to specific paired inputs – a building block of computation and an essential function of cortex – contain the excessive clustering of excitatory synaptic connections observed in cortical slices. The same networks produce the best performance in a behavioral readout of the networks’ ability to complete the task. A plasticity mechanism operating on inhibitory connections, long-term potentiation of inhibition, when combined with structural plasticity, indirectly enhances clustering of excitatory cells via excitatory connections. A rate-dependent (triplet) form of spike-timing-dependent plasticity (STDP) between excitatory cells is less effective, and basic STDP is detrimental. Clustering also arises in networks stimulated with single stimuli and in networks undergoing raised levels of spontaneous activity when structural plasticity is combined with functional plasticity. In conclusion, spatially intertwined clusters or cliques of connected excitatory cells can arise via a Hebbian form of structural plasticity operating in initially randomly connected networks.
PMCID: PMC3170885  PMID: 21991253
structural plasticity; connectivity; Hebbian learning; network; simulation; correlations; STDP; inhibitory plasticity
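The Hebbian structural-plasticity rule described above, removal of connections between uncorrelated cells followed by random replacement, can be sketched as a single update step on a sparse connection map. The data layout, threshold, and initial weight for new connections are illustrative assumptions, not the paper's parameters:

```python
import random

def structural_step(weights, correlations, prune_thresh=0.05, n_neurons=100):
    """One step of prune-and-replace structural plasticity.
    weights: dict mapping (pre, post) -> synaptic strength.
    correlations: dict mapping (pre, post) -> firing correlation.
    Connections between weakly correlated cells are removed and replaced,
    one-for-one, by new random connections of small initial weight."""
    pruned = [pair for pair in weights
              if correlations.get(pair, 0.0) < prune_thresh]
    for pair in pruned:
        del weights[pair]
        # draw a fresh random connection not already in the network
        while True:
            new = (random.randrange(n_neurons), random.randrange(n_neurons))
            if new[0] != new[1] and new not in weights:
                weights[new] = 0.01
                break
    return weights
```

Iterating this step alongside a functional rule (such as inhibitory potentiation) lets correlated cliques retain their connections while uncorrelated links are continually recycled, which is the mechanism the study credits for the emergence of clustering.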
10.  Complex Events Initiated by Individual Spikes in the Human Cerebral Cortex  
PLoS Biology  2008;6(9):e222.
Synaptic interactions between neurons of the human cerebral cortex have not been directly studied to date. We recorded the first dataset, to our knowledge, on the synaptic effect of identified human pyramidal cells on various types of postsynaptic neurons and reveal complex events triggered by individual action potentials in the human neocortical network. Brain slices were prepared from nonpathological samples of cortex that had to be removed for the surgical treatment of brain areas beneath association cortices of 58 patients aged 18 to 73 y. Simultaneous triple and quadruple whole-cell patch clamp recordings were performed testing mono- and polysynaptic potentials in target neurons following a single action potential fired by layer 2/3 pyramidal cells, and the temporal structure of events and underlying mechanisms were analyzed. In addition to monosynaptic postsynaptic potentials, individual action potentials in presynaptic pyramidal cells initiated long-lasting (37 ± 17 ms) sequences of events in the network lasting an order of magnitude longer than detected previously in other species. These event series were composed of specifically alternating glutamatergic and GABAergic postsynaptic potentials and required selective spike-to-spike coupling from pyramidal cells to GABAergic interneurons producing concomitant inhibitory as well as excitatory feed-forward action of GABA. Single action potentials of human neurons are sufficient to recruit Hebbian-like neuronal assemblies that are proposed to participate in cognitive processes.
Author Summary
We recorded the first connections, to our knowledge, between human nerve cells and reveal that a subset of interactions is so strong that some presynaptic cells are capable of eliciting action potentials in the postsynaptic target neurons. Interestingly, these strong connections selectively link pyramidal cells using the neurotransmitter glutamate to neurons releasing gamma aminobutyric acid (GABA). Moreover, the GABAergic neurons receiving the strong connections include different types: basket cells, which inhibit several target cell populations, and another type called the chandelier cells, which can be excitatory and target pyramidal cells only. Thus, the activation originating from a single pyramidal cell propagates to synchronously working inhibitory and excitatory GABAergic neurons. Inhibition then arrives to various neuron classes, but excitation finds only pyramidal cells, which in turn, can propagate excitation even further in the network of neurons. This chain of events revealed here leads to network activation approximately an order of magnitude longer than detected previously in response to a single action potential in a single neuron. Individual-neuron–activated groups of neurons resemble the so-called functional assemblies that were proposed as building blocks of higher order cognitive representations.
A novel study on connections between human neurons reveals that single spikes in pyramidal cells can activate synchronously timed assemblies through strong connections linking pyramidal cells with inhibitory and excitatory GABAergic neurons.
PMCID: PMC2528052  PMID: 18767905
11.  Network Self-Organization Explains the Statistics and Dynamics of Synaptic Connection Strengths in Cortex 
PLoS Computational Biology  2013;9(1):e1002848.
The information processing abilities of neural circuits arise from their synaptic connection patterns. Understanding the laws governing these connectivity patterns is essential for understanding brain function. The overall distribution of synaptic strengths of local excitatory connections in cortex and hippocampus is long-tailed, exhibiting a small number of synaptic connections of very large efficacy. At the same time, new synaptic connections are constantly being created and individual synaptic connection strengths show substantial fluctuations across time. It remains unclear through what mechanisms these properties of neural circuits arise and how they contribute to learning and memory. In this study we show that fundamental characteristics of excitatory synaptic connections in cortex and hippocampus can be explained as a consequence of self-organization in a recurrent network combining spike-timing-dependent plasticity (STDP), structural plasticity and different forms of homeostatic plasticity. In the network, associative synaptic plasticity in the form of STDP induces a rich-get-richer dynamics among synapses, while homeostatic mechanisms induce competition. Under distinctly different initial conditions, the ensuing self-organization produces long-tailed synaptic strength distributions matching experimental findings. We show that this self-organization can take place with a purely additive STDP mechanism and that multiplicative weight dynamics emerge as a consequence of network interactions. The observed patterns of fluctuation of synaptic strengths, including elimination and generation of synaptic connections and long-term persistence of strong connections, are consistent with the dynamics of dendritic spines found in rat hippocampus. Beyond this, the model predicts an approximately power-law scaling of the lifetimes of newly established synaptic connections during development.
Our results suggest that the combined action of multiple forms of neuronal plasticity plays an essential role in the formation and maintenance of cortical circuits.
Author Summary
The computations that brain circuits can perform depend on their wiring. While a wiring diagram is still out of reach for major brain structures such as the neocortex and hippocampus, data on the overall distribution of synaptic connection strengths and the temporal fluctuations of individual synapses have recently become available. Specifically, there exists a small population of very strong and stable synaptic connections, which may form the physiological substrate of life-long memories. This population coexists with a large and ever-changing population of much smaller and strongly fluctuating synaptic connections. So far it has remained unclear how these properties of networks in neocortex and hippocampus arise. Here we present a computational model that explains these fundamental properties of neural circuits as a consequence of network self-organization resulting from the combined action of different forms of neuronal plasticity. This self-organization is driven by a rich-get-richer effect induced by an associative synaptic learning mechanism which is kept in check by several homeostatic plasticity mechanisms stabilizing the network. The model highlights the role of self-organization in the formation of brain circuits and parsimoniously explains a range of recent findings about their fundamental properties.
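The self-organization described above can be caricatured, without any claim to match the authors' network model, as a Kesten-like process: additive random kicks stand in for additive STDP, re-seeding of pruned weights stands in for structural plasticity, and divisive normalization stands in for homeostatic scaling (and is what makes the effective dynamics multiplicative). All parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_syn = 1000                 # number of excitatory synapses (illustrative)
w = np.full(n_syn, 0.1)      # initial weights
target_sum = w.sum()         # homeostatic target for total synaptic drive
w_min = 1e-4                 # weight of a newly re-grown synapse

for step in range(5000):
    # Additive STDP stand-in: small random potentiation/depression,
    # independent of the current weight.
    w += 0.002 * rng.standard_normal(n_syn)

    # Structural plasticity stand-in: synapses driven to zero or below
    # are pruned and replaced by a new, weak connection.
    w[w <= 0.0] = w_min

    # Homeostatic scaling stand-in: divisive normalization keeps the
    # summed weight constant and acts multiplicatively on each synapse.
    w *= target_sum / w.sum()

# Additive kicks combined with multiplicative scaling yield a
# right-skewed (long-tailed) stationary weight distribution.
print(w.sum(), w.mean(), np.median(w))
```

The point of the sketch is only that the long tail needs no explicitly multiplicative learning rule; the multiplicative character emerges from the normalization step, echoing the abstract's claim about purely additive STDP.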
PMCID: PMC3536614  PMID: 23300431
12.  Membrane Properties and the Balance between Excitation and Inhibition Control Gamma-Frequency Oscillations Arising from Feedback Inhibition 
PLoS Computational Biology  2012;8(1):e1002354.
Computational studies as well as in vivo and in vitro results have shown that many cortical neurons fire in a highly irregular manner and at low average firing rates. These patterns seem to persist even when highly rhythmic signals are recorded by local field potential electrodes or other methods that quantify the summed behavior of a local population. Models of the 30–80 Hz gamma rhythm in which network oscillations arise through ‘stochastic synchrony’ capture the variability observed in the spike output of single cells while preserving network-level organization. We build on these results by constructing model networks constrained by experimental measurements and using them to probe the effect of biophysical parameters on network-level activity. We find in simulations that gamma-frequency oscillations are enabled by a high level of incoherent synaptic conductance input, similar to the barrage of noisy synaptic input that cortical neurons have been shown to receive in vivo. This incoherent synaptic input increases the emergent network frequency by shortening the time scale of the membrane in excitatory neurons and by reducing the temporal separation between excitation and inhibition due to decreased spike latency in inhibitory neurons. These mechanisms are demonstrated in simulations and in vitro current-clamp and dynamic-clamp experiments. Simulation results further indicate that the membrane potential noise amplitude has a large impact on network frequency and that the balance between excitatory and inhibitory currents controls network stability and sensitivity to external inputs.
Author Summary
The gamma rhythm is a prominent, 30–80-Hz EEG signal that is associated with cognition. Several classes of computational models have been posited to explain the gamma rhythm mechanistically. We study a particular class in which the gamma rhythm arises from delayed negative feedback. Our study is unique in that we calibrate the model from direct measurements. We also test the model's most critical predictions directly in experiments that take advantage of cutting-edge computer technologies able to simulate ion channels in real time. Our major findings are that a large amount of “background” synaptic input to neurons is necessary to promote the gamma rhythm; that inhibitory neurons are specially tuned to keep the gamma rhythm stable; that noise has a strong effect on network frequency; and that incoming sensory input can be represented with sensitivity that depends on the strength of excitatory-excitatory synapses and the number of neurons receiving the input. Overall, our results support the hypothesis that the gamma rhythm reflects the presence of delayed feedback that controls overall cortical activity on a cycle-by-cycle basis. Furthermore, its frequency range mainly reflects the timescale of synaptic inhibition, the degree of background activity, and noise levels in the network.
PMCID: PMC3261914  PMID: 22275859
13.  Phase-Coherence Transitions and Communication in the Gamma Range between Delay-Coupled Neuronal Populations 
PLoS Computational Biology  2014;10(7):e1003723.
Synchronization between neuronal populations plays an important role in information transmission between brain areas. In particular, collective oscillations emerging from the synchronized activity of thousands of neurons can increase the functional connectivity between neural assemblies by coherently coordinating their phases. This synchrony of neuronal activity can take place within a cortical patch or between different cortical regions. While short-range interactions between neurons involve just a few milliseconds, communication through long-range projections between different regions could take up to tens of milliseconds. How these heterogeneous transmission delays affect communication between neuronal populations is not well known. To address this question, we have studied the dynamics of two bidirectionally delayed-coupled neuronal populations using conductance-based spiking models, examining how different synaptic delays give rise to in-phase/anti-phase transitions at particular frequencies within the gamma range, and how this behavior is related to the phase coherence between the two populations at different frequencies. We have used spectral analysis and information theory to quantify the information exchanged between the two networks. For different transmission delays between the two coupled populations, we analyze how the local field potential and multi-unit activity calculated from one population convey information in response to a set of external inputs applied to the other population. The results confirm that zero-lag synchronization maximizes information transmission, although out-of-phase synchronization allows for efficient communication provided the coupling delay, the phase lag between the populations, and the frequency of the oscillations are properly matched.
Author Summary
The correct operation of the brain requires a carefully orchestrated activity, which includes the establishment of synchronized behavior among multiple neuronal populations. Synchronization of collective neuronal oscillations, in particular, has been suggested to mediate communication between brain areas, with the global oscillations acting as “information carriers” on which signals encoding specific stimuli or brain states are superimposed. But neuronal signals travel at finite speeds across the brain, thus leading to a wide range of delays in the coupling between neuronal populations. How the brain reaches the required level of coordination in the presence of such delays is still unclear. Here we approach this question in the case of two delay-coupled neuronal populations exhibiting collective oscillations in the gamma range. Our results show that effective communication can be reached even in the presence of relatively large delays between the populations, which self-organize in either in-phase or anti-phase synchronized states. In those states the transmission delays, phase difference, and oscillation frequency match to allow for communication at a wide range of coupling delays between brain areas.
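The in-phase/anti-phase transitions discussed above can be reproduced in a far simpler setting than the conductance-based spiking networks the study actually uses: two delay-coupled phase oscillators at a gamma-band frequency. This is a stand-in model, not the authors' simulation; the frequency, coupling strength, and delays below are illustrative assumptions. A delay that is short relative to the 25 ms period of a 40 Hz rhythm yields in-phase locking, while a delay near half the period yields anti-phase locking.

```python
import numpy as np

def delayed_phase_sim(tau, f=40.0, K=50.0, T=2.0, dt=1e-4):
    """Two delay-coupled Kuramoto-type phase oscillators at frequency f
    (Hz) with coupling delay tau (s). Euler integration of the delay
    differential equations; returns the final phase lag in [-pi, pi)."""
    omega = 2.0 * np.pi * f
    d = int(round(tau / dt))              # delay in time steps
    n = int(round(T / dt))
    th1 = np.empty(n + d + 1)
    th2 = np.empty(n + d + 1)
    # History: free-running oscillators with a 1-rad initial offset.
    t_hist = np.arange(d + 1) * dt
    th1[:d + 1] = omega * t_hist
    th2[:d + 1] = omega * t_hist + 1.0
    for k in range(d, n + d):
        th1[k + 1] = th1[k] + dt * (omega + K * np.sin(th2[k - d] - th1[k]))
        th2[k + 1] = th2[k] + dt * (omega + K * np.sin(th1[k - d] - th2[k]))
    return (th2[-1] - th1[-1] + np.pi) % (2.0 * np.pi) - np.pi

short = delayed_phase_sim(tau=1e-3)      # 1 ms << 25 ms gamma period
half = delayed_phase_sim(tau=12.5e-3)    # half of the 40 Hz period
print(short, half)
```

In this reduced picture, in-phase locking is stable when the phase advance accumulated over the delay keeps the effective coupling attractive, and anti-phase locking takes over near half-period delays, matching the delay/phase-lag/frequency matching condition stated in the abstract.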
PMCID: PMC4110076  PMID: 25058021
14.  LTS and FS Inhibitory Interneurons, Short-Term Synaptic Plasticity, and Cortical Circuit Dynamics 
PLoS Computational Biology  2011;7(10):e1002248.
Somatostatin-expressing, low threshold-spiking (LTS) cells and fast-spiking (FS) cells are two common subtypes of inhibitory neocortical interneuron. Excitatory synapses from regular-spiking (RS) pyramidal neurons to LTS cells strongly facilitate when activated repetitively, whereas RS-to-FS synapses depress. This suggests that LTS neurons may be especially relevant at high rate regimes and protect cortical circuits against over-excitation and seizures. However, the inhibitory synapses from LTS cells usually depress, which may reduce their effectiveness at high rates. We ask: by which mechanisms and at what firing rates do LTS neurons control the activity of cortical circuits responding to thalamic input, and how is control by LTS neurons different from that of FS neurons? We study rate models of circuits that include RS cells and LTS and FS inhibitory cells with short-term synaptic plasticity. LTS neurons shift the RS firing-rate vs. current curve to the right at high rates and reduce its slope at low rates; the LTS effect is delayed and prolonged. FS neurons always shift the curve to the right and affect RS firing transiently. In an RS-LTS-FS network, FS neurons reach a quiescent state if they receive weak input, LTS neurons are quiescent if RS neurons receive weak input, and both FS and RS populations are active if they both receive large inputs. In general, FS neurons tend to follow the spiking of RS neurons much more closely than LTS neurons. A novel type of facilitation-induced slow oscillation is observed above the LTS firing threshold, with a frequency determined by the time scale of recovery from facilitation. To conclude, contrary to earlier proposals, LTS neurons affect the transient and steady state responses of cortical circuits over a range of firing rates, not only during the high rate regime; LTS neurons protect against over-activation about as well as FS neurons.
Author Summary
The brain consists of circuits of neurons that signal to one another via synapses. There are two classes of neurons: excitatory cells, which cause other neurons to become more active, and inhibitory neurons, which cause other neurons to become less active. It is thought that the activity of excitatory neurons is kept in check largely by inhibitory neurons; when such an inhibitory “brake” fails, a seizure can result. Inhibitory neurons of the low-threshold spiking (LTS) subtype can potentially fulfill this braking, or anticonvulsant, role because the synaptic input to these neurons facilitates, i.e., those neurons are active when excitatory neurons are strongly active. Using a computational model we show that, because the synaptic output of LTS neurons onto excitatory neurons depresses (decreases with activity), the ability of LTS neurons to prevent strong cortical activity and seizures is not qualitatively larger than that of inhibitory neurons of another subtype, the fast-spiking (FS) cells. Furthermore, short-term (∼one second) changes in the strength of synapses to and from LTS interneurons allow them to shape the behavior of cortical circuits even at modest rates of activity, and an RS-LTS-FS circuit is capable of producing slow oscillations, on the time scale of these short-term changes.
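The facilitation/depression distinction at the heart of this circuit can be illustrated with an event-based Tsodyks-Markram-style short-term plasticity model. This is a generic sketch, not the rate-model equations of the paper itself, and the parameter values for the "facilitating RS-to-LTS-like" and "depressing RS-to-FS-like" synapses are assumptions chosen only to show the two regimes.

```python
import numpy as np

def psp_train(U, tau_f, tau_d, rate=20.0, n_spikes=10):
    """Relative PSP amplitudes for a regular presynaptic spike train.
    U: utilization increment per spike; tau_f: facilitation decay (s);
    tau_d: recovery from depression (s); rate: presynaptic rate (Hz)."""
    dt = 1.0 / rate              # inter-spike interval
    u, R = 0.0, 1.0              # utilization and available resources
    amps = []
    for _ in range(n_spikes):
        u += U * (1.0 - u)       # facilitation step at each spike
        amps.append(u * R)       # PSP amplitude ~ fraction released
        R -= u * R               # resources consumed by release
        # Between spikes: u relaxes back to 0, R recovers toward 1.
        u *= np.exp(-dt / tau_f)
        R = 1.0 - (1.0 - R) * np.exp(-dt / tau_d)
    return np.array(amps)

# Illustrative parameters: a facilitating (RS->LTS-like) synapse vs a
# depressing (RS->FS-like) synapse, both driven at 20 Hz.
fac = psp_train(U=0.05, tau_f=0.5, tau_d=0.1)
dep = psp_train(U=0.5, tau_f=0.02, tau_d=0.5)
print(fac[0], fac[-1], dep[0], dep[-1])
```

The facilitating train grows across spikes (LTS cells are recruited only when excitatory activity is sustained), while the depressing train shrinks (FS cells respond most strongly at stimulus onset), which is exactly the asymmetry the abstract exploits.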
PMCID: PMC3203067  PMID: 22046121
15.  Desynchronization of Neocortical Networks by Asynchronous Release of GABA at Autaptic and Synaptic Contacts from Fast-Spiking Interneurons 
PLoS Biology  2010;8(9):e1000492.
An activity-dependent long-lasting asynchronous release of GABA from identified fast-spiking inhibitory neurons in the neocortex can impair the reliability and temporal precision of activity in a cortical network.
Networks of specific inhibitory interneurons regulate principal cell firing in several forms of neocortical activity. Fast-spiking (FS) interneurons are potently self-inhibited by GABAergic autaptic transmission, allowing them to precisely control their own firing dynamics and timing. Here we show that in FS interneurons, high-frequency trains of action potentials can generate a delayed and prolonged GABAergic self-inhibition due to sustained asynchronous release at FS-cell autapses. Asynchronous release of GABA is simultaneously recorded in connected pyramidal (P) neurons. Asynchronous and synchronous autaptic release show differential presynaptic Ca2+ sensitivity, suggesting that they rely on different Ca2+ sensors and/or involve distinct pools of vesicles. In addition, asynchronous release is modulated by the endogenous Ca2+ buffer parvalbumin. Functionally, asynchronous release decreases FS-cell spike reliability and reduces the ability of P neurons to integrate incoming stimuli into precise firing. Since each FS cell contacts many P neurons, asynchronous release from a single interneuron may desynchronize a large portion of the local network and disrupt cortical information processing.
Author Summary
In the cerebral cortex (neocortex) of the brain, fast-spiking (FS) inhibitory cells contact many principal pyramidal (P) neurons on their cell bodies, which allows the FS cells to control the generation of action potentials (neuronal output). FS-cell-mediated rhythmic and synchronous inhibition drives coherent network oscillations of large ensembles of P neurons, indicating that FS interneurons are needed for the precise timing of cortical circuits. Interestingly, FS cells are self-innervated by GABAergic autaptic contacts, whose synchronous activation regulates FS-cell precise firing. Here we report that high-frequency firing in FS interneurons results in a massive (>10-fold), delayed, and prolonged (for seconds) increase in inhibitory events, occurring at both autaptic (FS–FS) and synaptic (FS–P) sites. This increased inhibition is due to asynchronous release of GABA from presynaptic FS cells. Delayed and disorganized asynchronous inhibitory responses significantly affected the input–output properties of both FS and P neurons, suggesting that asynchronous release of GABA might promote network desynchronization. FS interneurons can fire at high frequency (>100 Hz) in vitro and in vivo, and are known for their reliable and precise signaling. Our results show an unprecedented action of these cells, by which their tight temporal control of cortical circuits can be broken when they are driven to fire above certain frequencies.
PMCID: PMC2946936  PMID: 20927409
16.  Feedforward and Feedback Inhibition in Neostriatal GABAergic Spiny Neurons 
Brain research reviews  2007;58(2):272-281.
There are two distinct inhibitory GABAergic circuits in the neostriatum. The feedforward circuit consists of a relatively small population of GABAergic interneurons that receives excitatory input from the neocortex and exerts monosynaptic inhibition onto striatal spiny projection neurons. The feedback circuit comprises the numerous spiny projection neurons and their interconnections via local axon collaterals. This network has long been assumed to provide the majority of striatal GABAergic inhibition and to sharpen and shape striatal output through lateral inhibition, producing increased activity in the most strongly excited spiny cells at the expense of their less strongly excited neighbors.
Recent results, mostly from paired recordings of synaptically connected neurons, have revealed that the two GABAergic circuits differ markedly in terms of the total number of synapses made by each, the strength of the postsynaptic response detected at the soma, the extent of presynaptic convergence and divergence, and the net effect of the activation of each circuit on the postsynaptic activity of the spiny neuron. These data show that the feedforward inhibition is powerful and widespread, with spiking in a single interneuron being capable of significantly delaying or even blocking the generation of spikes in a large number of postsynaptic spiny neurons. In contrast, the postsynaptic effects of spiking in a single presynaptic spiny neuron on postsynaptic spiny neurons are weak when measured at the soma, and unable to significantly affect spike timing or generation. Further, reciprocity of synaptic connections between spiny neurons is only rarely observed.
These results suggest that the bulk of the fast inhibition that has the strongest effects on spiny neuron spike timing comes from the feedforward interneuronal system whereas the axon collateral feedback system acts principally at the dendrites to control local excitability as well as the overall level of activity of the spiny neuron.
PMCID: PMC2562631  PMID: 18054796
17.  Successful Reconstruction of a Physiological Circuit with Known Connectivity from Spiking Activity Alone 
PLoS Computational Biology  2013;9(7):e1003138.
Identifying the structure and dynamics of synaptic interactions between neurons is the first step to understanding neural network dynamics. The presence of synaptic connections is traditionally inferred through the use of targeted stimulation and paired recordings or by post-hoc histology. More recently, causal network inference algorithms have been proposed to deduce connectivity directly from electrophysiological signals, such as extracellularly recorded spiking activity. However, these algorithms have usually not been validated on a neurophysiological data set for which the actual circuitry is known. Recent work has shown that traditional network inference algorithms based on linear models typically fail to identify the correct coupling of a small central pattern generating circuit in the stomatogastric ganglion of the crab Cancer borealis. In this work, we show that point process models of observed spike trains can guide inference of relative connectivity estimates that match the known physiological connectivity of the central pattern generator up to a choice of threshold. We elucidate the necessary steps to derive faithful connectivity estimates from a model that incorporates the spike train nature of the data. We then apply the model to measure changes in the effective connectivity pattern in response to two pharmacological interventions, which affect both intrinsic neural dynamics and synaptic transmission. Our results provide the first successful application of a network inference algorithm to a circuit for which the actual physiological synapses between neurons are known. The point process methodology presented here generalizes well to larger networks and can describe the statistics of neural populations. In general, we show that advanced statistical models allow for the characterization of effective network structure, deciphering underlying network dynamics and estimating information-processing capabilities.
Author Summary
To appreciate how neural circuits control behaviors, we must understand two things. First, how the neurons comprising the circuit are connected, and second, how neurons and their connections change after learning or in response to neuromodulators. Neuronal connectivity is difficult to determine experimentally, whereas neuronal activity can often be readily measured. We describe a statistical model to estimate circuit connectivity directly from measured activity patterns. We use the timing relationships between observed spikes to predict synaptic interactions between simultaneously observed neurons. The model estimate provides each predicted connection with a curve that represents how strongly, and at which temporal delays, one circuit element effectively influences another. These curves are analogous to synaptic interactions at the level of the membrane potential of biological neurons and share some of their features such as being inhibitory or excitatory. We test our method on recordings from the pyloric circuit in the crab stomatogastric ganglion, a small circuit whose connectivity is completely known beforehand, and find that the predicted circuit matches the biological one — a result other techniques failed to achieve. In addition, we show that drug manipulations impacting the circuit are revealed by this technique. These results illustrate the utility of our analysis approach for inferring connections from neural spiking activity.
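The core idea of point-process network inference can be sketched with a much smaller model than the one in the paper: a Bernoulli (logistic) GLM in which the spike probability of one neuron is regressed on lagged spikes of another, fit by Newton/IRLS. The synthetic data, the 1 ms binning, the five tested lags, and the ground-truth 3-bin coupling are all illustrative assumptions; the recovered lag weights play the role of the effective-coupling curves described above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ground truth: neuron B is excited by neuron A at a 3-bin lag.
T, true_lag = 20000, 3                     # 1 ms bins (assumed)
a = (rng.random(T) < 0.05).astype(float)
p_b = np.where(np.roll(a, true_lag) == 1.0, 0.5, 0.02)
p_b[:true_lag] = 0.02                      # remove wrap-around influence
b = (rng.random(T) < p_b).astype(float)

# Design matrix: bias term plus lagged spikes of A (lags 1..5 bins).
lags = np.arange(1, 6)
X = np.column_stack([np.ones(T)] + [np.roll(a, L) for L in lags])
for j, L in enumerate(lags, start=1):
    X[:L, j] = 0.0                         # zero out invalid wrapped bins

# Fit the Bernoulli GLM by iteratively reweighted least squares (Newton).
w = np.zeros(X.shape[1])
for _ in range(15):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad = X.T @ (b - p)
    hess = (X * (p * (1.0 - p))[:, None]).T @ X + 1e-6 * np.eye(len(w))
    w += np.linalg.solve(hess, grad)

coupling = w[1:]                           # one weight per tested lag
print(coupling)
```

With enough data the weight at the true lag dominates and is strongly positive (excitatory), while the other lags stay near zero; thresholding such weights is the analogue of the "up to a choice of threshold" step in the abstract.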
PMCID: PMC3708849  PMID: 23874181
18.  Hidden synaptic differences in a neural circuit underlie differential behavioral susceptibility to a neural injury 
eLife  2014;3:e02598.
Individuals vary in their responses to stroke and trauma, hampering predictions of outcomes. One reason might be that neural circuits contain hidden variability that becomes relevant only when those individuals are challenged by injury. We found that in the mollusc, Tritonia diomedea, subtle differences between animals within the neural circuit underlying swimming behavior had no behavioral relevance under normal conditions but caused differential vulnerability of the behavior to a particular brain lesion. The extent of motor impairment correlated with the site of spike initiation in a specific neuron in the neural circuit, which was determined by the strength of an inhibitory synapse onto this neuron. Artificially increasing or decreasing this inhibitory synaptic conductance with dynamic clamp correspondingly altered the extent of motor impairment by the lesion without affecting normal operation. The results suggest that neural circuit differences could serve as hidden phenotypes for predicting the behavioral outcome of neural damage.
eLife digest
The outcome of a traumatic brain injury or a stroke can vary considerably from person to person, making it difficult to provide a reliable prognosis for any individual person. If clinicians were able to predict outcomes with better accuracy, patients would benefit from more tailored treatments. However, the sheer complexity of the mammalian brain has hindered attempts to explain why similar damage to the brain can have such different effects on different individuals.
Now Sakurai et al. have used a mollusc model to show that the extensive variation between individuals could be caused by hidden differences in their neural networks. Crucially, this natural variation has no effect on normal behavior; it only becomes obvious when the brain is injured. The experiments were performed on a type of sea slug called Tritonia diomedea.
When these sea slugs encounter a predator they respond by swimming away, rhythmically flexing their whole body. This repetitive motion is driven by a specific neural network in which two neurons—called a cerebral 2 (C2) neuron and a ventral swim interneuron—play important roles. Both of these neurons are quite long and they run alongside each other in the brain, with the ventral swim interneuron being activated by signals sent from the C2 neuron at multiple ‘synaptic connections’ between the two.
Sakurai et al. showed that the strength of the connections between the C2 neuron and the ventral swim interneuron varied substantially between animals. However, despite this variation, the sea slugs still performed the same number of whole-body flexions as they swam.
Sakurai et al. then made a lesion to the brain, which removed about half of the connections between the C2 neuron and the ventral swim interneuron. This meant that the response of the sea slugs to predators depended on the strength of the remaining connections between the two neurons. Sakurai et al. found that the responses of some sea slugs were only mildly impaired, whereas others were severely impaired. This showed that although variations in the strength of the individual connections had no effect on swimming behavior of normal sea slugs, the same variations had a substantial effect when the brain was damaged. Moreover, by creating computer-generated synapses between the C2 neuron and the ventral swim interneuron, Sakurai et al. were able to change the level of impairment.
These findings suggest that the variability in human responses to brain injury could be due to hidden differences at the neuronal level. In everyday life, these differences are unimportant and individuals are able to function in similar ways in spite of subtle differences in their neuronal configurations. However, when the brain is damaged, the differences become more important. This suggests that certain configurations within neuronal networks are more resistant to brain damage than others.
PMCID: PMC4084405  PMID: 24920390
Tritonia diomedea; individual variability; synapse; neural injury; central pattern generator; dynamic clamp; other
19.  Spontaneous Local Gamma Oscillation Selectively Enhances Neural Network Responsiveness 
PLoS Computational Biology  2009;5(3):e1000342.
Synchronized oscillations are commonly observed in many neuronal systems and might play an important role in the response properties of the system. We have studied how the spontaneous oscillatory activity affects the responsiveness of a neuronal network, using a neural network model of the visual cortex built from Hodgkin-Huxley type excitatory (E-) and inhibitory (I-) neurons. When the isotropic local E-I and I-E synaptic connections were sufficiently strong, the network commonly generated gamma frequency oscillatory firing patterns in response to random feed-forward (FF) input spikes. This spontaneous oscillatory network activity injects a periodic local current that could amplify a weak synaptic input and enhance the network's responsiveness. When E-E connections were added, we found that the strength of oscillation can be modulated by varying the FF input strength without any changes in single neuron properties or interneuron connectivity. The response modulation is proportional to the oscillation strength, which leads to self-regulation such that the cortical network selectively amplifies various FF inputs according to their strength, without requiring any adaptation mechanism. We show that this selective cortical amplification is controlled by E-E cell interactions. We also found that this response amplification is spatially localized, which suggests that the responsiveness modulation may also be spatially selective. These results point to a generalized mechanism by which neural oscillatory activity can enhance the selectivity of a neural network to FF inputs.
Author Summary
In the nervous system, information is delivered and processed digitally via voltage spikes transmitted between cells. A neural system is characterized by its input/output spike signal patterns. Generally, a network of neurons shows a very different response pattern than that of a single neuron. In some cases, a neural network generates interesting population activities, such as synchronized oscillations, which are thought to modulate the response properties of the network. However, the exact role of these neural oscillations is unknown. We investigated the relationship between the oscillatory activity and the response modulation in neural networks using computational simulation modeling. We found that the response of the system is significantly modified by the oscillations in the network. In particular, the responsiveness to weak inputs is remarkably enhanced. This suggests that the oscillation can differentially amplify sensory information depending on the input signal conditions. We conclude that a neural network can dynamically modify its response properties by the selective amplification of sensory signals due to oscillation activity, which may explain some experimental observations and help us to better understand neural systems.
PMCID: PMC2659453  PMID: 19343222
20.  Interneurons and oligodendrocyte progenitors form a structured synaptic network in the developing neocortex 
eLife  null;4:e06953.
NG2 cells, oligodendrocyte progenitors, receive a major synaptic input from interneurons in the developing neocortex. It is presumed that these precursors integrate into cortical networks where they act as sensors of neuronal activity. We show that NG2 cells of the developing somatosensory cortex form a transient and structured synaptic network with interneurons that follows its own rules of connectivity. Fast-spiking interneurons, highly connected to NG2 cells, target proximal subcellular domains containing GABAA receptors with γ2 subunits. Conversely, non-fast-spiking interneurons, poorly connected with these progenitors, target distal sites lacking this subunit. In the network, interneuron-NG2 cell connectivity maps exhibit a local spatial arrangement reflecting innervation only by the nearest interneurons. This microcircuit architecture shows a connectivity peak at PN10, coinciding with a switch to massive oligodendrocyte differentiation. Hence, GABAergic innervation of NG2 cells is temporally and spatially regulated from the subcellular to the network level in coordination with the onset of oligodendrogenesis.
eLife digest
Neurons are outnumbered in the brain by cells called glial cells. The brain contains various types of glial cells that perform a range of different jobs, including the supply of nutrients and the removal of dead neurons. The role of glial cells called oligodendrocytes is to produce a material called myelin: this is an electrical insulator that, when wrapped around a neuron, increases the speed at which electrical impulses can travel through the nervous system.
Neurons communicate with one another through specialized junctions called synapses, and at one time it was thought that only neurons could form synapses in the brain. However, this view had to be revised when researchers discovered synapses between neurons and glial cells called NG2 cells, which go on to become oligodendrocytes. These neuron-NG2 cell synapses have a lot in common with neuron–neuron synapses, but much less is known about them.
Orduz, Maldonado et al. have now examined these synapses in unprecedented detail by analyzing individual synapses between a type of neuron called an interneuron and an NG2 cell in mice aged only a few weeks. Interneurons can be divided into two major classes based on how quickly they fire, and Orduz, Maldonado et al. show that both types of interneuron form synapses with NG2 cells. However, these two types of interneuron establish synapses on different parts of the NG2 cell, and these synapses involve different receptor proteins.
Together, the synapses give rise to a local interneuron-NG2 cell network that reaches a peak of activity roughly two weeks after birth, after which the network is disassembled. This period of peak activity is accompanied by a sudden increase in the maturation of NG2 cells into oligodendrocytes. Further experiments are needed to test the possibility that activity in the interneuron-NG2 cell network acts as the trigger for the NG2 cells to turn into oligodendrocytes, which then supply myelin for the developing brain.
PMCID: PMC4432226  PMID: 25902404
interneurons; NG2 cells; synapses; GABAergic transmission; cortical development; paired-recordings; mouse
21.  Impact of Adaptation Currents on Synchronization of Coupled Exponential Integrate-and-Fire Neurons 
PLoS Computational Biology  2012;8(4):e1002478.
The ability of spiking neurons to synchronize their activity in a network depends on the response behavior of these neurons, as quantified by the phase response curve (PRC), and on coupling properties. The PRC characterizes the effects of transient inputs on spike timing and can be measured experimentally. Here we use the adaptive exponential integrate-and-fire (aEIF) neuron model to determine how subthreshold and spike-triggered slow adaptation currents shape the PRC. Based on that, we predict how synchrony and phase-locked states of coupled neurons change in the presence of synaptic delays and unequal coupling strengths. We find that increased subthreshold adaptation currents cause a transition of the PRC from only phase advances to phase advances and delays in response to excitatory perturbations. Increased spike-triggered adaptation currents, on the other hand, predominantly skew the PRC to the right. Both adaptation-induced changes of the PRC are modulated by spike frequency, being more prominent at lower frequencies. Applying phase reduction theory, we show that subthreshold adaptation stabilizes synchrony for pairs of coupled excitatory neurons, while spike-triggered adaptation causes locking with a small phase difference, as long as synaptic heterogeneities are negligible. For inhibitory pairs, synchrony is stable and robust against conduction delays, and adaptation can mediate bistability of in-phase and anti-phase locking. We further demonstrate that stable synchrony and bistable in/anti-phase locking of pairs carry over to synchronization and clustering of larger networks. The effects of adaptation in aEIF neurons on PRCs and network dynamics qualitatively reflect those of biophysical adaptation currents in detailed Hodgkin-Huxley-based neurons, which underscores the utility of the aEIF model for investigating the dynamical behavior of networks.
Our results suggest neuronal spike frequency adaptation as a mechanism synchronizing low frequency oscillations in local excitatory networks, but indicate that inhibition rather than excitation generates coherent rhythms at higher frequencies.
Author Summary
Synchronization of neuronal spiking in the brain is related to cognitive functions, such as perception, attention, and memory. It is therefore important to determine which properties of neurons influence their collective behavior in a network and to understand how. A prominent feature of many cortical neurons is spike frequency adaptation, which is caused by slow transmembrane currents. We investigated how these adaptation currents affect the synchronization tendency of coupled model neurons. Using the efficient adaptive exponential integrate-and-fire (aEIF) model and a biophysically detailed neuron model for validation, we found that increased adaptation currents promote synchronization of coupled excitatory neurons at lower spike frequencies, as long as the conduction delays between the neurons are negligible. Inhibitory neurons, on the other hand, synchronize in the presence of conduction delays, with or without adaptation currents. Our results emphasize the utility of the aEIF model for computational studies of neuronal network dynamics. We conclude that adaptation currents provide a mechanism to generate low-frequency oscillations in local populations of excitatory neurons, while faster rhythms seem to be caused by inhibition rather than excitation.
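The aEIF dynamics referenced above can be stated compactly. The following is a minimal forward-Euler sketch, using generic textbook-style parameter values rather than those of the paper, to show how the subthreshold (a) and spike-triggered (b) adaptation currents enter the model:

```python
# Minimal sketch of the adaptive exponential integrate-and-fire (aEIF)
# neuron. Parameter values are illustrative, not those of the paper.
import numpy as np

def simulate_aeif(I=500.0, a=2.0, b=100.0, t_max=1000.0, dt=0.01):
    """Return spike times (ms) for a constant input current I (pA).
    a: subthreshold adaptation (nS); b: spike-triggered adaptation (pA)."""
    C, gL, EL, VT, DT = 200.0, 10.0, -70.0, -50.0, 2.0  # pF, nS, mV, mV, mV
    tau_w, Vr, Vcut = 100.0, -58.0, 0.0                 # ms, mV, mV
    V, w, spikes = EL, 0.0, []
    for step in range(int(t_max / dt)):
        dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= Vcut:            # spike: reset V, increment adaptation
            spikes.append(step * dt)
            V, w = Vr, w + b
    return spikes

# Stronger spike-triggered adaptation lowers the steady firing rate:
print(len(simulate_aeif(b=0.0)), len(simulate_aeif(b=200.0)))
```

Increasing b strengthens the after-spike adaptation current w and slows repetitive firing, which is the frequency dependence the abstract describes.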
PMCID: PMC3325187  PMID: 22511861
22.  Spike propagation synchronized by temporally asymmetric Hebbian learning 
Biological cybernetics  2002;87(5-6):440-445.
Synchronously spiking neurons have been observed in the cerebral cortex and the hippocampus. In computer models, synchronous spike volleys may be propagated across appropriately connected neuron populations. However, it is unclear how the appropriate synaptic connectivity is set up during development and maintained during adult learning. We performed computer simulations to investigate the influence of temporally asymmetric Hebbian synaptic plasticity on the propagation of spike volleys. In addition to feedforward connections, recurrent connections were included between and within neuron populations, and spike transmission delays varied owing to axonal, synaptic, and dendritic transmission. We found that repeated presentations of input volleys decreased the synaptic conductances of intragroup and feedback connections, while synaptic conductances of feedforward connections with short delays became stronger than those of connections with longer delays. These adaptations led to the synchronization of spike volleys as they propagated across neuron populations. These findings suggest that temporally asymmetric Hebbian learning may enhance synchronized spiking within small populations of neurons in cortical and hippocampal areas, and that familiar stimuli may produce synchronized spike volleys that are rapidly propagated across neural tissue.
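The temporally asymmetric Hebbian rule underlying these simulations is commonly written as an exponential STDP window. A sketch, with assumed amplitudes and time constants rather than the paper's exact values:

```python
# Asymmetric exponential STDP window (illustrative parameters).
import numpy as np

def stdp_dw(dt_ms, A_plus=0.01, A_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair.
    dt_ms = t_post - t_pre."""
    if dt_ms >= 0:                                  # pre before post: potentiation
        return A_plus * np.exp(-dt_ms / tau_plus)
    return -A_minus * np.exp(dt_ms / tau_minus)     # post before pre: depression

# Pre leading post potentiates, the reverse order depresses, and
# shorter pre-to-post delays are rewarded more than longer ones:
print(stdp_dw(5.0) > 0, stdp_dw(-5.0) < 0, stdp_dw(2.0) > stdp_dw(10.0))
```

The last comparison illustrates the abstract's finding that feedforward connections with short delays outcompete those with longer delays under repeated volley presentations.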
PMCID: PMC2944018  PMID: 12461633
23.  Integrated Mechanisms of Anticipation and Rate-of-Change Computations in Cortical Circuits 
PLoS Computational Biology  2007;3(5):e82.
Local neocortical circuits are characterized by stereotypical physiological and structural features that subserve generic computational operations. These basic computations of the cortical microcircuit emerge through the interplay of neuronal connectivity, cellular intrinsic properties, and synaptic plasticity dynamics. How these interacting mechanisms generate specific computational operations in the cortical circuit remains largely unknown. Here, we identify the neurophysiological basis of both the rate of change and anticipation computations on synaptic inputs in a cortical circuit. Through biophysically realistic computer simulations and neuronal recordings, we show that the rate-of-change computation is operated robustly in cortical networks through the combination of two ubiquitous brain mechanisms: short-term synaptic depression and spike-frequency adaptation. We then show how this rate-of-change circuit can be embedded in a convergently connected network to anticipate temporally incoming synaptic inputs, in quantitative agreement with experimental findings on anticipatory responses to moving stimuli in the primary visual cortex. Given the robustness of the mechanism and the widespread nature of the physiological machinery involved, we suggest that rate-of-change computation and temporal anticipation are principal, hard-wired functions of neural information processing in the cortical microcircuit.
Author Summary
The cerebral cortex is the region of the brain whose intricate connectivity and physiology is thought to subserve most computations required for effective action in mammals. Through biophysically realistic computer simulation and experimental recordings in brain tissue, the authors show how a specific combination of physiological mechanisms often found in neurons of the cortex transforms an input signal into another signal that represents the rate of change of the slower components of the input. This is the first report of a neurobiological implementation of an approximate mathematical derivative in the cortex. Further, such a signal integrates naturally into a neurobiologically simple network that is able to generate a linear prediction of its inputs. Anticipation of information is a primary concern of spatially extended organisms, which are subject to neural delays, and it has been demonstrated at various levels: from the retina to sensorimotor integration. We present here a simple and general mechanism for anticipation that can operate incrementally within local circuits of the cortex, to compensate for time-consuming computations and conduction delays and thus contribute to effective real-time action.
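The short-term depression half of the proposed rate-of-change mechanism can be illustrated with a rate-based, Tsodyks-Markram-style depressing synapse; parameters here are illustrative, not those of the paper:

```python
# Rate-based short-term synaptic depression (illustrative parameters).
import numpy as np

def depressing_synapse(rate_hz, dt=1.0, U=0.5, tau_rec=300.0):
    """Drive U*x*r from a depressing synapse for an input rate trace (Hz).
    The depleting resource x adapts the output to the recent rate, so the
    synapse signals *changes* in rate more than its absolute level."""
    x, out = 1.0, []
    for r in rate_hz:
        out.append(U * x * r)
        # resource recovery vs. use (r converted from Hz to spikes/ms)
        x += dt * ((1.0 - x) / tau_rec - U * x * r / 1000.0)
    return np.array(out)

# Step from 5 Hz to 20 Hz: the response overshoots at the step, then
# adapts back down, reporting the change rather than the new level.
rate = np.concatenate([np.full(1000, 5.0), np.full(1000, 20.0)])
drive = depressing_synapse(rate)
print(drive[1000], drive[-1])   # transient peak vs. adapted steady state
```

Combined with spike-frequency adaptation in the postsynaptic neuron, this transient-emphasizing transfer is what the abstract proposes as the substrate of the rate-of-change computation.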
PMCID: PMC1866356  PMID: 17500584
24.  Does Spike-Timing-Dependent Synaptic Plasticity Couple or Decouple Neurons Firing in Synchrony? 
Spike synchronization is thought to have a constructive role for feature integration, attention, associative learning, and the formation of bidirectionally connected Hebbian cell assemblies. By contrast, theoretical studies on spike-timing-dependent plasticity (STDP) report an inherently decoupling influence of spike synchronization on synaptic connections of coactivated neurons. For example, bidirectional synaptic connections as found in cortical areas could be reproduced only by assuming realistic models of STDP and rate coding. We resolve this conflict by theoretical analysis and simulation of various simple and realistic STDP models that provide a more complete characterization of conditions when STDP leads to either coupling or decoupling of neurons firing in synchrony. In particular, we show that STDP consistently couples synchronized neurons if key model parameters are matched to physiological data: First, synaptic potentiation must be significantly stronger than synaptic depression for small (positive or negative) time lags between presynaptic and postsynaptic spikes. Second, spike synchronization must be sufficiently imprecise, for example, within a time window of 5–10 ms instead of 1 ms. Third, axonal propagation delays should not be much larger than dendritic delays. Under these assumptions synchronized neurons will be strongly coupled leading to a dominance of bidirectional synaptic connections even for simple STDP models and low mean firing rates at the level of spontaneous activity.
PMCID: PMC3424530  PMID: 22936909
Hebbian cell assemblies; learning; memory; spike synchronization; STDP; synaptic connectivity; synaptic plasticity
25.  Modelling the Effects of Electrical Coupling between Unmyelinated Axons of Brainstem Neurons Controlling Rhythmic Activity 
PLoS Computational Biology  2015;11(5):e1004240.
Gap junctions between fine unmyelinated axons can electrically couple groups of brain neurons to synchronise firing and contribute to rhythmic activity. To explore the distribution and significance of electrical coupling, we modelled a well analysed, small population of brainstem neurons which drive swimming in young frog tadpoles. A passive network of 30 multicompartmental neurons with unmyelinated axons was used to infer that: axon-axon gap junctions close to the soma gave the best match to experimentally measured coupling coefficients; axon diameter had a strong influence on coupling; most neurons were coupled indirectly via the axons of other neurons. When active channels were added, gap junctions could make action potential propagation along the thin axons unreliable. Increased sodium and decreased potassium channel densities in the initial axon segment improved action potential propagation. Modelling suggested that the single-spike firing in response to step current injection observed in whole-cell recordings is not a cellular property but a dynamic consequence of shunting resulting from electrical coupling. Without electrical coupling, firing of the population during depolarising current was unsynchronised; with coupling, the population showed synchronous recruitment and rhythmic firing. When activated instead by increasing levels of modelled sensory pathway input, the population without electrical coupling was recruited incrementally to unpatterned activity. However, when coupled, the population was recruited all-or-none at threshold into a rhythmic swimming pattern: the tadpole “decided” to swim. Modelling emphasises uncertainties about fine unmyelinated axon physiology but, when informed by biological data, makes general predictions about gap junctions: locations close to the soma; relatively small numbers; many indirect connections between neurons; cause of action potential propagation failure in fine axons; misleading alteration of intrinsic firing properties.
Modelling also indicates that electrical coupling within a population can synchronize recruitment of neurons and their pacemaker firing during rhythmic activity.
Author Summary
Some groups of nerve cells in the brain are connected to each other electrically where their processes make contact and form specialized “gap” junctions. The simplest function of electrical connections is to make activity propagate faster by avoiding the delays resulting from chemical messengers at synaptic connections. In other cases, especially in higher brain regions where more spread out nerve cells may be connected by their axons, the function of electrical coupling is less clear. To understand this type of electrical connection better, we have built computer models of a group of electrically coupled nerve cells in the brain which control swimming in very young frog tadpoles. We show that the coupling can be indirect, via other members of the group, and can profoundly influence the properties of the nerve cells which would be recorded during real experiments. The main role of the coupling is to synchronise the firing of the group so that they are all recruited together when the tadpole is stimulated and then fire in a rhythm suitable to drive swimming movements. The results from this simple animal raise issues which will help to understand coupling in more complex brains.
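The coupling coefficients the model was matched to have a simple closed form in the steady state for two isopotential cells joined by a single gap junction. This is a generic two-cell relation, not the paper's multicompartmental model, and the conductance values below are hypothetical:

```python
# Steady-state coupling coefficient for two isopotential cells joined
# by a gap junction. Conductance values are hypothetical examples.

def coupling_coefficient(g_c, g_leak):
    """V2/V1 when current is injected into cell 1: the follower cell's
    voltage divider between the junction (g_c) and its leak (g_leak)
    gives CC = g_c / (g_c + g_leak)."""
    return g_c / (g_c + g_leak)

# A 1 nS junction onto a cell with 10 nS leak gives the weak (~9%)
# coupling typical of contacts through fine axons.
print(round(coupling_coefficient(1.0, 10.0), 3))   # 0.091
```

The formula makes the abstract's point about axon diameter concrete: anything that changes the conductance path to the junction shifts the measured coupling coefficient, and chains of such dividers make indirect coupling through intermediate axons progressively weaker.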
PMCID: PMC4425518  PMID: 25954930
