1.  Estimating Temporal Causal Interaction between Spike Trains with Permutation and Transfer Entropy 
PLoS ONE  2013;8(8):e70894.
Estimating the causal interaction between neurons is very important for better understanding the functional connectivity in neuronal networks. We propose a method called normalized permutation transfer entropy (NPTE) to evaluate the temporal causal interaction between spike trains, which quantifies the fraction of ordinal information in one neuron that is also present in another. The performance of this method is evaluated on spike trains generated by an Izhikevich neuronal model. Results show that the NPTE method can effectively estimate the causal interaction between two neurons without being influenced by data length. Considering both the precision of the estimated time delay and the robustness of the estimated information flow against neuronal firing rate, the NPTE method is superior to other information-theoretic methods, including normalized transfer entropy, symbolic transfer entropy and permutation conditional mutual information. To test the performance of NPTE on simulated, biophysically realistic synapses, an Izhikevich cortical network based on the same neuronal model is employed. We find that the NPTE method is able to characterize mutual interactions and exactly identify spurious causality in a network of three neurons. We conclude that the proposed method yields more reliable comparisons of interactions between different pairs of neurons and is a promising tool for uncovering further details of neural coding.
doi:10.1371/journal.pone.0070894
PMCID: PMC3733844  PMID: 23940662
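The backbone of the NPTE approach is plain transfer entropy between binned spike trains; the paper's additions (ordinal/permutation patterns and the normalization) sit on top of it. Below is a minimal, self-contained sketch of that backbone with one-bin histories; it is an illustration, not the authors' code, and the surrogate trains and parameters are invented for the example.

```python
# Plain transfer entropy between two binary spike trains (1-bin histories).
# A minimal sketch only; NPTE additionally uses ordinal patterns and a
# normalization that this illustration omits.
import numpy as np

def transfer_entropy(x, y):
    """TE from x to y in bits, binary 0/1 trains, one-bin histories."""
    counts = np.zeros((2, 2, 2))          # joint counts of (y_next, y_past, x_past)
    for yn, yp, xp in zip(y[1:], y[:-1], x[:-1]):
        counts[yn, yp, xp] += 1
    p = counts / counts.sum()
    te = 0.0
    for yn in (0, 1):
        for yp in (0, 1):
            for xp in (0, 1):
                pj = p[yn, yp, xp]
                if pj > 0:
                    p_full = pj / p[:, yp, xp].sum()            # p(yn | yp, xp)
                    p_past = p[yn, yp, :].sum() / p[:, yp, :].sum()  # p(yn | yp)
                    te += pj * np.log2(p_full / p_past)
    return te

rng = np.random.default_rng(0)
x = (rng.random(10000) < 0.05).astype(int)
y = np.roll(x, 1)                          # y repeats x one bin later
y[rng.random(10000) < 0.02] ^= 1           # plus a little noise
print(transfer_entropy(x, y), transfer_entropy(y, x))  # forward >> reverse
```

Scanning this quantity over candidate lags between the two trains is the usual way to estimate an interaction delay, which is the aspect NPTE is designed to sharpen.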
2.  State-Space Analysis of Time-Varying Higher-Order Spike Correlation for Multiple Neural Spike Train Data 
PLoS Computational Biology  2012;8(3):e1002385.
Precise spike coordination between the spiking activities of multiple neurons is suggested as an indication of coordinated network activity in active cell assemblies. Spike correlation analysis aims to identify such cooperative network activity by detecting excess spike synchrony in simultaneously recorded multiple neural spike sequences. Cooperative activity is expected to organize dynamically during behavior and cognition; therefore currently available analysis techniques must be extended to enable the estimation of multiple time-varying spike interactions between neurons simultaneously. In particular, new methods must take advantage of the simultaneous observations of multiple neurons by addressing their higher-order dependencies, which cannot be revealed by pairwise analyses alone. In this paper, we develop a method for estimating time-varying spike interactions by means of a state-space analysis. Discretized parallel spike sequences are modeled as multi-variate binary processes using a log-linear model that provides a well-defined measure of higher-order spike correlation in an information geometry framework. We construct a recursive Bayesian filter/smoother for the extraction of spike interaction parameters. This method can simultaneously estimate the dynamic pairwise spike interactions of multiple single neurons, thereby extending the Ising/spin-glass model analysis of multiple neural spike train data to a nonstationary analysis. Furthermore, the method can estimate dynamic higher-order spike interactions. To validate the inclusion of the higher-order terms in the model, we construct an approximation method to assess the goodness-of-fit to spike data. In addition, we formulate a test method for the presence of higher-order spike correlation even in nonstationary spike data, e.g., data from awake behaving animals. The utility of the proposed methods is tested using simulated spike data with known underlying correlation dynamics. Finally, we apply the methods to neural spike data simultaneously recorded from the motor cortex of an awake monkey and demonstrate that the higher-order spike correlation organizes dynamically in relation to a behavioral demand.
Author Summary
More than half a century ago, the Canadian psychologist D. O. Hebb postulated the formation of assemblies of tightly connected cells in cortical recurrent networks through changes in synaptic weight (Hebb's learning rule) caused by repetitive sensory stimulation of the network. Consequently, the activation of such an assembly for processing sensory or behavioral information is likely to be expressed by precisely coordinated spiking activities of the participating neurons. However, the available analysis techniques for multiple parallel neural spike data do not allow us to reveal the detailed structure of transiently active assemblies as indicated by their dynamical pairwise and higher-order spike correlations. Here, we construct a state-space model of dynamic spike interactions, and present a recursive Bayesian method that makes it possible to trace multiple neurons exhibiting such precisely coordinated spiking activities in a time-varying manner. We also formulate a hypothesis test of the underlying dynamic spike correlation, which enables us to detect the assemblies activated in association with behavioral events. Therefore, the proposed method can serve as a useful tool to test Hebb's cell assembly hypothesis.
doi:10.1371/journal.pcbi.1002385
PMCID: PMC3297562  PMID: 22412358
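The log-linear measure of higher-order correlation used above has a compact closed form in the static three-neuron case: the triplet interaction parameter vanishes exactly when the joint statistics are explained by pairwise terms. The sketch below estimates it from binned spike "words"; it is a toy illustration of the measure itself, not the paper's state-space filter/smoother, and the surrogate data and regularization constant are arbitrary choices.

```python
# Static third-order interaction parameter of a log-linear model for three
# binary neurons, estimated from binned spike words. theta_123 == 0 iff the
# triplet statistics reduce to pairwise terms.
import numpy as np
from itertools import product

def theta_123(words, eps=0.5):
    """words: (T, 3) binary array of joint spike patterns per time bin."""
    T, p = len(words), {}
    for pattern in product((0, 1), repeat=3):
        n = np.sum(np.all(words == pattern, axis=1))
        p[pattern] = (n + eps) / (T + 8 * eps)     # regularize empty cells
    num = p[1, 1, 1] * p[1, 0, 0] * p[0, 1, 0] * p[0, 0, 1]
    den = p[1, 1, 0] * p[1, 0, 1] * p[0, 1, 1] * p[0, 0, 0]
    return np.log(num / den)                       # inclusion-exclusion form

rng = np.random.default_rng(1)
indep = (rng.random((20000, 3)) < 0.1).astype(int)
print(theta_123(indep))        # near 0 for independent neurons

sync = indep.copy()
sync[rng.random(20000) < 0.02] = 1   # injected triple-spike events
print(theta_123(sync))         # clearly > 0: genuine higher-order correlation
```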
3.  Changing the responses of cortical neurons from sub- to suprathreshold using single spikes in vivo 
eLife  2013;2:e00012.
Action potential (AP) patterns of sensory cortex neurons encode a variety of stimulus features, but how can a neuron change the feature to which it responds? Here, we show that in vivo a spike-timing-dependent plasticity (STDP) protocol—consisting of pairing a postsynaptic AP with visually driven presynaptic inputs—modifies a neuron's AP response in a bidirectional way that depends on the relative AP timing during pairing. Whereas postsynaptic APs repeatedly following presynaptic activation can convert subthreshold into suprathreshold responses, APs repeatedly preceding presynaptic activation reduce AP responses to visual stimulation. These changes were paralleled by restructuring of the neuron's responses to surround stimulus locations and of the membrane-potential time course. Computational simulations could reproduce the observed subthreshold voltage changes only when presynaptic temporal jitter was included. Together, this shows that STDP rules can modify the output patterns of sensory neurons and that the timing of single APs plays a crucial role in sensory coding and plasticity.
DOI: http://dx.doi.org/10.7554/eLife.00012.001
eLife digest
Nerve cells, called neurons, are one of the core components of the brain and form complex networks by connecting to other neurons via long, thin ‘wire-like’ processes called axons. Axons can extend across the brain, enabling neurons to form connections—or synapses—with thousands of others. It is through these complex networks that incoming information from sensory organs, such as the eye, is propagated through the brain and encoded.
The basic unit of communication between neurons is the action potential, often called a ‘spike’, which propagates along the network of axons and, through a chemical process at synapses, communicates with the postsynaptic neurons that the axon is connected to. These action potentials excite the neuron that they arrive at, and this excitatory process can generate a new action potential that then propagates along the axon to excite additional target neurons. In the visual areas of the cortex, neurons respond with action potentials when they ‘recognize’ a particular feature in a scene—a process called tuning. How a neuron becomes tuned to certain features in the world and not to others is unclear, as are the rules that enable a neuron to change what it is tuned to. What is clear, however, is that to understand this process is to understand the basis of sensory perception.
Memory formation and storage are thought to occur at synapses. The efficiency of signal transmission between neurons can increase or decrease over time, and this process is often referred to as synaptic plasticity. But for these synaptic changes to be transmitted to target neurons, the changes must alter the number of action potentials. Although it has been shown in vitro that the efficiency of synaptic transmission—that is, the strength of the synapse—can be altered by changing the order in which the pre- and postsynaptic cells are activated (referred to as ‘spike-timing-dependent plasticity’), this has never been shown to have an effect on the number of action potentials generated in a single neuron in vivo. It is therefore unknown whether this process is functionally relevant.
Now Pawlak et al. report that spike-timing-dependent plasticity in the visual cortex of anaesthetized rats can change the spiking of neurons in the visual cortex. They used a visual stimulus (a bar flashed up for half a second) to activate a presynaptic cell, and triggered a single action potential in the postsynaptic cell a very short time later. By repeatedly activating the cells in this way, they increased the strength of the synaptic connection between the two neurons. After a small number of these pairing activations, presenting the visual stimulus alone to the presynaptic cell was enough to trigger an action potential (a suprathreshold response) in the postsynaptic neuron—even though this was not the case prior to the pairing.
This study shows that timing rules known to change the strength of synaptic connections—and proposed to underlie learning and memory—have functional relevance in vivo, and that the timing of single action potentials can change the functional status of a cortical neuron.
DOI: http://dx.doi.org/10.7554/eLife.00012.002
doi:10.7554/eLife.00012
PMCID: PMC3552422  PMID: 23359858
synaptic plasticity; STDP; visual cortex; circuits; in vivo; spiking patterns; rat
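For readers unfamiliar with the pairing protocol, the canonical pairwise STDP rule referenced in this entry maps the pre/post spike-time difference to a weight change through two exponential lobes: pre-before-post potentiates, post-before-pre depresses. A minimal sketch follows; the amplitudes and time constants are generic textbook values, not parameters fitted to this study.

```python
# Canonical pairwise STDP window. Repeated pre-before-post pairings (dt > 0)
# push the weight up; reversed timing pushes it down. Illustrative values only.
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pairing; dt = t_post - t_pre in ms."""
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau),    # pre leads post: LTP
                    -a_minus * np.exp(dt / tau))   # post leads pre: LTD

w = 0.5
for _ in range(60):            # repeated pairing, post 10 ms after pre
    w += stdp_dw(10.0)
print(w)                       # pairing with dt = -10.0 would instead depress
```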
4.  Delay Selection by Spike-Timing-Dependent Plasticity in Recurrent Networks of Spiking Neurons Receiving Oscillatory Inputs 
PLoS Computational Biology  2013;9(2):e1002897.
Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on their firing activity. A network-level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depends on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing-fundamental pitch perception in the auditory brainstem.
Author Summary
Our brain's ability to perform cognitive processes, such as object identification, problem solving, and decision making, comes from the specific connections between neurons. The neurons carry information as spikes that are transmitted to other neurons via connections with different strengths and propagation delays. Experimentally observed learning rules can modify the strengths of connections between neurons based on the timing of their spikes. The learning that occurs in neuronal networks due to these rules is thought to be vital to creating the structures necessary for different cognitive processes as well as for memory. The spiking rate of populations of neurons has been observed to oscillate at particular frequencies in various brain regions, and there is evidence that these oscillations play a role in cognition. Here, we use analytical and numerical methods to investigate the changes to the network structure caused by a specific learning rule during oscillatory neural activity. We find the conditions under which connections with propagation delays that resonate with the oscillations are strengthened relative to the other connections. We demonstrate that networks learn to oscillate more strongly to oscillations at the frequency they were presented with during learning. We discuss the possible application of these results to specific areas of the brain.
doi:10.1371/journal.pcbi.1002897
PMCID: PMC3567188  PMID: 23408878
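The core calculation in this kind of analysis can be caricatured in a few lines: under oscillatory drive the pre/post cross-correlogram is approximately periodic, and the expected additive-STDP drift of a connection with axonal delay d is the learning window integrated against the correlogram shifted by d. The sketch below evaluates that integral numerically; the window parameters, frequency, and sign conventions are illustrative assumptions, not the paper's derivation.

```python
# Expected STDP drift vs. axonal delay under an oscillatory correlogram.
# Delays whose drift is positive are the ones selectively potentiated at
# this input frequency; changing f moves the potentiated delay band.
import numpy as np

f = 25.0                                      # oscillation frequency, Hz
s = np.linspace(-100.0, 100.0, 4001)          # lag axis, ms
ds = s[1] - s[0]
W = np.where(s >= 0, 0.01 * np.exp(-s / 20.0),
             -0.012 * np.exp(s / 20.0))       # additive STDP window
for d in (1.0, 10.0, 20.0, 30.0):             # candidate axonal delays, ms
    C = np.cos(2 * np.pi * f * (s - d) / 1000.0)   # shifted correlogram
    print(f"delay {d:4.1f} ms: drift {np.sum(W * C) * ds:+.4f}")
```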
5.  Dynamic Effective Connectivity of Inter-Areal Brain Circuits 
PLoS Computational Biology  2012;8(3):e1002438.
Anatomic connections between brain areas affect information flow between neuronal circuits and the synchronization of neuronal activity. However, such structural connectivity does not coincide with effective connectivity (or, more precisely, causal connectivity), related to the elusive question “Which areas cause the present activity of which others?”. Effective connectivity is directed and depends flexibly on contexts and tasks. Here we show that dynamic effective connectivity can emerge from transitions in the collective organization of coherent neural activity. Integrating simulation and semi-analytic approaches, we study mesoscale network motifs of interacting cortical areas, modeled as large random networks of spiking neurons or as simple rate units. Through a causal analysis of time-series of model neural activity, we show that different dynamical states generated by the same structural connectivity motif correspond to distinct effective connectivity motifs. Such effective motifs can display a dominant directionality, due to spontaneous symmetry breaking and effective entrainment between local brain rhythms, although all connections in the considered structural motifs are reciprocal. We then show that transitions between effective connectivity configurations (like, for instance, reversal in the direction of inter-areal interactions) can be triggered reliably by brief perturbation inputs, properly timed with respect to an ongoing local oscillation, without the need for plastic synaptic changes. Finally, we analyze how the information encoded in spiking patterns of a local neuronal population is propagated across a fixed structural connectivity motif, demonstrating that changes in the active effective connectivity regulate both the efficiency and the directionality of information transfer. Previous studies stressed the role played by coherent oscillations in establishing efficient communication between distant areas. Going beyond these early proposals, we advance here that dynamic interactions between brain rhythms also provide the basis for the self-organized control of this “communication-through-coherence”, thus making possible a fast “on-demand” reconfiguration of global information routing modalities.
Author Summary
The circuits of the brain must perform a daunting amount of functions. But how can “brain states” be flexibly controlled, given that anatomic inter-areal connections can be considered as fixed, on timescales relevant for behavior? We hypothesize that, thanks to the nonlinear interaction between brain rhythms, even a simple circuit involving few brain areas can give rise to a multitude of effective circuits, associated with alternative functions selectable “on demand”. A distinction is usually made between structural connectivity, which describes actual synaptic connections, and effective connectivity, quantifying, beyond correlation, directed inter-areal causal influences. In our study, we measure effective connectivity based on time-series of neural activity generated by model inter-areal circuits. We find that “causality follows dynamics”. Indeed, we show that different effective networks correspond to different dynamical states associated with the same structural network (in particular, different phase-locking patterns between local neuronal oscillations). We then find that “information follows causality” (and thus, again, dynamics). We demonstrate that different effective networks give rise to alternative modalities of information routing between brain areas wired together in a fixed structural network. In particular, we show that the self-organization of interacting “analog” rate oscillations controls the flow of “digital-like” information encoded in complex spiking patterns.
doi:10.1371/journal.pcbi.1002438
PMCID: PMC3310731  PMID: 22457614
6.  Model-Free Reconstruction of Excitatory Neuronal Connectivity from Calcium Imaging Signals 
PLoS Computational Biology  2012;8(8):e1002653.
A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. In this study we focus on the inference of excitatory synaptic links. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus, by conditioning with respect to the global mean activity, we improve the performance of our method. This allows us to focus the analysis on specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections, rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light-scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies such as cross-correlation or Granger Causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimation of the excitatory network clustering coefficient, allowing for discrimination between weakly and strongly clustered topologies. Finally, we demonstrate the applicability of our method to analyses of real recordings of in vitro disinhibited cortical cultures, where we suggest that excitatory connections are characterized by an elevated level of clustering compared to a random graph (although not extreme) and can be markedly non-local.
Author Summary
Unraveling the general organizing principles of connectivity in neural circuits is a crucial step towards understanding brain function. However, even the simpler task of assessing the global excitatory connectivity of a culture in vitro, where neurons form self-organized networks in the absence of external stimuli, remains challenging. Neuronal cultures undergo spontaneous switching between episodes of synchronous bursting and quieter inter-burst periods. Here we introduce a novel algorithm that aims to infer the connectivity of neuronal cultures from calcium fluorescence recordings of their network dynamics. To achieve this goal, we develop a suitable generalization of Transfer Entropy, an information-theoretic measure of causal influences between time series. Unlike previous algorithmic approaches to reconstruction, Transfer Entropy is data-driven and does not rely on specific assumptions about neuronal firing statistics or network topology. We generate simulated calcium signals from networks with controlled ground-truth topology and purely excitatory interactions and show that, by restricting the analysis to inter-burst periods, Transfer Entropy robustly achieves a good reconstruction performance for disparate network connectivities. Finally, we apply our method to real data and find evidence of non-random features in cultured networks, such as the existence of highly connected hub excitatory neurons and of an elevated (but not extreme) level of clustering.
doi:10.1371/journal.pcbi.1002653
PMCID: PMC3426566  PMID: 22927808
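The state-conditioning step described above can be illustrated independently of the transfer-entropy machinery: frames are kept only when the population-averaged fluorescence is low, so pairwise measures are computed from inter-burst dynamics rather than from global synchrony. The following is a hedged sketch with made-up data and an arbitrary threshold quantile, not the authors' pipeline.

```python
# Keep only inter-burst frames before computing pairwise connectivity scores.
# Surrogate data and the quantile threshold are illustrative assumptions.
import numpy as np

def interburst_mask(F, quantile=0.8):
    """F: (T, N) calcium fluorescence traces. Returns a boolean frame mask."""
    g = F.mean(axis=1)                      # global mean activity per frame
    return g < np.quantile(g, quantile)     # drop the burstiest frames

rng = np.random.default_rng(2)
F = rng.gamma(2.0, 1.0, size=(5000, 50))    # 50 cells, baseline fluorescence
F[1000:1050] += 5.0                         # a network-wide burst
mask = interburst_mask(F)
F_quiet = F[mask]                           # analyze TE/correlation on this
print(mask.sum(), "of", len(F), "frames kept")
```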
7.  Complex Events Initiated by Individual Spikes in the Human Cerebral Cortex  
PLoS Biology  2008;6(9):e222.
Synaptic interactions between neurons of the human cerebral cortex have not been directly studied to date. We recorded the first dataset, to our knowledge, on the synaptic effect of identified human pyramidal cells on various types of postsynaptic neurons, and reveal complex events triggered by individual action potentials in the human neocortical network. Brain slices were prepared from nonpathological samples of cortex that had to be removed for the surgical treatment of brain areas beneath association cortices of 58 patients aged 18 to 73 y. Simultaneous triple and quadruple whole-cell patch clamp recordings were performed, testing mono- and polysynaptic potentials in target neurons following a single action potential fired by layer 2/3 pyramidal cells, and the temporal structure of events and underlying mechanisms were analyzed. In addition to monosynaptic postsynaptic potentials, individual action potentials in presynaptic pyramidal cells initiated long-lasting (37 ± 17 ms) sequences of events in the network lasting an order of magnitude longer than detected previously in other species. These event series were composed of specifically alternating glutamatergic and GABAergic postsynaptic potentials and required selective spike-to-spike coupling from pyramidal cells to GABAergic interneurons producing concomitant inhibitory as well as excitatory feed-forward action of GABA. Single action potentials of human neurons are sufficient to recruit Hebbian-like neuronal assemblies that are proposed to participate in cognitive processes.
Author Summary
We recorded the first connections, to our knowledge, between human nerve cells and reveal that a subset of interactions is so strong that some presynaptic cells are capable of eliciting action potentials in the postsynaptic target neurons. Interestingly, these strong connections selectively link pyramidal cells using the neurotransmitter glutamate to neurons releasing gamma aminobutyric acid (GABA). Moreover, the GABAergic neurons receiving the strong connections include different types: basket cells, which inhibit several target cell populations, and another type called the chandelier cells, which can be excitatory and target pyramidal cells only. Thus, the activation originating from a single pyramidal cell propagates to synchronously working inhibitory and excitatory GABAergic neurons. Inhibition then arrives to various neuron classes, but excitation finds only pyramidal cells, which in turn, can propagate excitation even further in the network of neurons. This chain of events revealed here leads to network activation approximately an order of magnitude longer than detected previously in response to a single action potential in a single neuron. Individual-neuron–activated groups of neurons resemble the so-called functional assemblies that were proposed as building blocks of higher order cognitive representations.
A novel study on connections between human neurons reveals that single spikes in pyramidal cells can activate synchronously timed assemblies through strong connections linking pyramidal cells with inhibitory and excitatory GABAergic neurons.
doi:10.1371/journal.pbio.0060222
PMCID: PMC2528052  PMID: 18767905
8.  Network Self-Organization Explains the Statistics and Dynamics of Synaptic Connection Strengths in Cortex 
PLoS Computational Biology  2013;9(1):e1002848.
The information processing abilities of neural circuits arise from their synaptic connection patterns. Understanding the laws governing these connectivity patterns is essential for understanding brain function. The overall distribution of synaptic strengths of local excitatory connections in cortex and hippocampus is long-tailed, exhibiting a small number of synaptic connections of very large efficacy. At the same time, new synaptic connections are constantly being created and individual synaptic connection strengths show substantial fluctuations across time. It remains unclear through what mechanisms these properties of neural circuits arise and how they contribute to learning and memory. In this study we show that fundamental characteristics of excitatory synaptic connections in cortex and hippocampus can be explained as a consequence of self-organization in a recurrent network combining spike-timing-dependent plasticity (STDP), structural plasticity and different forms of homeostatic plasticity. In the network, associative synaptic plasticity in the form of STDP induces a rich-get-richer dynamics among synapses, while homeostatic mechanisms induce competition. Under distinctly different initial conditions, the ensuing self-organization produces long-tailed synaptic strength distributions matching experimental findings. We show that this self-organization can take place with a purely additive STDP mechanism and that multiplicative weight dynamics emerge as a consequence of network interactions. The observed patterns of fluctuation of synaptic strengths, including elimination and generation of synaptic connections and long-term persistence of strong connections, are consistent with the dynamics of dendritic spines found in rat hippocampus. Beyond this, the model predicts an approximately power-law scaling of the lifetimes of newly established synaptic connection strengths during development. Our results suggest that the combined action of multiple forms of neuronal plasticity plays an essential role in the formation and maintenance of cortical circuits.
Author Summary
The computations that brain circuits can perform depend on their wiring. While a wiring diagram is still out of reach for major brain structures such as the neocortex and hippocampus, data on the overall distribution of synaptic connection strengths and the temporal fluctuations of individual synapses have recently become available. Specifically, there exists a small population of very strong and stable synaptic connections, which may form the physiological substrate of life-long memories. This population coexists with a large and ever-changing population of much smaller and strongly fluctuating synaptic connections. So far it has remained unclear how these properties of networks in neocortex and hippocampus arise. Here we present a computational model that explains these fundamental properties of neural circuits as a consequence of network self-organization resulting from the combined action of different forms of neuronal plasticity. This self-organization is driven by a rich-get-richer effect induced by an associative synaptic learning mechanism, which is kept in check by several homeostatic plasticity mechanisms stabilizing the network. The model highlights the role of self-organization in the formation of brain circuits and parsimoniously explains a range of recent findings about their fundamental properties.
doi:10.1371/journal.pcbi.1002848
PMCID: PMC3536614  PMID: 23300431
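A drastically reduced caricature of the mechanism (not the paper's spiking network) already shows the qualitative effect: additive potentiation targeted preferentially at already-strong synapses (rich-get-richer), a homeostatic constraint pinning the mean weight, and random replacement of eliminated synapses together produce a long-tailed weight distribution. All numbers below are invented.

```python
# Toy caricature of STDP + homeostasis + structural plasticity producing a
# long-tailed weight distribution. Illustrative only; not the paper's model.
import numpy as np

rng = np.random.default_rng(3)
w = np.full(1000, 0.1)
for _ in range(5000):
    # rich-get-richer proxy: LTP events land preferentially on strong synapses
    idx = rng.choice(w.size, size=20, p=w / w.sum())
    np.add.at(w, idx, 0.02)                   # additive potentiation
    w -= w.mean() - 0.1                       # homeostasis: pin the mean weight
    lost = w <= 0.0                           # synapse eliminated...
    w[lost] = 0.005 * rng.random(lost.sum())  # ...and replaced at random
print(np.percentile(w, [50, 90, 99]))         # long upper tail vs. the median
```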
9.  Phase-Coherence Transitions and Communication in the Gamma Range between Delay-Coupled Neuronal Populations 
PLoS Computational Biology  2014;10(7):e1003723.
Synchronization between neuronal populations plays an important role in information transmission between brain areas. In particular, collective oscillations emerging from the synchronized activity of thousands of neurons can increase the functional connectivity between neural assemblies by coherently coordinating their phases. This synchrony of neuronal activity can take place within a cortical patch or between different cortical regions. While short-range interactions between neurons involve just a few milliseconds, communication through long-range projections between different regions could take up to tens of milliseconds. How these heterogeneous transmission delays affect communication between neuronal populations is not well known. To address this question, we have studied the dynamics of two bidirectionally delayed-coupled neuronal populations using conductance-based spiking models, examining how different synaptic delays give rise to in-phase/anti-phase transitions at particular frequencies within the gamma range, and how this behavior is related to the phase coherence between the two populations at different frequencies. We have used spectral analysis and information theory to quantify the information exchanged between the two networks. For different transmission delays between the two coupled populations, we analyze how the local field potential and multi-unit activity calculated from one population convey information in response to a set of external inputs applied to the other population. The results confirm that zero-lag synchronization maximizes information transmission, although out-of-phase synchronization allows for efficient communication provided the coupling delay, the phase lag between the populations, and the frequency of the oscillations are properly matched.
Author Summary
The correct operation of the brain requires a carefully orchestrated activity, which includes the establishment of synchronized behavior among multiple neuronal populations. Synchronization of collective neuronal oscillations, in particular, has been suggested to mediate communication between brain areas, with the global oscillations acting as “information carriers” on which signals encoding specific stimuli or brain states are superimposed. But neuronal signals travel at finite speeds across the brain, thus leading to a wide range of delays in the coupling between neuronal populations. How the brain reaches the required level of coordination in the presence of such delays is still unclear. Here we approach this question in the case of two delay-coupled neuronal populations exhibiting collective oscillations in the gamma range. Our results show that effective communication can be reached even in the presence of relatively large delays between the populations, which self-organize in either in-phase or anti-phase synchronized states. In those states the transmission delays, phase difference, and oscillation frequency match to allow for communication at a wide range of coupling delays between brain areas.
doi:10.1371/journal.pcbi.1003723
PMCID: PMC4110076  PMID: 25058021
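The in-phase/anti-phase behavior discussed above can be reproduced in a far simpler setting: two delay-coupled phase oscillators. The sketch below, with assumed frequency, coupling strength, and delays, shows the settled phase lag flipping between near 0 and near pi as the delay varies; it stands in for the conductance-based spiking networks only qualitatively.

```python
# Two delay-coupled phase oscillators: the settled lag depends on the delay,
# echoing the in-phase/anti-phase transitions in the paper. Values assumed.
import numpy as np

def settled_lag(freq_hz, delay_ms, k=0.5, steps=40000, dt=0.1):
    """Simulate dphi_i/dt = w + k*sin(phi_j(t - tau) - phi_i(t))."""
    nd = max(int(delay_ms / dt), 1)
    w = 2 * np.pi * freq_hz / 1000.0        # rad per ms
    phi = np.zeros((2, steps))
    phi[1, :nd] = 1.0                       # small initial offset in history
    for t in range(nd, steps - 1):
        for i in (0, 1):
            coupling = np.sin(phi[1 - i, t - nd] - phi[i, t])
            phi[i, t + 1] = phi[i, t] + dt * (w + k * coupling)
    lag = (phi[0, -1] - phi[1, -1]) % (2 * np.pi)
    return min(lag, 2 * np.pi - lag)        # folded into [0, pi]

for tau in (1.0, 5.0, 10.0, 15.0):          # coupling delays, ms
    print(f"tau {tau:4.1f} ms -> lag {settled_lag(40.0, tau):.2f} rad")
```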
10.  Spectral Analysis of Input Spike Trains by Spike-Timing-Dependent Plasticity 
PLoS Computational Biology  2012;8(7):e1002584.
Spike-timing-dependent plasticity (STDP) has been observed in many brain areas such as sensory cortices, where it is hypothesized to structure synaptic connections between neurons. Previous studies have demonstrated how STDP can capture spiking information at short timescales using specific input configurations, such as coincident spiking, spike patterns and oscillatory spike trains. However, the corresponding computation in the case of arbitrary input signals is still unclear. This paper provides an overarching picture of the algorithm inherent to STDP, tying together many previous results for commonly used models of pairwise STDP. For a single neuron with plastic excitatory synapses, we show how STDP performs a spectral analysis on the temporal cross-correlograms between its afferent spike trains. The postsynaptic responses and STDP learning window determine kernel functions that specify how the neuron “sees” the input correlations. We thus denote this unsupervised learning scheme as ‘kernel spectral component analysis’ (kSCA). In particular, the whole input correlation structure must be considered since all plastic synapses compete with each other. We find that kSCA is enhanced when weight-dependent STDP induces gradual synaptic competition. For a spiking neuron with a “linear” response and pairwise STDP alone, we find that kSCA resembles principal component analysis (PCA). However, plain STDP does not isolate correlation sources in general, e.g., when they are mixed among the input spike trains. In other words, it does not perform independent component analysis (ICA). Tuning the neuron to a single correlation source can be achieved when STDP is paired with a homeostatic mechanism that reinforces the competition between synaptic inputs. Our results suggest that neuronal networks equipped with STDP can process signals encoded in the transient spiking activity at the timescales of tens of milliseconds for usual STDP.
Author Summary
Tuning feature extraction of sensory stimuli is an important function for synaptic plasticity models. A widely studied example is the development of orientation preference in the primary visual cortex, which can emerge using moving bars in the visual field. A crucial point is the decomposition of stimuli into basic information tokens, e.g., selecting individual bars even though they are presented in overlapping pairs (vertical and horizontal). Among classical unsupervised learning models, independent component analysis (ICA) is capable of isolating basic tokens, whereas principal component analysis (PCA) cannot. This paper focuses on spike-timing-dependent plasticity (STDP), whose functional implications for neural information processing have been intensively studied both theoretically and experimentally in the last decade. Following recent studies demonstrating that STDP can perform ICA for specific cases, we show how STDP relates to PCA or ICA, and in particular explains the conditions under which it switches between them. Here information at the neuronal level is assumed to be encoded in temporal cross-correlograms of spike trains. We find that a linear spiking neuron equipped with pairwise STDP requires additional mechanisms, such as a homeostatic regulation of its output firing, in order to separate mixed correlation sources and thus perform ICA.
doi:10.1371/journal.pcbi.1002584
PMCID: PMC3390410  PMID: 22792056
11.  Spike-Based Population Coding and Working Memory 
PLoS Computational Biology  2011;7(2):e1001080.
Compelling behavioral evidence suggests that humans can make optimal decisions despite the uncertainty inherent in perceptual or motor tasks. A key question in neuroscience is how populations of spiking neurons can implement such probabilistic computations. In this article, we develop a comprehensive framework for optimal, spike-based sensory integration and working memory in a dynamic environment. We propose that probability distributions are inferred spike-per-spike in recurrently connected networks of integrate-and-fire neurons. As a result, these networks can combine sensory cues optimally, track the state of a time-varying stimulus and memorize accumulated evidence over periods much longer than the time constant of single neurons. Importantly, we propose that population responses and persistent working memory states represent entire probability distributions and not only single stimulus values. These memories are reflected by sustained, asynchronous patterns of activity which make relevant information available to downstream neurons within their short time window of integration. Model neurons act as predictive encoders, only firing spikes which account for new information that has not yet been signaled. Thus, spike times deterministically signal a prediction error, contrary to rate codes in which spike times are considered to be random samples of an underlying firing rate. As a consequence of this coding scheme, a multitude of spike patterns can reliably encode the same information. This results in weakly correlated, Poisson-like spike trains that are sensitive to initial conditions but robust to even high levels of external neural noise. This spike train variability reproduces that observed in cortical sensory spike trains, but cannot be equated to noise. On the contrary, it is a consequence of optimal spike-based inference. In contrast, we show that rate-based models perform poorly when implemented with stochastically spiking neurons.
Author Summary
Most of our daily actions are subject to uncertainty. Behavioral studies have confirmed that humans handle this uncertainty in a statistically optimal manner. A key question then is what neural mechanisms underlie this optimality, i.e., how neurons can represent and compute with probability distributions. Previous approaches have proposed that probabilities are encoded in the firing rates of neural populations. However, such rate codes appear poorly suited to understand perception in a constantly changing environment. In particular, it is unclear how probabilistic computations could be implemented by biologically plausible spiking neurons. Here, we propose a network of spiking neurons that can optimally combine uncertain information from different sensory modalities and keep this information available for a long time. This implies that neural memories not only represent the most likely value of a stimulus but rather a whole probability distribution over it. Furthermore, our model suggests that each spike conveys new, essential information. Consequently, the observed variability of neural responses cannot simply be understood as noise but rather as a necessary consequence of optimal sensory integration. Our results therefore question strongly held beliefs about the nature of neural “signal” and “noise”.
doi:10.1371/journal.pcbi.1001080
PMCID: PMC3040643  PMID: 21379319
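The predictive-encoder idea above (fire only when a spike reduces the coding error) has a one-dimensional caricature: a running estimate decays, and a spike kicks it up whenever the prediction error exceeds a threshold, so the error stays bounded by that threshold. The sketch below uses invented constants and handles only positive errors; a signed population would handle both directions.

```python
# 1-D caricature of predictive spike coding: spikes are emitted only when
# the running estimate has drifted too far from the signal. Values assumed.
import numpy as np

rng = np.random.default_rng(4)
dt, tau, kick, theta = 1.0, 50.0, 1.0, 0.5
x = np.cumsum(rng.normal(0, 0.02, 2000)) + 3.0   # slowly varying stimulus
xhat, spikes = 0.0, []
for t, xt in enumerate(x):
    xhat += dt * (-xhat / tau)                   # estimate decays
    if xt - xhat > theta:                        # prediction error too large
        xhat += kick                             # a spike corrects the estimate
        spikes.append(t)
print(len(spikes), "spikes; final error", x[-1] - xhat)  # error stays bounded
```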
12.  Hidden synaptic differences in a neural circuit underlie differential behavioral susceptibility to a neural injury 
eLife  2014;3:e02598.
Individuals vary in their responses to stroke and trauma, hampering predictions of outcomes. One reason might be that neural circuits contain hidden variability that becomes relevant only when those individuals are challenged by injury. We found that in the mollusc, Tritonia diomedea, subtle differences between animals within the neural circuit underlying swimming behavior had no behavioral relevance under normal conditions but caused differential vulnerability of the behavior to a particular brain lesion. The extent of motor impairment correlated with the site of spike initiation in a specific neuron in the neural circuit, which was determined by the strength of an inhibitory synapse onto this neuron. Artificially increasing or decreasing this inhibitory synaptic conductance with dynamic clamp correspondingly altered the extent of motor impairment by the lesion without affecting normal operation. The results suggest that neural circuit differences could serve as hidden phenotypes for predicting the behavioral outcome of neural damage.
DOI: http://dx.doi.org/10.7554/eLife.02598.001
eLife digest
The outcome of a traumatic brain injury or a stroke can vary considerably from person to person, making it difficult to provide a reliable prognosis for any individual person. If clinicians were able to predict outcomes with better accuracy, patients would benefit from more tailored treatments. However, the sheer complexity of the mammalian brain has hindered attempts to explain why similar damage to the brain can have such different effects on different individuals.
Now Sakurai et al. have used a mollusc model to show that the extensive variation between individuals could be caused by hidden differences in their neural networks. Crucially, this natural variation has no effect on normal behavior; it only becomes obvious when the brain is injured. The experiments were performed on a type of sea slug called Tritonia diomedea.
When these sea slugs encounter a predator they respond by swimming away, rhythmically flexing their whole body. This repetitive motion is driven by a specific neural network in which two neurons—called a cerebral 2 (C2) neuron and a ventral swim interneuron—play important roles. Both of these neurons are quite long and they run alongside each other in the brain, with the ventral swim interneuron being activated by signals sent from the C2 neuron at multiple ‘synaptic connections’ between the two.
Sakurai et al. showed that the strength of the connections between the C2 neuron and the ventral swim interneuron varied substantially between animals. However, despite this variation, the sea slugs still performed the same number of whole-body flexions as they swam.
Sakurai et al. then made a lesion to the brain, which removed about half of the connections between the C2 neuron and the ventral swim interneuron. This meant that the response of the sea slugs to predators depended on the strength of the remaining connections between the two neurons. Sakurai et al. found that the responses of some sea slugs were only mildly impaired, whereas others were severely impaired. This showed that although variations in the strength of the individual connections had no effect on swimming behavior of normal sea slugs, the same variations had a substantial effect when the brain was damaged. Moreover, by creating computer-generated synapses between the C2 neuron and the ventral swim interneuron, Sakurai et al. were able to change the level of impairment.
These findings suggest that the variability in human responses to brain injury could be due to hidden differences at the neuronal level. In everyday life, these differences are unimportant and individuals are able to function in similar ways in spite of subtle differences in their neuronal configurations. However, when the brain is damaged, the differences become more important. This suggests that certain configurations within neuronal networks are more resistant to brain damage than others.
DOI: http://dx.doi.org/10.7554/eLife.02598.002
doi:10.7554/eLife.02598
PMCID: PMC4084405  PMID: 24920390
Tritonia diomedea; individual variability; synapse; neural injury; central pattern generator; dynamic clamp; other
13.  Successful Reconstruction of a Physiological Circuit with Known Connectivity from Spiking Activity Alone 
PLoS Computational Biology  2013;9(7):e1003138.
Identifying the structure and dynamics of synaptic interactions between neurons is the first step to understanding neural network dynamics. The presence of synaptic connections is traditionally inferred through the use of targeted stimulation and paired recordings or by post-hoc histology. More recently, causal network inference algorithms have been proposed to deduce connectivity directly from electrophysiological signals, such as extracellularly recorded spiking activity. Usually, these algorithms have not been validated on a neurophysiological data set for which the actual circuitry is known. Recent work has shown that traditional network inference algorithms based on linear models typically fail to identify the correct coupling of a small central pattern generating circuit in the stomatogastric ganglion of the crab Cancer borealis. In this work, we show that point process models of observed spike trains can guide inference of relative connectivity estimates that match the known physiological connectivity of the central pattern generator up to a choice of threshold. We elucidate the necessary steps to derive faithful connectivity estimates from a model that incorporates the spike train nature of the data. We then apply the model to measure changes in the effective connectivity pattern in response to two pharmacological interventions, which affect both intrinsic neural dynamics and synaptic transmission. Our results provide the first successful application of a network inference algorithm to a circuit for which the actual physiological synapses between neurons are known. The point process methodology presented here generalizes well to larger networks and can describe the statistics of neural populations. In general we show that advanced statistical models allow for the characterization of effective network structure, deciphering underlying network dynamics and estimating information-processing capabilities.
Author Summary
To appreciate how neural circuits control behaviors, we must understand two things. First, how the neurons comprising the circuit are connected, and second, how neurons and their connections change after learning or in response to neuromodulators. Neuronal connectivity is difficult to determine experimentally, whereas neuronal activity can often be readily measured. We describe a statistical model to estimate circuit connectivity directly from measured activity patterns. We use the timing relationships between observed spikes to predict synaptic interactions between simultaneously observed neurons. The model estimate provides each predicted connection with a curve that represents how strongly, and at which temporal delays, one circuit element effectively influences another. These curves are analogous to synaptic interactions of the level of the membrane potential of biological neurons and share some of their features such as being inhibitory or excitatory. We test our method on recordings from the pyloric circuit in the crab stomatogastric ganglion, a small circuit whose connectivity is completely known beforehand, and find that the predicted circuit matches the biological one — a result other techniques failed to achieve. In addition, we show that drug manipulations impacting the circuit are revealed by this technique. These results illustrate the utility of our analysis approach for inferring connections from neural spiking activity.
doi:10.1371/journal.pcbi.1003138
PMCID: PMC3708849  PMID: 23874181
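In discrete time, the point-process model described above reduces to a regression of each neuron's spike indicator on lagged spike histories of all neurons, with the fitted lag filters playing the role of the effective-connectivity curves. A minimal sketch using logistic regression as the GLM follows; the two-neuron surrogate, lag count, and coupling strength are assumptions for the example, not the paper's setup.

```python
# Discrete-time point-process (GLM) connectivity sketch: regress each
# neuron's spiking on lagged histories; fitted filters ~ effective coupling.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
T, L = 20000, 5
s = (rng.random((T, 2)) < 0.05).astype(int)            # two neurons, baseline
s[3:, 1] |= (rng.random(T - 3) < 0.4) & (s[:-3, 0] == 1)  # 0 excites 1, lag 3

# Design matrix: spikes of both neurons at lags 1..L; predict neuron 1.
X = np.hstack([np.roll(s, k, axis=0) for k in range(1, L + 1)])[L:]
y = s[L:, 1]
filt = LogisticRegression(max_iter=1000).fit(X, y).coef_.reshape(L, 2)
print(np.round(filt[:, 0], 2))    # neuron0 -> neuron1 filter peaks at lag 3
```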
14.  Excitatory, Inhibitory, and Structural Plasticity Produce Correlated Connectivity in Random Networks Trained to Solve Paired-Stimulus Tasks 
The pattern of connections among cortical excitatory cells with overlapping arbors is non-random. In particular, correlations among connections produce clustering – cells in cliques connect to each other with high probability, but with lower probability to cells in other spatially intertwined cliques. In this study, we model initially randomly connected sparse recurrent networks of spiking neurons with random, overlapping inputs, to investigate what functional and structural synaptic plasticity mechanisms sculpt network connections into the patterns measured in vitro. Our Hebbian implementation of structural plasticity causes a removal of connections between uncorrelated excitatory cells, followed by their random replacement. To model a biconditional discrimination task, we stimulate the network via pairs (A + B, C + D, A + D, and C + B) of four inputs (A, B, C, and D). We find that networks producing neurons most responsive to specific paired inputs – a building block of computation and an essential role of cortex – contain the excess clustering of excitatory synaptic connections observed in cortical slices. The same networks produce the best performance in a behavioral readout of the networks’ ability to complete the task. A plasticity mechanism operating on inhibitory connections, long-term potentiation of inhibition, when combined with structural plasticity, indirectly enhances the clustering of excitatory cells via excitatory connections. A rate-dependent (triplet) form of spike-timing-dependent plasticity (STDP) between excitatory cells is less effective, and basic STDP is detrimental. Clustering also arises in networks stimulated with single stimuli and in networks undergoing raised levels of spontaneous activity when structural plasticity is combined with functional plasticity. In conclusion, spatially intertwined clusters or cliques of connected excitatory cells can arise via a Hebbian form of structural plasticity operating in initially randomly connected networks.
doi:10.3389/fncom.2011.00037
PMCID: PMC3170885  PMID: 21991253
structural plasticity; connectivity; Hebbian learning; network; simulation; correlations; STDP; inhibitory plasticity
15.  Impact of Adaptation Currents on Synchronization of Coupled Exponential Integrate-and-Fire Neurons 
PLoS Computational Biology  2012;8(4):e1002478.
The ability of spiking neurons to synchronize their activity in a network depends on the response behavior of these neurons as quantified by the phase response curve (PRC) and on coupling properties. The PRC characterizes the effects of transient inputs on spike timing and can be measured experimentally. Here we use the adaptive exponential integrate-and-fire (aEIF) neuron model to determine how subthreshold and spike-triggered slow adaptation currents shape the PRC. Based on that, we predict how synchrony and phase-locked states of coupled neurons change in the presence of synaptic delays and unequal coupling strengths. We find that increased subthreshold adaptation currents cause a transition of the PRC from only phase advances to phase advances and delays in response to excitatory perturbations. Increased spike-triggered adaptation currents on the other hand predominantly skew the PRC to the right. Both adaptation-induced changes of the PRC are modulated by spike frequency, being more prominent at lower frequencies. Applying phase reduction theory, we show that subthreshold adaptation stabilizes synchrony for pairs of coupled excitatory neurons, while spike-triggered adaptation causes locking with a small phase difference, as long as synaptic heterogeneities are negligible. For inhibitory pairs synchrony is stable and robust against conduction delays, and adaptation can mediate bistability of in-phase and anti-phase locking. We further demonstrate that stable synchrony and bistable in/anti-phase locking of pairs carry over to synchronization and clustering of larger networks. The effects of adaptation in aEIF neurons on PRCs and network dynamics qualitatively reflect those of biophysical adaptation currents in detailed Hodgkin-Huxley-based neurons, which underscores the utility of the aEIF model for investigating the dynamical behavior of networks. Our results suggest neuronal spike frequency adaptation as a mechanism synchronizing low frequency oscillations in local excitatory networks, but indicate that inhibition rather than excitation generates coherent rhythms at higher frequencies.
Author Summary
Synchronization of neuronal spiking in the brain is related to cognitive functions, such as perception, attention, and memory. It is therefore important to determine which properties of neurons influence their collective behavior in a network and to understand how. A prominent feature of many cortical neurons is spike frequency adaptation, which is caused by slow transmembrane currents. We investigated how these adaptation currents affect the synchronization tendency of coupled model neurons. Using the efficient adaptive exponential integrate-and-fire (aEIF) model and a biophysically detailed neuron model for validation, we found that increased adaptation currents promote synchronization of coupled excitatory neurons at lower spike frequencies, as long as the conduction delays between the neurons are negligible. Inhibitory neurons on the other hand synchronize in presence of conduction delays, with or without adaptation currents. Our results emphasize the utility of the aEIF model for computational studies of neuronal network dynamics. We conclude that adaptation currents provide a mechanism to generate low frequency oscillations in local populations of excitatory neurons, while faster rhythms seem to be caused by inhibition rather than excitation.
doi:10.1371/journal.pcbi.1002478
PMCID: PMC3325187  PMID: 22511861
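A PRC of the kind analyzed above can be measured numerically from the aEIF model in a few lines: run the neuron at a steady rate, inject a brief current pulse at a chosen phase of one interspike interval, and record how much the next spike advances or delays. The sketch below uses generic aEIF parameters and forward-Euler integration; none of the values are taken from the paper.

```python
# Numerical PRC measurement from a generic aEIF neuron (forward Euler).
import numpy as np

def aeif_spike_times(I, pulse_t=None, pulse_amp=200.0, T=600.0, dt=0.01):
    """Return spike times (ms) for constant input I (pA), plus an optional
    1-ms current pulse of pulse_amp (pA) starting at pulse_t (ms)."""
    C, gL, EL, VT, DT = 200.0, 10.0, -70.0, -50.0, 2.0   # pF, nS, mV
    a, b, tau_w, Vr, Vcut = 2.0, 40.0, 120.0, -58.0, 0.0
    V, w, spikes = EL, 0.0, []
    for step in range(int(T / dt)):
        t = step * dt
        Iext = I + (pulse_amp if pulse_t is not None
                    and pulse_t <= t < pulse_t + 1.0 else 0.0)
        dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT)
              - w + Iext) / C
        w += dt * (a * (V - EL) - w) / tau_w
        V += dt * dV
        if V >= Vcut:                     # spike: reset and adapt
            V, w = Vr, w + b
            spikes.append(t)
    return spikes

base = aeif_spike_times(I=500.0)          # unperturbed reference train
t0, t1 = base[3], base[4]                 # one reference interspike interval
for phase in (0.2, 0.5, 0.8):
    pert = aeif_spike_times(I=500.0, pulse_t=t0 + phase * (t1 - t0))
    print(f"phase {phase}: spike advance {t1 - pert[4]:+.2f} ms")
```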
16.  Spike propagation synchronized by temporally asymmetric Hebbian learning 
Biological cybernetics  2002;87(5-6):440-445.
Synchronously spiking neurons have been observed in the cerebral cortex and the hippocampus. In computer models, synchronous spike volleys may be propagated across appropriately connected neuron populations. However, it is unclear how the appropriate synaptic connectivity is set up during development and maintained during adult learning. We performed computer simulations to investigate the influence of temporally asymmetric Hebbian synaptic plasticity on the propagation of spike volleys. In addition to feedforward connections, recurrent connections were included between and within neuron populations, and spike transmission delays varied due to axonal, synaptic and dendritic transmission. We found that repeated presentations of input volleys decreased the synaptic conductances of intragroup and feedback connections, while synaptic conductances of feedforward connections with short delays became stronger than those of connections with longer delays. These adaptations led to the synchronization of spike volleys as they propagated across neuron populations. The findings suggest that temporally asymmetric Hebbian learning may enhance synchronized spiking within small populations of neurons in cortical and hippocampal areas and that familiar stimuli may produce synchronized spike volleys that are rapidly propagated across neural tissue.
doi:10.1007/s00422-002-0355-9
PMCID: PMC2944018  PMID: 12461633
17.  Integrated Mechanisms of Anticipation and Rate-of-Change Computations in Cortical Circuits 
PLoS Computational Biology  2007;3(5):e82.
Local neocortical circuits are characterized by stereotypical physiological and structural features that subserve generic computational operations. These basic computations of the cortical microcircuit emerge through the interplay of neuronal connectivity, cellular intrinsic properties, and synaptic plasticity dynamics. How these interacting mechanisms generate specific computational operations in the cortical circuit remains largely unknown. Here, we identify the neurophysiological basis of both the rate of change and anticipation computations on synaptic inputs in a cortical circuit. Through biophysically realistic computer simulations and neuronal recordings, we show that the rate-of-change computation is operated robustly in cortical networks through the combination of two ubiquitous brain mechanisms: short-term synaptic depression and spike-frequency adaptation. We then show how this rate-of-change circuit can be embedded in a convergently connected network to anticipate temporally incoming synaptic inputs, in quantitative agreement with experimental findings on anticipatory responses to moving stimuli in the primary visual cortex. Given the robustness of the mechanism and the widespread nature of the physiological machinery involved, we suggest that rate-of-change computation and temporal anticipation are principal, hard-wired functions of neural information processing in the cortical microcircuit.
Author Summary
The cerebral cortex is the region of the brain whose intricate connectivity and physiology is thought to subserve most computations required for effective action in mammals. Through biophysically realistic computer simulation and experimental recordings in brain tissue, the authors show how a specific combination of physiological mechanisms often found in neurons of the cortex transforms an input signal into another signal that represents the rate of change of the slower components of the input. This is the first report of a neurobiological implementation of an approximate mathematical derivative in the cortex. Further, such a signal integrates naturally into a neurobiologically simple network that is able to generate a linear prediction of its inputs. Anticipation of information is a primary concern of spatially extended organisms, which are subject to neural delays, and it has been demonstrated at various different levels: from the retina to sensorimotor integration. We present here a simple and general mechanism for anticipation that can operate incrementally within local circuits of the cortex, to compensate for time-consuming computations and conduction delays and thus contribute to effective real-time action.
doi:10.1371/journal.pcbi.0030082
PMCID: PMC1866356  PMID: 17500584
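The rate-of-change computation described above can be illustrated with a single depressing synapse in the Tsodyks-Markram style: after adaptation, the transmitted drive is nearly insensitive to the absolute input rate, but jumps transiently when the rate changes. The sketch below uses assumed resource parameters and a step input; it is a caricature of the mechanism, not the paper's full circuit.

```python
# Short-term depression as an approximate temporal differentiator.
# Resource parameters and the step input are illustrative assumptions.
import numpy as np

def depressing_output(rate, tau_rec=300.0, U=0.5, dt=1.0):
    """rate: presynaptic rate (Hz) per ms bin; returns transmitted drive."""
    x, out = 1.0, []                        # x = available synaptic resources
    for r in rate:
        release = U * x * r * dt / 1000.0   # expected transmitted resource
        x += dt * (1.0 - x) / tau_rec - release
        out.append(release)
    return np.array(out)

rate = np.full(3000, 10.0)
rate[1000:] = 30.0                          # step increase in input rate
out = depressing_output(rate)
print(out[990], out[1005], out[2900])       # transient jump, then re-adapts
# The onset response far exceeds the adapted response at either steady rate,
# i.e., the output approximates the input's recent rate of change.
```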
18.  Membrane Properties and the Balance between Excitation and Inhibition Control Gamma-Frequency Oscillations Arising from Feedback Inhibition 
PLoS Computational Biology  2012;8(1):e1002354.
Computational studies as well as in vivo and in vitro results have shown that many cortical neurons fire in a highly irregular manner and at low average firing rates. These patterns seem to persist even when highly rhythmic signals are recorded by local field potential electrodes or other methods that quantify the summed behavior of a local population. Models of the 30–80 Hz gamma rhythm in which network oscillations arise through ‘stochastic synchrony’ capture the variability observed in the spike output of single cells while preserving network-level organization. We build on these results by constructing model networks constrained by experimental measurements and using them to probe the effect of biophysical parameters on network-level activity. We find in simulations that gamma-frequency oscillations are enabled by a high level of incoherent synaptic conductance input, similar to the barrage of noisy synaptic input that cortical neurons have been shown to receive in vivo. This incoherent synaptic input increases the emergent network frequency by shortening the time scale of the membrane in excitatory neurons and by reducing the temporal separation between excitation and inhibition due to decreased spike latency in inhibitory neurons. These mechanisms are demonstrated in simulations and in vitro current-clamp and dynamic-clamp experiments. Simulation results further indicate that the membrane potential noise amplitude has a large impact on network frequency and that the balance between excitatory and inhibitory currents controls network stability and sensitivity to external inputs. (A back-of-the-envelope illustration of the membrane time-constant effect follows this entry.)
Author Summary
The gamma rhythm is a prominent, 30–80-Hz EEG signal that is associated with cognition. Several classes of computational models have been posited to explain the gamma rhythm mechanistically. We study a particular class in which the gamma rhythm arises from delayed negative feedback. Our study is unique in that we calibrate the model from direct measurements. We also test the model's most critical predictions directly in experiments that take advantage of cutting-edge computer technologies able to simulate ion channels in real time. Our major findings are that a large amount of “background” synaptic input to neurons is necessary to promote the gamma rhythm; that inhibitory neurons are specially tuned to keep the gamma rhythm stable; that noise has a strong effect on network frequency; and that incoming sensory input can be represented with sensitivity that depends on the strength of excitatory-excitatory synapses and the number of neurons receiving the input. Overall, our results support the hypothesis that the gamma rhythm reflects the presence of delayed feedback that controls overall cortical activity on a cycle-by-cycle basis. Furthermore, its frequency range mainly reflects the timescale of synaptic inhibition, the degree of background activity, and noise levels in the network.
doi:10.1371/journal.pcbi.1002354
PMCID: PMC3261914  PMID: 22275859
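The core mechanism in the entry above is that background synaptic conductance shortens the effective membrane time constant, tau_eff = C_m / (g_L + g_syn). A back-of-the-envelope check with assumed, purely illustrative values:

```python
# Effective membrane time constant under a background conductance barrage.
# Illustrative values: the point is only that added synaptic conductance
# shortens tau_eff, which the entry above links to a faster gamma rhythm.
C_m = 200.0   # membrane capacitance, pF
g_L = 10.0    # leak conductance, nS
for g_syn in (0.0, 10.0, 30.0):        # incoherent synaptic conductance, nS
    tau_eff = C_m / (g_L + g_syn)      # pF / nS = ms
    print(f"g_syn = {g_syn:4.1f} nS -> tau_eff = {tau_eff:5.1f} ms")
```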
19.  Network-State Modulation of Power-Law Frequency-Scaling in Visual Cortical Neurons 
PLoS Computational Biology  2009;5(9):e1000519.
Various types of neural-based signals, such as EEG, local field potentials and intracellular synaptic potentials, integrate multiple sources of activity distributed across large assemblies. They have in common a power-law frequency-scaling structure at high frequencies, but it is still unclear whether this scaling property is dominated by intrinsic neuronal properties or by network activity. The latter case is particularly interesting because if frequency-scaling reflects the network state it could be used to characterize the functional impact of the connectivity. In intracellularly recorded neurons of cat primary visual cortex in vivo, the power spectral density of Vm activity displays a power-law structure at high frequencies with a fractional scaling exponent. We show that this exponent is not constant, but depends on the visual statistics used to drive the network. To investigate the determinants of this frequency-scaling, we considered a generic recurrent model of cortex receiving a retinotopically organized external input. Similarly to the in vivo case, our in computo simulations show that the scaling exponent reflects the correlation level imposed in the input. This systematic dependence was also replicated at the single cell level, by controlling independently, in a parametric way, the strength and the temporal decay of the pairwise correlation between presynaptic inputs. This last model was implemented in vitro by imposing the correlation control in artificial presynaptic spike trains through dynamic-clamp techniques. These in vitro manipulations induced a modulation of the scaling exponent, similar to that observed in vivo and predicted in computo. We conclude that the frequency-scaling exponent of the Vm reflects stimulus-driven correlations in the cortical network activity. Therefore, we propose that the scaling exponent could be used to read-out the “effective” connectivity responsible for the dynamical signature of the population signals measured at different integration levels, from Vm to LFP, EEG and fMRI.
Author Summary
Intracellular recordings of neocortical neurons provide an opportunity to characterize the statistical signature of the synaptic bombardment to which these cells are subjected. Indeed, the membrane potential displays intense fluctuations which reflect the cumulative activity of thousands of input neurons. In sensory cortical areas, this measure could be used to estimate the correlational structure of the external drive. We show that changes in the statistical properties of network activity, namely the local correlation between neurons, can be detected by analyzing the power spectral density (PSD) of the subthreshold membrane potential. These PSDs can be fitted by a power-law function 1/f^α in the upper temporal frequency range. In vivo recordings in primary visual cortex show that the α exponent varies with the statistics of the sensory input. Most remarkably, the exponent observed in the ongoing activity is indistinguishable from that evoked by natural visual statistics. These results are emulated by models which demonstrate that the exponent α is determined by the local level of correlation imposed in the recurrent network activity. Similar relationships are also reproduced in cortical neurons recorded in vitro with artificial synaptic inputs, by controlling in computo the level of correlation in real time. (A minimal sketch of such a power-law exponent fit follows this entry.)
doi:10.1371/journal.pcbi.1000519
PMCID: PMC2740863  PMID: 19779556
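Estimating the scaling exponent described above amounts to a straight-line fit of log-PSD versus log-frequency in a high-frequency band. A minimal sketch, assuming a Welch PSD estimate and an illustrative 75–200 Hz fitting band (the paper's exact band and estimator may differ):

```python
import numpy as np
from scipy.signal import welch

def scaling_exponent(vm, fs, f_lo=75.0, f_hi=200.0):
    """Estimate the exponent alpha of PSD(f) ~ 1/f**alpha from a Vm trace
    by a straight-line fit in log-log coordinates. The band edges and the
    Welch estimator are illustrative choices, not the paper's exact ones."""
    f, psd = welch(vm, fs=fs, nperseg=4096)
    band = (f >= f_lo) & (f <= f_hi)
    slope, _ = np.polyfit(np.log10(f[band]), np.log10(psd[band]), 1)
    return -slope                        # alpha > 0 for decaying spectra

# Sanity check on synthetic Brownian noise, whose PSD falls off as 1/f^2:
rng = np.random.default_rng(0)
fs = 2000.0
vm = np.cumsum(rng.standard_normal(int(60 * fs)))   # ~1/f^2 spectrum
print(scaling_exponent(vm, fs))                     # close to 2
```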
20.  On how correlations between excitatory and inhibitory synaptic inputs maximize the information rate of neuronal firing 
Cortical neurons receive barrages of excitatory and inhibitory inputs which are not independent, as network structure and synaptic kinetics impose statistical correlations. Experiments in vitro and in vivo have demonstrated correlations between inhibitory and excitatory synaptic inputs in which inhibition lags behind excitation in cortical neurons. This delay arises in feed-forward inhibition (FFI) circuits and ensures that coincident excitation and inhibition do not preclude neuronal firing. Conversely, inhibition that is too delayed broadens neuronal integration times, thereby diminishing spike-time precision and increasing the firing frequency. This led us to hypothesize that the correlation between excitatory and inhibitory synaptic inputs modulates the encoding of information in neural spike trains. We tested this hypothesis by investigating the effect of such correlations on the information rate (IR) of spike trains using the Hodgkin-Huxley model in which both synaptic and membrane conductances are stochastic. We investigated two different synaptic input regimes: balanced synaptic conductances and balanced currents. Our results show that correlations arising from the synaptic kinetics, τ, and millisecond lags, δ, of inhibition relative to excitation strongly affect the IR of spike trains. In the regime of balanced synaptic currents, for short time lags (δ ~ 1 ms) there is an optimal τ that maximizes the IR of the postsynaptic spike train. Given the short time scales for monosynaptic inhibitory lags and synaptic decay kinetics reported in cortical neurons under physiological contexts, we propose that FFI in cortical circuits is poised to maximize the rate of information transfer between cortical neurons. Our results also provide a possible explanation for how certain drugs and genetic mutations affecting the synaptic kinetics can degrade information processing in the brain. (A sketch of generating such lagged, correlated conductance inputs follows this entry.)
doi:10.3389/fncom.2014.00059
PMCID: PMC4047963  PMID: 24936182
stochastic Hodgkin-Huxley model; synaptic kinetics; input correlation; information; feed-forward inhibition
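The two knobs studied above, the synaptic time constant τ and the inhibitory lag δ, can be reproduced with a toy generator in which inhibition is an exponentially filtered, delayed copy of the shared excitatory drive. This is a sketch of the input statistics only, not of the stochastic Hodgkin-Huxley model itself; all parameter values are illustrative assumptions.

```python
import numpy as np

def correlated_ei(T=1.0, dt=1e-4, tau=5e-3, delta=1e-3, rate=1000.0, seed=0):
    """Excitatory/inhibitory conductance traces in which inhibition is an
    exponentially filtered copy of the shared drive, lagged by delta
    seconds, mimicking feed-forward inhibition. Parameter values are
    illustrative assumptions, not the paper's."""
    rng = np.random.default_rng(seed)
    n, lag = int(T / dt), int(delta / dt)
    drive = rng.poisson(rate * dt, size=n).astype(float)  # shared input spikes
    g_e, g_i = np.zeros(n), np.zeros(n)
    for i in range(1, n):
        g_e[i] = g_e[i - 1] * (1.0 - dt / tau) + drive[i]
        g_i[i] = g_i[i - 1] * (1.0 - dt / tau) + (drive[i - lag] if i >= lag else 0.0)
    return g_e, g_i

g_e, g_i = correlated_ei()
# Cross-correlating g_e and g_i recovers a peak at the imposed lag delta;
# tau and delta are the parameters whose effect on the information rate
# the entry above investigates.
```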
21.  Dissection of cortical microcircuits by single-neuron stimulation in vivo 
Current biology : CB  2012;22(16):1459-1467.
Summary
Background
A fundamental process underlying all brain functions is the propagation of spiking activity in networks of excitatory and inhibitory neurons. In the neocortex, although functional connections between pairs of neurons have been studied extensively in brain slices, they remain poorly characterized in vivo, where the high background activity, global brain states, and neuromodulation can powerfully influence synaptic transmission. To understand how spikes are transmitted in cortical circuits in vivo, we used two-photon calcium imaging to monitor ensemble activity and targeted patching to stimulate a single neuron in mouse visual cortex.
Results
Burst spiking of a single pyramidal neuron can drive spiking activity in both excitatory and inhibitory neurons within a ~100 µm radius. Among inhibitory neurons, ~30% of somatostatin interneurons fire reliably in response to a presynaptic burst of ≥ 5 spikes. In contrast, parvalbumin interneurons show no detectable responses to single-neuron stimulation, but their spiking is highly correlated with local network activity.
Conclusions
Our results demonstrate the feasibility of mapping functional connectivity at cellular resolution in vivo and reveal distinct operations of two major inhibitory circuits, one detecting single-neuron spike bursts and the other reflecting distributed network activity.
doi:10.1016/j.cub.2012.06.007
PMCID: PMC3467311  PMID: 22748320
22.  Desynchronization of Neocortical Networks by Asynchronous Release of GABA at Autaptic and Synaptic Contacts from Fast-Spiking Interneurons 
PLoS Biology  2010;8(9):e1000492.
An activity-dependent long-lasting asynchronous release of GABA from identified fast-spiking inhibitory neurons in the neocortex can impair the reliability and temporal precision of activity in a cortical network.
Networks of specific inhibitory interneurons regulate principal cell firing in several forms of neocortical activity. Fast-spiking (FS) interneurons are potently self-inhibited by GABAergic autaptic transmission, allowing them to precisely control their own firing dynamics and timing. Here we show that in FS interneurons, high-frequency trains of action potentials can generate a delayed and prolonged GABAergic self-inhibition due to sustained asynchronous release at FS-cell autapses. Asynchronous release of GABA is simultaneously recorded in connected pyramidal (P) neurons. Asynchronous and synchronous autaptic release show differential presynaptic Ca2+ sensitivity, suggesting that they rely on different Ca2+ sensors and/or involve distinct pools of vesicles. In addition, asynchronous release is modulated by the endogenous Ca2+ buffer parvalbumin. Functionally, asynchronous release decreases FS-cell spike reliability and reduces the ability of P neurons to integrate incoming stimuli into precise firing. Since each FS cell contacts many P neurons, asynchronous release from a single interneuron may desynchronize a large portion of the local network and disrupt cortical information processing.
Author Summary
In the cerebral cortex (neocortex) of the brain, fast-spiking (FS) inhibitory cells contact many principal pyramidal (P) neurons on their cell bodies, which allows the FS cells to control the generation of action potentials (neuronal output). FS-cell-mediated rhythmic and synchronous inhibition drives coherent network oscillations of large ensembles of P neurons, indicating that FS interneurons are needed for the precise timing of cortical circuits. Interestingly, FS cells are self-innervated by GABAergic autaptic contacts, whose synchronous activation regulates FS-cell precise firing. Here we report that high-frequency firing in FS interneurons results in a massive (>10-fold), delayed, and prolonged (for seconds) increase in inhibitory events, occurring at both autaptic (FS–FS) and synaptic (FS–P) sites. This increased inhibition is due to asynchronous release of GABA from presynaptic FS cells. Delayed and disorganized asynchronous inhibitory responses significantly affected the input–output properties of both FS and P neurons, suggesting that asynchronous release of GABA might promote network desynchronization. FS interneurons can fire at high frequency (>100 Hz) in vitro and in vivo, and are known for their reliable and precise signaling. Our results show an unprecedented action of these cells, by which their tight temporal control of cortical circuits can be broken when they are driven to fire above certain frequencies.
doi:10.1371/journal.pbio.1000492
PMCID: PMC2946936  PMID: 20927409
23.  Bayesian Computation Emerges in Generic Cortical Microcircuits through Spike-Timing-Dependent Plasticity 
PLoS Computational Biology  2013;9(4):e1003037.
The principles by which networks of neurons compute, and how spike-timing dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore, these results suggest networks of Bayesian computation modules as a new model for distributed information processing in the cortex.
Author Summary
How do neurons learn to extract information from their inputs, and perform meaningful computations? Neurons receive inputs as continuous streams of action potentials or “spikes” that arrive at thousands of synapses. The strength of these synapses - the synaptic weight - undergoes constant modification. It has been demonstrated in numerous experiments that this modification depends on the temporal order of spikes in the pre- and postsynaptic neuron, a rule known as STDP, but it has remained unclear how this contributes to higher-level functions in neural network architectures. In this paper we show that, in a connectivity motif commonly found in the cortex - the winner-take-all (WTA) network - STDP induces autonomous, self-organized learning of probabilistic models of the input. The resulting function of the neural circuit is Bayesian computation on the input spike trains. Such unsupervised learning has previously been studied extensively at an abstract, algorithmic level. We show that STDP approximates one of the most powerful learning methods in machine learning, Expectation-Maximization (EM). In a series of computer simulations we demonstrate that this enables STDP in WTA circuits to solve complex learning tasks, reaching a performance level that surpasses previous uses of spiking neural networks. (A toy sketch of such a WTA-plus-plasticity loop follows this entry.)
doi:10.1371/journal.pcbi.1003037
PMCID: PMC3636028  PMID: 23633941
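To make the WTA-plus-plasticity idea concrete, here is a deliberately simplified caricature: a softmax stands in for lateral inhibition, and a Hebbian move of the winner's weights toward the input stands in for the paper's STDP rule. The function name, the update rule, and all constants are illustrative assumptions; this is an online, EM-flavored clustering sketch, not the paper's derivation.

```python
import numpy as np

def wta_step(W, b, x, eta=0.05, rng=None):
    """One input presentation in a soft winner-take-all circuit. W is the
    (K, D) weight matrix, b the (K,) excitability terms, x a (D,) binary
    input. A winner is sampled from a softmax over activations; its
    weights move toward the input. Simplified stand-in for STDP."""
    rng = rng or np.random.default_rng()
    u = W @ x + b                              # membrane activations
    p = np.exp(u - u.max()); p /= p.sum()      # soft WTA competition
    k = rng.choice(len(p), p=p)                # stochastic winner ("E-step")
    W[k] += eta * (x - W[k])                   # Hebbian move toward input ("M-step")
    b += 0.01 * ((np.arange(len(p)) == k) - p) # slow excitability/prior adaptation
    return k

rng = np.random.default_rng(1)
D, K = 20, 3
W, b = rng.uniform(0.0, 0.1, (K, D)), np.zeros(K)
prototypes = (rng.random((K, D)) < 0.5).astype(float)   # hidden causes
for _ in range(2000):
    wta_step(W, b, prototypes[rng.integers(K)], rng=rng)
# Each row of W converges toward one prototype: the circuit has fitted an
# implicit generative model of its inputs, EM-style.
```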
24.  Spike Timing Regulation on the Millisecond Scale by Distributed Synaptic Plasticity at the Cerebellum Input Stage: A Simulation Study 
The way long-term synaptic plasticity regulates neuronal spike patterns is not completely understood. This issue is especially relevant for the cerebellum, which is endowed with several forms of long-term synaptic plasticity and has been predicted to operate as a timing and a learning machine. Here we have used a computational model to simulate the impact of multiple distributed synaptic weights in the cerebellar granular-layer network. In response to mossy fiber (MF) bursts, synaptic weights at multiple connections played a crucial role in regulating spike number and positioning in granule cells. The weight at MF to granule cell synapses regulated the delay of the first spike, and the weight at MF and parallel fiber to Golgi cell synapses regulated the duration of the time window during which the first spike could be emitted. Moreover, the weights of synapses controlling Golgi cell activation regulated the intensity of granule cell inhibition and therefore the number of spikes that could be emitted. First-spike timing was regulated with millisecond precision and the number of spikes ranged from zero to three. Interestingly, different combinations of synaptic weights optimized either first-spike timing precision or spike number, efficiently controlling transmission and filtering properties. These results predict that distributed synaptic plasticity regulates the emission of quasi-digital spike patterns on the millisecond time-scale and allows the cerebellar granular layer to flexibly control burst transmission along the MF pathway. (A toy leaky-integrator illustration of the weight-latency relationship follows this entry.)
doi:10.3389/fncom.2013.00064
PMCID: PMC3660969  PMID: 23720626
spiking network; spike timing; cerebellum; granular layer; LTP; LTD
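The weight-latency relationship described above (stronger MF-to-granule-cell weights, shorter first-spike delay) can be illustrated with a toy leaky integrator. This is a stand-in for the granule-cell model, not the paper's biophysical network; all parameters are assumed for illustration.

```python
def first_spike_delay(w, tau_m=5e-3, v_th=1.0, dt=1e-4, t_max=0.05):
    """First-spike latency of a leaky integrator driven by a constant
    'mossy-fiber burst' drive of strength w (arbitrary units). A toy
    stand-in for the granule-cell model; parameters are illustrative."""
    v = 0.0
    for i in range(int(t_max / dt)):
        v += dt * (-v / tau_m + w)      # leaky integration of the burst drive
        if v >= v_th:
            return i * dt               # latency in seconds
    return None                         # no spike within the window

# Stronger MF->granule weights shorten the first-spike delay, as above.
for w in (250.0, 500.0, 1000.0):
    print(w, first_spike_delay(w))
```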
25.  LTS and FS Inhibitory Interneurons, Short-Term Synaptic Plasticity, and Cortical Circuit Dynamics 
PLoS Computational Biology  2011;7(10):e1002248.
Somatostatin-expressing, low threshold-spiking (LTS) cells and fast-spiking (FS) cells are two common subtypes of inhibitory neocortical interneuron. Excitatory synapses from regular-spiking (RS) pyramidal neurons to LTS cells strongly facilitate when activated repetitively, whereas RS-to-FS synapses depress. This suggests that LTS neurons may be especially relevant in high-rate regimes and protect cortical circuits against over-excitation and seizures. However, the inhibitory synapses from LTS cells usually depress, which may reduce their effectiveness at high rates. We ask: by which mechanisms and at what firing rates do LTS neurons control the activity of cortical circuits responding to thalamic input, and how is control by LTS neurons different from that of FS neurons? We study rate models of circuits that include RS cells and LTS and FS inhibitory cells with short-term synaptic plasticity. LTS neurons shift the RS firing-rate vs. current curve to the right at high rates and reduce its slope at low rates; the LTS effect is delayed and prolonged. FS neurons always shift the curve to the right and affect RS firing transiently. In an RS-LTS-FS network, FS neurons reach a quiescent state if they receive weak input, LTS neurons are quiescent if RS neurons receive weak input, and both FS and RS populations are active if they both receive large inputs. In general, FS neurons tend to follow the spiking of RS neurons much more closely than LTS neurons. A novel type of facilitation-induced slow oscillation is observed above the LTS firing threshold, with a frequency determined by the time scale of recovery from facilitation. To conclude, contrary to earlier proposals, LTS neurons affect the transient and steady state responses of cortical circuits over a range of firing rates, not only during the high rate regime; LTS neurons protect against over-activation about as well as FS neurons.
Author Summary
The brain consists of circuits of neurons that signal to one another via synapses. There are two classes of neurons: excitatory cells, which cause other neurons to become more active, and inhibitory neurons, which cause other neurons to become less active. It is thought that the activity of excitatory neurons is kept in check largely by inhibitory neurons; when such an inhibitory “brake” fails, a seizure can result. Inhibitory neurons of the low-threshold spiking (LTS) subtype can potentially fulfill this braking, or anticonvulsant, role because the synaptic input to these neurons facilitates, i.e., those neurons are active when excitatory neurons are strongly active. Using a computational model we show that, because the synaptic output of LTS neurons onto excitatory neurons depresses (decreases with activity), the ability of LTS neurons to prevent strong cortical activity and seizures is not qualitatively larger than that of inhibitory neurons of another subtype, the fast-spiking (FS) cells. Furthermore, short-term (∼one second) changes in the strength of synapses to and from LTS interneurons allow them to shape the behavior of cortical circuits even at modest rates of activity, and an RS-LTS-FS circuit is capable of producing slow oscillations on the time scale of these short-term changes. (A minimal sketch of facilitating versus depressing synapse dynamics follows this entry.)
doi:10.1371/journal.pcbi.1002248
PMCID: PMC3203067  PMID: 22046121
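The facilitating versus depressing distinction that drives this entry is conventionally captured by the Tsodyks-Markram short-term plasticity model: a resource variable that depletes with each spike and a utilization variable that facilitates. A minimal sketch follows; the parameter sets below are illustrative assumptions for RS-to-LTS-like (facilitating) and RS-to-FS-like (depressing) synapses, not fitted values from the paper.

```python
import numpy as np

def tm_synapse(spike_times, tau_rec, tau_fac, U):
    """Per-spike efficacies of a Tsodyks-Markram synapse. Between spikes,
    resources x recover toward 1 (tau_rec) and utilization u decays toward
    0 (tau_fac); each spike facilitates u and depletes x."""
    x, u, last_t, eff = 1.0, 0.0, 0.0, []
    for t in spike_times:
        dt_s = t - last_t
        x = 1.0 - (1.0 - x) * np.exp(-dt_s / tau_rec)  # resource recovery
        u = u * np.exp(-dt_s / tau_fac)                # facilitation decay
        u = u + U * (1.0 - u)                          # spike-triggered increase
        eff.append(u * x)                              # efficacy of this spike
        x = x - u * x                                  # vesicle depletion
        last_t = t
    return eff

spikes = np.arange(0.0, 0.25, 0.025)                   # 40 Hz train
print("facilitating:", [round(e, 3) for e in tm_synapse(spikes, 0.1, 0.5, 0.1)])
print("depressing:  ", [round(e, 3) for e in tm_synapse(spikes, 0.5, 0.01, 0.5)])
```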
