Estimating causal interactions between neurons is important for a better understanding of functional connectivity in neuronal networks. We propose a method called normalized permutation transfer entropy (NPTE) to evaluate the temporal causal interaction between spike trains; it quantifies the fraction of ordinal information in one neuron that is also present in another. The performance of this method is evaluated on spike trains generated by an Izhikevich neuronal model. Results show that the NPTE method can effectively estimate the causal interaction between two neurons independently of data length. Considering both the precision of the estimated time delay and the robustness of the estimated information flow against changes in neuronal firing rate, the NPTE method is superior to other information-theoretic methods, including normalized transfer entropy, symbolic transfer entropy, and permutation conditional mutual information. To test the performance of NPTE on simulated, biophysically realistic synapses, a cortical network built from Izhikevich neurons is employed. The NPTE method is found to characterize mutual interactions and exactly identify spurious causality in a network of three neurons. We conclude that the proposed method yields more reliable comparisons of interactions between different pairs of neurons and is a promising tool to uncover further details of neural coding.
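The core computation behind such ordinal measures can be sketched as follows: map each time series to permutation symbols, then estimate transfer entropy between the symbol sequences. This is a minimal illustration of symbolic transfer entropy with history length one, not the authors' exact NPTE normalization; the function names and the order parameter `m` are illustrative choices.

```python
from collections import Counter
from math import log2

def ordinal_symbols(series, m=3):
    """Map each length-m window of the series to its ordinal pattern
    (the permutation that sorts the window's values)."""
    return [tuple(sorted(range(m), key=lambda i: series[t + i]))
            for t in range(len(series) - m + 1)]

def symbolic_transfer_entropy(source, target, m=3):
    """Transfer entropy (bits) from `source` to `target`, history length 1,
    estimated from the ordinal-symbol sequences of the two series."""
    xs, ys = ordinal_symbols(target, m), ordinal_symbols(source, m)
    triples = Counter(zip(xs[1:], xs[:-1], ys[:-1]))   # (x_{t+1}, x_t, y_t)
    pairs_xy = Counter(zip(xs[:-1], ys[:-1]))          # (x_t, y_t)
    pairs_xx = Counter(zip(xs[1:], xs[:-1]))           # (x_{t+1}, x_t)
    singles = Counter(xs[:-1])                         # x_t
    n = len(xs) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_cond_full = c / pairs_xy[(x0, y0)]            # p(x_{t+1} | x_t, y_t)
        p_cond_self = pairs_xx[(x1, x0)] / singles[x0]  # p(x_{t+1} | x_t)
        te += (c / n) * log2(p_cond_full / p_cond_self)
    return te
```

On a pair of series where the target is simply a delayed copy of the source, the estimate in the causal direction should dominate the reverse direction, which reflects only finite-sample bias.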
A fundamental process underlying all brain functions is the propagation of spiking activity in networks of excitatory and inhibitory neurons. In the neocortex, although functional connections between pairs of neurons have been studied extensively in brain slices, they remain poorly characterized in vivo, where the high background activity, global brain states, and neuromodulation can powerfully influence synaptic transmission. To understand how spikes are transmitted in cortical circuits in vivo, we used two-photon calcium imaging to monitor ensemble activity and targeted patching to stimulate a single neuron in mouse visual cortex.
Burst spiking of a single pyramidal neuron can drive spiking activity in both excitatory and inhibitory neurons within a ~100 µm radius. Among inhibitory neurons, ~30% of somatostatin interneurons fired reliably in response to a presynaptic burst of ≥ 5 spikes. In contrast, parvalbumin interneurons showed no detectable responses to single-neuron stimulation, but their spiking was highly correlated with local network activity.
Our results demonstrate the feasibility of mapping functional connectivity at cellular resolution in vivo and reveal distinct operations of two major inhibitory circuits, one detecting single-neuron spike bursts and the other reflecting distributed network activity.
A typical property of isolated cultured neuronal networks of dissociated rat cortical cells is synchronized spiking, called bursting, which starts about one week after plating, when the dissociated cells have sufficiently extended their neurites and formed enough synaptic connections. This paper is the third in a series of three on simulation models of cultured networks. Our two previous studies have shown that random recurrent network activity models generate intra- and inter-burst patterns similar to experimental data. The networks were noise- or pacemaker-driven and had Izhikevich neuronal elements with only short-term plastic (STP) synapses (so no long-term potentiation, LTP, or depression, LTD, was included). However, elevated pre-phases (burst leaders) and after-phases of burst main shapes, which usually arise during the development of the network, were not yet simulated in sufficient detail. This lack of detail may be due to the fact that the random models completely lacked network topology and a growth model. Therefore, the present paper adds, for the first time, a growth model to the activity model, to give the network a time-dependent topology and to explain burst shapes in more detail, again without LTP or LTD mechanisms. The integrated growth-activity model yielded realistic bursting patterns. The automatic adjustment of various mutually interdependent network parameters is one of the major advantages of our current approach. Spatio-temporal bursting activity was validated against experiment. Depending on network size, wave reverberation mechanisms were seen along the network boundaries, which may explain the generation of phases of elevated firing before and after the main phase of the burst shape. In summary, the results show that adding topology and growth explains burst shapes in great detail and suggests that young networks still lack, or do not need, LTP or LTD mechanisms.
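For reference, the single-neuron dynamics underlying such Izhikevich-based activity models follow the standard two-variable equations; below is a minimal Euler-integrated sketch using the common regular-spiking parameter values (the growth and STP machinery of the full model is not included).

```python
def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5):
    """Single Izhikevich neuron driven by an input-current trace `I`
    (one value per dt-ms step); returns the spike step indices.

    v' = 0.04 v^2 + 5 v + 140 - u + I
    u' = a (b v - u),  with reset v <- c, u <- u + d when v reaches 30 mV.
    """
    v, u = c, b * c
    spikes = []
    for t, i_t in enumerate(I):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_t)
        if v >= 30.0:               # spike peak reached: reset v, bump recovery u
            spikes.append(t)
            v = c
            u += d
        u += dt * a * (b * v - u)   # slow recovery variable
    return spikes
```

With a constant suprathreshold drive this parameter set produces tonic spiking, the regime on which the bursting network models build.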
Coordinated spiking activity in neuronal ensembles, in local networks and across multiple cortical areas, is thought to provide the neural basis for cognition and adaptive behavior. Examining such collective dynamics at the level of single neuron spikes has remained, however, a considerable challenge. We found that the spiking history of small and randomly sampled ensembles (~20–200 neurons) could predict subsequent single neuron spiking with substantial accuracy in the sensorimotor cortex of humans and nonhuman behaving primates. Furthermore, spiking was better predicted by the ensemble’s history than by the ensemble’s instantaneous state (Ising models), emphasizing the role of temporal dynamics leading to spiking. Notably, spiking could be predicted not only by local ensemble spiking histories, but also by spiking histories in different cortical areas. These strong collective dynamics may provide a basis for understanding cognition and adaptive behavior at the level of coordinated spiking in cortical networks.
The way long-term synaptic plasticity regulates neuronal spike patterns is not completely understood. This issue is especially relevant for the cerebellum, which is endowed with several forms of long-term synaptic plasticity and has been predicted to operate as a timing and a learning machine. Here we have used a computational model to simulate the impact of multiple distributed synaptic weights in the cerebellar granular-layer network. In response to mossy fiber (MF) bursts, synaptic weights at multiple connections played a crucial role to regulate spike number and positioning in granule cells. The weight at MF to granule cell synapses regulated the delay of the first spike and the weight at MF and parallel fiber to Golgi cell synapses regulated the duration of the time-window during which the first-spike could be emitted. Moreover, the weights of synapses controlling Golgi cell activation regulated the intensity of granule cell inhibition and therefore the number of spikes that could be emitted. First-spike timing was regulated with millisecond precision and the number of spikes ranged from zero to three. Interestingly, different combinations of synaptic weights optimized either first-spike timing precision or spike number, efficiently controlling transmission and filtering properties. These results predict that distributed synaptic plasticity regulates the emission of quasi-digital spike patterns on the millisecond time-scale and allows the cerebellar granular layer to flexibly control burst transmission along the MF pathway.
spiking network; spike timing; cerebellum; granular layer; LTP; LTD
Spike timing-dependent plasticity (STDP) is a computationally powerful form of plasticity in which synapses are strengthened or weakened according to the temporal order and precise millisecond-scale delay between presynaptic and postsynaptic spiking activity. STDP is readily observed in vitro, but evidence for STDP in vivo is scarce. Here, we studied spike timing-dependent synaptic depression in single putative pyramidal neurons of the rat primary somatosensory cortex (S1) in vivo, using two techniques. First, we recorded extracellularly from layer 2/3 (L2/3) and L5 neurons, and paired spontaneous action potentials (postsynaptic spikes) with subsequent subthreshold deflection of one whisker (to drive presynaptic afferents to the recorded neuron) to produce “post-leading-pre” spike pairings at known delays. Short delay pairings (<17 ms) resulted in a significant decrease of the extracellular spiking response specific to the paired whisker, consistent with spike timing-dependent synaptic depression. Second, in whole-cell recordings from neurons in L2/3, we paired postsynaptic spikes elicited by direct-current injection with subthreshold whisker deflection to drive presynaptic afferents to the recorded neuron at precise temporal delays. Post-leading-pre pairing (<33 ms delay) decreased the slope and amplitude of the PSP evoked by the paired whisker, whereas “pre-leading-post” delays failed to produce depression, and sometimes produced potentiation of whisker-evoked PSPs. These results demonstrate that spike timing-dependent synaptic depression occurs in S1 in vivo, and is therefore a plausible plasticity mechanism in the sensory cortex.
spike-timing dependent plasticity; STDP; somatosensory cortex; plasticity; rat; synaptic depression; LTP; LTD; barrel
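The asymmetric dependence of plasticity on pre/post spike order described above is commonly modeled with an exponential pairing kernel. The sketch below is the textbook pair-based form with illustrative amplitudes and 20 ms time constants, not the specific in vivo windows measured in this study.

```python
from math import exp

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Weight change for a single pre/post spike pair.

    delta_t = t_post - t_pre (ms):
      positive -> pre-leading-post pairing (potentiation),
      negative -> post-leading-pre pairing (depression).
    """
    if delta_t > 0:
        return a_plus * exp(-delta_t / tau_plus)
    if delta_t < 0:
        return -a_minus * exp(delta_t / tau_minus)
    return 0.0
```

The short-delay post-leading-pre pairings in the experiments correspond to the negative-`delta_t` branch, where the magnitude of depression decays as the delay grows.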
We have developed a spiking neural network simulator, which is both easy to use and computationally efficient, for the generation of large-scale computational neuroscience models. The simulator implements current- or conductance-based Izhikevich neuron networks with spike-timing dependent plasticity and short-term plasticity, and uses a standard network construction interface. It allows for execution on either GPUs or CPUs. The simulator, which is written in C/C++, allows both fine-grain and coarse-grain specification of a host of parameters. We demonstrate its ease of use and computational efficiency by implementing a large-scale model of cortical areas V1, V4, and MT. The complete model, which has 138,240 neurons and approximately 30 million synapses, runs in real time on an off-the-shelf GPU. The simulator source code, as well as the source code for the cortical model examples, is publicly available.
visual cortex; spiking neurons; STDP; short-term plasticity; simulation; computational neuroscience; software; GPU
Synfire chains, sequences of neuron pools linked by feedforward connections, support the propagation of precisely timed spike sequences, or synfire waves. An important open question is how synfire chains can be efficiently embedded in cortical architecture. We present a model of synfire chain embedding in a cortical-scale recurrent network using conductance-based synapses, balanced chains, and variable transmission delays. The network attains substantially higher embedding capacities than previous spiking neuron models and allows all of its connections to be used for embedding. The number of waves in the model is regulated by recurrent background noise. We computationally explore the embedding capacity limit, and use a mean-field analysis to describe the equilibrium state. Simulations confirm the mean-field analysis over broad ranges of pool sizes and connectivity levels; the number of pools embedded in the system trades off against the firing rate and the number of waves. An optimal inhibition level balances the conflicting requirements of stable synfire propagation and limited response to background noise. A simplified analysis shows that the present conductance-based synapses achieve higher contrast between the responses to synfire input and background noise than current-based synapses, while the regulation of wave numbers is traced to the use of variable transmission delays.
Recurrent network dynamics; Feedforward network; Synchrony; Synaptic conductance; Synfire chain; Storage capacity
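The pool-to-pool propagation at the heart of a synfire chain can be illustrated with a toy binomial model (no conductances, delays, inhibition, or background noise, which are the actual ingredients of the model above): a neuron in pool k fires iff enough active neurons in pool k-1 happen to connect to it. All names and parameter values are illustrative.

```python
import random

def synfire_trace(n_pools, pool_size=100, p_connect=0.5, threshold=30,
                  p0=1.0, seed=1):
    """Toy synfire propagation: each neuron in pool k fires iff at least
    `threshold` of the active neurons in pool k-1 connect to it (each
    connection present with probability p_connect).  Returns the number
    of active neurons in each pool."""
    random.seed(seed)
    active = int(p0 * pool_size)          # neurons firing in the first pool
    trace = [active]
    for _ in range(n_pools - 1):
        fired = sum(
            sum(random.random() < p_connect for _ in range(active)) >= threshold
            for _ in range(pool_size))
        trace.append(fired)
        active = fired
    return trace
```

This captures the basic threshold behavior of wave propagation: a fully activated first pool sustains a wave down the chain, while a weakly activated one lets the wave die out.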
A central tenet of most theories of synaptic modification during cortical development is that correlated activity drives plasticity in synaptically connected neurons. Unexpectedly, however, using sensory-evoked activity patterns recorded from the developing mouse cortex in vivo, the synaptic learning rule that we uncover here relies solely on the presynaptic neuron. A burst of three presynaptic spikes followed, within a restricted time window, by a single presynaptic spike induces robust long-term depression (LTD) at developing layer 4 to layer 2/3 synapses. This presynaptic spike pattern-dependent LTD (p-LTD) can be induced by individual presynaptic layer 4 cells, requires presynaptic NMDA receptors and calcineurin, and is expressed presynaptically. However, in contrast to spike timing-dependent LTD, p-LTD is independent of postsynaptic and astroglial signaling. This spike pattern-dependent learning rule complements timing-based rules and is likely to play a role in the pruning of synaptic input during cortical development.
► Natural spike patterns in layer 4 neurons induce LTD at downstream synapses ► Spike pattern-dependent LTD can be induced in individual presynaptic neurons ► Spike pattern-dependent LTD requires presynaptic NMDA receptors and calcineurin ► Spike pattern-dependent LTD is independent of postsynaptic and astroglial signaling
Using natural spike patterns recorded from cortical layer 4 neurons in vivo, Rodríguez-Moreno et al. uncover a new spike pattern-dependent synaptic learning rule. They find that individual presynaptic neurons can drive NMDA receptor-dependent synaptic depression without a requirement for postsynaptic activity.
Spike-timing-dependent synaptic plasticity (STDP) is a simple and effective learning rule for sequence learning. However, synapses subject to STDP rules are readily influenced in noisy circumstances, because synaptic conductances are modified by pre- and postsynaptic spikes elicited within a few tens of milliseconds, regardless of whether those spikes convey information. Noisy firing, present everywhere in the brain, may induce irrelevant enhancement of synaptic connections through STDP rules, resulting in uncertain memory encoding and obscure memory patterns. Here we show that the LTD windows of STDP rules enable robust sequence learning amid background noise, in cooperation with a large signal transmission delay between neurons and a theta rhythm, using a network model of entorhinal cortex layer II with entorhinal-hippocampal loop connections. The key elements of the present model for robust sequence learning amid background noise are the symmetric STDP rule, which has LTD windows on both sides of the LTP window, together with the large signal transmission delay of the loop connections and the theta rhythm pacing the activity of stellate cells. Above all, the LTD window in the range of positive spike timings is important for preventing the influence of noise as sequence learning progresses.
STDP; LTD window; Noise; Sequence learning; Entorhinal cortex; Entorhinal-hippocampal loop circuitry; Large transmission delay; Theta rhythm
We study the storage and retrieval of phase-coded patterns as stable dynamical attractors in recurrent neural networks, for both an analog and an integrate-and-fire spiking model. The synaptic strength is determined by a learning rule based on spike-timing-dependent plasticity, with an asymmetric time window depending on the relative timing between pre- and postsynaptic activity. We store multiple patterns and study the network capacity. For the analog model, we find that the network capacity scales linearly with the network size, and that both the capacity and the oscillation frequency of the retrieval state depend on the asymmetry of the learning time window. In addition to fully connected networks, we study sparse networks, where each neuron is connected only to a small number z ≪ N of other neurons. Connections can be short range, between neighboring neurons placed on a regular lattice, or long range, between randomly chosen pairs of neurons. We find that a small fraction of long-range connections is able to amplify the capacity of the network. This implies that a small-world network topology is optimal, as a compromise between the cost of long-range connections and the capacity increase. The crucial result of storing and retrieving multiple phase-coded patterns is also observed in the spiking integrate-and-fire model. The capacity of the fully connected spiking network is investigated, together with the relation between the oscillation frequency of the retrieval state and the window asymmetry.
phase-coding; associative memory; storage capacity; replay; STDP
Most neuronal networks, even in the absence of external stimuli, produce spontaneous bursts of spikes separated by periods of reduced activity. The origin and functional role of these neuronal events are still unclear. The present work shows that the spontaneous activity of two very different networks, intact leech ganglia and dissociated cultures of rat hippocampal neurons, share several features. Indeed, in both networks: i) the inter-spike intervals distribution of the spontaneous firing of single neurons is either regular or periodic or bursting, with the fraction of bursting neurons depending on the network activity; ii) bursts of spontaneous spikes have the same broad distributions of size and duration; iii) the degree of correlated activity increases with the bin width, and the power spectrum of the network firing rate has a 1/f behavior at low frequencies, indicating the existence of long-range temporal correlations; iv) the activity of excitatory synaptic pathways mediated by NMDA receptors is necessary for the onset of the long-range correlations and for the presence of large bursts; v) blockage of inhibitory synaptic pathways mediated by GABAA receptors causes instead an increase in the correlation among neurons and leads to a burst distribution composed only of very small and very large bursts. These results suggest that the spontaneous electrical activity in neuronal networks with different architectures and functions can have very similar properties and common dynamics.
Experimental studies of neuronal cultures have revealed a wide variety of spiking network activity ranging from sparse, asynchronous firing to distinct, network-wide synchronous bursting. However, the functional mechanisms driving these observed firing patterns are not well understood. In this work, we develop an in silico network of cortical neurons based on known features of similar in vitro networks. The activity from these simulations is found to closely mimic experimental data. Furthermore, the strength or degree of network bursting is found to depend on a few parameters: the density of the culture, the type of synaptic connections, and the ratio of excitatory to inhibitory connections. Network bursting gradually becomes more prominent as either the density, the fraction of long range connections, or the fraction of excitatory neurons is increased. Interestingly, biologically prevalent values of parameters result in networks that are at the transition between strong bursting and sparse firing. Using principal components analysis, we show that a large fraction of the variance in firing rates is captured by the first component for bursting networks. These results have implications for understanding how information is encoded at the population level as well as for why certain network parameters are ubiquitous in cortical tissue.
neuronal cultures; bursting; computer simulations; synchrony; small-world networks
Self-organized criticality refers to the spontaneous emergence of self-similar dynamics in complex systems poised between order and randomness. The presence of self-organized critical dynamics in the brain is theoretically appealing and is supported by recent neurophysiological studies. Despite this, the neurobiological determinants of these dynamics have not been previously sought. Here, we systematically examined the influence of such determinants in hierarchically modular networks of leaky integrate-and-fire neurons with spike-timing-dependent synaptic plasticity and axonal conduction delays. We characterized emergent dynamics in our networks by distributions of active neuronal ensemble modules (neuronal avalanches) and rigorously assessed these distributions for power-law scaling. We found that spike-timing-dependent synaptic plasticity enabled a rapid phase transition from random subcritical dynamics to ordered supercritical dynamics. Importantly, modular connectivity and low wiring cost broadened this transition, and enabled a regime indicative of self-organized criticality. The regime only occurred when modular connectivity, low wiring cost and synaptic plasticity were simultaneously present, and the regime was most evident when between-module connection density scaled as a power-law. The regime was robust to variations in other neurobiologically relevant parameters and favored systems with low external drive and strong internal interactions. Increases in system size and connectivity facilitated internal interactions, permitting reductions in external drive and facilitating convergence of postsynaptic-response magnitude and synaptic-plasticity learning rate parameter values towards neurobiologically realistic levels. We hence infer a novel association between self-organized critical neuronal dynamics and several neurobiologically realistic features of structural connectivity. 
The central role of these features in our model may reflect their importance for neuronal information processing.
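The avalanche analysis described above reduces, in its simplest form, to two steps: segment the binned population activity into avalanches and fit a power-law exponent to the resulting size distribution. The sketch below uses the standard continuous MLE approximation for discrete power laws, not the rigorous scaling assessment the authors perform; names and bin conventions are illustrative.

```python
from math import log

def avalanche_sizes(spike_counts):
    """Split a binned spike-count series into avalanches: runs of
    consecutive non-empty bins separated by empty bins; the size of an
    avalanche is the total spike count in the run."""
    sizes, current = [], 0
    for c in spike_counts:
        if c > 0:
            current += c
        elif current:
            sizes.append(current)
            current = 0
    if current:
        sizes.append(current)
    return sizes

def powerlaw_mle(sizes, s_min=1):
    """Power-law exponent via the continuous MLE approximation for
    discrete data: alpha = 1 + n / sum(ln(s / (s_min - 0.5)))."""
    tail = [s for s in sizes if s >= s_min]
    return 1.0 + len(tail) / sum(log(s / (s_min - 0.5)) for s in tail)
```

A fitted exponent near the critical value then serves as one signature (among several) of the self-organized critical regime.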
The intricate relationship between structural brain connectivity and functional brain activity is an important and intriguing research area. Brain structure (the pattern of neuroanatomical connections) is thought to strongly influence and constrain brain function (the pattern of neuronal activations). Concurrently, brain function is thought to gradually reshape brain structure, through processes such as activity-dependent plasticity (the “what fires together, wires together” principle). In this study, we examined the relationship between brain structure and function in a biologically realistic mathematical model. More specifically, we considered the relationship between realistic features of brain structure, such as self-similar organization of specialized brain regions at multiple spatial scales (hierarchical modularity) and realistic features of brain activity, such as self-similar complex dynamics poised between order and randomness (self-organized criticality). We found a direct association between these structural and functional features in our model. This association only occurred in the presence of activity-dependent plasticity, and may reflect the importance of the corresponding structural and functional features in neuronal information processing.
Avian nucleus isthmi pars parvocellularis (Ipc) neurons are reciprocally connected with the layer 10 (L10) neurons in the optic tectum and respond with oscillatory bursts to visual stimulation. Our in vitro experiments show that both neuron types respond with regular spiking to somatic current injection and that the feedforward and feedback synaptic connections are excitatory, but of different strength and time course. To elucidate mechanisms of oscillatory bursting in this network of regularly spiking neurons, we investigated an experimentally constrained model of coupled leaky integrate-and-fire neurons with spike-rate adaptation. The model reproduces the observed Ipc oscillatory bursting in response to simulated visual stimulation. A scan through the model parameter volume reveals that Ipc oscillatory burst generation can be caused by strong and brief feedforward synaptic conductance changes. The mechanism is sensitive to the parameter values of spike-rate adaptation. In conclusion, we show that a network of regular-spiking neurons with feedforward excitation and spike-rate adaptation can generate oscillatory bursting in response to a constant input.
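The key single-neuron mechanism here, spike-rate adaptation in a leaky integrate-and-fire neuron, can be sketched as follows (dimensionless units and illustrative parameters; the full model additionally couples Ipc and L10 populations through feedforward and feedback synapses).

```python
def lif_adapt(I, dt=0.1, tau_m=20.0, tau_a=100.0,
              v_th=1.0, v_reset=0.0, b=0.2):
    """Leaky integrate-and-fire neuron with an adaptation current `a`
    that is incremented by `b` at each spike and decays with time
    constant tau_a; `I` is the input drive, one value per dt-ms step.
    Returns the spike step indices."""
    v, a, spikes = 0.0, 0.0, []
    for t, i_t in enumerate(I):
        v += dt * (-v - a + i_t) / tau_m   # leaky membrane minus adaptation
        a += dt * (-a) / tau_a             # adaptation decays between spikes
        if v >= v_th:
            spikes.append(t)
            v = v_reset
            a += b                         # each spike strengthens adaptation
    return spikes
```

Under constant drive, the accumulating adaptation current progressively lengthens the inter-spike intervals, the ingredient the parameter scan above identifies as critical for oscillatory bursting.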
Developed biological systems are endowed with the ability to interact with the environment; they sense the external state and react to it by changing their own internal state. Many attempts have been made to build ‘hybrids’ with the ability to perceive, modify, and react to external modifications. Investigation of the rules that govern network changes in a hybrid system may lead to effective methods for ‘programming’ the neural tissue toward a desired task. Here we show a new perspective on the use of cortical neuronal cultures from embryonic mouse as a working platform to study targeted synaptic modifications. Unlike the common timing-based methods applied in bio-hybrid robotics, here we evaluated the importance of endogenous spike timing in information processing. We characterized the influence of a spike-patterned stimulus in determining changes in neuronal synchronization (connectivity strength and precision) of the evoked spiking and bursting activity in the network. We show that tailoring the stimulation pattern to the neuronal spike timing induces the network to respond more strongly and more precisely to the stimulation. Interestingly, the induced modifications are conveyed more consistently in the burst timing. This increase in strength and precision may be key in the interaction of the network with the external world and may be used to induce directional changes in bio-hybrid systems.
A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. We focus in this study on the inference of excitatory synaptic links. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus by conditioning with respect to the global mean activity, we improve the performance of our method. This allows us to focus the analysis to specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections, rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies such as cross-correlation or Granger Causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimation of the excitatory network clustering coefficient, allowing for discrimination between weakly and strongly clustered topologies. 
Finally, we demonstrate the applicability of our method to analyses of real recordings of in vitro disinhibited cortical cultures where we suggest that excitatory connections are characterized by an elevated level of clustering compared to a random graph (although not extreme) and can be markedly non-local.
Unraveling the general organizing principles of connectivity in neural circuits is a crucial step towards understanding brain function. However, even the simpler task of assessing the global excitatory connectivity of a culture in vitro, where neurons form self-organized networks in the absence of external stimuli, remains challenging. Neuronal cultures undergo spontaneous switching between episodes of synchronous bursting and quieter inter-burst periods. We introduce here a novel algorithm which aims at inferring the connectivity of neuronal cultures from calcium fluorescence recordings of their network dynamics. To achieve this goal, we develop a suitable generalization of Transfer Entropy, an information-theoretic measure of causal influences between time series. Unlike previous algorithmic approaches to reconstruction, Transfer Entropy is data-driven and does not rely on specific assumptions about neuronal firing statistics or network topology. We generate simulated calcium signals from networks with controlled ground-truth topology and purely excitatory interactions and show that, by restricting the analysis to inter-burst periods, Transfer Entropy robustly achieves a good reconstruction performance for disparate network connectivities. Finally, we apply our method to real data and find evidence of non-random features in cultured networks, such as the existence of highly connected hub excitatory neurons and of an elevated (but not extreme) level of clustering.
This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and because the information they encode must be decoded by other neurons, if it is to affect behavior. However, population coding theory has often ignored network structure, or assumed discrete, fully connected populations (in contrast with the sparsely connected, continuous sheet of the cortex). In this study, we modeled a sheet of cortical neurons with sparse, primarily local connections, and found that a network with this structure could encode multiple internal state variables with high signal-to-noise ratio. However, we were unable to create high-fidelity networks by instantiating connections at random according to spatial connection probabilities. In our models, high-fidelity networks required additional structure, with higher cluster factors and correlations between the inputs to nearby neurons.
neural networks; population coding; singular value decomposition; non-negative matrix factorization; layer 2/3; network dynamics; network structure
Synchronization of globus pallidus (GP) neurons and cortically-entrained oscillations between GP and other basal ganglia nuclei are key features of the pathophysiology of Parkinson's disease. Phase response curves (PRCs), which tabulate the effects of phasic inputs within a neuron's spike cycle on output spike timing, are efficient tools for predicting the emergence of synchronization in neuronal networks and entrainment to periodic input. In this study we apply physiologically realistic synaptic conductance inputs to a full morphological GP neuron model to determine the phase response properties of the soma and different regions of the dendritic tree. We find that perisomatic excitatory inputs delivered throughout the inter-spike interval advance the phase of the spontaneous spike cycle yielding a type I PRC. In contrast, we demonstrate that distal dendritic excitatory inputs can either delay or advance the next spike depending on whether they occur early or late in the spike cycle. We find this latter pattern of responses, summarized by a biphasic (type II) PRC, was a consequence of dendritic activation of the small conductance calcium-activated potassium current, SK. We also evaluate the spike-frequency dependence of somatic and dendritic PRC shapes, and we demonstrate the robustness of our results to variations of conductance densities, distributions, and kinetic parameters. We conclude that the distal dendrite of GP neurons embodies a distinct dynamical subsystem that could promote synchronization of pallidal networks to excitatory inputs. These results highlight the need to consider different effects of perisomatic and dendritic inputs in the control of network behavior.
dendrite; SK current; synchronization; oscillation; basal ganglia; Parkinson's disease
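The measurement behind a phase response curve can be illustrated with a far simpler model than the full morphological GP neuron used in the study. The sketch below (assumptions: a leaky integrate-and-fire neuron with constant drive, arbitrary units, invented parameters) delivers a brief excitatory kick at different phases of the spike cycle and records the resulting phase advance. Lacking dendrites and SK conductances, this toy can only produce the purely advancing type I shape, not the biphasic type II responses described for distal dendritic inputs.

```python
import math

# Illustrative LIF parameters (not a GP neuron model): ms, dimensionless V
TAU, V_TH, V_RESET, I_DRIVE = 20.0, 1.0, 0.0, 1.2
DT = 0.01
KICK = 0.05   # size of the brief excitatory perturbation (in V units)

def time_to_spike(kick_time=None):
    """Integrate from reset to threshold; optionally kick V at kick_time."""
    v, t = V_RESET, 0.0
    while v < V_TH:
        v += DT * (I_DRIVE - v) / TAU
        t += DT
        if kick_time is not None and abs(t - kick_time) < DT / 2:
            v += KICK
        if t > 1000:
            break
    return t

T0 = time_to_spike()   # unperturbed inter-spike interval

def prc(phase):
    """Phase advance caused by a kick delivered at phase in (0, 1)."""
    return (T0 - time_to_spike(kick_time=phase * T0)) / T0

phases = [0.1 * k for k in range(1, 10)]
curve = [prc(p) for p in phases]   # all advances: a type I PRC
```

For this neuron every excitatory kick advances the next spike, and later kicks advance it more, so the curve is positive and increasing; reproducing the type II (delay-then-advance) dendritic PRC of the abstract would require adding an SK-like adaptation conductance.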
Compelling behavioral evidence suggests that humans can make optimal decisions despite the uncertainty inherent in perceptual or motor tasks. A key question in neuroscience is how populations of spiking neurons can implement such probabilistic computations. In this article, we develop a comprehensive framework for optimal, spike-based sensory integration and working memory in a dynamic environment. We propose that probability distributions are inferred spike-per-spike in recurrently connected networks of integrate-and-fire neurons. As a result, these networks can combine sensory cues optimally, track the state of a time-varying stimulus and memorize accumulated evidence over periods much longer than the time constant of single neurons. Importantly, we propose that population responses and persistent working memory states represent entire probability distributions and not only single stimulus values. These memories are reflected by sustained, asynchronous patterns of activity which make relevant information available to downstream neurons within their short time window of integration. Model neurons act as predictive encoders, firing only those spikes that account for new information that has not yet been signaled. Thus, spike times deterministically signal a prediction error, contrary to rate codes in which spike times are considered to be random samples of an underlying firing rate. As a consequence of this coding scheme, a multitude of spike patterns can reliably encode the same information. This results in weakly correlated, Poisson-like spike trains that are sensitive to initial conditions but robust to even high levels of external neural noise. This spike train variability reproduces that observed in cortical sensory spike trains, but cannot be equated to noise. On the contrary, it is a consequence of optimal spike-based inference. In contrast, we show that rate-based models perform poorly when implemented with stochastically spiking neurons.
Most of our daily actions are subject to uncertainty. Behavioral studies have confirmed that humans handle this uncertainty in a statistically optimal manner. A key question then is what neural mechanisms underlie this optimality, i.e., how neurons can represent and compute with probability distributions. Previous approaches have proposed that probabilities are encoded in the firing rates of neural populations. However, such rate codes appear poorly suited to understand perception in a constantly changing environment. In particular, it is unclear how probabilistic computations could be implemented by biologically plausible spiking neurons. Here, we propose a network of spiking neurons that can optimally combine uncertain information from different sensory modalities and keep this information available for a long time. This implies that neural memories not only represent the most likely value of a stimulus but rather a whole probability distribution over it. Furthermore, our model suggests that each spike conveys new, essential information. Consequently, the observed variability of neural responses cannot simply be understood as noise but rather as a necessary consequence of optimal sensory integration. Our results therefore question strongly held beliefs about the nature of neural “signal” and “noise”.
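The core coding principle here, a spike fired only when it conveys information not yet signaled, can be caricatured with a single predictive encoder tracking one time-varying value. This is a minimal sketch under invented parameters, not the recurrent integrate-and-fire network of the study: the readout decays between spikes, and the neuron fires exactly when the reconstruction error exceeds half a spike's worth of correction.

```python
import math

DT, LAM, W = 0.001, 10.0, 0.1   # time step (s), readout decay rate, spike weight

def encode(signal):
    """Greedy predictive coding: each spike accounts only for new information."""
    xhat, spikes, recon = 0.0, [], []
    for t, x in enumerate(signal):
        xhat += DT * (-LAM * xhat)   # leaky readout decays between spikes
        if x - xhat > W / 2:         # prediction error crosses threshold
            xhat += W                # the spike's contribution to the readout
            spikes.append(t)
        recon.append(xhat)
    return spikes, recon

# Slow sinusoidal "stimulus" in [0, 1], two periods
T = 2000
signal = [0.5 * (1 + math.sin(2 * math.pi * t * DT)) for t in range(T)]
spikes, recon = encode(signal)

# Mean absolute reconstruction error stays small relative to the signal
err = sum(abs(s - r) for s, r in zip(signal, recon)) / T
```

Spike times here are deterministic functions of the prediction error, yet shifting the initial condition would reshuffle them while leaving the reconstruction quality intact, a one-neuron glimpse of the "variability without noise" argument.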
Spike-timing-dependent plasticity (STDP) has been observed in many brain areas such as sensory cortices, where it is hypothesized to structure synaptic connections between neurons. Previous studies have demonstrated how STDP can capture spiking information at short timescales using specific input configurations, such as coincident spiking, spike patterns and oscillatory spike trains. However, the corresponding computation in the case of arbitrary input signals is still unclear. This paper provides an overarching picture of the algorithm inherent to STDP, tying together many previous results for commonly used models of pairwise STDP. For a single neuron with plastic excitatory synapses, we show how STDP performs a spectral analysis on the temporal cross-correlograms between its afferent spike trains. The postsynaptic responses and STDP learning window determine kernel functions that specify how the neuron “sees” the input correlations. We thus denote this unsupervised learning scheme as ‘kernel spectral component analysis’ (kSCA). In particular, the whole input correlation structure must be considered since all plastic synapses compete with each other. We find that kSCA is enhanced when weight-dependent STDP induces gradual synaptic competition. For a spiking neuron with a “linear” response and pairwise STDP alone, we find that kSCA resembles principal component analysis (PCA). However, plain STDP does not isolate correlation sources in general, e.g., when they are mixed among the input spike trains. In other words, it does not perform independent component analysis (ICA). Tuning the neuron to a single correlation source can be achieved when STDP is paired with a homeostatic mechanism that reinforces the competition between synaptic inputs. Our results suggest that neuronal networks equipped with STDP can process signals encoded in transient spiking activity at timescales of tens of milliseconds for typical STDP time constants.
Tuning neurons to extract features from sensory stimuli is an important function of synaptic plasticity models. A widely studied example is the development of orientation preference in the primary visual cortex, which can emerge through exposure to moving bars in the visual field. A crucial point is the decomposition of stimuli into basic information tokens, e.g., selecting individual bars even though they are presented in overlapping pairs (vertical and horizontal). Among classical unsupervised learning models, independent component analysis (ICA) is capable of isolating basic tokens, whereas principal component analysis (PCA) cannot. This paper focuses on spike-timing-dependent plasticity (STDP), whose functional implications for neural information processing have been intensively studied both theoretically and experimentally in the last decade. Following recent studies demonstrating that STDP can perform ICA for specific cases, we show how STDP relates to PCA or ICA, and in particular explain the conditions under which it switches between them. Here information at the neuronal level is assumed to be encoded in temporal cross-correlograms of spike trains. We find that a linear spiking neuron equipped with pairwise STDP requires additional mechanisms, such as a homeostatic regulation of its output firing, in order to separate mixed correlation sources and thus perform ICA.
Cortical structures of the adult mammalian brain are characterized by a spectacular diversity of inhibitory interneurons, which use GABA as neurotransmitter. GABAergic neurotransmission is fundamental for integrating and filtering incoming information and dictating postsynaptic neuronal spike timing, thereby providing a tight temporal code used by each neuron, or ensemble of neurons, to perform sophisticated computational operations. However, the heterogeneity of cortical GABAergic cells is associated with equally diverse properties governing intrinsic excitability as well as strength, dynamic range, spatial extent, anatomical localization, and molecular components of the inhibitory synaptic connections that they form with pyramidal neurons. Recent studies have shown that, like their excitatory (glutamatergic) counterparts, inhibitory synapses can also undergo activity-dependent changes in their strength. Here, we review aspects of the plasticity and modulation of adult cortical and hippocampal GABAergic synaptic transmission, with the aim of providing a fresh perspective on the roles played by specific cellular elements of cortical microcircuits during both physiological and pathological operations.
Coordinated reset (CR) stimulation is a desynchronizing stimulation technique based on timely coordinated phase resets of sub-populations of a synchronized neuronal ensemble. It was initially developed computationally for electrical deep brain stimulation (DBS) to enable effective desynchronization and the unlearning of pathological synchrony and connectivity (anti-kindling). Here we computationally show for ensembles of spiking and bursting model neurons interacting via excitatory and inhibitory adaptive synapses that a phase reset of neuronal populations as well as a desynchronization and an anti-kindling can robustly be achieved by direct electrical stimulation or indirect (synaptically-mediated) excitatory and inhibitory stimulation. Our findings are relevant for DBS as well as for sensory stimulation in neurological disorders characterized by pathological neuronal synchrony. Based on the obtained results, we expect that the local effects in the vicinity of a depth electrode (realized by direct stimulation of the neurons' somata or stimulation of axon terminals) and the non-local CR effects (realized by stimulation of excitatory or inhibitory efferent fibers) of deep brain CR neuromodulation may be similar or even identical. Furthermore, our results indicate that an effective desynchronization and anti-kindling can even be achieved by non-invasive, sensory CR neuromodulation. We discuss the concept of sensory CR neuromodulation in the context of neurological disorders.
coordinated reset neuromodulation; neuronal synchronization; electrical stimulation; sensory stimulation; spike timing-dependent plasticity; anti-kindling
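The CR principle, splitting one synchronized cluster into several equidistant sub-clusters by coordinated phase resets, can be illustrated on a bare Kuramoto model. This sketch uses invented parameters and omits synaptic plasticity entirely, so it captures only the acute desynchronization, not the anti-kindling: the ensemble is first allowed to synchronize, then four sub-populations are reset to equally spaced phases, collapsing the order parameter.

```python
import math
import random

random.seed(2)

N, K, DT = 100, 1.0, 0.01      # oscillators, coupling strength, time step
omega = [2 * math.pi * (1 + 0.01 * random.gauss(0, 1)) for _ in range(N)]
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def order_param(th):
    """Kuramoto order parameter r in [0, 1]: 1 = full synchrony."""
    c = sum(math.cos(t) for t in th) / len(th)
    s = sum(math.sin(t) for t in th) / len(th)
    return math.hypot(c, s)

def step(th):
    """Euler step of mean-field Kuramoto dynamics."""
    c = sum(math.cos(t) for t in th) / len(th)
    s = sum(math.sin(t) for t in th) / len(th)
    r, psi = math.hypot(c, s), math.atan2(s, c)
    return [(t + DT * (w + K * r * math.sin(psi - t))) % (2 * math.pi)
            for t, w in zip(th, omega)]

for _ in range(3000):          # let the ensemble synchronize
    theta = step(theta)
r_sync = order_param(theta)    # high: pathological-synchrony analogue

# CR-style reset: four sub-populations placed at equidistant phases
M = 4
for m in range(M):
    for i in range(m * N // M, (m + 1) * N // M):
        theta[i] = 2 * math.pi * m / M
r_cr = order_param(theta)      # near zero: desynchronized cluster states
```

Without the adaptive synapses of the study the ensemble here would simply resynchronize; the anti-kindling effect requires that the transient desynchronization also unlearns the connectivity sustaining the synchrony.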
Spike synchronization is thought to have a constructive role for feature integration, attention, associative learning, and the formation of bidirectionally connected Hebbian cell assemblies. By contrast, theoretical studies on spike-timing-dependent plasticity (STDP) report an inherently decoupling influence of spike synchronization on synaptic connections of coactivated neurons. For example, bidirectional synaptic connections as found in cortical areas could be reproduced only by assuming realistic models of STDP and rate coding. We resolve this conflict by theoretical analysis and simulation of various simple and realistic STDP models that provide a more complete characterization of conditions when STDP leads to either coupling or decoupling of neurons firing in synchrony. In particular, we show that STDP consistently couples synchronized neurons if key model parameters are matched to physiological data: First, synaptic potentiation must be significantly stronger than synaptic depression for small (positive or negative) time lags between presynaptic and postsynaptic spikes. Second, spike synchronization must be sufficiently imprecise, for example, within a time window of 5–10 ms instead of 1 ms. Third, axonal propagation delays should not be much larger than dendritic delays. Under these assumptions synchronized neurons will be strongly coupled leading to a dominance of bidirectional synaptic connections even for simple STDP models and low mean firing rates at the level of spontaneous activity.
Hebbian cell assemblies; learning; memory; spike synchronization; STDP; synaptic connectivity; synaptic plasticity
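The three conditions above can be checked with a toy expected-weight-change calculation. The sketch below assumes an exponential STDP window whose potentiation dominates at small lags (condition 1), Gaussian spike-time jitter around synchrony (condition 2), and a fixed excess of axonal over dendritic delay that shifts the effective pre-post lag (condition 3); all numbers are illustrative, not fitted to the study. Under these assumptions, millisecond-precise synchrony yields net depression while 5-10 ms jitter yields net potentiation, in line with the abstract's claim that imprecise synchrony couples neurons.

```python
import math

# Illustrative pairwise STDP window (ms): potentiation dominant at small lags
A_P, A_M, TAU_P, TAU_M = 0.012, 0.006, 20.0, 20.0
DELAY = 2.0   # excess of axonal over dendritic delay (ms), shifts effective lag

def window(dt):
    """Weight change for effective lag dt = t_post - t_pre at the synapse."""
    if dt > 0:
        return A_P * math.exp(-dt / TAU_P)
    return -A_M * math.exp(dt / TAU_M)

def expected_dw(sigma):
    """E[dw] for Gaussian-jittered synchrony; effective dt = lag - DELAY."""
    total, norm, step, lag = 0.0, 0.0, 0.05, -6 * sigma
    while lag <= 6 * sigma:
        p = math.exp(-lag * lag / (2 * sigma * sigma))  # unnormalized Gaussian
        total += p * window(lag - DELAY)
        norm += p
        lag += step
    return total / norm

dw_precise = expected_dw(1.0)  # ~1 ms synchrony: net depression (decoupling)
dw_loose = expected_dw(7.0)    # ~5-10 ms synchrony: net potentiation (coupling)
```

The delay shift pushes precisely synchronized spike pairs into the depression lobe of the window; only when the jitter spreads lags across the stronger potentiation lobe does the expected change turn positive.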
Neurons in cortical microcircuits are nonrandomly wired. As a natural consequence, spikes emitted by microcircuits are also nonrandomly patterned in time and space. One of the prominent spike organizations is a repetition of fixed patterns of spike series across multiple neurons. However, several questions remain unresolved, including how precisely spike sequences repeat, how the sequences are spatially organized, how many neurons participate in sequences, and how different sequences are functionally linked. To address these questions, we monitored spontaneous spikes of hippocampal CA3 neurons ex vivo using a high-speed functional multineuron calcium imaging (fMCI) technique that allowed us to monitor spikes with millisecond resolution and to record the location of spiking and non-spiking neurons. Multineuronal spike sequences (MSSs) were overrepresented in spontaneous activity compared to the statistical chance level. Approximately 75% of neurons participated in at least one sequence during our observation period. The participants were sparsely dispersed and did not show specific spatial organization. The number of sequences relative to the chance level decreased when larger time frames were used to detect sequences. Thus, sequences were precise at the millisecond level. Sequences often shared common spikes with other sequences; parts of sequences were subsequently relayed by following sequences, generating complex chains of multiple sequences.
spontaneous activity; calcium imaging; action potentials; spike sequences; hippocampus; ripple
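The logic of comparing detected sequences against a statistical chance level can be sketched as follows. The toy below (a synthetic Poisson raster with one hand-embedded three-neuron motif; time-jittered surrogates as the chance model; all parameters invented) is far simpler than the fMCI analysis, but shows why millisecond-precise motifs stand out: jittering every spike destroys the sequence count while leaving firing rates intact.

```python
import random

random.seed(3)

N_NEURONS, T_MAX = 20, 10000.0   # neurons, recording length (ms)
RATE = 0.002                     # background Poisson rate per neuron (spikes/ms)

# Background activity: independent random spike times per neuron
spikes = [(random.uniform(0, T_MAX), n)
          for n in range(N_NEURONS)
          for _ in range(int(RATE * T_MAX))]

# Embed a repeating sequence: neuron 0 -> 1 -> 2 with 5 ms lags, 20 repetitions
for k in range(20):
    t0 = 400.0 * k + 50.0
    spikes += [(t0, 0), (t0 + 5.0, 1), (t0 + 10.0, 2)]

def count_triplets(evts, tol_ms=1.0):
    """Count 0 -> 1 -> 2 motifs with 5 ms lags, +/- tol_ms tolerance."""
    by_n = {n: sorted(t for t, m in evts if m == n) for n in (0, 1, 2)}
    cnt = 0
    for t in by_n[0]:
        if any(abs(u - t - 5.0) <= tol_ms for u in by_n[1]) and \
           any(abs(u - t - 10.0) <= tol_ms for u in by_n[2]):
            cnt += 1
    return cnt

observed = count_triplets(spikes)

# Chance level: jitter every spike independently (destroys ms-precise order)
surrogate = []
for _ in range(20):
    jit = [(t + random.uniform(-20, 20), n) for t, n in spikes]
    surrogate.append(count_triplets(jit))
chance = sum(surrogate) / len(surrogate)
```

Widening `tol_ms` in this toy would raise the chance count and shrink the excess of observed over chance, the same dependence on detection time frame the abstract uses to argue that real sequences are precise at the millisecond level.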