Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on their firing activity. A network-level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depends on the input frequency. In addition, we demonstrate how this can make the amplitude of a network's oscillatory response selective for this input frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem.
Our brain's ability to perform cognitive processes, such as object identification, problem solving, and decision making, comes from the specific connections between neurons. The neurons carry information as spikes that are transmitted to other neurons via connections with different strengths and propagation delays. Experimentally observed learning rules can modify the strengths of connections between neurons based on the timing of their spikes. The learning that occurs in neuronal networks due to these rules is thought to be vital to creating the structures necessary for different cognitive processes as well as for memory. The spiking rate of populations of neurons has been observed to oscillate at particular frequencies in various brain regions, and there is evidence that these oscillations play a role in cognition. Here, we use analytical and numerical methods to investigate the changes to the network structure caused by a specific learning rule during oscillatory neural activity. We find the conditions under which connections with propagation delays that resonate with the oscillations are strengthened relative to the other connections. We demonstrate that networks learn to respond with stronger oscillations to inputs at the frequency they were presented with during learning. We discuss the possible application of these results to specific areas of the brain.
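A minimal sketch of the delay-dependent, pair-based additive STDP update underlying this kind of analysis (the window amplitudes and time constants are illustrative assumptions, not values from the study):

```python
import numpy as np

# Illustrative window parameters (not the study's fitted values).
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # window time constants (ms)

def stdp_delta_w(pre_times, post_times, axonal_delay=0.0):
    """Additive pair-based STDP: sum the window over all spike pairs.

    dt = t_post - (t_pre + axonal_delay), since the presynaptic spike
    reaches the synapse only after conduction along the axon.
    """
    dw = 0.0
    for t_pre in pre_times:
        arrival = t_pre + axonal_delay
        for t_post in post_times:
            dt = t_post - arrival
            if dt > 0:      # pre arrives before post: potentiation
                dw += A_PLUS * np.exp(-dt / TAU_PLUS)
            elif dt < 0:    # post fires before pre arrives: depression
                dw -= A_MINUS * np.exp(dt / TAU_MINUS)
    return dw
```

A delay that makes presynaptic spikes arrive just before the postsynaptic ones yields the largest potentiation, which is the basis of the delay selection described above.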
Spike-Timing Dependent Plasticity (STDP) is characterized by a wide range of temporal kernels. However, much of the theoretical work has focused on a specific class of kernels: the “temporally asymmetric Hebbian” learning rules. Previous studies linked excitatory STDP to positive feedback that can account for the emergence of response selectivity. Inhibitory plasticity was associated with negative feedback that can balance the excitatory and inhibitory inputs. Here we study the possible computational role of the temporal structure of the STDP. We represent the STDP as a superposition of two processes: potentiation and depression. This allows us to model a wide range of experimentally observed STDP kernels, from Hebbian to anti-Hebbian, by varying a single parameter. We investigate STDP dynamics of a single excitatory or inhibitory synapse in a purely feed-forward architecture. We derive mean-field Fokker-Planck dynamics for the synaptic weight and analyze the effect of the STDP structure on the fixed points of the mean-field dynamics. We find a phase transition along the Hebbian-to-anti-Hebbian parameter axis from a phase that is characterized by a unimodal distribution of the synaptic weight, in which the STDP dynamics is governed by negative feedback, to a phase with positive feedback characterized by a bimodal distribution. The critical point of this transition depends on general properties of the STDP dynamics and not on the fine details. Namely, the dynamics is affected by the pre-post correlations only via a single number that quantifies their overlap with the STDP kernel. We find that by manipulating the STDP temporal kernel, negative feedback can be induced in excitatory synapses and positive feedback in inhibitory ones. Moreover, there is an exact symmetry between inhibitory and excitatory plasticity: for every STDP rule at an inhibitory synapse there exists an STDP rule at an excitatory synapse such that their dynamics are identical.
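One way to realize a single-parameter family spanning Hebbian to anti-Hebbian kernels is to mix a temporally asymmetric kernel with its mirror image; this is an illustrative parameterization, not necessarily the one used in the study:

```python
import numpy as np

def hebbian_kernel(dt, a_plus=1.0, a_minus=1.0, tau=20.0):
    """Temporally asymmetric Hebbian kernel: pre-before-post (dt > 0)
    potentiates, post-before-pre depresses. Parameters are illustrative."""
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

def stdp_kernel(dt, mu):
    """Interpolate from Hebbian (mu = 0) to anti-Hebbian (mu = 1) by
    mixing the kernel with its temporal mirror image; a single parameter
    sweeps the family, as in the superposition picture above."""
    return (1.0 - mu) * hebbian_kernel(dt) + mu * hebbian_kernel(-dt)
```

At mu = 0.5 the potentiation and depression branches cancel at every lag, marking the symmetric midpoint of the family.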
Over successive stages, the ventral visual system of the primate brain develops neurons that respond selectively to particular objects or faces with translation, size and view invariance. The powerful neural representations found in inferotemporal cortex form a remarkably rapid and robust basis for object recognition, which belies the difficulties faced by the system when learning in natural visual environments. A central issue in understanding the process of biological object recognition is how these neurons learn to form separate representations of objects from complex visual scenes composed of multiple objects. We show how a one-layer competitive network composed of ‘spiking’ neurons is able to learn separate transformation-invariant representations (exemplified by one-dimensional translations) of visual objects that are always seen together moving in lock-step, but separated in space. This is achieved by combining ‘Mexican hat’ functional lateral connectivity with cell firing-rate adaptation to temporally segment input representations of competing stimuli through anti-phase oscillations (perceptual cycles). These spiking dynamics are quickly and reliably generated, enabling selective modification of the feed-forward connections to neurons in the next layer through Spike-Time-Dependent Plasticity (STDP), resulting in separate translation-invariant representations of each stimulus. Variations in key properties of the model are investigated with respect to the network’s ability to develop appropriate input representations and subsequently output representations through STDP. Contrary to earlier rate-coded models of this learning process, this work shows how spiking neural networks may learn about more than one stimulus together without suffering from the ‘superposition catastrophe’. We take these results to suggest that spiking dynamics are key to understanding biological visual object recognition.
Spike-timing-dependent plasticity (STDP) has been observed in many brain areas such as sensory cortices, where it is hypothesized to structure synaptic connections between neurons. Previous studies have demonstrated how STDP can capture spiking information at short timescales using specific input configurations, such as coincident spiking, spike patterns and oscillatory spike trains. However, the corresponding computation in the case of arbitrary input signals is still unclear. This paper provides an overarching picture of the algorithm inherent to STDP, tying together many previous results for commonly used models of pairwise STDP. For a single neuron with plastic excitatory synapses, we show how STDP performs a spectral analysis on the temporal cross-correlograms between its afferent spike trains. The postsynaptic responses and STDP learning window determine kernel functions that specify how the neuron “sees” the input correlations. We thus denote this unsupervised learning scheme as ‘kernel spectral component analysis’ (kSCA). In particular, the whole input correlation structure must be considered since all plastic synapses compete with each other. We find that kSCA is enhanced when weight-dependent STDP induces gradual synaptic competition. For a spiking neuron with a “linear” response and pairwise STDP alone, we find that kSCA resembles principal component analysis (PCA). However, plain STDP does not isolate correlation sources in general, e.g., when they are mixed among the input spike trains. In other words, it does not perform independent component analysis (ICA). Tuning the neuron to a single correlation source can be achieved when STDP is paired with a homeostatic mechanism that reinforces the competition between synaptic inputs. Our results suggest that neuronal networks equipped with STDP can process signals encoded in the transient spiking activity at timescales of tens of milliseconds, which are typical of STDP.
Tuning feature extraction of sensory stimuli is an important function for synaptic plasticity models. A widely studied example is the development of orientation preference in the primary visual cortex, which can emerge using moving bars in the visual field. A crucial point is the decomposition of stimuli into basic information tokens, e.g., selecting individual bars even though they are presented in overlapping pairs (vertical and horizontal). Among classical unsupervised learning models, independent component analysis (ICA) is capable of isolating basic tokens, whereas principal component analysis (PCA) cannot. This paper focuses on spike-timing-dependent plasticity (STDP), whose functional implications for neural information processing have been intensively studied both theoretically and experimentally in the last decade. Following recent studies demonstrating that STDP can perform ICA for specific cases, we show how STDP relates to PCA or ICA, and in particular we identify the conditions under which it switches between them. Here information at the neuronal level is assumed to be encoded in temporal cross-correlograms of spike trains. We find that a linear spiking neuron equipped with pairwise STDP requires additional mechanisms, such as a homeostatic regulation of its output firing, in order to separate mixed correlation sources and thus perform ICA.
Spike timing-dependent plasticity (STDP) has been shown to enable single neurons to detect repeatedly presented spatiotemporal spike patterns. This holds even when such patterns are embedded in equally dense random spiking activity, that is, in the absence of external reference times such as a stimulus onset. Here we demonstrate, both analytically and numerically, that STDP can also learn repeating rate-modulated patterns, which have received more experimental evidence, for example, through post-stimulus time histograms (PSTHs). Each input spike train is generated from a rate function using a stochastic sampling mechanism, chosen to be an inhomogeneous Poisson process here. Learning is feasible provided significant covarying rate modulations occur within the typical timescale of STDP (∼10–20 ms) for sufficiently many inputs (∼100 among 1000 in our simulations), a condition that is met by many experimental PSTHs. Repeated pattern presentations induce spike-time correlations that are captured by STDP. Despite imprecise input spike times and even variable spike counts, a single trained neuron robustly detects the pattern just a few milliseconds after its presentation. Therefore, temporal imprecision and Poisson-like firing variability are not an obstacle to fast temporal coding. STDP provides an appealing mechanism to learn such rate patterns, which, beyond sensory processing, may also be involved in many cognitive tasks.
In vivo neural responses to stimuli are known to exhibit considerable variability across trials. If the same number of spikes is emitted from trial to trial, the neuron is said to be reliable. If the timing of such spikes is roughly preserved across trials, the neuron is said to be precise. Here we demonstrate both analytically and numerically that the well-established Hebbian learning rule of spike-timing-dependent plasticity (STDP) can learn response patterns despite relatively low reliability (Poisson-like variability) and low temporal precision (10–20 ms). These features are in line with many experimental observations, in which a poststimulus time histogram (PSTH) is evaluated over multiple trials. In our model, however, information is extracted from the relative spike times between afferents without the need for an absolute reference time, such as a stimulus onset. Notably, recent experiments show that relative timing is often more informative than absolute timing. Furthermore, the scope of application of our study is not restricted to sensory systems. Taken together, our results suggest a fine temporal resolution for the neural code, and that STDP is an appropriate candidate for encoding and decoding such activity.
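Rate-modulated input spike trains of the kind described here can be generated from an inhomogeneous Poisson process by thinning; a minimal sketch (the rate function and parameters are illustrative):

```python
import numpy as np

def inhomogeneous_poisson(rate_fn, t_max, rate_max, rng):
    """Sample spike times on [0, t_max) from an inhomogeneous Poisson
    process via thinning: draw candidate spikes from a homogeneous
    process at rate_max and keep each one with probability
    rate_fn(t) / rate_max."""
    spikes, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)   # next candidate spike
        if t >= t_max:
            return np.array(spikes)
        if rng.random() < rate_fn(t) / rate_max:
            spikes.append(t)

# Example: a 50 Hz baseline with a brief rate bump around t = 0.5 s,
# mimicking a PSTH-like modulation (times in s, rates in Hz; all
# values are illustrative).
rng = np.random.default_rng(0)
rate = lambda t: 50.0 + 100.0 * np.exp(-((t - 0.5) ** 2) / (2 * 0.01 ** 2))
train = inhomogeneous_poisson(rate, 1.0, 160.0, rng)
```

Note that `rate_max` must upper-bound the rate function everywhere for the thinning to be exact.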
Top-down synapses are ubiquitous throughout neocortex and play a central role in cognition, yet little is known about their development and specificity. During sensory experience, lower neocortical areas are activated before higher ones, causing top-down synapses to experience a preponderance of post-synaptic activity preceding pre-synaptic activity. This timing pattern is the opposite of that experienced by bottom-up synapses, which suggests that different versions of spike-timing dependent synaptic plasticity (STDP) rules may be required at top-down synapses. We consider a two-layer neural network model and investigate which STDP rules can lead to a distribution of top-down synaptic weights that is stable, diverse and avoids strong loops. We introduce a temporally reversed rule (rSTDP) where top-down synapses are potentiated if post-synaptic activity precedes pre-synaptic activity. Combining analytical work and integrate-and-fire simulations, we show that only depression-biased rSTDP (and not classical STDP) produces stable and diverse top-down weights. The conclusions did not change upon addition of homeostatic mechanisms, multiplicative STDP rules or weak external input to the top neurons. Our prediction for rSTDP at top-down synapses, which are distally located, is supported by recent neurophysiological evidence showing the existence of temporally reversed STDP in synapses that are distal to the post-synaptic cell body.
The complex circuitry in the cerebral cortex is characterized by bottom-up connections, which carry feedforward information from the sensory periphery to higher areas, and top-down connections, where the information flow is reversed. Changes over time in the strength of synaptic connections between neurons underlie development, learning and memory. A fundamental mechanism to change synaptic strength is spike timing dependent plasticity, whereby synapses are strengthened whenever pre-synaptic spikes shortly precede post-synaptic spikes and are weakened otherwise; the relative timing of spikes therefore dictates the direction of plasticity. Spike timing dependent plasticity has been observed in multiple species and different brain areas. Here, we argue that top-down connections obey a learning rule with a reversed temporal dependence, which we call reverse spike timing dependent plasticity. We use mathematical analysis and computational simulations to show that this reverse time learning rule, and not previous learning rules, leads to a biologically plausible connectivity pattern with stable synaptic strengths. This reverse time learning rule is supported by recent neuroanatomical and neurophysiological experiments and can explain empirical observations about the development and function of top-down synapses in the brain.
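A minimal sketch of the proposed reverse rule as the temporal mirror of a classical window, with a depression bias (amplitudes and time constant are illustrative assumptions):

```python
import math

def stdp(dt, a_plus=1.0, a_minus=1.2, tau=20.0):
    """Classical window: pre-before-post (dt > 0) potentiates."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def rstdp(dt, a_plus=1.0, a_minus=1.2, tau=20.0):
    """Temporally reversed window: post-before-pre (dt < 0) potentiates.
    With a_minus > a_plus the rule is depression-biased, the variant
    singled out here for top-down synapses."""
    return stdp(-dt, a_plus=a_plus, a_minus=a_minus, tau=tau)
```

Since top-down synapses typically see postsynaptic activity before presynaptic activity during sensory experience, the reversed window turns that prevailing timing pattern into potentiation.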
We consider and analyze the influence of spike-timing dependent plasticity (STDP) on homeostatic states in synaptically coupled neuronal oscillators. In contrast to conventional models of STDP, in which spike timing affects the weights of synaptic connections, we consider a model of STDP in which the time lags between pre- and/or post-synaptic spikes change the internal state of the pre- and/or post-synaptic neurons, respectively. The analysis reveals that STDP processes of this type, modeled by a single ordinary differential equation, may ensure efficient, yet coarse, phase-locking of spikes in the system to a given reference phase. The precision of the phase locking, i.e., the amplitude of the relative phase deviations from the reference, depends on the values of the natural frequencies of the oscillators and, additionally, on the parameters of the STDP law. These deviations can be optimized by appropriate tuning of the gains (i.e., the sensitivity to spike-timing mismatches) of the STDP mechanism. However, as we demonstrate, such deviations cannot be made arbitrarily small, either by mere tuning of the STDP gains or by adjusting synaptic weights. Thus, if accurate phase-locking in the system is required, an additional tuning mechanism is generally needed. We found that adding a very simple adaptation dynamics, in the form of slow fluctuations of the baseline in the STDP mechanism, enables accurate phase tuning in the system with arbitrarily high precision. Adaptation operating at a slow time scale may be associated with extracellular components such as the extracellular matrix and glia. The findings may thus suggest a possible role of the latter in regulating synaptic transmission in neuronal circuits.
The basolateral complex of the amygdala (BLA) is a critical component of the neural circuit regulating fear learning. During fear learning and recall, the amygdala and other brain regions, including the hippocampus and prefrontal cortex, exhibit phase-locked oscillations in the high delta/low theta frequency band (∼2–6 Hz) that have been shown to contribute to the learning process. Network oscillations are commonly generated by inhibitory synaptic input that coordinates action potentials in groups of neurons. In the rat BLA, principal neurons spontaneously receive synchronized, inhibitory input in the form of compound, rhythmic, inhibitory postsynaptic potentials (IPSPs), likely originating from burst-firing parvalbumin interneurons. Here we investigated the role of compound IPSPs in the rat and rhesus macaque BLA in regulating action potential synchrony and spike-timing precision. Furthermore, because principal neurons exhibit intrinsic oscillatory properties and resonance between 4 and 5 Hz, in the same frequency band observed during fear, we investigated whether compound IPSPs and intrinsic oscillations interact to promote rhythmic activity in the BLA at this frequency. Using whole-cell patch clamp in brain slices, we demonstrate that compound IPSPs, which occur spontaneously and are synchronized across principal neurons in both the rat and primate BLA, significantly improve spike-timing precision in BLA principal neurons for a window of ∼300 ms following each IPSP. We also show that compound IPSPs coordinate the firing of pairs of BLA principal neurons, and significantly improve spike synchrony for a window of ∼130 ms. Compound IPSPs enhance a 5 Hz calcium-dependent membrane potential oscillation (MPO) in these neurons, likely contributing to the improvement in spike-timing precision and synchronization of spiking. Activation of the cAMP-PKA signaling cascade enhanced the MPO, and inhibition of this cascade blocked the MPO. We discuss these results in the context of spike-timing dependent plasticity and modulation by neurotransmitters important for fear learning, such as dopamine.
Our nervous system can efficiently recognize objects in spite of changes in contextual variables such as perspective or lighting conditions. Several lines of research have proposed that this ability for invariant recognition is learned by exploiting the fact that object identities typically vary more slowly in time than contextual variables or noise. Here, we study the question of how this “temporal stability” or “slowness” approach can be implemented within the limits of biologically realistic spike-based learning rules. We first show that slow feature analysis, an algorithm that is based on slowness, can be implemented in linear continuous model neurons by means of a modified Hebbian learning rule. This approach provides a link to the trace rule, which is another implementation of slowness learning. Then, we show analytically that for linear Poisson neurons, slowness learning can be implemented by spike-timing–dependent plasticity (STDP) with a specific learning window. By studying the learning dynamics of STDP, we show that for functional interpretations of STDP, it is not the learning window alone that is relevant but rather the convolution of the learning window with the postsynaptic potential. We then derive STDP learning windows that implement slow feature analysis and the “trace rule.” The resulting learning windows are compatible with physiological data both in shape and timescale. Moreover, our analysis shows that the learning window can be split into two functionally different components that are sensitive to reversible and irreversible aspects of the input statistics, respectively. The theory indicates that irreversible input statistics do not favor stable weight distributions but may generate oscillatory weight dynamics. Our analysis offers a novel interpretation for the functional role of STDP in physiological neurons.
Neurons interact by exchanging information via small connection sites, so-called synapses. Interestingly, the efficiency of synapses in transmitting neuronal signals is not static, but changes dynamically depending on the signals that the associated neurons emit. As neurons receive thousands of synaptic input signals, they can thus “choose” the input signals they are interested in by adjusting their synapses accordingly. This adaptation mechanism, known as synaptic plasticity, has long been hypothesized to form the neuronal correlate of learning. It raises a difficult question: what aspects of the input signals are the neurons interested in, given that the adaptation of the synapses follows a certain mechanistic rule? We address this question for spike-timing–dependent plasticity, a type of synaptic plasticity that has attracted considerable interest over the last decade. We show that under certain assumptions regarding neuronal information transmission, spike-timing–dependent plasticity focuses on aspects of the input signals that vary slowly in time. This relates spike-timing–dependent plasticity to a class of abstract learning rules that were previously proposed as a means of learning to recognize objects in spite of contextual changes such as size or position. Based on this link, we propose a novel functional interpretation of spike-timing–dependent plasticity.
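The claim that the relevant quantity is the learning window convolved with the postsynaptic potential can be checked numerically; in this sketch, the window shape, EPSP kernel, and parameters are illustrative assumptions:

```python
import numpy as np

def W(t, tau_p=17.0, tau_m=34.0):
    """Illustrative asymmetric STDP learning window (times in ms)."""
    t = np.asarray(t, dtype=float)
    return np.where(t > 0, np.exp(-t / tau_p), -np.exp(t / tau_m))

def w_eff(t, tau_eps=5.0, u_max=60.0, du=0.01):
    """Effective window: W convolved with a causal, unit-area exponential
    EPSP kernel, evaluated at lag t (Riemann-sum quadrature)."""
    u = np.arange(0.0, u_max, du)
    g = np.exp(-u / tau_eps)
    g /= g.sum() * du                 # normalize the EPSP to unit area
    return float(np.sum(g * W(t - u)) * du)
```

At zero lag the effective window samples only the depression side of W through the causal EPSP, so w_eff(0) is negative even though W switches to potentiation immediately after zero; this is why the convolution, not the raw window, determines the functional behavior.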
Spike-timing dependent plasticity (STDP), a widespread synaptic modification mechanism, is sensitive to correlations between presynaptic spike trains and it generates competition among synapses. However, STDP has an inherent instability because strong synapses are more likely to be strengthened than weak ones, causing them to grow in strength until some biophysical limit is reached. Through simulations and analytic calculations, we show that a small temporal shift in the STDP window that causes synchronous, or nearly synchronous, pre- and postsynaptic action potentials to induce long-term depression can stabilize synaptic strengths. Shifted STDP also stabilizes the postsynaptic firing rate and can implement both Hebbian and anti-Hebbian forms of competitive synaptic plasticity. Interestingly, the overall level of inhibition determines whether plasticity is Hebbian or anti-Hebbian. Even a random symmetric jitter of a few milliseconds in the STDP window can stabilize synaptic strengths while retaining these features. The same results hold for a shifted version of the more recent “triplet” model of STDP. Our results indicate that the detailed shape of the STDP window function near the transition from depression to potentiation is of the utmost importance in determining the consequences of STDP, suggesting that this region warrants further experimental study.
Synaptic plasticity is believed to be a fundamental mechanism of learning and memory. In spike-timing dependent synaptic plasticity (STDP), the temporal order of pre- and postsynaptic spiking across a synapse determines whether it is strengthened or weakened. STDP can induce competition between the different inputs synapsing onto a neuron, which is crucial for the formation of functional neuronal circuits. However, strong synaptic competition is often incompatible with inherent synaptic stability. Synaptic modification by STDP is controlled by a so-called temporal window function that determines how synaptic modification depends on spike timing. We show that a small shift, or random jitter, in the conventional temporal window function used for STDP, one that is compatible with the underlying molecular kinetics of STDP, can both stabilize synapses and maintain competition. The outcome of the competition is determined by the level of inhibitory input to the postsynaptic neuron. We conclude that the detailed shape of the temporal window function is critical in determining the functional consequences of STDP and thus deserves further experimental study.
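A minimal sketch of such a shifted window, in which coincident and nearly coincident spike pairs fall on the depression side (the shift, amplitudes, and time constant are illustrative):

```python
import math

def shifted_stdp(dt, shift=2.0, a_plus=1.0, a_minus=1.0, tau=20.0):
    """STDP window shifted rightward by `shift` ms, so that coincident
    and nearly coincident pre/post spikes (dt near 0) induce depression
    rather than potentiation. Parameters are illustrative."""
    s = dt - shift
    if s > 0:
        return a_plus * math.exp(-s / tau)
    return -a_minus * math.exp(s / tau)
```

The sign change near dt = 0 is the feature argued above to stabilize synaptic strengths while preserving competition.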
Local field potential (LFP) oscillations are often accompanied by synchronization of activity within a widespread cerebral area. Thus, the LFP and neuronal coherence appear to be the result of a common mechanism that underlies neuronal assembly formation. We used the olfactory bulb as a model to investigate: (1) the extent to which unitary dynamics and LFP oscillations can be correlated and (2) whether a model of the hypothesized underlying mechanisms can accurately explain the experimental data. For this purpose, we analyzed simultaneous recordings of mitral cell (MC) activity and LFPs in anesthetized and freely breathing rats in response to odorant stimulation. Spike trains were found to be phase-locked to the gamma oscillation at specific firing rates and to form odor-specific temporal patterns. The use of a conductance-based MC model driven by an approximately balanced excitatory-inhibitory input conductance and a relatively small inhibitory conductance that oscillated at the gamma frequency allowed us to provide one explanation of the experimental data via a mode-locking mechanism. This work sheds light on the way network and intrinsic MC properties participate in the locking of MCs to the gamma oscillation in a realistic physiological context and may result in a particular time-locked assembly. Finally, we discuss how a self-synchronization process with such entrainment properties can explain, under experimental conditions: (1) why the gamma bursts emerge transiently with a maximal amplitude position relative to the stimulus time course; (2) why the oscillations are prominent at a specific gamma frequency; and (3) why the oscillation amplitude depends on specific stimulus properties. We also discuss information processing and functional consequences derived from this mechanism.
Olfactory function relies on a chain of neural relays that extends from the periphery to the central nervous system and implies neural activity with various timescales. A central question in neuroscience is how information is encoded by the neural activity. In the mammalian olfactory bulb, local neural activity oscillations in the 40–80 Hz range (gamma) may influence the timing of individual neuron activities such that olfactory information may be encoded in this way. In this study, we first characterize in vivo the detailed activity of individual neurons relative to the oscillation and find that, depending on their state, neurons can exhibit periodic activity patterns. We also find, at least qualitatively, a relation between this activity and a particular odor. This is reminiscent of a general physical phenomenon, entrainment by an oscillation, and to verify this hypothesis we build, in a second step, a biologically realistic model mimicking these in vivo conditions. Our model confirms quantitatively this hypothesis and reveals that entrainment is maximal in the gamma range. Taken together, our results suggest that the neuronal activity may be specifically formatted in time during the gamma oscillation in such a way that it could, at this stage, encode the odor.
The principles by which networks of neurons compute, and how spike-timing dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore, our results suggest networks of Bayesian computation modules as a new model for distributed information processing in the cortex.
How do neurons learn to extract information from their inputs and perform meaningful computations? Neurons receive inputs as continuous streams of action potentials or “spikes” that arrive at thousands of synapses. The strength of these synapses, the synaptic weight, undergoes constant modification. It has been demonstrated in numerous experiments that this modification depends on the temporal order of spikes in the pre- and postsynaptic neuron, a rule known as STDP, but it has remained unclear how this contributes to higher-level functions in neural network architectures. In this paper we show that, in a connectivity motif commonly found in the cortex, the winner-take-all (WTA) network, STDP induces autonomous, self-organized learning of probabilistic models of the input. The resulting function of the neural circuit is Bayesian computation on the input spike trains. Such unsupervised learning has previously been studied extensively on an abstract, algorithmic level. We show that STDP approximates one of the most powerful learning methods in machine learning, Expectation-Maximization (EM). In a series of computer simulations we demonstrate that this enables STDP in WTA circuits to solve complex learning tasks, reaching a performance level that surpasses previous uses of spiking neural networks.
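The flavor of the result can be conveyed with a toy sketch (deliberately simplified, not the paper's spiking model): a soft WTA over binary inputs in which the sampled winner plays the role of the E-step and the winner's weight update the role of the M-step of online EM for a mixture model. All parameters and patterns are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# K "neurons" compete via lateral inhibition over D-dimensional binary
# inputs; weights act like Bernoulli means of an implicit mixture model.
K, D, eta = 2, 8, 0.1
W = rng.uniform(0.3, 0.7, size=(K, D))

def wta_step(x):
    # E-step analogue: posterior over hidden causes via soft WTA
    logp = x @ np.log(W.T) + (1.0 - x) @ np.log(1.0 - W.T)
    p = np.exp(logp - logp.max())
    p /= p.sum()
    k = rng.choice(K, p=p)           # lateral inhibition: one winner "spikes"
    # M-step analogue: only the winner's weights move toward the input,
    # an STDP-like local update (clipped to keep likelihoods finite)
    W[k] = np.clip(W[k] + eta * (x - W[k]), 0.01, 0.99)
    return k

# Two repeating binary input patterns; with learning, each neuron
# specializes to one of them.
patterns = [np.array([1, 1, 1, 1, 0, 0, 0, 0], float),
            np.array([0, 0, 0, 0, 1, 1, 1, 1], float)]
for _ in range(500):
    wta_step(patterns[rng.integers(2)])
```

After training, each weight row approximates one input pattern, i.e., the circuit has fit an implicit generative model of its input distribution.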
Spike timing-dependent plasticity (STDP) modifies synaptic strengths based on timing information available locally at each synapse. Despite this, it induces global structures within a recurrently connected network. We study such structures both through simulations and by analyzing the effects of STDP on pair-wise interactions of neurons. We show how conventional STDP acts as a loop-eliminating mechanism and organizes neurons into in- and out-hubs. Loop-elimination increases when depression dominates and turns into loop-generation when potentiation dominates. STDP with a shifted temporal window such that coincident spikes cause depression enhances recurrent connections and functions as a strict buffering mechanism that maintains a roughly constant average firing rate. STDP with the opposite temporal shift functions as a loop eliminator at low rates and as a potent loop generator at higher rates. In general, studying pairwise interactions of neurons provides important insights about the structures that STDP can produce in large networks.
The connectivity structure in neural networks reflects, at least in part, the long-term effects of synaptic plasticity mechanisms that underlie learning and memory. In one of the most widespread such mechanisms, spike-timing-dependent plasticity (STDP), the temporal order of pre- and postsynaptic spiking across a synapse determines whether it is strengthened or weakened. STDP thus modifies synapses solely on the basis of local information. However, STDP can give rise to a variety of global connectivity structures in an interconnected neural network. Here, we provide an analytical framework that can predict the global structures that arise from STDP in such a network. The analytical technique we develop is simple: it involves the study of two interconnected neurons receiving inputs from their surrounding network. Following analytical calculations for a variety of different STDP models, we test and verify all our predictions through full network simulations. More importantly, the analytical tool we develop will allow other researchers to predict the structures that arise from other types of STDP in a network.
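The pair-based, additive STDP rule analyzed in the studies above can be sketched in a few lines. The window shape is the standard double exponential; the amplitudes and time constants below are illustrative assumptions, not values from any of these papers:

```python
import math

# Assumed parameters for an additive, pair-based STDP window (illustrative only).
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # window time constants (ms)

def stdp_dw(dt):
    """Weight change for a single pre/post spike pair.

    dt = t_post - t_pre (ms): pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses.
    """
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

print(stdp_dw(5.0) > 0)   # True: causal pairing potentiates
print(stdp_dw(-5.0) < 0)  # True: anti-causal pairing depresses
```

With depression slightly dominating (A_MINUS > A_PLUS here), uncorrelated inputs drive weights down on average, which is one common way such models are stabilized.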
Synapse location, dendritic active properties and synaptic plasticity are all known to play some role in shaping the different input streams impinging onto a neuron. It remains unclear, however, how the magnitude and spatial distribution of synaptic efficacies emerge from this interplay. Here, we investigate this interplay using a biophysically detailed neuron model of a reconstructed layer 2/3 pyramidal cell and spike-timing-dependent plasticity (STDP). Specifically, we focus on how the efficacies of synapses contributed by different input streams are spatially represented in dendrites after STDP learning. We construct a simple feed-forward network in which a detailed model neuron receives synaptic inputs independently from multiple, equally sized groups of afferent fibers with correlated activity, mimicking the spike activity from different neuronal populations encoding, for example, different sensory modalities. Interestingly, following STDP learning, we observe that for all afferent groups, STDP leads to synaptic efficacies arranged into spatially segregated clusters that effectively partition the dendritic tree. These segregated clusters possess a characteristic global organization in space: they form a tessellation in which each group dominates mutually exclusive regions of the dendrite. Put simply, the dendritic imprint left by the different input streams after STDP learning effectively forms what we term a “dendritic efficacy mosaic.” Furthermore, we show how variations of the inputs and the STDP rule affect this organization. Our model suggests that STDP may be an important mechanism for creating a clustered plasticity engram, which shapes how different input streams are spatially represented in dendrites.
STDP; dendrite; spatial patterning; mutual information index; dendritic efficacy mosaic
The dendritic tree contributes significantly to the elementary computations a neuron performs while converting its synaptic inputs into action potential output. Traditionally, these computations have been characterized as both temporally and spatially localized. Under this localist account, neurons compute near-instantaneous mappings from their current input to their current output, brought about by somatic summation of dendritic contributions that are generated in functionally segregated compartments. However, recent evidence about the presence of oscillations in dendrites suggests a qualitatively different mode of operation: the instantaneous phase of such oscillations can depend on a long history of inputs, and under appropriate conditions, even dendritic oscillators that are remote may interact through synchronization. Here, we develop a mathematical framework to analyze the interactions of local dendritic oscillations and the way these interactions influence single cell computations. Combining weakly coupled oscillator methods with cable theoretic arguments, we derive phase-locking states for multiple oscillating dendritic compartments. We characterize how the phase-locking properties depend on key parameters of the oscillating dendrite: the electrotonic properties of the (active) dendritic segment, and the intrinsic properties of the dendritic oscillators. As a direct consequence, we show how input to the dendrites can modulate phase-locking behavior and hence global dendritic coherence. In turn, dendritic coherence is able to gate the integration and propagation of synaptic signals to the soma, ultimately leading to an effective control of somatic spike generation. Our results suggest that dendritic oscillations enable the dendritic tree to operate on more global temporal and spatial scales than previously thought; notably, local dendritic activity may be a mechanism for generating ongoing whole-cell voltage oscillations.
A central issue in biology is how local processes yield global consequences. This is especially relevant for neurons since these spatially extended cells process local synaptic inputs to generate global action potential output. The dendritic tree of a neuron, which receives most of the inputs, expresses ion channels that can generate nonlinear dynamics. Voltage oscillations are a prominent phenomenon resulting from such ion channels. The distribution of the active membrane channels throughout the cell is often highly non-uniform. This can turn the dendritic tree into a network of sparsely spaced local oscillators. Here we analyze whether local dendritic oscillators can produce cell-wide voltage oscillations. Our mathematical theory shows that indeed even when the dendritic oscillators are weakly coupled, they lock their phases and give global oscillations. We show how the biophysical properties of the dendrites determine the global locking and how it can be controlled by synaptic inputs. As a consequence of global locking, even individual synaptic inputs can affect the timing of action potentials. In fact, dendrites locking in synchrony can lead to sustained firing of the cell. We show that dendritic trees can be bistable, with dendrites locking in either synchrony or asynchrony, which may provide a novel mechanism for single cell-based memory.
Since the discovery of place cells – single pyramidal neurons that encode spatial location – it has been hypothesized that the hippocampus may act as a cognitive map of known environments. This putative function has been extensively modeled using auto-associative networks, which utilize rate-coded synaptic plasticity rules in order to generate strong bi-directional connections between concurrently active place cells that encode for neighboring place fields. However, empirical studies using hippocampal cultures have demonstrated that the magnitude and direction of changes in synaptic strength can also be dictated by the relative timing of pre- and post-synaptic firing according to a spike-timing dependent plasticity (STDP) rule. Furthermore, electrophysiology studies have identified persistent “theta-coded” temporal correlations in place cell activity in vivo, characterized by phase precession of firing as the corresponding place field is traversed. It is not yet clear if STDP and theta-coded neural dynamics are compatible with cognitive map theory and previous rate-coded models of spatial learning in the hippocampus. Here, we demonstrate that an STDP rule based on empirical data obtained from the hippocampus can mediate rate-coded Hebbian learning when pre- and post-synaptic activity is stochastic and has no persistent sequence bias. We subsequently demonstrate that a spiking recurrent neural network that utilizes this STDP rule, alongside theta-coded neural activity, allows the rapid development of a cognitive map during directed or random exploration of an environment of overlapping place fields. Hence, we establish that STDP and phase precession are compatible with rate-coded models of cognitive map development.
STDP; hippocampus; spatial memory; synaptic plasticity; auto-associative network; phase precession; navigation
The ability of spiking neurons to synchronize their activity in a network depends on the response behavior of these neurons as quantified by the phase response curve (PRC) and on coupling properties. The PRC characterizes the effects of transient inputs on spike timing and can be measured experimentally. Here we use the adaptive exponential integrate-and-fire (aEIF) neuron model to determine how subthreshold and spike-triggered slow adaptation currents shape the PRC. Based on that, we predict how synchrony and phase locked states of coupled neurons change in the presence of synaptic delays and unequal coupling strengths. We find that increased subthreshold adaptation currents cause a transition of the PRC from only phase advances to phase advances and delays in response to excitatory perturbations. Increased spike-triggered adaptation currents on the other hand predominantly skew the PRC to the right. Both adaptation induced changes of the PRC are modulated by spike frequency, being more prominent at lower frequencies. Applying phase reduction theory, we show that subthreshold adaptation stabilizes synchrony for pairs of coupled excitatory neurons, while spike-triggered adaptation causes locking with a small phase difference, as long as synaptic heterogeneities are negligible. For inhibitory pairs, synchrony is stable and robust against conduction delays, and adaptation can mediate bistability of in-phase and anti-phase locking. We further demonstrate that stable synchrony and bistable in/anti-phase locking of pairs carry over to synchronization and clustering of larger networks. The effects of adaptation in aEIF neurons on PRCs and network dynamics qualitatively reflect those of biophysical adaptation currents in detailed Hodgkin-Huxley-based neurons, which underscores the utility of the aEIF model for investigating the dynamical behavior of networks.
Our results suggest neuronal spike frequency adaptation as a mechanism for synchronizing low-frequency oscillations in local excitatory networks, but indicate that inhibition rather than excitation generates coherent rhythms at higher frequencies.
Synchronization of neuronal spiking in the brain is related to cognitive functions, such as perception, attention, and memory. It is therefore important to determine which properties of neurons influence their collective behavior in a network and to understand how. A prominent feature of many cortical neurons is spike frequency adaptation, which is caused by slow transmembrane currents. We investigated how these adaptation currents affect the synchronization tendency of coupled model neurons. Using the efficient adaptive exponential integrate-and-fire (aEIF) model and a biophysically detailed neuron model for validation, we found that increased adaptation currents promote synchronization of coupled excitatory neurons at lower spike frequencies, as long as the conduction delays between the neurons are negligible. Inhibitory neurons on the other hand synchronize in the presence of conduction delays, with or without adaptation currents. Our results emphasize the utility of the aEIF model for computational studies of neuronal network dynamics. We conclude that adaptation currents provide a mechanism to generate low frequency oscillations in local populations of excitatory neurons, while faster rhythms seem to be caused by inhibition rather than excitation.
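As a concrete illustration of the neuron model used in the study above, a minimal forward-Euler simulation of the aEIF model can be sketched as follows. The parameter values are illustrative assumptions, not those used in the paper:

```python
import math

def simulate_aeif(I=500.0, t_max=200.0, dt=0.05):
    """Forward-Euler integration of the aEIF model; returns spike times (ms).

    Illustrative parameters: C (pF), gL (nS), EL/VT/Vr/Vpeak (mV),
    DT (mV, spike slope factor), a (nS, subthreshold adaptation),
    b (pA, spike-triggered adaptation), tau_w (ms).
    """
    C, gL, EL = 200.0, 10.0, -70.0
    VT, DT, Vr, Vpeak = -50.0, 2.0, -58.0, 0.0
    a, b, tau_w = 2.0, 60.0, 120.0
    V, w = EL, 0.0
    spikes, t = [], 0.0
    while t < t_max:
        # Membrane equation with exponential spike-initiation term and
        # adaptation current w; w relaxes toward a*(V - EL).
        dV = (-gL * (V - EL) + gL * DT * math.exp((V - VT) / DT) - w + I) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= Vpeak:          # spike: reset V, increment adaptation
            spikes.append(t)
            V, w = Vr, w + b
        t += dt
    return spikes

print(len(simulate_aeif()))  # number of spikes in a 200 ms step-current trial
```

Increasing `a` or `b` strengthens subthreshold or spike-triggered adaptation, respectively, which is the manipulation whose effect on PRCs and synchrony the study analyzes.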
Spike-timing-dependent plasticity (STDP), a form of Hebbian plasticity, is inherently stabilizing. Whether and how GABAergic inhibition influences STDP is not well understood. Using a model neuron driven by converging inputs modifiable by STDP, we determined that a sufficient level of inhibition was critical to ensure that temporal coherence (correlation among presynaptic spike times) of synaptic inputs, rather than initial strength or number of inputs within a pathway, controlled postsynaptic spike timing. Inhibition exerted this effect by preferentially reducing synaptic efficacy, the ability of inputs to evoke postsynaptic action potentials, of the less coherent inputs. In visual cortical slices, inhibition potently reduced synaptic efficacy at ages during but not before the critical period of ocular dominance (OD) plasticity. Whole-cell recordings revealed that the amplitude of unitary IPSCs from parvalbumin-positive (Pv+) interneurons to pyramidal neurons increased during the critical period, while the synaptic decay time constant decreased. In addition, intrinsic properties of Pv+ interneurons matured, resulting in an increase in instantaneous firing rate. Our results suggest that maturation of inhibition in visual cortex ensures that the temporally coherent inputs (e.g. those from the open eye during monocular deprivation) control postsynaptic spike times of binocular neurons, a prerequisite for Hebbian mechanisms to induce OD plasticity.
Evidence suggests that maturation of inhibition is required for the development of plasticity to proceed in the visual cortex. However, the mechanisms by which increased inhibition promotes plasticity are not clear. Here we characterized the maturation of synaptic and intrinsic ionic properties of parvalbumin-positive interneurons, a prominent subtype of inhibitory neuron in the cortex. We used a simple integrate-and-fire model to simulate the influence of maturation of inhibition on associative plasticity rules. We simulated two input pathways that converged onto a single postsynaptic neuron. The temporal pattern of activity was constructed differently for the two pathways: one pathway represented visually driven activity, while the other pathway represented sensory-deprived activity. In mature circuits it is established that postsynaptic cells can select for sensory-driven inputs over deprived inputs, even when the deprived inputs have an initial advantage in synaptic size or number. We demonstrated that maturation of inhibition was required for postsynaptic cells to appropriately select sensory-driven patterns of activity when challenged with an opponent pathway of greater size. These results outline a mechanism by which maturation of inhibition can promote plasticity in the young, a period of development that is characterized by heightened learning.
Reliable signal transmission constitutes a key requirement for neural circuit function. The propagation of synchronous pulse packets through recurrent circuits is hypothesized to be one robust form of signal transmission and has been extensively studied in computational and theoretical works. Yet, although external or internally generated oscillations are ubiquitous across neural systems, their influence on such signal propagation is unclear. Here we systematically investigate the impact of oscillations on propagating synchrony. We find that for standard, additive couplings and a net excitatory effect of oscillations, robust propagation of synchrony is enabled in less prominent feed-forward structures than in systems without oscillations. In the presence of non-additive coupling (as mediated by fast dendritic spikes), even balanced oscillatory inputs may enable robust propagation. Here, emerging resonances create complex locking patterns between oscillations and spike synchrony. Interestingly, these resonances make the circuits capable of selecting specific pathways for signal transmission. Oscillations may thus promote reliable transmission and, in co-action with dendritic nonlinearities, provide a mechanism for information processing by selectively gating and routing of signals. Our results are of particular interest for the interpretation of sharp wave/ripple complexes in the hippocampus, where previously learned spike patterns are replayed in conjunction with global high-frequency oscillations. We suggest that the oscillations may serve to stabilize the replay.
Rhythmic activity in the brain is ubiquitous, but its functions are debated. Here we show that it may contribute to the reliable transmission of information within brain areas. We find that its effect is particularly strong if we take nonlinear coupling into account. This experimentally found neuronal property implies that inputs which arrive nearly simultaneously can have a much stronger impact than expected from the sum of their individual strengths. In such systems, rhythmic activity supports information transmission even if its positive and negative parts cancel exactly at all times. Further, the information transmission can adapt to the oscillation frequency to benefit from it optimally. Finally, we show that rhythms with different frequencies may enable or disable communication channels, and are thus suitable for steering information flow.
Reward-modulated spike-timing-dependent plasticity (STDP) has recently emerged as
a candidate for a learning rule that could explain how behaviorally relevant
adaptive changes in complex networks of spiking neurons could be achieved in a
self-organizing manner through local synaptic plasticity. However, the
capabilities and limitations of this learning rule could so far only be tested
through computer simulations. This article provides tools for an analytic
treatment of reward-modulated STDP, which allows us to predict under which
conditions reward-modulated STDP will achieve a desired learning effect. These
analytical results imply that neurons can learn through reward-modulated STDP to
classify not only spatial but also temporal firing patterns of presynaptic
neurons. They also can learn to respond to specific presynaptic firing patterns
with particular spike patterns. Finally, the resulting learning theory predicts
that even difficult credit-assignment problems, where it is very hard to tell
which synaptic weights should be modified in order to increase the global reward
for the system, can be solved in a self-organizing manner through
reward-modulated STDP. This yields an explanation for a fundamental experimental
result on biofeedback in monkeys by Fetz and Baker. In this experiment monkeys
were rewarded for increasing the firing rate of a particular neuron in the
cortex and were able to solve this extremely difficult credit assignment
problem. Our model for this experiment relies on a combination of
reward-modulated STDP with variable spontaneous firing activity. Hence it also
provides a possible functional explanation for trial-to-trial variability, which
is characteristic for cortical networks of neurons but has no analogue in
currently existing artificial computing systems. In addition our model
demonstrates that reward-modulated STDP can be applied to all synapses in a
large recurrent neural network without endangering the stability of the network.

A major open problem in computational neuroscience is to explain how learning,
i.e., behaviorally relevant modification of the central nervous system, can
arise from experimentally observed rules of synaptic plasticity.
Spike-timing-dependent plasticity (STDP) is a rule for changes in the strength
of an individual synapse that is supported by experimental data from a variety
of species. However, it is not clear how this synaptic plasticity rule can
produce meaningful modifications in networks of neurons. Only when one takes
into account that consolidation of synaptic plasticity requires a third signal,
such as a change in the concentration of a neuromodulator (which might, for
example, be related to rewards or expected rewards), do meaningful changes in
the structure of networks of neurons emerge. In this article we provide an
analytical foundation for such reward-modulated versions of STDP that predicts
when this type of synaptic plasticity can produce functionally relevant changes
in networks of neurons. In particular we show that seemingly inexplicable
experimental data on biofeedback, where a monkey learned to increase the firing
rate of an arbitrarily chosen neuron in the motor cortex, can be explained on
the basis of this new learning theory.
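One common way to formalize reward-modulated STDP, consistent with the description above, is to let spike pairings tag a synapse with a decaying eligibility trace that a later reward signal converts into an actual weight change. The sketch below is a minimal illustration under assumed parameters; the class name, the exponential trace kinetics, and all constants are hypothetical, not taken from the paper:

```python
import math

TAU_C = 500.0            # eligibility-trace decay constant (ms), assumed
A_PLUS, TAU_PLUS = 0.01, 20.0  # STDP potentiation window, assumed

class RewardModulatedSynapse:
    """Toy synapse: STDP pairings tag it; reward converts the tag to a change."""

    def __init__(self, w=0.5):
        self.w = w
        self.c = 0.0          # eligibility trace
        self.last_t = 0.0

    def _decay(self, t):
        self.c *= math.exp(-(t - self.last_t) / TAU_C)
        self.last_t = t

    def on_pair(self, t, dt_pre_post):
        """A pre/post pair at time t tags the synapse (no weight change yet)."""
        self._decay(t)
        if dt_pre_post > 0:   # causal (pre-before-post) pairing -> positive tag
            self.c += A_PLUS * math.exp(-dt_pre_post / TAU_PLUS)

    def on_reward(self, t, reward):
        """A delayed reward converts the remaining trace into a weight change."""
        self._decay(t)
        self.w += reward * self.c

syn = RewardModulatedSynapse()
syn.on_pair(t=0.0, dt_pre_post=10.0)  # causal pairing tags the synapse
syn.on_reward(t=300.0, reward=1.0)    # reward 300 ms later potentiates
print(syn.w > 0.5)  # True: the tag survived the reward delay
```

The trace solves the temporal credit-assignment problem the abstract describes: only synapses whose pairings shortly preceded the reward retain a sizable tag when the reward arrives.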
We show that the local spike timing-dependent plasticity (STDP) rule has the effect of regulating the trans-synaptic weights of loops of any length within a simulated network of neurons. We show that depending on STDP's polarity, functional loops are formed or eliminated in networks driven to normal spiking conditions by random, partially correlated inputs, where functional loops comprise synaptic weights that exceed a positive threshold. We further prove that STDP is a form of loop-regulating plasticity for the case of a linear network driven by noise. Thus a notable local synaptic learning rule makes a specific prediction about synapses in the brain in which standard STDP is present: that under normal spiking conditions, they should participate in predominantly feed-forward connections at all scales. Our model implies that any deviations from this prediction would require a substantial modification to the hypothesized role for standard STDP. Given its widespread occurrence in the brain, we predict that STDP could also regulate long range functional loops among individual neurons across all brain scales, up to, and including, the scale of global brain network topology.
STDP; microcircuitry; network; topology; neuromodulation; synfire; neocortex; striatum
Experimental studies have observed long-term synaptic potentiation (LTP) when a presynaptic neuron fires shortly before a postsynaptic neuron, and long-term depression (LTD) when the presynaptic neuron fires shortly after, a phenomenon known as spike-timing-dependent plasticity (STDP). When a neuron is presented successively with discrete volleys of input spikes, STDP has been shown to learn ‘early spike patterns’, that is, to concentrate synaptic weights on afferents that consistently fire early, with the result that the postsynaptic spike latency decreases until it reaches a minimal and stable value. Here, we show that these results still stand in a continuous regime where afferents fire continuously with a constant population rate. As such, STDP is able to solve a very difficult computational problem: to localize a repeating spatio-temporal spike pattern embedded in equally dense ‘distractor’ spike trains. STDP thus enables a form of temporal coding, even in the absence of an explicit time reference. Given that the mechanism exposed here is simple and cheap, it is hard to believe that the brain did not evolve to use it.
Spike timing dependent plasticity (STDP) has been observed experimentally in vitro and is a widely studied neural algorithm for synaptic modification. While the functional role of STDP has been investigated extensively, the effect of rhythms on the precise timing of STDP has not been characterized as well. We use a simplified biophysical model of a cortical network that generates pyramidal-interneuronal network gamma (PING) rhythms. Plasticity via STDP is investigated at the excitatory pyramidal cell synapse from a gamma frequency (30–90 Hz) input independent of the network gamma rhythm. The input may represent a corticocortical or an information-specific thalamocortical connection. This synapse is mediated by N-methyl-D-aspartate receptor (NMDAR) currents. For distinct network and input frequencies, the model shows robust frequency regimes of potentiation and depression, providing a mechanism by which responses to certain inputs can potentiate while responses to other inputs depress. For potentiating regimes, the model suggests an optimal amount and duration of plasticity that can occur, which depends on the time course for the decay of the postsynaptic NMDAR current. Prolonging the duration of the input beyond this optimal time results in depression. Inserting pauses in the input can increase the total potentiation. The optimal pause length corresponds to the decay time of the NMDAR current. Thus, STDP in this model provides a mechanism for potentiation and depression depending on input frequency and suggests that the slow NMDAR current decay helps to regulate the optimal amplitude and duration of the plasticity. The optimal pause length is comparable to the time scale of the negative phase of a modulatory theta rhythm, which may pause gamma rhythm spiking. Our pause results may suggest a novel role for this theta rhythm in plasticity. Finally, we discuss our results in the context of auditory thalamocortical plasticity.
Rhythms are well-studied phenomena in many animal species. Brain rhythms in the gamma frequency range (30–90 Hz) are thought to play a role in attention and memory. In this paper, we are interested in how cortical gamma rhythms interact with information-specific inputs that also have a significant gamma frequency component. The results from our computational model show that plasticity associated with learning depends on the specific frequencies of the input and cortical gamma rhythms. The results show a mechanism by which both increases and decreases in the strength of the input connection can occur, depending on the specific frequency of the input. A current mediated by NMDA receptors may be responsible for the temporal course of the plasticity seen in these brain regions. We discuss the implications of our results for conditioning paradigms applied to auditory learning.
Spike-timing-dependent plasticity (STDP) provides a cellular implementation of the Hebb postulate, which states that synapses, whose activity repeatedly drives action potential firing in target cells, are potentiated. At glutamatergic synapses onto hippocampal and neocortical pyramidal cells, synaptic activation followed by spike firing in the target cell causes long-term potentiation (LTP)—as predicted by Hebb—whereas excitatory postsynaptic potentials (EPSPs) evoked after a spike elicit long-term depression (LTD)—a phenomenon that was not specifically addressed by Hebb. In both instances the action potential in the postsynaptic target neuron is an instructive signal that is capable of supporting synaptic plasticity. STDP generally relies on the propagation of Na+ action potentials that are initiated in the axon hillock back into the dendrite, where they cause depolarization and boost local calcium influx. However, recent studies in CA1 hippocampal pyramidal neurons have suggested that local calcium spikes might provide a more efficient trigger for LTP induction than backpropagating action potentials. Dendritic calcium spikes also play a role in an entirely different type of STDP that can be observed in cerebellar Purkinje cells. These neurons lack backpropagating Na+ spikes. Instead, plasticity at parallel fiber (PF) to Purkinje cell synapses depends on the relative timing of PF-EPSPs and activation of the glutamatergic climbing fiber (CF) input that causes dendritic calcium spikes. Thus, the instructive signal in this system is externalized. Importantly, when EPSPs are elicited before CF activity, PF-LTD is induced rather than LTP. Thus, STDP in the cerebellum follows a timing rule that is opposite to its hippocampal/neocortical counterparts. Regardless, a common motif in plasticity is that LTD/LTP induction depends on the relative timing of synaptic activity and regenerative dendritic spikes which are driven by the instructive signal.
calcium; climbing fiber; dendrite; long-term depression; long-term potentiation; parallel fiber; Purkinje cell; pyramidal cell
A computationally rich algorithm of synaptic plasticity has been proposed based on the experimental observation that the sign and amplitude of the change in synaptic weight are dictated by the temporal order and temporal contiguity of pre- and postsynaptic activities. For more than a decade, this spike-timing-dependent plasticity (STDP) has been studied mainly in brain slices of different brain structures and in cultured neurons. Although not yet compelling, evidence for the STDP rule in the intact brain, including primary sensory cortices, has recently been provided. From insects to mammals, the presentation of precisely timed sensory inputs drives synaptic and functional plasticity in the intact central nervous system, with timing requirements similar to those of the in vitro-defined STDP rule. The convergent evolution of this plasticity rule in species belonging to such distant phylogenetic groups points to the efficiency of STDP, as a mechanism for modifying synaptic weights, as the basis of activity-dependent development, learning and memory. In spite of the ubiquity of STDP phenomena, a number of significant variations of the rule are observed in different structures, neuronal types and even synapses on the same neuron, as well as between in vitro and in vivo conditions. In addition, the state of the neuronal network, its ongoing activity and the activation of ascending neuromodulatory systems in different behavioral conditions have dramatic consequences for the expression of spike-timing-dependent synaptic plasticity, and should be explored further.
Hebb; STDP; in vivo; ongoing activity; synaptic plasticity; learning