Spike-timing-dependent plasticity (STDP) has been observed in many brain areas such as sensory cortices, where it is hypothesized to structure synaptic connections between neurons. Previous studies have demonstrated how STDP can capture spiking information at short timescales using specific input configurations, such as coincident spiking, spike patterns and oscillatory spike trains. However, the corresponding computation in the case of arbitrary input signals is still unclear. This paper provides an overarching picture of the algorithm inherent to STDP, tying together many previous results for commonly used models of pairwise STDP. For a single neuron with plastic excitatory synapses, we show how STDP performs a spectral analysis on the temporal cross-correlograms between its afferent spike trains. The postsynaptic responses and STDP learning window determine kernel functions that specify how the neuron “sees” the input correlations. We thus denote this unsupervised learning scheme as ‘kernel spectral component analysis’ (kSCA). In particular, the whole input correlation structure must be considered, since all plastic synapses compete with each other. We find that kSCA is enhanced when weight-dependent STDP induces gradual synaptic competition. For a spiking neuron with a “linear” response and pairwise STDP alone, we find that kSCA resembles principal component analysis (PCA). However, plain STDP does not isolate correlation sources in general, e.g., when they are mixed among the input spike trains. In other words, it does not perform independent component analysis (ICA). Tuning the neuron to a single correlation source can be achieved when STDP is paired with a homeostatic mechanism that reinforces the competition between synaptic inputs. Our results suggest that neuronal networks equipped with STDP can process signals encoded in transient spiking activity at timescales of tens of milliseconds, the range typical of STDP learning windows.
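A minimal sketch of the pairwise rule underlying this analysis (the exponential window is a standard idealization; all names and parameter values below are illustrative, not taken from the paper):

```python
import numpy as np

def stdp_window(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pairwise STDP learning window W(dt), with dt = t_post - t_pre in ms.
    Pre-before-post (dt >= 0) potentiates; post-before-pre depresses."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

def total_pair_drift(pre, post):
    """All-to-all pairwise drift of one weight: sum W over every
    (pre, post) spike-time difference. Averaged over the input
    statistics, this is the quantity that couples the weight dynamics
    to the cross-correlograms between afferent spike trains."""
    return float(stdp_window(post[:, None] - pre[None, :]).sum())
```

Replacing the spike trains by their statistics turns the double sum into an integral of W against the input cross-correlogram, which is where the spectral (kernel) picture enters.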
Tuning feature extraction of sensory stimuli is an important function for synaptic plasticity models. A widely studied example is the development of orientation preference in the primary visual cortex, which can emerge using moving bars in the visual field. A crucial point is the decomposition of stimuli into basic information tokens, e.g., selecting individual bars even though they are presented in overlapping pairs (vertical and horizontal). Among classical unsupervised learning models, independent component analysis (ICA) is capable of isolating basic tokens, whereas principal component analysis (PCA) cannot. This paper focuses on spike-timing-dependent plasticity (STDP), whose functional implications for neural information processing have been intensively studied both theoretically and experimentally in the last decade. Following recent studies demonstrating that STDP can perform ICA for specific cases, we show how STDP relates to PCA or ICA, and in particular explain the conditions under which it switches between them. Here information at the neuronal level is assumed to be encoded in temporal cross-correlograms of spike trains. We find that a linear spiking neuron equipped with pairwise STDP requires additional mechanisms, such as a homeostatic regulation of its output firing, in order to separate mixed correlation sources and thus perform ICA.
In this work we investigate the possibilities offered by a minimal framework of artificial spiking neurons to be deployed in silico. We introduce a hierarchical network architecture of spiking neurons which learns to recognize moving objects in a visual environment and to determine the correct motor output for each object. These tasks are learned through both supervised and unsupervised spike timing dependent plasticity (STDP). STDP is responsible for the strengthening (or weakening) of synapses in relation to pre- and post-synaptic spike times and has been described as a Hebbian paradigm taking place both in vitro and in vivo. We utilize a variation of STDP learning, called burst-STDP, which is based on the notion that, because spikes are energetically expensive, strong bursting activity carries more information than single (sparse) spikes. Furthermore, this learning algorithm takes advantage of homeostatic renormalization, which has been hypothesized to promote memory consolidation during NREM sleep. Using this learning rule, we design a spiking neural network architecture capable of object recognition, motion detection, attention towards important objects, and motor control outputs. We demonstrate the abilities of our design in a simple environment with distractor objects, multiple objects moving concurrently, and in the presence of noise. Most importantly, we show how this neural network is capable of performing these tasks using a simple leaky integrate-and-fire (LIF) neuron model with binary synapses, making it fully compatible with state-of-the-art digital neuromorphic hardware designs. As such, the building blocks and learning rules presented in this paper appear promising for scalable, fully neuromorphic systems implemented in hardware chips.
Spike synchronization is thought to have a constructive role for feature integration, attention, associative learning, and the formation of bidirectionally connected Hebbian cell assemblies. By contrast, theoretical studies on spike-timing-dependent plasticity (STDP) report an inherently decoupling influence of spike synchronization on synaptic connections of coactivated neurons. For example, bidirectional synaptic connections as found in cortical areas could be reproduced only by assuming realistic models of STDP and rate coding. We resolve this conflict by theoretical analysis and simulation of various simple and realistic STDP models that provide a more complete characterization of conditions when STDP leads to either coupling or decoupling of neurons firing in synchrony. In particular, we show that STDP consistently couples synchronized neurons if key model parameters are matched to physiological data: First, synaptic potentiation must be significantly stronger than synaptic depression for small (positive or negative) time lags between presynaptic and postsynaptic spikes. Second, spike synchronization must be sufficiently imprecise, for example, within a time window of 5–10 ms instead of 1 ms. Third, axonal propagation delays should not be much larger than dendritic delays. Under these assumptions synchronized neurons will be strongly coupled leading to a dominance of bidirectional synaptic connections even for simple STDP models and low mean firing rates at the level of spontaneous activity.
Hebbian cell assemblies; learning; memory; spike synchronization; STDP; synaptic connectivity; synaptic plasticity
We show that the local spike timing-dependent plasticity (STDP) rule has the effect of regulating the trans-synaptic weights of loops of any length within a simulated network of neurons. We show that depending on STDP's polarity, functional loops are formed or eliminated in networks driven to normal spiking conditions by random, partially correlated inputs, where functional loops comprise synaptic weights that exceed a positive threshold. We further prove that STDP is a form of loop-regulating plasticity for the case of a linear network driven by noise. Thus a notable local synaptic learning rule makes a specific prediction about synapses in the brain in which standard STDP is present: that under normal spiking conditions, they should participate in predominantly feed-forward connections at all scales. Our model implies that any deviations from this prediction would require a substantial modification to the hypothesized role for standard STDP. Given its widespread occurrence in the brain, we predict that STDP could also regulate long range functional loops among individual neurons across all brain scales, up to, and including, the scale of global brain network topology.
STDP; microcircuitry; network; topology; neuromodulation; synfire; neocortex; striatum
It has previously been shown that by using spike-timing-dependent plasticity (STDP), neurons can adapt to the beginning of a repeating spatio-temporal firing pattern in their input. In the present work, we demonstrate that this mechanism can be extended to train recognizers for longer spatio-temporal input signals. Using a number of neurons that are mutually connected by plastic synapses and subject to a global winner-takes-all mechanism, chains of neurons can form where each neuron is selective to a different segment of a repeating input pattern, and the neurons are feed-forwardly connected in such a way that both the correct input segment and the firing of the previous neurons are required in order to activate the next neuron in the chain. This is akin to a simple class of finite state automata. We show that nearest-neighbor STDP (where only the pre-synaptic spike most recent to a post-synaptic one is considered) leads to “nearest-neighbor” chains where connections only form between subsequent states in a chain (similar to classic “synfire chains”). In contrast, “all-to-all spike-timing-dependent plasticity” (where all pre- and post-synaptic spike pairs matter) leads to multiple connections that can span several temporal stages in the chain; these connections respect the temporal order of the neurons. It is also demonstrated that previously learnt individual chains can be “stitched together” by repeatedly presenting them in a fixed order. This way longer sequence recognizers can be formed, and potentially also nested structures. Robustness of recognition with respect to speed variations in the input patterns is shown to depend on rise-times of post-synaptic potentials and the membrane noise. It is argued that the memory capacity of the model is high, but could theoretically be increased using sparse codes.
sequence learning; synfire chains; spiking neurons; spike-timing-dependent plasticity; neural automata
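The two pairing schemes contrasted in the abstract above can be sketched side by side (exponential window; the constants and the "closest in time" interpretation of nearest-neighbor pairing are illustrative assumptions):

```python
import numpy as np

def window(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pairwise window; dt = t_post - t_pre in ms (illustrative values)."""
    return a_plus * np.exp(-dt / tau) if dt >= 0 else -a_minus * np.exp(dt / tau)

def dw_all_to_all(pre, post):
    """All-to-all STDP: every pre/post spike pair contributes, so
    updates can span several temporal stages of a chain."""
    return sum(window(tp - tq) for tp in post for tq in pre)

def dw_nearest_neighbor(pre, post):
    """Nearest-neighbor STDP: for each postsynaptic spike, only the
    presynaptic spike closest in time contributes."""
    return sum(window(tp - pre[np.argmin(np.abs(pre - tp))]) for tp in post)

pre = np.array([0.0, 20.0, 40.0])   # presynaptic spike times (ms)
post = np.array([5.0, 45.0])        # postsynaptic spike times (ms)
```

On the same spike trains the two schemes give different total updates: all-to-all pairing picks up the long-lag pairs that nearest-neighbor pairing discards, which is why it can form connections spanning several stages of a chain.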
We review biophysical models of synaptic plasticity, with a focus on spike-timing dependent plasticity (STDP). The common property of the discussed models is that synaptic changes depend on the dynamics of the intracellular calcium concentration, which itself depends on pre- and postsynaptic activity. We start by discussing simple models in which plasticity changes are based directly on calcium amplitude and dynamics. We then consider models in which dynamic intracellular signaling cascades form the link between the calcium dynamics and the plasticity changes. Both mechanisms of induction of STDP (through the ability of pre/postsynaptic spikes to evoke changes in the state of the synapse) and of maintenance of the evoked changes (through bistability) are discussed.
STDP; biophysical models; bistability; induction; maintenance; protein signaling cascade; calcium control hypothesis; CaMKII
Spike-timing-dependent synaptic plasticity (STDP) is a simple and effective learning rule for sequence learning. However, synapses subject to STDP rules are readily perturbed in noisy circumstances, because synaptic conductances are modified by pre- and postsynaptic spikes elicited within a few tens of milliseconds, regardless of whether those spikes convey information. Noisy firing, present everywhere in the brain, may induce irrelevant enhancement of synaptic connections through STDP rules, resulting in uncertain memory encoding and obscure memory patterns. Here we show that the LTD windows of STDP rules enable robust sequence learning amid background noise, in cooperation with a large signal transmission delay between neurons and a theta rhythm, using a network model of entorhinal cortex layer II with entorhinal-hippocampal loop connections. The essential elements of the present model for robust sequence learning amid background noise are the symmetric STDP rule with LTD windows on both sides of the LTP window, the loop connections with a large signal transmission delay, and the theta rhythm pacing the activities of stellate cells. Above all, the LTD window at positive spike timings is important for preventing the influence of noise as sequence learning progresses.
STDP; LTD window; Noise; Sequence learning; Entorhinal cortex; Entorhinal-hippocampal loop circuitry; Large transmission delay; Theta rhythm
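A window of the kind described above, with an LTP peak flanked by LTD at both positive and negative spike timings, can be sketched qualitatively; the difference-of-Gaussians form and all constants are illustrative, not the model's actual parametrization:

```python
import numpy as np

def symmetric_stdp_window(dt, a_ltp=0.02, a_ltd=0.01,
                          tau_ltp=10.0, tau_ltd=40.0):
    """Symmetric STDP window: LTP around dt = 0, flanked by LTD windows
    on both sides. The positive-dt LTD flank is what suppresses
    noise-driven potentiation as sequence learning progresses."""
    dt = np.asarray(dt, dtype=float)
    return (a_ltp * np.exp(-dt**2 / (2 * tau_ltp**2))
            - a_ltd * np.exp(-dt**2 / (2 * tau_ltd**2)))
```

Near-coincident spikes (small |dt|) fall in the LTP peak, while accidental pairings at lags of several tens of milliseconds, the signature of background noise, land in the depressing flanks.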
STDP (spike-timing-dependent synaptic plasticity) is thought to be a synaptic learning rule that embeds spike-timing information into a specific pattern of synaptic strengths in neuronal circuits, resulting in a memory. STDP consists of bidirectional long-term changes in synaptic strengths. This process includes long-term potentiation and long-term depression, which are dependent on the timing of presynaptic and postsynaptic spikings. In this review, we focus on computational aspects of signaling mechanisms that induce and maintain STDP as a key step toward the definition of a general synaptic learning rule. In addition, we discuss the temporal and spatial aspects of STDP, and the requirement of a homeostatic mechanism of STDP in vivo.
Reward-modulated spike timing dependent plasticity (STDP) combines unsupervised STDP with a reinforcement signal that modulates synaptic changes. It was proposed as a learning rule capable of solving the distal reward problem in reinforcement learning. Nonetheless, the performance and limitations of this learning mechanism have yet to be tested on biological problems. In our work, rewarded STDP was implemented to model foraging behavior in a simulated environment. Over the course of training, the network of spiking neurons developed highly successful decision-making. The network performance remained stable even after significant perturbations of synaptic structure. Rewarded STDP alone was insufficient to learn effective decision-making, owing to the difficulty of maintaining homeostatic equilibrium of synaptic weights and the development of local performance maxima. Our study predicts that successful learning requires stabilizing mechanisms that allow neurons to balance their input and output synapses, as well as synaptic noise.
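The general scheme, in which raw STDP updates are gated by a delayed reward through an eligibility trace, can be sketched in a few lines; the function and parameter names here are ours, not the study's:

```python
import math

def reward_stdp_step(w, elig, raw_stdp, reward, dt=1.0, tau_e=1000.0, lr=0.1):
    """One time step of reward-modulated STDP. Raw pairwise STDP updates
    are not applied directly: they accumulate in a slowly decaying
    eligibility trace, and the weight changes only when a (possibly
    delayed) reward signal multiplies the trace."""
    elig = elig * math.exp(-dt / tau_e) + raw_stdp  # decay, then accumulate
    w = w + lr * reward * elig                      # reward gates learning
    return w, elig
```

Because the trace decays over tau_e (here about a second), a reward arriving well after the causal spike pairing can still credit the responsible synapse, which is how the scheme addresses the distal reward problem.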
Single spikes and their timing matter in changing synaptic efficacy, which is known as spike-timing-dependent plasticity (STDP). Most previous studies treated spikes as all-or-none events, considering their duration and magnitude negligible. Here we explore the effects of action potential (AP) duration on synaptic plasticity in a simplified model neuron using computer simulations. We propose a novel STDP model (dSTDP) that depresses synapses using an AP-duration-dependent LTD window and potentiates synaptic strength when presynaptic spikes arrive before or during a postsynaptic AP. We demonstrate that AP duration is another key factor for desensitizing postsynaptic firing and for controlling the shape of the synaptic weight distribution. Extended AP durations produce a wide unimodal weight distribution resembling those reported experimentally, and leave the postsynaptic neuron quiescent when disturbed by Poisson noise spike trains while remaining equally sensitive to synchronized input. Our results suggest that the impact of AP duration on synaptic plasticity, modeled here as an AP-dependent STDP window, can be dramatic and should motivate future STDP studies.
Top-down synapses are ubiquitous throughout neocortex and play a central role in cognition, yet little is known about their development and specificity. During sensory experience, lower neocortical areas are activated before higher ones, causing top-down synapses to experience a preponderance of post-synaptic activity preceding pre-synaptic activity. This timing pattern is the opposite of that experienced by bottom-up synapses, which suggests that different versions of spike-timing dependent synaptic plasticity (STDP) rules may be required at top-down synapses. We consider a two-layer neural network model and investigate which STDP rules can lead to a distribution of top-down synaptic weights that is stable, diverse and avoids strong loops. We introduce a temporally reversed rule (rSTDP) where top-down synapses are potentiated if post-synaptic activity precedes pre-synaptic activity. Combining analytical work and integrate-and-fire simulations, we show that only depression-biased rSTDP (and not classical STDP) produces stable and diverse top-down weights. The conclusions did not change upon addition of homeostatic mechanisms, multiplicative STDP rules or weak external input to the top neurons. Our prediction for rSTDP at top-down synapses, which are distally located, is supported by recent neurophysiological evidence showing the existence of temporally reversed STDP in synapses that are distal to the post-synaptic cell body.
The complex circuitry in the cerebral cortex is characterized by bottom-up connections, which carry feedforward information from the sensory periphery to higher areas, and top-down connections, where the information flow is reversed. Changes over time in the strength of synaptic connections between neurons underlie development, learning and memory. A fundamental mechanism to change synaptic strength is spike timing dependent plasticity, whereby synapses are strengthened whenever pre-synaptic spikes shortly precede post-synaptic spikes and are weakened otherwise; the relative timing of spikes therefore dictates the direction of plasticity. Spike timing dependent plasticity has been observed in multiple species and different brain areas. Here, we argue that top-down connections obey a learning rule with a reversed temporal dependence, which we call reverse spike timing dependent plasticity. We use mathematical analysis and computational simulations to show that this reverse time learning rule, and not previous learning rules, leads to a biologically plausible connectivity pattern with stable synaptic strengths. This reverse time learning rule is supported by recent neuroanatomical and neurophysiological experiments and can explain empirical observations about the development and function of top-down synapses in the brain.
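The contrast between the classical rule and the proposed reverse rule can be sketched as mirrored exponential windows; forms and constants are illustrative, with the depression bias expressed as a_minus > a_plus:

```python
import math

def stdp(dt, a_plus=0.005, a_minus=0.006, tau=20.0):
    """Classical window for bottom-up synapses: pre-before-post
    (dt = t_post - t_pre > 0) potentiates."""
    return a_plus * math.exp(-dt / tau) if dt >= 0 else -a_minus * math.exp(dt / tau)

def rstdp(dt, a_plus=0.004, a_minus=0.006, tau=20.0):
    """Temporally reversed, depression-biased rSTDP for top-down
    synapses: post-before-pre (dt < 0) potentiates, matching the
    post-leading-pre timing statistics these synapses experience."""
    return a_plus * math.exp(dt / tau) if dt <= 0 else -a_minus * math.exp(-dt / tau)
```

Under sensory drive, top-down synapses mostly see dt > 0 with post leading pre in causal terms reversed, so the depressing lobe of rstdp dominates and runaway strengthening of strong loops is avoided.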
A computationally rich algorithm of synaptic plasticity has been proposed based on the experimental observation that the sign and amplitude of the change in synaptic weight are dictated by the temporal order and temporal contiguity between pre- and postsynaptic activities. For more than a decade, this spike-timing-dependent plasticity (STDP) has been studied mainly in brain slices of different brain structures and in cultured neurons. Although not yet compelling, evidence for the STDP rule in the intact brain, including primary sensory cortices, has recently been provided. From insects to mammals, the presentation of precisely timed sensory inputs drives synaptic and functional plasticity in the intact central nervous system, with timing requirements similar to those of the in vitro-defined STDP rule. The convergent evolution of this plasticity rule in species belonging to such distant phylogenetic groups points to the efficiency of STDP as a mechanism for modifying synaptic weights, and to its role as a basis of activity-dependent development, learning and memory. In spite of the ubiquity of STDP phenomena, significant variations of the rule are observed in different structures, neuronal types and even synapses on the same neuron, as well as between in vitro and in vivo conditions. In addition, the state of the neuronal network, its ongoing activity and the activation of ascending neuromodulatory systems in different behavioral conditions have dramatic consequences for the expression of spike-timing-dependent synaptic plasticity, and should be further explored.
Hebb; STDP; in vivo; ongoing activity; synaptic plasticity; learning
Spike-timing-dependent plasticity (STDP) has attracted considerable experimental and theoretical attention over the last decade. In the most basic formulation, STDP provides a fundamental unit – a spike pair – for quantifying the induction of long-term changes in synaptic strength. However, many factors, both pre- and postsynaptic, can affect synaptic transmission and integration, especially when multiple spikes are considered. Here we review the experimental evidence for multiple types of nonlinear temporal interactions in STDP, focusing on the contributions of individual spike pairs, overall spike rate, and precise spike timing for modification of cortical and hippocampal excitatory synapses. We discuss the underlying processes that determine the specific learning rules at different synapses, such as postsynaptic excitability and short-term depression. Finally, we describe the success of efforts toward building predictive, quantitative models of how complex and natural spike trains induce long-term synaptic modifications.
cortex; hippocampus; LTD; LTP; model; spikes; STDP; synaptic plasticity
Spike-timing dependent plasticity (STDP) is a biologically constrained unsupervised form of learning that potentiates or depresses synaptic connections based on the precise timing of pre-synaptic and post-synaptic firings. The effects of on-going STDP on the topology of evolving model neural networks were assessed in 50 unique simulations which modeled 2 h of activity. After a period of stabilization, a number of global and local topological features were monitored periodically to quantify on-going changes in network structure. Global topological features included the total number of remaining synapses, average synaptic strengths, and average number of synapses per neuron (degree). Under a range of different input regimes and initial network configurations, each network maintained a robust and highly stable global structure across time. Local topology was monitored by assessing state changes of all three-neuron subgraphs (triads) present in the networks. Overall counts and the range of triad configurations varied little across the simulations; however, a substantial set of individual triads continued to undergo rapid state changes and revealed a dynamic local topology. In addition, specific small-world properties also fluctuated across time. These findings suggest that on-going STDP provides an efficient means of selecting and maintaining a stable yet flexible network organization.
spike-timing dependent plasticity (STDP); motif; topology; small-world; neural network; graph theory; simulation
Spike timing-dependent plasticity (STDP) is a computationally powerful form of plasticity in which synapses are strengthened or weakened according to the temporal order and precise millisecond-scale delay between presynaptic and postsynaptic spiking activity. STDP is readily observed in vitro, but evidence for STDP in vivo is scarce. Here, we studied spike timing-dependent synaptic depression in single putative pyramidal neurons of the rat primary somatosensory cortex (S1) in vivo, using two techniques. First, we recorded extracellularly from layer 2/3 (L2/3) and L5 neurons, and paired spontaneous action potentials (postsynaptic spikes) with subsequent subthreshold deflection of one whisker (to drive presynaptic afferents to the recorded neuron) to produce “post-leading-pre” spike pairings at known delays. Short delay pairings (<17 ms) resulted in a significant decrease of the extracellular spiking response specific to the paired whisker, consistent with spike timing-dependent synaptic depression. Second, in whole-cell recordings from neurons in L2/3, we paired postsynaptic spikes elicited by direct-current injection with subthreshold whisker deflection to drive presynaptic afferents to the recorded neuron at precise temporal delays. Post-leading-pre pairing (<33 ms delay) decreased the slope and amplitude of the PSP evoked by the paired whisker, whereas “pre-leading-post” delays failed to produce depression, and sometimes produced potentiation of whisker-evoked PSPs. These results demonstrate that spike timing-dependent synaptic depression occurs in S1 in vivo, and is therefore a plausible plasticity mechanism in the sensory cortex.
spike-timing dependent plasticity; STDP; somatosensory cortex; plasticity; rat; synaptic depression; LTP; LTD; barrel
Spike-timing-dependent plasticity (STDP), a form of Hebbian plasticity, is inherently stabilizing. Whether and how GABAergic inhibition influences STDP is not well understood. Using a model neuron driven by converging inputs modifiable by STDP, we determined that a sufficient level of inhibition was critical to ensure that temporal coherence (correlation among presynaptic spike times) of synaptic inputs, rather than initial strength or number of inputs within a pathway, controlled postsynaptic spike timing. Inhibition exerted this effect by preferentially reducing synaptic efficacy, the ability of inputs to evoke postsynaptic action potentials, of the less coherent inputs. In visual cortical slices, inhibition potently reduced synaptic efficacy at ages during but not before the critical period of ocular dominance (OD) plasticity. Whole-cell recordings revealed that the amplitude of unitary IPSCs from parvalbumin positive (Pv+) interneurons to pyramidal neurons increased during the critical period, while the synaptic decay time-constant decreased. In addition, intrinsic properties of Pv+ interneurons matured, resulting in an increase in instantaneous firing rate. Our results suggest that maturation of inhibition in visual cortex ensures that the temporally coherent inputs (e.g. those from the open eye during monocular deprivation) control postsynaptic spike times of binocular neurons, a prerequisite for Hebbian mechanisms to induce OD plasticity.
Evidence suggests that maturation of inhibition is required for the development of plasticity to proceed in the visual cortex. However, the mechanisms by which increased inhibition promotes plasticity are not clear. Here we characterized the maturation of synaptic and intrinsic ionic properties of parvalbumin-positive interneurons, a prominent subtype of inhibitory neuron in the cortex. We used a simple integrate-and-fire model to simulate the influence of maturation of inhibition on associative plasticity rules. We simulated two input pathways that converged onto a single postsynaptic neuron. The temporal pattern of activity was constructed differently for the two pathways: one pathway represented visually-driven activity, while the other pathway represented sensory-deprived activity. In mature circuits it is established that postsynaptic cells can select for sensory-driven inputs over deprived inputs, even in the case that deprived inputs have an initial advantage in synaptic size or number. We demonstrated that maturation of inhibition was required for postsynaptic cells to appropriately select sensory-driven patterns of activity when challenged with an opponent pathway of greater size. These results outline a mechanism by which maturation of inhibition can promote plasticity in the young, a period of development that is characterized by heightened learning.
In this paper we review several ways of realizing asynchronous Spike-Timing-Dependent Plasticity (STDP) using memristors as synapses. Our focus is on how to use individual memristors to implement synaptic weight multiplications, in such a way that it is not necessary to (a) introduce global synchronization or (b) separate memristor learning phases from memristor performance phases. In the approaches described, neurons fire spikes asynchronously when they wish, and memristive synapses perform computation and learn at their own pace, as happens in biological neural systems. We distinguish between two different memristor physics, depending on whether they follow the original “moving wall” model or the “filament creation and annihilation” model. Independent of the memristor physics, we discuss two different types of STDP rules that can be implemented with memristors: either a pure timing-based rule that takes into account the arrival times of the spikes from the pre- and post-synaptic neurons, or a hybrid rule that takes into account only the timing of pre-synaptic spikes together with the membrane potential and other state variables of the post-synaptic neuron. We show how to implement these rules in cross-bar architectures comprising massive arrays of memristors, and we discuss applications in artificial vision.
memristor/cmos; artificial-learning-synapses; spike-timing-dependent-plasticity; spiking-neural-networks
Spike-timing-dependent plasticity (STDP) modifies the weight (or strength) of synaptic connections between neurons and is considered to be crucial for generating network structure. It has been observed in physiology that, in addition to spike timing, the weight update also depends on the current value of the weight. The functional implications of this feature are still largely unclear. Additive STDP gives rise to strong competition among synapses, but due to the absence of weight dependence, it requires hard boundaries to secure the stability of weight dynamics. Multiplicative STDP with linear weight dependence for depression ensures stability, but it lacks sufficiently strong competition required to obtain a clear synaptic specialization. A solution to this stability-versus-function dilemma can be found with an intermediate parametrization between additive and multiplicative STDP. Here we propose a novel solution to the dilemma, named log-STDP, whose key feature is a sublinear weight dependence for depression. Due to its specific weight dependence, this new model can produce significantly broad weight distributions with no hard upper bound, similar to those recently observed in experiments. Log-STDP induces graded competition between synapses, such that synapses receiving stronger input correlations are pushed further in the tail of (very) large weights. Strong weights are functionally important to enhance the neuronal response to synchronous spike volleys. Depending on the input configuration, multiple groups of correlated synaptic inputs exhibit either winner-share-all or winner-take-all behavior. When the configuration of input correlations changes, individual synapses quickly and robustly readapt to represent the new configuration. We also demonstrate the advantages of log-STDP for generating a stable structure of strong weights in a recurrently connected network. These properties of log-STDP are compared with those of previous models. 
Through long-tail weight distributions, log-STDP achieves both stable dynamics for and robust competition of synapses, which are crucial for spike-based information processing.
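The qualitative distinction among the weight dependences discussed above can be sketched as follows; the expressions and constants, including the sublinear branch of the log rule, are illustrative rather than the paper's exact parametrization:

```python
import numpy as np

def f_dep_additive(w):
    """Additive STDP: depression independent of the weight
    (requires hard bounds for stability)."""
    return np.ones_like(np.asarray(w, dtype=float))

def f_dep_multiplicative(w):
    """Multiplicative STDP: depression linear in the weight
    (stable, but only weakly competitive)."""
    return np.asarray(w, dtype=float)

def f_dep_log(w, w0=1.0, alpha=5.0):
    """Log-STDP: depression grows only logarithmically above a
    reference weight w0, leaving a soft, unbounded upper tail."""
    w = np.asarray(w, dtype=float)
    return np.where(w <= w0,
                    w / w0,
                    1.0 + np.log1p(alpha * np.maximum(w / w0 - 1.0, 0.0)) / alpha)
```

The logarithmic saturation lets strongly correlated synapses run ahead into a long tail of large weights while still curbing runaway growth, which is the graded competition described above.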
Spike-timing-dependent plasticity (STDP) has been well established between excitatory neurons, and several computational functions have been proposed for it in various neural systems. Despite some recent efforts, however, there is a significant lack of functional understanding of inhibitory STDP (iSTDP) and its interplay with excitatory STDP (eSTDP). Here, we demonstrate by analytical and numerical methods that iSTDP contributes crucially to the balance of excitatory and inhibitory weights underlying the selection of a specific signaling pathway among others in a feedforward circuit. This pathway selection is based on the high sensitivity of STDP to correlations in spike times, which complements a recent proposal for the role of iSTDP in firing-rate-based selection. Our model predicts that asymmetric anti-Hebbian iSTDP outperforms asymmetric Hebbian iSTDP in supporting pathway-specific balance, which we show is useful for propagating transient neuronal responses. Furthermore, we demonstrate how STDP at excitatory–excitatory, excitatory–inhibitory, and inhibitory–excitatory synapses cooperates to improve the pathway selection. We propose that iSTDP is crucial for shaping the network structure that achieves efficient processing of synchronous spikes.
STDP; spike-timing; plasticity; inhibition; disynaptic; correlation; excitation–inhibition balance
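The two candidate inhibitory rules compared above can be sketched as sign-flipped exponential windows (forms and constants are illustrative):

```python
import math

def hebbian_istdp(dt, a=0.01, tau=20.0):
    """Asymmetric Hebbian iSTDP: pre-before-post (dt = t_post - t_pre > 0)
    strengthens the inhibitory weight."""
    return a * math.exp(-dt / tau) if dt >= 0 else -a * math.exp(dt / tau)

def anti_hebbian_istdp(dt, a=0.01, tau=20.0):
    """Asymmetric anti-Hebbian iSTDP: the sign-flipped rule, predicted
    here to better support pathway-specific E/I balance."""
    return -hebbian_istdp(dt, a, tau)
```

Under the anti-Hebbian rule, inhibition that arrives just before correlated excitatory drive is strengthened, so inhibitory weights track the excitatory weights of the selected pathway.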
Spike timing dependent plasticity (STDP) is a phenomenon in which the precise timing of spikes affects the sign and magnitude of changes in synaptic strength. STDP is often interpreted as the comprehensive learning rule for a synapse – the “first law” of synaptic plasticity. This interpretation is made explicit in theoretical models in which the total plasticity produced by complex spike patterns results from a superposition of the effects of all spike pairs. Although such models are appealing for their simplicity, they can fail dramatically. For example, the measured single-spike learning rule between hippocampal CA3 and CA1 pyramidal neurons does not predict the existence of long-term potentiation, one of the best-known forms of synaptic plasticity. Layers of complexity have been added to the basic STDP model to repair predictive failures, but they have been outstripped by experimental data. We propose an alternate first law: neural activity triggers changes in key biochemical intermediates, which act as a more direct trigger of plasticity mechanisms. One particularly successful model uses intracellular calcium as the intermediate and can account for many observed properties of bidirectional plasticity. In this formulation, STDP is not itself the basis for explaining other forms of plasticity, but is instead a consequence of changes in the biochemical intermediate, calcium. Eventually a mechanism-based framework for learning rules should include other messengers, discrete change at individual synapses, spread of plasticity among neighboring synapses, and priming of hidden processes that change a synapse's susceptibility to future change. Mechanism-based models provide a rich framework for the computational representation of synaptic plasticity.
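The calcium-as-intermediate idea above is often formalized with a two-threshold rule: intermediate calcium levels drive depression, high levels drive potentiation. The sketch below is a minimal, assumed instance of such a rule; the thresholds and rates are illustrative, not fitted values from any particular study.

```python
# Minimal two-threshold calcium-control rule: the weight change depends on
# the intracellular calcium level rather than directly on spike timing.
# THETA_D and THETA_P are assumed, illustrative thresholds.

THETA_D = 0.35  # calcium level above which depression is engaged
THETA_P = 0.55  # higher calcium level at which potentiation dominates

def dw_dt(calcium, w, eta=1.0):
    """Weight change as a function of the current calcium level.

    High calcium -> potentiation (drift toward 1),
    intermediate calcium -> depression (drift toward 0),
    low calcium -> no change.
    """
    if calcium >= THETA_P:
        return eta * (1.0 - w)
    if calcium >= THETA_D:
        return -eta * w
    return 0.0
```

In this formulation, spike-timing effects emerge indirectly: pre-before-post pairings that produce large calcium transients land above `THETA_P` and potentiate, while pairings producing moderate transients land between the thresholds and depress.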
STDP; synaptic plasticity; mechanistic models; calcium; learning rules; long-term depression; long-term potentiation
Spike timing-dependent plasticity (STDP) has been proposed as a mechanism for optimizing the tuning of neurons to sensory inputs, a process that underlies the formation of receptive field properties and associative memories. The properties of STDP must adjust during development to enable neurons to optimally tune their selectivity for environmental stimuli, but these changes are poorly understood. Here we review the properties of STDP and how these may change during development in primary sensory cortical layers 2/3 and 4, initial sites for intracortical processing. We provide a primer discussing postnatal developmental changes in synaptic proteins and neuromodulators that are thought to influence STDP induction and expression. We propose that STDP is shaped by, but also modifies, synapses to produce refinements in neuronal responses to sensory inputs.
spike timing-dependent plasticity; presynaptic NMDA receptor; endocannabinoid; visual cortex; auditory cortex; somatosensory cortex; neuromodulation
The principles by which networks of neurons compute, and how spike-timing dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore, it suggests networks of Bayesian computation modules as a new model for distributed information processing in the cortex.
How do neurons learn to extract information from their inputs, and perform meaningful computations? Neurons receive inputs as continuous streams of action potentials or “spikes” that arrive at thousands of synapses. The strength of these synapses, the synaptic weight, undergoes constant modification. It has been demonstrated in numerous experiments that this modification depends on the temporal order of spikes in the pre- and postsynaptic neuron, a rule known as STDP, but it has remained unclear how this contributes to higher level functions in neural network architectures. In this paper we show that STDP induces autonomous, self-organized learning of probabilistic models of the input in a connectivity motif commonly found in the cortex: the winner-take-all (WTA) network. The resulting function of the neural circuit is Bayesian computation on the input spike trains. Such unsupervised learning has previously been studied extensively on an abstract, algorithmical level. We show that STDP approximates one of the most powerful learning methods in machine learning, Expectation-Maximization (EM). In a series of computer simulations we demonstrate that this enables STDP in WTA circuits to solve complex learning tasks, reaching a performance level that surpasses previous uses of spiking neural networks.
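The EM-like learning described above can be caricatured in a few lines: lateral inhibition selects a stochastic winner, and only the winner's afferent weights are updated. The exponential weight-dependent update below is an assumed, simplified stand-in for the STDP rule analyzed in the text; all constants are illustrative.

```python
import numpy as np

# Toy sketch of spike-based learning in a soft winner-take-all (WTA) circuit.
# Each "output spike" triggers a local weight update on the winning unit only,
# which is what gives the scheme its EM-like, clustering character.

rng = np.random.default_rng(0)
n_in, n_out = 16, 4
w = rng.normal(0.0, 0.1, size=(n_out, n_in))  # afferent weights

def wta_step(x, w, eta=0.05, c=2.0):
    """x: 0/1 vector of recently active inputs. Samples one winner with
    probability proportional to exp(membrane potential), then updates it."""
    u = w @ x
    p = np.exp(u - u.max())
    p /= p.sum()                    # soft WTA via lateral inhibition
    k = rng.choice(n_out, p=p)      # stochastic winner ("who spikes")
    # active inputs are potentiated (less so if already strong),
    # inactive inputs are depressed; losing units are unchanged
    w[k] += eta * np.where(x > 0, c * np.exp(-w[k]) - 1.0, -1.0)
    return k

# Repeatedly presenting two fixed input patterns lets different output
# units specialize on them, a minimal instance of the clustering above.
patterns = np.zeros((2, n_in))
patterns[0, :8] = 1.0
patterns[1, 8:] = 1.0
for t in range(200):
    wta_step(patterns[t % 2], w)
```

The design choice that makes this EM-like is the separation of an E-step (sampling the winner from the posterior implied by the current weights) and an M-step (moving only the winner's weights toward the current input).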
Spike timing-dependent plasticity (STDP) is a cellular model of Hebbian synaptic plasticity which is believed to underlie memory formation. In an attempt to establish an STDP paradigm in CA1 of acute hippocampal slices from juvenile rats (P15–20), we found that changes in excitability resulting from different slice preparation protocols correlate with the success of STDP induction. Slice preparation with sucrose-containing ACSF prolonged rise time, reduced frequency adaptation, and decreased latency of action potentials in CA1 pyramidal neurons compared to preparation in conventional ACSF, while other basal electrophysiological parameters remained unaffected. Whereas we observed prominent timing-dependent long-term potentiation (t-LTP) to 171 ± 10% of controls in conventional ACSF, STDP was absent in sucrose-prepared slices. This sucrose-induced STDP deficit could not be rescued by stronger STDP paradigms, applying either more pre- and/or postsynaptic stimuli, or by a higher stimulation frequency. Importantly, slice preparation with sucrose-containing ACSF did not eliminate theta-burst stimulation induced LTP in CA1 in field potential recordings in our rat hippocampal slices. Application of dopamine (for 10–20 min) to sucrose-prepared slices completely rescued t-LTP and recovered action potential properties back to levels observed in ACSF-prepared slices. Conversely, acute inhibition of D1 receptor signaling impaired t-LTP in ACSF-prepared slices. No restoring effect on STDP comparable to that of dopamine was observed in response to the β-adrenergic agonist isoproterenol. ELISA measurements demonstrated a significant reduction of endogenous dopamine levels (to 61.9 ± 6.9% of ACSF values) in sucrose-prepared slices. These results suggest that dopamine signaling is involved in regulating the efficiency to elicit STDP in CA1 pyramidal neurons.
synaptic plasticity; dopamine; isoproterenol; rat; hippocampal slice
A phenomenological model of synaptic plasticity is able to account for a large body of experimental data on spike-timing-dependent plasticity (STDP). The basic ingredient of the model is the correlation of presynaptic spike arrival with postsynaptic voltage. The local membrane voltage is used twice: a first term accounts for the instantaneous voltage and the second one for a low-pass filtered voltage trace. Spike-timing effects emerge as a special case. We hypothesize that the voltage dependence can explain differential effects of STDP in dendrites, since the amplitude and time course of backpropagating action potentials or dendritic spikes influence the plasticity outcome in the model. The dendritic effects are simulated by variable choices of voltage time course at the site of the synapse, i.e., without an explicit model of the spatial structure of the neuron.
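The two uses of the membrane voltage, once instantaneously and once as a low-pass filtered trace, can be sketched as a single Euler update step. This is a simplified, assumed rendering of a voltage-based rule of this kind; the thresholds, amplitudes, and gating structure are illustrative rather than the published parameter set.

```python
# Sketch of a voltage-based plasticity update that uses the membrane voltage
# twice: the instantaneous value u and a low-pass filtered trace u_bar.
# All constants below are illustrative assumptions (arbitrary units / mV).

A_LTP, A_LTD = 1e-4, 5e-5          # potentiation / depression amplitudes
THETA_MINUS, THETA_PLUS = -70.0, -45.0  # voltage thresholds (mV)

def low_pass(trace, x, tau, dt):
    """One Euler step of a low-pass filter with time constant tau;
    used to maintain the filtered voltage trace u_bar between updates."""
    return trace + dt * (x - trace) / tau

def plasticity_step(w, pre_spike, pre_trace, u, u_bar, dt):
    """Depression is gated by the filtered voltage at presynaptic spike
    arrival; potentiation requires BOTH instantaneous depolarization above
    THETA_PLUS and a filtered trace above THETA_MINUS."""
    ltd = A_LTD * pre_spike * max(u_bar - THETA_MINUS, 0.0)
    ltp = A_LTP * pre_trace * max(u - THETA_PLUS, 0.0) * max(u_bar - THETA_MINUS, 0.0)
    return w - ltd * dt + ltp * dt
```

Because the voltage enters only through `u` and `u_bar` at the synapse, dendritic effects can be explored simply by feeding in different local voltage time courses, exactly as the abstract describes, with no spatial neuron model required.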
synaptic plasticity; computational neuroscience; STDP; LTP; LTD; voltage; model; frequency
Spike-timing-dependent plasticity (STDP) offers a powerful means of forming and modifying neural circuits. Experimental and theoretical studies have demonstrated its potential usefulness for functions as varied as cortical map development, sharpening of sensory receptive fields, working memory, and associative learning. Even so, it is unlikely that STDP works alone. Unless changes in synaptic strength are coordinated across multiple synapses and with other neuronal properties, it is difficult to maintain the stability and functionality of neural circuits. Moreover, there are certain features of early postnatal development (e.g., rapid changes in sensory input) that threaten neural circuit stability in ways that STDP may not be well placed to counter. These considerations have led researchers to investigate additional types of plasticity, complementary to STDP, that may serve to constrain synaptic weights and/or neuronal firing. These are collectively known as “homeostatic plasticity” and include schemes that control the total synaptic strength of a neuron, that modulate its intrinsic excitability as a function of average activity, or that make the ability of synapses to undergo Hebbian modification depend upon their history of use. In this article, we will review the experimental evidence for homeostatic forms of plasticity and consider how they might interact with STDP during development, learning, and memory.
homeostatic plasticity; synaptic scaling; intrinsic plasticity; STDP; BCM; LTP; LTD; stability