In the hippocampus and the neocortex, the coupling between local field potential (LFP) oscillations and the spiking of single neurons can be highly precise, across neuronal populations and cell types. Spike phase (i.e., the spike time with respect to a reference oscillation) is known to carry reliable information, both with phase-locking behavior and with more complex phase relationships, such as phase precession. How this precision is achieved by neuronal populations, whose membrane properties and total input may be quite heterogeneous, is nevertheless unknown. In this note, we investigate a simple mechanism for learning precise LFP-to-spike coupling in feed-forward networks – the reliable, periodic modulation of presynaptic firing rates during oscillations, coupled with spike-timing dependent plasticity (STDP). When oscillations are within the biological range (2–150 Hz), firing rates of the inputs change on a timescale highly relevant to STDP. Through analytic and computational methods, we find points of stable phase-locking for a neuron with plastic input synapses. These points correspond to precise phase-locking behavior in the feed-forward network. The location of these points depends on the oscillation frequency of the inputs, the STDP time constants, and the balance of potentiation and de-potentiation in the STDP rule. For a given input oscillation, the balance of potentiation and de-potentiation in the STDP rule is the critical parameter that determines the phase at which an output neuron will learn to spike. These findings are robust to changes in intrinsic post-synaptic properties. Finally, we discuss implications of this mechanism for stable learning of spike-timing in the hippocampus.
spike-timing dependent plasticity; oscillations; phase-locking; stable learning; stability of neuronal plasticity; place fields
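The mechanism described above can be illustrated with a minimal numerical sketch (all parameters here are hypothetical, not taken from the paper): with sinusoidally rate-modulated Poisson inputs, the expected pairwise-STDP weight drift for a postsynaptic spike depends on the spike's phase, and zero crossings of that drift are candidate phase-locking points.

```python
import numpy as np

# Hypothetical parameters for illustration (not taken from the paper).
A_plus, A_minus = 1.0, 1.05        # potentiation / depression amplitudes
tau_plus = tau_minus = 0.020       # STDP time constants (s)
f, r0 = 8.0, 20.0                  # input oscillation (Hz), mean rate (Hz)

def stdp_window(s):
    """Pairwise STDP window; s = t_post - t_pre, potentiation for s > 0."""
    return np.where(s > 0, A_plus * np.exp(-s / tau_plus),
                    -A_minus * np.exp(s / tau_minus))

def expected_drift(phase):
    """Expected weight change for a postsynaptic spike at a given phase of
    the oscillation, with sinusoidally modulated Poisson inputs:
    the integral of W(s) * r(t_post - s) over the lag s."""
    s = np.linspace(-0.25, 0.25, 50001)
    ds = s[1] - s[0]
    t_pre = phase / (2 * np.pi * f) - s        # pre-spike time at lag s
    rate = r0 * (1.0 + np.cos(2 * np.pi * f * t_pre))
    return float(np.sum(stdp_window(s) * rate) * ds)

phases = np.linspace(0, 2 * np.pi, 64, endpoint=False)
drift = np.array([expected_drift(p) for p in phases])
# Phases where the drift crosses zero are candidate phase-locking points;
# with depression slightly dominating, the drift takes both signs.
```

Shifting the potentiation/depression balance (A_plus vs. A_minus) moves the zero crossings, which is one way to read the paper's claim that this balance sets the learned spike phase.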
Spike-timing-dependent plasticity (STDP) has been observed in many brain areas such as sensory cortices, where it is hypothesized to structure synaptic connections between neurons. Previous studies have demonstrated how STDP can capture spiking information at short timescales using specific input configurations, such as coincident spiking, spike patterns and oscillatory spike trains. However, the corresponding computation in the case of arbitrary input signals is still unclear. This paper provides an overarching picture of the algorithm inherent to STDP, tying together many previous results for commonly used models of pairwise STDP. For a single neuron with plastic excitatory synapses, we show how STDP performs a spectral analysis on the temporal cross-correlograms between its afferent spike trains. The postsynaptic responses and STDP learning window determine kernel functions that specify how the neuron “sees” the input correlations. We thus denote this unsupervised learning scheme as ‘kernel spectral component analysis’ (kSCA). In particular, the whole input correlation structure must be considered since all plastic synapses compete with each other. We find that kSCA is enhanced when weight-dependent STDP induces gradual synaptic competition. For a spiking neuron with a “linear” response and pairwise STDP alone, we find that kSCA resembles principal component analysis (PCA). However, plain STDP does not isolate correlation sources in general, e.g., when they are mixed among the input spike trains. In other words, it does not perform independent component analysis (ICA). Tuning the neuron to a single correlation source can be achieved when STDP is paired with a homeostatic mechanism that reinforces the competition between synaptic inputs. Our results suggest that neuronal networks equipped with STDP can process signals encoded in the transient spiking activity at the timescales of tens of milliseconds for usual STDP.
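The PCA-like regime mentioned above can be illustrated very loosely: under a linear-response assumption, the averaged drift of the weight vector is proportional to an input correlation matrix acting on the weights, so normalized weight dynamics converge to the principal eigenvector. The correlation matrix below is a hypothetical toy, not the paper's kernel construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# Hypothetical input correlation matrix: two groups of correlated afferents,
# the first group more strongly correlated than the second.
C = 0.05 * np.eye(n)
C[:4, :4] += 0.2
C[4:, 4:] += 0.1

w = rng.uniform(0.4, 0.6, n)       # initial synaptic weights
eta = 0.05
for _ in range(2000):
    w += eta * C @ w                              # linearized drift: dw ~ C w
    w = np.clip(w / np.linalg.norm(w), 0.0, 1.0)  # normalization / weight bounds

# The weight vector aligns with the principal eigenvector of C (PCA-like):
pc = np.linalg.eigh(C)[1][:, -1]   # eigh returns eigenvalues in ascending order
pc = np.abs(pc / np.linalg.norm(pc))
```

As the abstract notes, this alignment with a single dominant component is exactly what fails when correlation sources are mixed across inputs, which is why plain pairwise STDP does not perform ICA.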
Tuning feature extraction of sensory stimuli is an important function for synaptic plasticity models. A widely studied example is the development of orientation preference in the primary visual cortex, which can emerge using moving bars in the visual field. A crucial point is the decomposition of stimuli into basic information tokens, e.g., selecting individual bars even though they are presented in overlapping pairs (vertical and horizontal). Among classical unsupervised learning models, independent component analysis (ICA) is capable of isolating basic tokens, whereas principal component analysis (PCA) cannot. This paper focuses on spike-timing-dependent plasticity (STDP), whose functional implications for neural information processing have been intensively studied both theoretically and experimentally in the last decade. Following recent studies demonstrating that STDP can perform ICA for specific cases, we show how STDP relates to PCA or ICA, and in particular explains the conditions under which it switches between them. Here information at the neuronal level is assumed to be encoded in temporal cross-correlograms of spike trains. We find that a linear spiking neuron equipped with pairwise STDP requires additional mechanisms, such as a homeostatic regulation of its output firing, in order to separate mixed correlation sources and thus perform ICA.
Spike-timing dependent plasticity (STDP), a widespread synaptic modification mechanism, is sensitive to correlations between presynaptic spike trains and it generates competition among synapses. However, STDP has an inherent instability because strong synapses are more likely to be strengthened than weak ones, causing them to grow in strength until some biophysical limit is reached. Through simulations and analytic calculations, we show that a small temporal shift in the STDP window that causes synchronous, or nearly synchronous, pre- and postsynaptic action potentials to induce long-term depression can stabilize synaptic strengths. Shifted STDP also stabilizes the postsynaptic firing rate and can implement both Hebbian and anti-Hebbian forms of competitive synaptic plasticity. Interestingly, the overall level of inhibition determines whether plasticity is Hebbian or anti-Hebbian. Even a random symmetric jitter of a few milliseconds in the STDP window can stabilize synaptic strengths while retaining these features. The same results hold for a shifted version of the more recent “triplet” model of STDP. Our results indicate that the detailed shape of the STDP window function near the transition from depression to potentiation is of the utmost importance in determining the consequences of STDP, suggesting that this region warrants further experimental study.
Synaptic plasticity is believed to be a fundamental mechanism of learning and memory. In spike-timing dependent synaptic plasticity (STDP), the temporal order of pre- and postsynaptic spiking across a synapse determines whether it is strengthened or weakened. STDP can induce competition between the different inputs synapsing onto a neuron, which is crucial for the formation of functional neuronal circuits. However, strong synaptic competition is often incompatible with inherent synaptic stability. Synaptic modification by STDP is controlled by a so-called temporal window function that determines how synaptic modification depends on spike timing. We show that a small shift, or random jitter, in the conventional temporal window function used for STDP that is compatible with the underlying molecular kinetics of STDP, can both stabilize synapses and maintain competition. The outcome of the competition is determined by the level of inhibitory input to the postsynaptic neuron. We conclude that the detailed shape of the temporal window function is critical in determining the functional consequences of STDP and thus deserves further experimental study.
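The shifted window described in the two abstracts above can be sketched in a few lines (amplitudes and time constants are hypothetical): a temporal shift of a few milliseconds moves near-coincident pre/post pairings from the potentiation side onto the depression side of the window.

```python
import numpy as np

A_plus, A_minus, tau = 1.0, 1.0, 0.020   # hypothetical amplitudes, 20 ms constant

def stdp_window(dt, shift=0.0):
    """Pairwise STDP window; dt = t_post - t_pre. With shift > 0,
    near-coincident spikes (dt ~ 0) fall on the depression side."""
    s = dt - shift
    return np.where(s > 0, A_plus * np.exp(-s / tau),
                    -A_minus * np.exp(s / tau))

w_conventional = float(stdp_window(0.001))          # near-coincident pair, +1 ms
w_shifted = float(stdp_window(0.001, shift=0.003))  # same pair under a 3 ms shift
```

Because strong synapses tend to drive postsynaptic spikes at short latencies, placing short latencies on the depression side is what gives the shifted rule its stabilizing negative feedback.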
Top-down synapses are ubiquitous throughout neocortex and play a central role in cognition, yet little is known about their development and specificity. During sensory experience, lower neocortical areas are activated before higher ones, causing top-down synapses to experience a preponderance of post-synaptic activity preceding pre-synaptic activity. This timing pattern is the opposite of that experienced by bottom-up synapses, which suggests that different versions of spike-timing dependent synaptic plasticity (STDP) rules may be required at top-down synapses. We consider a two-layer neural network model and investigate which STDP rules can lead to a distribution of top-down synaptic weights that is stable, diverse and avoids strong loops. We introduce a temporally reversed rule (rSTDP) where top-down synapses are potentiated if post-synaptic activity precedes pre-synaptic activity. Combining analytical work and integrate-and-fire simulations, we show that only depression-biased rSTDP (and not classical STDP) produces stable and diverse top-down weights. The conclusions did not change upon addition of homeostatic mechanisms, multiplicative STDP rules or weak external input to the top neurons. Our prediction for rSTDP at top-down synapses, which are distally located, is supported by recent neurophysiological evidence showing the existence of temporally reversed STDP in synapses that are distal to the post-synaptic cell body.
The complex circuitry in the cerebral cortex is characterized by bottom-up connections, which carry feedforward information from the sensory periphery to higher areas, and top-down connections, where the information flow is reversed. Changes over time in the strength of synaptic connections between neurons underlie development, learning and memory. A fundamental mechanism to change synaptic strength is spike timing dependent plasticity, whereby synapses are strengthened whenever pre-synaptic spikes shortly precede post-synaptic spikes and are weakened otherwise; the relative timing of spikes therefore dictates the direction of plasticity. Spike timing dependent plasticity has been observed in multiple species and different brain areas. Here, we argue that top-down connections obey a learning rule with a reversed temporal dependence, which we call reverse spike timing dependent plasticity. We use mathematical analysis and computational simulations to show that this reverse time learning rule, and not previous learning rules, leads to a biologically plausible connectivity pattern with stable synaptic strengths. This reverse time learning rule is supported by recent neuroanatomical and neurophysiological experiments and can explain empirical observations about the development and function of top-down synapses in the brain.
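The temporally reversed rule can be sketched directly (a toy, with a hypothetical 10% depression bias): under classical STDP, post-before-pre timing depresses a synapse, whereas under rSTDP the same timing potentiates it.

```python
import numpy as np

tau = 0.020   # hypothetical 20 ms time constant; 1.1 = depression bias

def stdp(dt):
    """Classical pairwise window; dt = t_post - t_pre."""
    return np.where(dt > 0, np.exp(-dt / tau), -1.1 * np.exp(dt / tau))

def rstdp(dt):
    """Temporally reversed, depression-biased rSTDP: potentiation when
    postsynaptic activity precedes presynaptic activity (dt < 0)."""
    return np.where(dt < 0, np.exp(dt / tau), -1.1 * np.exp(-dt / tau))

# Top-down synapses mostly experience post-before-pre timing (dt < 0):
drift_classical = float(stdp(-0.010))   # depressed under classical STDP
drift_reversed = float(rstdp(-0.010))   # potentiated under rSTDP
```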
Reward-modulated spike-timing-dependent plasticity (STDP) has recently emerged as a candidate for a learning rule that could explain how behaviorally relevant adaptive changes in complex networks of spiking neurons could be achieved in a self-organizing manner through local synaptic plasticity. However, the capabilities and limitations of this learning rule could so far only be tested through computer simulations. This article provides tools for an analytic treatment of reward-modulated STDP, which allows us to predict under which conditions reward-modulated STDP will achieve a desired learning effect. These analytical results imply that neurons can learn through reward-modulated STDP to classify not only spatial but also temporal firing patterns of presynaptic neurons. They also can learn to respond to specific presynaptic firing patterns with particular spike patterns. Finally, the resulting learning theory predicts that even difficult credit-assignment problems, where it is very hard to tell which synaptic weights should be modified in order to increase the global reward for the system, can be solved in a self-organizing manner through reward-modulated STDP. This yields an explanation for a fundamental experimental result on biofeedback in monkeys by Fetz and Baker. In this experiment monkeys were rewarded for increasing the firing rate of a particular neuron in the cortex and were able to solve this extremely difficult credit assignment problem. Our model for this experiment relies on a combination of reward-modulated STDP with variable spontaneous firing activity. Hence it also provides a possible functional explanation for trial-to-trial variability, which is characteristic for cortical networks of neurons but has no analogue in currently existing artificial computing systems. In addition our model demonstrates that reward-modulated STDP can be applied to all synapses in a large recurrent neural network without endangering the stability of the network.
A major open problem in computational neuroscience is to explain how learning, i.e., behaviorally relevant modifications in the central nervous system, can be understood on the basis of experimental data on synaptic plasticity. Spike-timing-dependent plasticity (STDP) is a rule for changes in the strength of an individual synapse that is supported by experimental data from a variety of species. However, it is not clear how this synaptic plasticity rule can produce meaningful modifications in networks of neurons. Only if one takes into account that consolidation of synaptic plasticity requires a third signal, such as changes in the concentration of a neuromodulator (that might, for example, be related to rewards or expected rewards), can meaningful changes in the structure of networks of neurons occur. We provide in this article an analytical foundation for such reward-modulated versions of STDP that predicts when this type of synaptic plasticity can produce functionally relevant changes in networks of neurons. In particular we show that seemingly inexplicable experimental data on biofeedback, where a monkey learnt to increase the firing rate of an arbitrarily chosen neuron in the motor cortex, can be explained on the basis of this new learning theory.
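A common way to formalize reward gating (a sketch with hypothetical parameters, not the article's exact equations) is to let STDP write to a decaying eligibility trace that is converted into an actual weight change only when a delayed reward signal arrives:

```python
import numpy as np

tau_e, dt, eta = 0.5, 0.001, 0.1   # trace time constant (s), time step, learning rate
w, elig = 0.5, 0.0                 # synaptic weight, eligibility trace

for step in range(2000):
    # One pre-before-post pairing at t = 1.0 s; reward arrives at t = 1.3 s.
    stdp_event = 1.0 if step == 1000 else 0.0
    reward = 1.0 if step == 1300 else 0.0
    elig += -dt * elig / tau_e + stdp_event   # STDP writes to the trace, which decays
    w += eta * reward * elig                  # reward gates consolidation of the trace
```

Because the trace decays between the pairing and the reward, the consolidated change is smaller the longer the reward is delayed, which is the basic handle such rules offer on the credit-assignment problem.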
Spike timing-dependent plasticity (STDP) has been shown to enable single neurons to detect repeatedly presented spatiotemporal spike patterns. This holds even when such patterns are embedded in equally dense random spiking activity, that is, in the absence of external reference times such as a stimulus onset. Here we demonstrate, both analytically and numerically, that STDP can also learn repeating rate-modulated patterns, which have received more experimental evidence, for example, through post-stimulus time histograms (PSTHs). Each input spike train is generated from a rate function using a stochastic sampling mechanism, chosen to be an inhomogeneous Poisson process here. Learning is feasible provided significant covarying rate modulations occur within the typical timescale of STDP (∼10–20 ms) for sufficiently many inputs (∼100 among 1000 in our simulations), a condition that is met by many experimental PSTHs. Repeated pattern presentations induce spike-time correlations that are captured by STDP. Despite imprecise input spike times and even variable spike counts, a single trained neuron robustly detects the pattern just a few milliseconds after its presentation. Therefore, temporal imprecision and Poisson-like firing variability are not an obstacle to fast temporal coding. STDP provides an appealing mechanism to learn such rate patterns, which, beyond sensory processing, may also be involved in many cognitive tasks.
In vivo neural responses to stimuli are known to have considerable variability across trials. If the same number of spikes is emitted from trial to trial, the neuron is said to be reliable. If the timing of such spikes is roughly preserved across trials, the neuron is said to be precise. Here we demonstrate both analytically and numerically that the well-established Hebbian learning rule of spike-timing-dependent plasticity (STDP) can learn response patterns despite relatively low reliability (Poisson-like variability) and low temporal precision (10–20 ms). These features are in line with many experimental observations, in which a poststimulus time histogram (PSTH) is evaluated over multiple trials. In our model, however, information is extracted from the relative spike times between afferents, without the need for an absolute reference time such as a stimulus onset. Relevantly, recent experiments show that relative timing is often more informative than absolute timing. Furthermore, the scope of application of our study is not restricted to sensory systems. Taken together, our results suggest a fine temporal resolution for the neural code, and that STDP is an appropriate candidate for encoding and decoding such activity.
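The rate-modulated inputs described in the two abstracts above can be sketched with an inhomogeneous Poisson generator (the thinning method); the rate profile, trial count, and windows below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

def inhom_poisson(rate_fn, t_max, r_max, rng):
    """Inhomogeneous Poisson spike train by thinning: draw candidate spikes
    at the ceiling rate r_max, keep each with probability rate(t) / r_max."""
    n = rng.poisson(r_max * t_max)
    cand = np.sort(rng.uniform(0.0, t_max, n))
    keep = rng.uniform(0.0, 1.0, n) < rate_fn(cand) / r_max
    return cand[keep]

# Hypothetical repeating pattern: a ~20 ms covarying rate bump at t = 0.5 s
# on top of a 20 Hz background, presented on every trial.
rate = lambda t: 20.0 + 180.0 * np.exp(-0.5 * ((t - 0.5) / 0.02) ** 2)
trials = [inhom_poisson(rate, 1.0, 200.0, rng) for _ in range(100)]

# PSTH-style check: mean spike count inside vs. outside the bump window.
in_bump = np.mean([np.sum((s > 0.45) & (s < 0.55)) for s in trials])
baseline = np.mean([np.sum((s > 0.0) & (s < 0.1)) for s in trials])
```

Spike times in the bump window jitter from trial to trial and the per-trial counts vary, which is exactly the imprecise, Poisson-like regime in which the abstracts report that STDP can still learn the pattern.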
Spike timing-dependent plasticity (STDP) modifies synaptic strengths based on timing information available locally at each synapse. Despite this, it induces global structures within a recurrently connected network. We study such structures both through simulations and by analyzing the effects of STDP on pair-wise interactions of neurons. We show how conventional STDP acts as a loop-eliminating mechanism and organizes neurons into in- and out-hubs. Loop-elimination increases when depression dominates and turns into loop-generation when potentiation dominates. STDP with a shifted temporal window such that coincident spikes cause depression enhances recurrent connections and functions as a strict buffering mechanism that maintains a roughly constant average firing rate. STDP with the opposite temporal shift functions as a loop eliminator at low rates and as a potent loop generator at higher rates. In general, studying pairwise interactions of neurons provides important insights about the structures that STDP can produce in large networks.
The connectivity structure in neural networks reflects, at least in part, the long-term effects of synaptic plasticity mechanisms that underlie learning and memory. In one of the most widespread such mechanisms, spike-timing dependent plasticity (STDP), the temporal order of pre- and postsynaptic spiking across a synapse determines whether it is strengthened or weakened. Therefore, the synapses are modified solely based on local information through STDP. However, STDP can give rise to a variety of global connectivity structures in an interconnected neural network. Here, we provide an analytical framework that can predict the global structures that arise from STDP in such a network. The analytical technique we develop is actually quite simple, and involves the study of two interconnected neurons receiving inputs from their surrounding network. Following analytical calculations for a variety of different STDP models, we test and verify all our predictions through full network simulations. More importantly, the developed analytical tool will allow other researchers to figure out what arises from any other type of STDP in a network.
Spike-Timing Dependent Plasticity (STDP) is characterized by a wide range of temporal kernels. However, much of the theoretical work has focused on a specific kernel – the “temporally asymmetric Hebbian” learning rules. Previous studies linked excitatory STDP to positive feedback that can account for the emergence of response selectivity. Inhibitory plasticity was associated with negative feedback that can balance the excitatory and inhibitory inputs. Here we study the possible computational role of the temporal structure of the STDP. We represent the STDP as a superposition of two processes: potentiation and depression. This allows us to model a wide range of experimentally observed STDP kernels, from Hebbian to anti-Hebbian, by varying a single parameter. We investigate STDP dynamics of a single excitatory or inhibitory synapse in a purely feed-forward architecture. We derive mean-field Fokker-Planck dynamics for the synaptic weight and analyze the effect of STDP structure on the fixed points of the mean field dynamics. Along the Hebbian-to-anti-Hebbian parameter we find a phase transition from a phase that is characterized by a unimodal distribution of the synaptic weight, in which the STDP dynamics is governed by negative feedback, to a phase with positive feedback characterized by a bimodal distribution. The critical point of this transition depends on general properties of the STDP dynamics and not on the fine details. Namely, the dynamics is affected by the pre-post correlations only via a single number that quantifies their overlap with the STDP kernel. We find that by manipulating the STDP temporal kernel, negative feedback can be induced in excitatory synapses and positive feedback in inhibitory ones. Moreover, there is an exact symmetry between inhibitory and excitatory plasticity, i.e., for every STDP rule at an inhibitory synapse there exists an STDP rule for an excitatory synapse such that their dynamics are identical.
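The “single number” dependence can be sketched as the overlap between the STDP kernel and the pre-post correlogram. With an illustrative parameterization (α = 1 Hebbian, α = 0 anti-Hebbian; exponential kernels and time constants are assumptions, not the paper's), the overlap changes sign at a critical α:

```python
import numpy as np

tau = 0.020                          # hypothetical STDP time constant (s)
s = np.linspace(-0.2, 0.2, 40001)    # lag axis, dt = t_post - t_pre
ds = s[1] - s[0]

def w_hebb(dt):
    """Classical temporally asymmetric Hebbian kernel."""
    return np.where(dt > 0, np.exp(-dt / tau), -np.exp(dt / tau))

def kernel(dt, alpha):
    """Superposition of potentiation and depression processes:
    alpha = 1 is Hebbian, alpha = 0 anti-Hebbian (illustrative form)."""
    return alpha * w_hebb(dt) + (1.0 - alpha) * w_hebb(-dt)

# Causal pre-before-post correlations (e.g., an EPSP-shaped correlogram).
gamma = np.where(s > 0, np.exp(-s / 0.010), 0.0)

def overlap(alpha):
    """The single number through which correlations enter the dynamics."""
    return float(np.sum(kernel(s, alpha) * gamma) * ds)
```

Here the overlap is positive at α = 1 (positive feedback), negative at α = 0 (negative feedback), and vanishes at α = 0.5, mimicking the kind of critical point described in the abstract.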
The brain can learn and detect mixed input signals masked by various types of noise, and spike-timing-dependent plasticity (STDP) is the candidate synaptic level mechanism. Because sensory inputs typically have spike correlation, and local circuits have dense feedback connections, input spikes cause the propagation of spike correlation in lateral circuits; however, it is largely unknown how this secondary correlation generated by lateral circuits influences learning processes through STDP, or whether it is beneficial to achieve efficient spike-based learning from uncertain stimuli. To explore the answers to these questions, we construct models of feedforward networks with lateral inhibitory circuits and study how propagated correlation influences STDP learning, and what kind of learning algorithm such circuits achieve. We derive analytical conditions at which neurons detect minor signals with STDP, and show that depending on the origin of the noise, different correlation timescales are useful for learning. In particular, we show that non-precise spike correlation is beneficial for learning in the presence of cross-talk noise. We also show that by considering excitatory and inhibitory STDP at lateral connections, the circuit can acquire a lateral structure optimal for signal detection. In addition, we demonstrate that the model performs blind source separation in a manner similar to the sequential sampling approximation of the Bayesian independent component analysis algorithm. Our results provide a basic understanding of STDP learning in feedback circuits by integrating analyses from both dynamical systems and information theory.
In natural environments, although sensory inputs are often highly mixed with one another and obscured by noise, animals can detect and learn discrete signals from this mixture. For example, humans easily detect the mention of their names from across a noisy room, a phenomenon known as the cocktail party effect. Spike-timing-dependent plasticity (STDP) is a learning mechanism ubiquitously observed in the brain across various species and is considered to be the neural basis of such learning; however, it is still unclear how STDP enables efficient learning from uncertain stimuli and whether spike-based learning offers benefits beyond those provided by standard machine learning methods for signal decomposition. To begin to answer these questions, we conducted analytical and simulation studies examining the propagation of spike correlation in feedback neural circuits. We show that non-precise spike correlation is useful for handling noise during the learning process. Our results also suggest that neural circuits make use of stochastic membrane dynamics to approximate computationally complex Bayesian learning algorithms, progressing our understanding of the principles of stochastic computation by the brain.
Our nervous system can efficiently recognize objects in spite of changes in contextual variables such as perspective or lighting conditions. Several lines of research have proposed that this ability for invariant recognition is learned by exploiting the fact that object identities typically vary more slowly in time than contextual variables or noise. Here, we study the question of how this “temporal stability” or “slowness” approach can be implemented within the limits of biologically realistic spike-based learning rules. We first show that slow feature analysis, an algorithm that is based on slowness, can be implemented in linear continuous model neurons by means of a modified Hebbian learning rule. This approach provides a link to the trace rule, which is another implementation of slowness learning. Then, we show analytically that for linear Poisson neurons, slowness learning can be implemented by spike-timing–dependent plasticity (STDP) with a specific learning window. By studying the learning dynamics of STDP, we show that for functional interpretations of STDP, it is not the learning window alone that is relevant but rather the convolution of the learning window with the postsynaptic potential. We then derive STDP learning windows that implement slow feature analysis and the “trace rule.” The resulting learning windows are compatible with physiological data both in shape and timescale. Moreover, our analysis shows that the learning window can be split into two functionally different components that are sensitive to reversible and irreversible aspects of the input statistics, respectively. The theory indicates that irreversible input statistics are not in favor of stable weight distributions but may generate oscillatory weight dynamics. Our analysis offers a novel interpretation for the functional role of STDP in physiological neurons.
Neurons interact by exchanging information via small connection sites, so-called synapses. Interestingly, the efficiency of synapses in transmitting neuronal signals is not static, but changes dynamically depending on the signals that the associated neurons emit. As neurons receive thousands of synaptic input signals, they can thus “choose” the input signals they are interested in by adjusting their synapses accordingly. This adaptation mechanism, known as synaptic plasticity, has long been hypothesized to form the neuronal correlate of learning. It raises a difficult question: what aspects of the input signals are the neurons interested in, given that the adaptation of the synapses follows a certain mechanistic rule? We address this question for spike-timing–dependent plasticity, a type of synaptic plasticity that has raised a lot of interest in the last decade. We show that under certain assumptions regarding neuronal information transmission, spike-timing–dependent plasticity focuses on aspects of the input signals that vary slowly in time. This relates spike-timing–dependent plasticity to a class of abstract learning rules that were previously proposed as a means of learning to recognize objects in spite of contextual changes such as size or position. Based on this link, we propose a novel functional interpretation of spike-timing–dependent plasticity.
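The claim above that the functionally relevant object is the learning window convolved with the postsynaptic potential can be checked numerically (exponential kernels and time constants are assumptions for illustration):

```python
import numpy as np

dt = 1e-4
t = np.arange(-0.2, 0.2, dt)             # lag axis (s)

tau_w, tau_psp = 0.020, 0.010            # hypothetical time constants
window = np.where(t > 0, np.exp(-t / tau_w), -np.exp(t / tau_w))  # raw STDP window
psp = np.where(t >= 0, np.exp(-t / tau_psp), 0.0)                 # causal EPSP kernel

# Effective kernel governing the learning dynamics:
# the learning window convolved with the postsynaptic potential.
eff = np.convolve(window, psp, mode="same") * dt

i0 = np.searchsorted(t, 0.0)             # index of zero lag
# The raw window changes sign at zero lag, but the effective kernel's
# zero crossing is shifted toward positive lags by the PSP's causal filtering.
```

In other words, two learning windows that look similar can behave very differently once filtered through the neuron's response, which is why functional interpretations should be read off `eff` rather than `window`.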
The principles by which networks of neurons compute, and how spike-timing dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore, it suggests networks of Bayesian computation modules as a new model for distributed information processing in the cortex.
How do neurons learn to extract information from their inputs and perform meaningful computations? Neurons receive inputs as continuous streams of action potentials or “spikes” that arrive at thousands of synapses. The strength of these synapses - the synaptic weight - undergoes constant modification. It has been demonstrated in numerous experiments that this modification depends on the temporal order of spikes in the pre- and postsynaptic neuron, a rule known as STDP, but it has remained unclear how this contributes to higher-level functions in neural network architectures. In this paper we show that, in a connectivity motif commonly found in the cortex - a winner-take-all (WTA) network - STDP induces autonomous, self-organized learning of probabilistic models of the input. The resulting function of the neural circuit is Bayesian computation on the input spike trains. Such unsupervised learning has previously been studied extensively on an abstract, algorithmic level. We show that STDP approximates one of the most powerful learning methods in machine learning, Expectation-Maximization (EM). In a series of computer simulations we demonstrate that this enables STDP in WTA circuits to solve complex learning tasks, reaching a performance level that surpasses previous uses of spiking neural networks.
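A toy sketch of the STDP-EM correspondence (hypothetical binary data and parameters, not the papers' spiking model): soft winner-take-all posteriors play the role of the E-step, while local, STDP-like drift of each unit's weights and excitabilities plays the role of an online M-step. The EM property checked here is that the data likelihood improves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hidden causes, each a noisy binary pattern over 10 input channels.
patterns = np.array([[1.] * 5 + [0.] * 5, [0.] * 5 + [1.] * 5])

def sample(rng):
    z = rng.integers(2)
    return (rng.uniform(size=10) < 0.8 * patterns[z] + 0.1).astype(float)

def avg_loglik(mu, pi, data):
    """Average log-likelihood under the mixture model (log-sum-exp)."""
    ll = data @ np.log(mu.T) + (1 - data) @ np.log(1 - mu.T) + np.log(pi)
    m = ll.max(axis=1, keepdims=True)
    return float(np.mean(m[:, 0] + np.log(np.exp(ll - m).sum(axis=1))))

K, eta = 2, 0.05
mu = rng.uniform(0.3, 0.7, (K, 10))   # per-unit input preferences (weights)
pi = np.ones(K) / K                   # unit excitabilities -> prior probabilities

test_set = np.array([sample(rng) for _ in range(500)])
ll_before = avg_loglik(mu, pi, test_set)

for _ in range(5000):
    x = sample(rng)
    # E-step: soft winner-take-all (posterior over hidden causes).
    ll = x @ np.log(mu.T) + (1 - x) @ np.log(1 - mu.T) + np.log(pi)
    r = np.exp(ll - ll.max()); r /= r.sum()
    # Online M-step: STDP-like drift of each unit's weights toward the
    # inputs it wins, plus excitability adaptation of the priors.
    mu = np.clip(mu + eta * r[:, None] * (x - mu), 0.01, 0.99)
    pi += eta * (r - pi)

ll_after = avg_loglik(mu, pi, test_set)
```

Each update uses only the unit's own input and its win probability, mirroring the locality of STDP, yet the circuit as a whole fits an implicit generative model of the input.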
Adaptive sensory processing influences the central nervous system's interpretation of incoming sensory information. One of the functions of this adaptive sensory processing is to allow the nervous system to ignore predictable sensory information so that it may focus on important novel information needed to improve performance of specific tasks. The mechanism of spike-timing-dependent plasticity (STDP) has proven to be intriguing in this context because of its dual role in long-term memory and ongoing adaptation to maintain optimal tuning of neural responses. Some of the clearest links between STDP and adaptive sensory processing have come from in vitro, in vivo, and modeling studies of the electrosensory systems of weakly electric fish. Plasticity in these systems is anti-Hebbian, so that presynaptic inputs that repeatedly precede, and possibly could contribute to, a postsynaptic neuron's firing are weakened. The learning dynamics of anti-Hebbian STDP learning rules are stable if the timing relations obey strict constraints. The stability of these learning rules leads to clear predictions of how functional consequences can arise from the detailed structure of the plasticity. Here we review the connection between theoretical predictions and functional consequences of anti-Hebbian STDP, focusing on adaptive processing in the electrosensory system of weakly electric fish. After introducing electrosensory adaptive processing and the dynamics of anti-Hebbian STDP learning rules, we address issues of predictive sensory cancelation and novelty detection, descending control of plasticity, synaptic scaling, and optimal sensory tuning. We conclude with examples in other systems where these principles may apply.
electrosensory; mormyrid; learning dynamics; stability; descending control; stochastic
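The predictive cancelation reviewed above can be sketched in a few lines: anti-Hebbian updates of corollary-discharge synapses build a “negative image” of the predictable, self-generated input, leaving only novel signals in the residual. The time bins, input waveform, and learning rate are hypothetical.

```python
import numpy as np

T = 100                                      # time bins in one cycle
pred = np.sin(2 * np.pi * np.arange(T) / T)  # predictable self-generated input
w = np.zeros(T)                              # one corollary-discharge weight per bin
eta = 0.1

for _ in range(200):                         # repeated, identical cycles
    post = pred + w                          # postsynaptic response in each bin
    w -= eta * post                          # anti-Hebbian: inputs that precede
                                             # firing are weakened

negative_image = w                           # converges to -pred
residual = pred + w                          # predictable input is canceled
```

Because the update is driven by the residual itself, learning stops exactly when the prediction is perfect, which is the stability property the review emphasizes for anti-Hebbian rules with appropriate timing.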
Spike timing-dependent plasticity (STDP) is considered a ubiquitous rule for associative plasticity in cortical networks in vitro. However, only limited evidence supports its functional role in vivo. In particular, there are very few studies demonstrating the co-occurrence of synaptic efficacy changes and alteration of sensory responses in adult cortex during Hebbian or STDP protocols. We addressed this issue by reviewing and comparing the functional effects of two types of cellular conditioning in cat visual cortex. The first, referred to as the “covariance” protocol, obeys a generalized Hebbian framework by imposing, for different stimuli, supervised positive and negative changes in the covariance between postsynaptic and presynaptic activity rates. The second protocol, based on intracellular recordings, replicated in vivo variants of the theta-burst paradigm (TBS), proven successful in inducing long-term potentiation in vitro. Since it was shown to impose a precise correlation delay between the electrically activated thalamic input and the TBS-induced postsynaptic spike, this protocol can be seen as a probe of causal (“pre-before-post”) STDP. By choosing as the afferent site for supervised electrical stimulation a thalamic region whose visual field representation was in retinotopic overlap with the intracellularly recorded cortical receptive field, this protocol allowed us to look for possible correlates between STDP and functional reorganization of the conditioned cortical receptive field. The rate-based “covariance” protocol induced significant and large-amplitude changes in receptive field properties, in both kitten and adult V1 cortex. The TBS STDP-like protocol produced in the adult significant changes in the synaptic gain of the electrically activated thalamic pathway, but the statistical significance of the functional correlates was detectable mostly at the population level.
Comparison of our observations with the literature leads us to re-examine the experimental status of spike timing-dependent potentiation in adult cortex. We propose the existence of a correlation-based threshold in vivo that limits the expression of STDP-induced changes outside the critical period and accounts for the stability of synaptic weights during sensory cortical processing in the absence of attention or reward-gated supervision.
Hebb; intracellular; correlation; potentiation; depression; receptive field; V1; adult plasticity
Spike-frequency adaptation is known to enhance the transmission of information in sensory spiking neurons by rescaling the dynamic range for input processing, matching it to the temporal statistics of the sensory stimulus. Achieving maximal information transmission has also been recently postulated as a role for spike-timing-dependent plasticity (STDP). However, the link between optimal plasticity and STDP in cortex remains loose, as does the relationship between STDP and adaptation processes. We investigate how STDP, as described by recent minimal models derived from experimental data, influences the quality of information transmission in an adapting neuron. We show that a phenomenological model based on triplets of spikes yields almost the same information rate as an optimal model specially designed to this end. In contrast, the standard pair-based model of STDP does not improve information transmission as much. This result holds not only for additive STDP with hard weight bounds, known to produce bimodal distributions of synaptic weights, but also for weight-dependent STDP in the context of unimodal but skewed weight distributions. We analyze the similarities between the triplet model and the optimal learning rule, and find that the triplet effect is an important feature of the optimal model when the neuron is adaptive. If STDP is optimized for information transmission, it must take into account the dynamical properties of the postsynaptic cell, which might explain the target-cell specificity of STDP. In particular, it accounts for the differences found in vitro between STDP at excitatory synapses onto principal cells and those onto fast-spiking interneurons.
STDP; plasticity; spike-frequency adaptation; information theory; optimality
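The pair-based and triplet rules compared above can be sketched in a few lines. The following event-driven implementation is written in the spirit of minimal triplet STDP (a presynaptic trace, a fast postsynaptic trace for depression, and a slow postsynaptic trace that boosts potentiation); the parameter values are illustrative assumptions, not fitted values from the study:

```python
import numpy as np

def triplet_stdp(pre_times, post_times, a2_plus=5e-3, a2_minus=7e-3, a3_plus=6e-3,
                 tau_plus=16.8, tau_minus=33.7, tau_y=114.0):
    """Event-driven sketch of a minimal triplet STDP rule (all times in ms).

    Each presynaptic spike depresses the weight in proportion to a fast
    postsynaptic trace; each postsynaptic spike potentiates in proportion to
    the presynaptic trace, with an extra boost from a slow postsynaptic trace
    (the "triplet" term). Parameter values are illustrative only.
    """
    events = sorted([(t, 0) for t in pre_times] + [(t, 1) for t in post_times])
    r1 = o1 = o2 = 0.0          # pre trace, fast post trace, slow post trace
    t_last = events[0][0]
    dw = 0.0
    for t, is_post in events:
        elapsed = t - t_last
        r1 *= np.exp(-elapsed / tau_plus)
        o1 *= np.exp(-elapsed / tau_minus)
        o2 *= np.exp(-elapsed / tau_y)
        t_last = t
        if is_post:
            dw += r1 * (a2_plus + a3_plus * o2)   # o2 is read before its own update
            o1 += 1.0
            o2 += 1.0
        else:
            dw -= o1 * a2_minus
            r1 += 1.0
    return dw
```

With `a3_plus = 0` this reduces to the standard pair-based rule; the triplet term makes potentiation grow with recent postsynaptic firing, which is the feature the abstract links to the adapting neuron's dynamics.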
A computationally rich algorithm of synaptic plasticity has been proposed based on the experimental observation that the sign and amplitude of the change in synaptic weight are dictated by the temporal order and temporal contiguity between pre- and postsynaptic activities. For more than a decade, this spike-timing-dependent plasticity (STDP) has been studied mainly in brain slices of different brain structures and in cultured neurons. Although not yet compelling, evidence for the STDP rule in the intact brain, including primary sensory cortices, has been provided lately. From insects to mammals, the presentation of precisely timed sensory inputs drives synaptic and functional plasticity in the intact central nervous system, with timing requirements similar to those of the in vitro-defined STDP rule. The convergent evolution of this plasticity rule in species belonging to such distant phylogenetic groups points to the efficiency of STDP, as a mechanism for modifying synaptic weights, as the basis of activity-dependent development, learning and memory. In spite of the ubiquity of STDP phenomena, a number of significant variations of the rule are observed in different structures, neuronal types and even synapses on the same neuron, as well as between in vitro and in vivo conditions. In addition, the state of the neuronal network, its ongoing activity and the activation of ascending neuromodulatory systems in different behavioral conditions have dramatic consequences for the expression of spike-timing-dependent synaptic plasticity, and should be further explored.
Hebb; STDP; in vivo; ongoing activity; synaptic plasticity; learning
Vestibulo-ocular reflex (VOR) gain adaptation, a longstanding experimental model of cerebellar learning, utilizes sites of plasticity in both cerebellar cortex and brainstem. However, the mechanisms by which the activity of cortical Purkinje cells may guide synaptic plasticity in brainstem vestibular neurons are unclear. Theoretical analyses indicate that vestibular plasticity should depend upon the correlation between Purkinje cell and vestibular afferent inputs, so that, in gain-down learning for example, increased cortical activity should induce long-term depression (LTD) at vestibular synapses.
Here we expressed this correlational learning rule in its simplest form, as an anti-Hebbian, heterosynaptic spike-timing dependent plasticity interaction between excitatory (vestibular) and inhibitory (floccular) inputs converging on medial vestibular nucleus (MVN) neurons (input-spike-timing dependent plasticity, iSTDP). To test this rule, we stimulated vestibular afferents to evoke EPSCs in rat MVN neurons in vitro. Control EPSC recordings were followed by an induction protocol where membrane hyperpolarizing pulses, mimicking IPSPs evoked by flocculus inputs, were paired with single vestibular nerve stimuli. A robust LTD developed at vestibular synapses when the afferent EPSPs coincided with membrane hyperpolarization, while EPSPs occurring before or after the simulated IPSPs induced no lasting change. Furthermore, the iSTDP rule also successfully predicted the effects of a complex protocol using EPSP trains designed to mimic classical conditioning.
These results, in strong support of theoretical predictions, suggest that the cerebellum alters the strength of vestibular synapses on MVN neurons through heterosynaptic, anti-Hebbian iSTDP. Since the iSTDP rule does not depend on post-synaptic firing, it suggests a possible mechanism for VOR adaptation without compromising gaze-holding and VOR performance in vivo.
Since the discovery of place cells – single pyramidal neurons that encode spatial location – it has been hypothesized that the hippocampus may act as a cognitive map of known environments. This putative function has been extensively modeled using auto-associative networks, which utilize rate-coded synaptic plasticity rules in order to generate strong bi-directional connections between concurrently active place cells that encode neighboring place fields. However, empirical studies using hippocampal cultures have demonstrated that the magnitude and direction of changes in synaptic strength can also be dictated by the relative timing of pre- and post-synaptic firing according to a spike-timing dependent plasticity (STDP) rule. Furthermore, electrophysiology studies have identified persistent “theta-coded” temporal correlations in place cell activity in vivo, characterized by phase precession of firing as the corresponding place field is traversed. It is not yet clear whether STDP and theta-coded neural dynamics are compatible with cognitive map theory and previous rate-coded models of spatial learning in the hippocampus. Here, we demonstrate that an STDP rule based on empirical data obtained from the hippocampus can mediate rate-coded Hebbian learning when pre- and post-synaptic activity is stochastic and has no persistent sequence bias. We subsequently demonstrate that a spiking recurrent neural network that utilizes this STDP rule, alongside theta-coded neural activity, allows the rapid development of a cognitive map during directed or random exploration of an environment of overlapping place fields. Hence, we establish that STDP and phase precession are compatible with rate-coded models of cognitive map development.
STDP; hippocampus; spatial memory; synaptic plasticity; auto-associative network; phase precession; navigation
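Why an STDP rule can mediate rate-coded Hebbian learning under sequence-unbiased firing is visible from the integral of the STDP window: when spike-pair lags are symmetrically distributed, the mean weight drift of co-active cells is proportional to that integral. A minimal numerical check with a potentiation-biased exponential window (all parameter values here are illustrative assumptions, not the empirical rule from the paper):

```python
import numpy as np

# Potentiation-biased exponential STDP window (illustrative parameters).
a_plus, a_minus = 0.01, 0.005
tau_plus, tau_minus = 20.0, 20.0          # ms

lags = np.linspace(-500.0, 500.0, 200001)  # pre-post lag axis (ms)
window = np.where(lags >= 0,
                  a_plus * np.exp(-lags / tau_plus),     # pre-before-post: LTP
                  -a_minus * np.exp(lags / tau_minus))   # post-before-pre: LTD
numeric_integral = window.sum() * (lags[1] - lags[0])    # rectangle-rule integral
closed_form = a_plus * tau_plus - a_minus * tau_minus    # > 0: net potentiation
```

Because the integral is positive, concurrently active pre/post cells strengthen their connection on average regardless of spike order, i.e., the spike-based rule behaves like a rate-coded Hebbian rule at the population level.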
Spike timing-dependent plasticity (STDP) is a cellular model of Hebbian synaptic plasticity which is believed to underlie memory formation. In an attempt to establish a STDP paradigm in CA1 of acute hippocampal slices from juvenile rats (P15–20), we found that changes in excitability resulting from different slice preparation protocols correlate with the success of STDP induction. Slice preparation with sucrose containing ACSF prolonged rise time, reduced frequency adaptation, and decreased latency of action potentials in CA1 pyramidal neurons compared to preparation in conventional ACSF, while other basal electrophysiological parameters remained unaffected. Whereas we observed prominent timing-dependent long-term potentiation (t-LTP) to 171 ± 10% of controls in conventional ACSF, STDP was absent in sucrose prepared slices. This sucrose-induced STDP deficit could not be rescued by stronger STDP paradigms, applying either more pre- and/or postsynaptic stimuli, or by a higher stimulation frequency. Importantly, slice preparation with sucrose containing ACSF did not eliminate theta-burst stimulation induced LTP in CA1 in field potential recordings in our rat hippocampal slices. Application of dopamine (for 10–20 min) to sucrose prepared slices completely rescued t-LTP and recovered action potential properties back to levels observed in ACSF prepared slices. Conversely, acute inhibition of D1 receptor signaling impaired t-LTP in ACSF prepared slices. No restoring effect on STDP similar to that seen with dopamine was observed in response to the β-adrenergic agonist isoproterenol. ELISA measurements demonstrated a significant reduction of endogenous dopamine levels (to 61.9 ± 6.9% of ACSF values) in sucrose prepared slices. These results suggest that dopamine signaling is involved in regulating the efficiency of STDP induction in CA1 pyramidal neurons.
synaptic plasticity; dopamine; isoproterenol; rat; hippocampal slice
Spike timing dependent plasticity (STDP) has been observed experimentally in vitro and is a widely studied neural algorithm for synaptic modification. While the functional role of STDP has been investigated extensively, the effect of rhythms on the precise timing of STDP has not been characterized as well. We use a simplified biophysical model of a cortical network that generates pyramidal-interneuronal gamma rhythms (PING). Plasticity via STDP is investigated at the synapse onto an excitatory pyramidal cell from a gamma-frequency (30–90 Hz) input that is independent of the network gamma rhythm. The input may represent a corticocortical or an information-specific thalamocortical connection. Transmission at this synapse is mediated by N-methyl-D-aspartate receptor (NMDAR) currents. For distinct network and input frequencies, the model shows robust frequency regimes of potentiation and depression, providing a mechanism by which responses to certain inputs can potentiate while responses to other inputs depress. For potentiating regimes, the model suggests an optimal amount and duration of plasticity that can occur, which depends on the time course for the decay of the postsynaptic NMDAR current. Prolonging the duration of the input beyond this optimal time results in depression. Inserting pauses in the input can increase the total potentiation. The optimal pause length corresponds to the decay time of the NMDAR current. Thus, STDP in this model provides a mechanism for potentiation and depression depending on input frequency and suggests that the slow NMDAR current decay helps to regulate the optimal amplitude and duration of the plasticity. The optimal pause length is comparable to the time scale of the negative phase of a modulatory theta rhythm, which may pause gamma rhythm spiking. Our pause results may suggest a novel role for this theta rhythm in plasticity. Finally, we discuss our results in the context of auditory thalamocortical plasticity.
Rhythms are well studied phenomena in many animal species. Brain rhythms in the gamma frequency range (30–90 Hz) are thought to play a role in attention and memory. In this paper, we are interested in how cortical gamma rhythms interact with information specific inputs that also have a significant gamma frequency component. The results from our computational model show that plasticity associated with learning depends on the specific frequencies of the input and cortical gamma rhythms. The results show a mechanism by which both increases and decreases in the strength of the input connection can occur, depending on the specific frequency of the input. A current mediated by NMDA receptors may be responsible for the temporal course of the plasticity seen in these brain regions. We discuss the implications of our results for conditioning paradigms applied to auditory learning.
Spike timing-dependent plasticity (STDP) is a computationally powerful form of plasticity in which synapses are strengthened or weakened according to the temporal order and precise millisecond-scale delay between presynaptic and postsynaptic spiking activity. STDP is readily observed in vitro, but evidence for STDP in vivo is scarce. Here, we studied spike timing-dependent synaptic depression in single putative pyramidal neurons of the rat primary somatosensory cortex (S1) in vivo, using two techniques. First, we recorded extracellularly from layer 2/3 (L2/3) and L5 neurons, and paired spontaneous action potentials (postsynaptic spikes) with subsequent subthreshold deflection of one whisker (to drive presynaptic afferents to the recorded neuron) to produce “post-leading-pre” spike pairings at known delays. Short delay pairings (<17 ms) resulted in a significant decrease of the extracellular spiking response specific to the paired whisker, consistent with spike timing-dependent synaptic depression. Second, in whole-cell recordings from neurons in L2/3, we paired postsynaptic spikes elicited by direct-current injection with subthreshold whisker deflection to drive presynaptic afferents to the recorded neuron at precise temporal delays. Post-leading-pre pairing (<33 ms delay) decreased the slope and amplitude of the PSP evoked by the paired whisker, whereas “pre-leading-post” delays failed to produce depression, and sometimes produced potentiation of whisker-evoked PSPs. These results demonstrate that spike timing-dependent synaptic depression occurs in S1 in vivo, and is therefore a plausible plasticity mechanism in the sensory cortex.
spike-timing dependent plasticity; STDP; somatosensory cortex; plasticity; rat; synaptic depression; LTP; LTD; barrel
It is widely accepted that the direction and magnitude of synaptic plasticity depends on post-synaptic calcium flux, where high levels of calcium lead to long-term potentiation and moderate levels lead to long-term depression. At synapses onto neurons in region CA1 of the hippocampus (and many other synapses), NMDA receptors provide the relevant source of calcium. In this regard, post-synaptic calcium captures the coincidence of pre- and post-synaptic activity, due to the blockage of these receptors at low voltage. Previous studies show that under spike timing dependent plasticity (STDP) protocols, potentiation at CA1 synapses requires post-synaptic bursting and an inter-pairing frequency in the range of the hippocampal theta rhythm. We hypothesize that these requirements reflect the saturation of the mechanisms of calcium extrusion from the post-synaptic spine. We test this hypothesis with a minimal model of NMDA receptor-dependent plasticity, simulating slow extrusion with a calcium-dependent calcium time constant. In simulations of STDP experiments, the model accounts for latency-dependent depression with either post-synaptic bursting or theta-frequency pairing (or neither) and accounts for latency-dependent potentiation when both of these requirements are met. The model makes testable predictions for STDP experiments and our simple implementation is tractable at the network level, demonstrating associative learning in a biophysical network model with realistic synaptic dynamics.
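The saturating-extrusion hypothesis can be sketched with forward-Euler integration of a single calcium variable whose extrusion time constant grows with calcium. Amplitudes and time constants below are arbitrary illustrations, not the paper's fitted model:

```python
import numpy as np

def ca_trace(spike_times, dt=0.1, t_end=1000.0, tau0=20.0, k_sat=1.0, influx=1.0):
    """Spine calcium with calcium-dependent extrusion, tau(Ca) = tau0*(1 + Ca/k_sat),
    a toy stand-in for saturable pumps; each spike adds a fixed NMDAR-like influx."""
    t = np.arange(0.0, t_end, dt)
    spike_idx = set(int(round(s / dt)) for s in spike_times)
    ca = np.zeros_like(t)
    for i in range(1, len(t)):
        tau = tau0 * (1.0 + ca[i - 1] / k_sat)   # extrusion slows as Ca rises
        ca[i] = ca[i - 1] * (1.0 - dt / tau)     # forward-Euler decay step
        if i in spike_idx:
            ca[i] += influx
    return ca
```

A short burst drives calcium well above the single-spike level because extrusion saturates, and theta-spaced pairings leave a small residual between events that sparser pairings do not; both effects shift which protocols reach a putative LTP threshold, mirroring the bursting and theta-frequency requirements discussed above.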
Calcium through NMDA receptors (NMDARs) is necessary for the long-term potentiation (LTP) of synaptic strength; however, NMDARs differ in several properties that can influence the amount of calcium influx into the spine. These properties, such as sensitivity to magnesium block and conductance decay kinetics, change the receptor's response to spike timing dependent plasticity (STDP) protocols, and thereby shape synaptic integration and information processing. This study investigates the role of GluN2 subunit differences on spine calcium concentration during several STDP protocols in a model of a striatal medium spiny projection neuron (MSPN). The multi-compartment, multi-channel model exhibits firing frequency, spike width, and latency to first spike similar to current-clamp data from mouse dorsal striatum MSPN. We find that NMDAR-mediated calcium is dependent on GluN2 subunit type, action potential timing, duration of somatic depolarization, and number of action potentials. Furthermore, the model demonstrates that in MSPNs, GluN2A and GluN2B control which STDP intervals allow for substantial calcium elevation in spines. The model predicts that blocking GluN2B subunits would modulate the range of intervals that cause long term potentiation. We confirmed this prediction experimentally, demonstrating that blocking GluN2B in the striatum narrows the range of STDP intervals that cause long term potentiation. This ability of the GluN2 subunit to modulate the shape of the STDP curve could underlie the role that GluN2 subunits play in learning and development.
The striatum of the basal ganglia plays a key role in fluent motor control; pathology in this structure causes the motor symptoms of Parkinson's disease and Huntington's chorea. A putative cellular mechanism underlying learning of motor control is synaptic plasticity, which is an activity dependent change in synaptic strength. A known mediator of synaptic potentiation is calcium influx through the NMDA-type glutamate receptor. The NMDA receptor is sensitive to the timing of neuronal activity, allowing calcium influx only when glutamate release and a post-synaptic depolarization coincide temporally. The NMDA receptor is comprised of specific subunits that modify its sensitivity to neuronal activity and these subunits are altered in animal models of Parkinson's disease. Here we use a multi-compartmental model of a striatal neuron to investigate the effect of different NMDA subunits on calcium influx through the NMDA receptor. Simulations show that the subunit composition changes the temporal intervals that allow coincidence detection and strong calcium influx. Our experiments manipulating the dominant subunit in brain slices show that the subunit effect on calcium influx predicted by our computational model is mirrored by a change in the amount of potentiation that occurs in our experimental preparation.
In this paper we present a very exciting overlap between emergent nanotechnology and neuroscience, which has been discovered by neuromorphic engineers. Specifically, we link one type of memristive nanotechnology device to the biological synaptic update rule known as spike-timing-dependent plasticity (STDP) found in real biological synapses. Understanding this link allows neuromorphic engineers to develop circuit architectures that use this type of memristor to artificially emulate parts of the visual cortex. We concentrate on the type of memristors referred to as voltage- or flux-driven memristors and focus our discussion on a behavioral macro-model for such devices. The implementations result in fully asynchronous architectures with neurons sending their action potentials not only forward but also backward. One critical aspect is to use neurons that generate spikes of specific shapes. We will see how, by changing the shapes of the neuron action potential spikes, we can tune and manipulate the STDP learning rules for both excitatory and inhibitory synapses. We will see how neurons and memristors can be interconnected to achieve large-scale spiking learning systems that follow a type of multiplicative STDP learning rule. We will briefly extend the architectures to use three-terminal transistors with similar memristive behavior. We will illustrate how a V1 visual cortex layer can be assembled and how it is capable of learning to extract orientations from visual data coming from a real artificial CMOS spiking retina observing real-life scenes. Finally, we will discuss limitations of currently available memristors. The results presented are based on behavioral simulations and do not take into account non-idealities of devices and interconnects. The aim of this paper is to present, in a tutorial manner, an initial framework for the possible development of fully asynchronous STDP learning neuromorphic architectures exploiting two- or three-terminal memristive-type devices.
All files used for the simulations are made available through the journal web site.
STDP; memristor; synapses; spikes; learning; nanotechnology; visual cortex; neural network
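How the spike shape determines the learning rule can be sketched with a threshold-type, voltage-driven memristor macro-model: the device conductance changes only where the voltage across it (post waveform minus pre waveform) exceeds a threshold, so the overlap of one spike's sharp positive peak with the other spike's slow negative tail produces an STDP-like curve. All waveform parameters below are illustrative assumptions, not values from any specific device:

```python
import numpy as np

def spike_shape(t):
    """Stylized action-potential waveform: a 1 ms positive peak followed by a
    slow negative tail (amplitudes and time constants are illustrative)."""
    return np.where((t >= 0.0) & (t < 1.0), 1.0,
                    np.where(t >= 1.0, -0.5 * np.exp(-(t - 1.0) / 20.0), 0.0))

def memristor_dw(spike_lag, v_th=1.2, eta=1.0, step=0.01):
    """Net conductance change when the postsynaptic spike occurs `spike_lag` ms
    after the presynaptic one; only excursions beyond +/- v_th move the device."""
    t = np.arange(-100.0, 100.0, step)
    v = spike_shape(t - spike_lag) - spike_shape(t)   # post minus pre across device
    drive = np.where(v > v_th, v - v_th,
                     np.where(v < -v_th, v + v_th, 0.0))
    return eta * drive.sum() * step
```

Reshaping the tail amplitude or decay reshapes the effective STDP window, which is how spike-shape engineering tunes the learning rule in these architectures.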
Spike timing dependent plasticity (STDP) is a temporally specific extension of Hebbian associative plasticity that ties synaptic change to the timing of presynaptic inputs relative to the single postsynaptic spike. However, it is difficult to translate this mechanism to in vivo conditions, where there is an abundance of presynaptic activity constantly impinging upon the dendritic tree as well as ongoing postsynaptic spiking activity that backpropagates along the dendrite. Theoretical studies have proposed that, in addition to this pre- and postsynaptic activity, a “third factor” would enable the association of specific inputs to specific outputs. Experimentally, the picture that is beginning to emerge is that, in addition to the precise timing of pre- and postsynaptic spikes, this third factor involves neuromodulators that have a distinctive influence on STDP rules. Specifically, neuromodulatory systems can influence STDP rules by acting via dopaminergic, noradrenergic, muscarinic, and nicotinic receptors. Neuromodulator actions can enable STDP induction or – by increasing or decreasing the threshold – can change the conditions for plasticity induction. Because some of these neuromodulators are also involved in reward, a link between STDP and reward-mediated learning is emerging. However, many outstanding questions concerning the relationship between neuromodulatory systems and STDP rules remain that, once solved, will help make the crucial link from timing-based synaptic plasticity rules to behaviorally based learning.
reward; learning; dopamine; acetylcholine; noradrenaline; synaptic plasticity; calcium; behavior
In this work we investigate the possibilities offered by a minimal framework of artificial spiking neurons to be deployed in silico. Here we introduce a hierarchical network architecture of spiking neurons which learns to recognize moving objects in a visual environment and determine the correct motor output for each object. These tasks are learned through both supervised and unsupervised spike timing dependent plasticity (STDP). STDP is responsible for the strengthening (or weakening) of synapses in relation to pre- and post-synaptic spike times and has been described as a Hebbian paradigm taking place both in vitro and in vivo. We utilize a variation of STDP learning, called burst-STDP, which is based on the notion that, since spikes are expensive in terms of energy consumption, strong bursting activity carries more information than single (sparse) spikes. Furthermore, this learning algorithm takes advantage of homeostatic renormalization, which has been hypothesized to promote memory consolidation during NREM sleep. Using this learning rule, we design a spiking neural network architecture capable of object recognition, motion detection, attention towards important objects, and motor control outputs. We demonstrate the abilities of our design in a simple environment with distractor objects, multiple objects moving concurrently, and in the presence of noise. Most importantly, we show how this neural network is capable of performing these tasks using a simple leaky integrate-and-fire (LIF) neuron model with binary synapses, making it fully compatible with state-of-the-art digital neuromorphic hardware designs. As such, the building blocks and learning rules presented in this paper appear promising for scalable, fully neuromorphic systems to be implemented in hardware chips.
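The LIF-with-binary-synapses substrate referred to above is straightforward to sketch. A minimal, dimensionless forward-Euler leaky integrate-and-fire neuron (all parameter values are illustrative assumptions) could look like:

```python
def lif_spike_times(input_current, dt=1.0, tau_m=20.0, v_th=1.0, v_reset=0.0):
    """Forward-Euler leaky integrate-and-fire neuron (dimensionless units).
    Returns the spike times (ms) produced by the sampled input current."""
    v = 0.0
    spikes = []
    for i, current in enumerate(input_current):
        v += dt * (-v / tau_m + current)   # leaky integration of the input
        if v >= v_th:
            spikes.append(i * dt)
            v = v_reset                    # reset after each spike
    return spikes
```

With binary synapses, the input current at each step reduces to the count of active afferents times a fixed unit weight, which is what makes the scheme hardware-friendly; a burst-oriented rule then treats several output spikes within a short window, rather than single spikes, as the events that drive plasticity.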