Lasting alterations in sensory input trigger massive structural and functional adaptations in cortical networks. The principles governing these experience-dependent changes are, however, poorly understood. Here, we examine whether a simple rule based on the neurons' need for homeostasis in electrical activity may serve as a driving force for cortical reorganization. According to this rule, a neuron creates new spines and boutons when its level of electrical activity is below a homeostatic set-point and decreases the number of spines and boutons when its activity exceeds this set-point. In addition, neurons need a minimum level of activity to form spines and boutons. Spine and bouton formation depends solely on the neuron's own activity level, and synapses are formed by merging spines and boutons independently of activity. Using a novel computational model, we show that this simple growth rule produces neuron and network changes as observed in the visual cortex after focal retinal lesions. In the model, as in the cortex, the turnover of dendritic spines was increased most strongly in the center of the lesion projection zone, while axonal boutons displayed a marked overshoot followed by pruning. Moreover, the decrease in external input was compensated for by the formation of new horizontal connections, which caused a retinotopic remapping. Homeostatic regulation may provide a unifying framework for understanding cortical reorganization, including network repair in degenerative diseases or following focal stroke.
The adult brain is less hard-wired than traditionally thought. About ten percent of synapses in the mature visual cortex are continually replaced by new ones (structural plasticity). This percentage greatly increases after lasting changes in visual input. Due to the topographically organized nerve connections from the retina in the eye to the primary visual cortex in the brain, a small circumscribed lesion in the retina leads to a defined area in the cortex that is deprived of input. Recent experimental studies have revealed that axonal sprouting and dendritic spine turnover are massively increased in and around the cortical area that is deprived of input. However, the driving forces for this structural plasticity remain unclear. Using a novel computational model, we examine whether the need for activity homeostasis of individual neurons may drive cortical reorganization after lasting changes in input activity. We show that homeostatic growth rules indeed give rise to structural and functional reorganization of neuronal networks similar to the cortical reorganization observed experimentally. Understanding the principles of structural plasticity may eventually lead to novel treatment strategies for stimulating functional reorganization after brain damage and neurodegeneration.
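As a schematic illustration of the growth rule described above, the following sketch implements its three ingredients: growth below the homeostatic set-point, retraction above it, and a minimum activity requirement for outgrowth. All parameter values and the formulation itself are invented for illustration and are not taken from the model.

```python
import numpy as np

rng = np.random.default_rng(0)

ETA = 0.1           # growth/retraction rate (illustrative)
SET_POINT = 1.0     # homeostatic set-point for electrical activity (illustrative)
MIN_ACTIVITY = 0.1  # minimum activity needed to form new elements (illustrative)

def update_elements(elements, activity):
    """One step of the homeostatic growth rule: neurons below the set-point
    (but above the minimum) grow spines/boutons; neurons above it retract them."""
    grow = (activity > MIN_ACTIVITY) & (activity < SET_POINT)
    shrink = activity > SET_POINT
    new = elements.copy()
    new[grow] += ETA * (SET_POINT - activity[grow])
    new[shrink] -= ETA * (activity[shrink] - SET_POINT)
    return np.maximum(new, 0.0)

activity = rng.uniform(0.0, 2.0, size=100)  # stand-in activity levels
spines = np.full(100, 5.0)
for _ in range(50):
    spines = update_elements(spines, activity)
# Deprived neurons (activity below the set-point but above the minimum) end up
# with more free spines, which can then merge with free boutons into synapses.
print("spines of deprived neurons:",
      spines[(activity > MIN_ACTIVITY) & (activity < SET_POINT)].mean().round(2))
print("spines of overactive neurons:", spines[activity > SET_POINT].mean().round(2))
```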
Slowly varying activity in the striatum, the main Basal Ganglia input structure, is important for the learning and execution of movement sequences. Striatal medium spiny neurons (MSNs) form cell assemblies whose population firing rates vary coherently on slow behaviourally relevant timescales. It has been shown that such activity emerges in a model of a local MSN network, but only at realistic connectivities and only when MSN-generated inhibitory post-synaptic potentials (IPSPs) are realistically sized. Here we suggest a reason for this. We investigate how MSN network-generated population activity interacts with temporally varying cortical driving activity, as would occur in a behavioural task. We find that at unrealistically high connectivity a stable winners-take-all type regime is found where network activity separates into fixed stimulus dependent regularly firing and quiescent components. In this regime only a small number of population firing rate components interact with cortical stimulus variations. As connectivity is reduced, a transition to a more dynamically active regime occurs in which all cells constantly switch between activity and quiescence. In this low connectivity regime, MSN population components wander randomly and here too are independent of variations in cortical driving. Only in the transition regime do weak changes in cortical driving interact with many population components so that sequential cell assemblies are reproducibly activated for many hundreds of milliseconds after stimulus onset and peri-stimulus time histograms display strong stimulus and temporal specificity. We show that, remarkably, this activity is maximized at striatally realistic connectivities and IPSP sizes. Thus, we suggest the local MSN network has optimal characteristics – it is neither too stable to respond in a dynamically complex temporally extended way to cortical variations, nor is it too unstable to respond in a consistent repeatable way. Rather, it is optimized to generate stimulus dependent activity patterns for long periods after variations in cortical excitation.
The striatum forms the main input to the Basal Ganglia (BG), a subcortical structure involved in reinforcement learning and action selection. It is composed of medium spiny neurons (MSNs), which inhibit each other through a network of collaterals, receive excitatory projections from the cerebral cortex, and are the only cells which project outside the striatum. Because of its inhibitory structure, the MSN network is often thought to act selectively, transmitting the most active cortical inputs downstream in the BG while suppressing others. However, studies show that local MSN network connections are too sparse and weak to perform global selection, and their function remains puzzling. Here we investigate a different hypothesis. Rather than generating a static stimulus dependent activity pattern, we suggest the MSN network is optimized to generate stimulus dependent dynamical activity patterns for long time periods after variations in cortical excitation. We demonstrate, using simulations, that the MSN network has special characteristics. It is neither too stable to respond in a dynamically complex temporally extended way to cortical variations, nor is it too unstable to respond in a consistent repeatable way. We discuss how these properties may be utilized in temporally delayed reinforcement learning tasks strongly recruiting the striatum.
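The contrast between a frozen winners-take-all regime at high connectivity and noise-driven switching at low connectivity can be caricatured with a random inhibitory rate network. The sketch below is an illustrative toy, not the paper's spiking model, and all parameters (size, weights, thresholds, connection probabilities) are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N, STEPS, DT = 100, 4000, 0.1

def run(p_connect, w_inh=1.0):
    """Random inhibitory rate network under noisy 'cortical' drive; returns
    the set of distinct active-cell configurations visited over time."""
    W = (rng.random((N, N)) < p_connect) * w_inh
    np.fill_diagonal(W, 0.0)
    rates = rng.random(N)
    drive = 1.0 + 0.2 * rng.standard_normal(N)   # fixed stimulus bias per cell
    visited = set()
    for step in range(STEPS):
        noise = 0.1 * rng.standard_normal(N)
        rates += DT * (-rates + np.maximum(drive + noise - W @ rates, 0.0))
        if step % 400 == 0:
            visited.add(frozenset(np.flatnonzero(rates > 0.5)))
    return visited

for p in (0.8, 0.1):
    sets = run(p)
    print(f"connection probability {p}: {len(sets)} distinct active sets "
          "(few = frozen winners-take-all, many = switching)")
```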
Increased efforts in the assembly and analysis of connectome data are providing new insights into the principles underlying the connectivity of neural circuits. However, despite these considerable advances in connectomics, neuroanatomical data must be integrated with neurophysiological and behavioral data in order to obtain a complete picture of neural function. Due to its nearly complete wiring diagram and large behavioral repertoire, the nematode worm Caenorhabditis elegans is an ideal organism in which to explore in detail this link between neural connectivity and behavior. In this paper, we develop a neuroanatomically grounded model of salt klinotaxis, a form of chemotaxis in which changes in orientation are directed towards the source through gradual continual adjustments. We identify a minimal klinotaxis circuit by systematically searching the C. elegans connectome for pathways linking chemosensory neurons to neck motor neurons, and prune the resulting network based on both experimental considerations and several simplifying assumptions. We then use an evolutionary algorithm to find possible values for the unknown electrophysiological parameters in the network such that the behavioral performance of the entire model is optimized to match that of the animal. Multiple runs of the evolutionary algorithm produce an ensemble of such models. We analyze in some detail the mechanisms by which one of the best evolved circuits operates and characterize the similarities and differences between this mechanism and other solutions in the ensemble. Finally, we propose a series of experiments to determine which of these alternatives the worm may be using.
Maps of the connections between neurons are being assembled for several organisms, including humans. But connectivity alone is insufficient for understanding the mechanisms of behavior. Nowhere is this more obvious than in the nematode C. elegans, where the nearly complete connectome has been available for over 25 years, yet little is known about the neural basis of most of its behavior. Here we combine known neuroanatomical constraints from the C. elegans connectome with a simplified body and environment, and use optimization techniques to fill in the missing electrophysiological parameters in plausible ways so as to produce worm-like behavior. We focus on one spatial orientation behavior, where the reactions to sensory input depend on the worm's internal state at the time of the stimulus: salt klinotaxis. By exploring the possibilities for what is unknown in ways that are consistent with what is known, we generate an ensemble of hypotheses about the neural basis of this behavior. Studying the structure of this ensemble, we formulate new experiments that can distinguish between the various hypotheses. This methodology is likely to accelerate the discovery and understanding of the biological circuitry underlying the behavior of interest, before a complete electrophysiological characterization is available.
The ‘communication through coherence’ (CTC) hypothesis proposes that selective communication among neural networks is achieved by coherence between a firing rate oscillation in a sending region and gain modulation in a receiving region. Although this hypothesis has stimulated extensive work, it remains unclear whether the mechanism can in principle allow reliable and selective information transfer. Here we use a simple mathematical model to investigate how accurately coherent gain modulation can filter a population-coded target signal from task-irrelevant distracting inputs. We show that selective communication can indeed be achieved, although the structure of oscillatory activity in the target and distracting networks must satisfy certain previously unrecognized constraints. Firstly, the target input must be differentiated from distractors by the amplitude, phase or frequency of its oscillatory modulation. When distracting inputs oscillate incoherently in the same frequency band as the target, communication accuracy is severely degraded because of varying overlap between the firing rate oscillations of distracting inputs and the gain modulation in the receiving region. Secondly, the oscillatory modulation of the target input must be strong in order to achieve a high signal-to-noise ratio relative to stochastic spiking of individual neurons. Thus, whilst providing a quantitative demonstration of the power of coherent oscillatory gain modulation to flexibly control information flow, our results identify constraints imposed by the need to avoid interference between signals, and reveal a likely organizing principle for the structure of neural oscillations in the brain.
Distributed regions of mammalian brains transiently engage in coherent oscillations, often at specific stages of behavioral or cognitive tasks. This activity may play a role in controlling information flow among connected regions, allowing the brain's connectivity structure to be flexibly reconfigured in response to changing task demands. We have used a computational model to investigate the conditions under which oscillations can generate selective communication through a mechanism in which the excitability of neurons in one region is modulated coherently with a firing rate oscillation in another region. Our results demonstrate that this mechanism is able to accurately and selectively control the flow of signals encoded as spatial patterns of firing rate. However, we found that the requirement to avoid interference between different signals imposes previously unrecognized constraints on the structures of oscillatory activity that can efficiently support this mechanism. These constraints may be an organizing principle for the structured oscillatory activity observed in vivo.
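The core selectivity argument can be seen in a few lines: if a receiver's gain oscillates coherently with the target's firing rate oscillation, time-averaging the product of gain and input rate boosts phase-aligned inputs and suppresses anti-phase ones. The sketch below assumes sinusoidal rate and gain modulations with illustrative amplitudes; it is not the paper's population-coding model.

```python
import numpy as np

T = np.linspace(0.0, 1.0, 10000)   # 1 s of time
F = 40.0                           # oscillation frequency (Hz), gamma-band

def transmitted(mod_depth, phase):
    """Mean drive a coherently gain-modulated receiver picks up from an input
    whose firing rate oscillates with the given modulation depth and phase."""
    rate = 1.0 + mod_depth * np.cos(2 * np.pi * F * T + phase)
    gain = 1.0 + np.cos(2 * np.pi * F * T)        # receiver gain, phase 0
    return np.mean(gain * rate)

print(f"coherent target (phase 0):     {transmitted(0.8, 0.0):.2f}")
print(f"anti-phase distractor (pi):    {transmitted(0.8, np.pi):.2f}")
print(f"weakly modulated target (0.1): {transmitted(0.1, 0.0):.2f}")
# Time-averaging gain*rate leaves 1 + mod_depth*cos(phase)/2: inputs are
# routed by phase, and weak rate modulation yields weak selectivity,
# echoing the constraint that the target's modulation must be strong.
```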
Local field potentials (LFPs) are widely used to study the function of local networks in the brain. They are also closely correlated with the blood-oxygen-level-dependent signal, the predominant contrast mechanism in functional magnetic resonance imaging. We developed a new laminar cortex model (LCM) to simulate the amplitude and frequency of LFPs. Our model combines the laminar architecture of the cerebral cortex and multiple continuum models to simulate the collective activity of cortical neurons. The five cortical layers (layer I, II/III, IV, V, and VI) are simulated as separate continuum models between which there are synaptic connections. The LCM was used to simulate the dynamics of the visual cortex under different conditions of visual stimulation. LFPs are reported for two kinds of visual stimulation: general visual stimulation and intermittent light stimulation. The power spectra of LFPs were calculated and compared with existing empirical data. The LCM was able to produce spontaneous LFPs exhibiting inverse-frequency (1/f) power spectrum behaviour. Laminar profiles of current source density showed similarities to experimental data. General stimulation enhanced the oscillation of LFPs corresponding to gamma frequencies. During simulated intermittent light stimulation, the LCM captured the fundamental as well as high-order harmonics, as previously reported. The power spectrum expected with a reduction in layer IV neurons, often observed with focal cortical dysplasias associated with epilepsy, was also simulated.
Local field potentials (LFPs) are low-frequency fluctuations of the electric fields produced by the brain. They have been widely studied to understand brain function and activity. LFPs reflect the activity of neurons within a few square millimeters of the cerebral cortex, an area containing more than 10,000 neurons. To avoid the complexity of simulating such a large number of individual neurons, the continuum cortex model was devised to simulate the collective activity of groups of neurons generating cortical LFPs. However, the continuum cortex model assumes that the cortex is two-dimensional and does not take into account the laminar architecture of the cerebral cortex. We developed a three-dimensional laminar cortex model (LCM) by combining laminar architecture with the continuum cortex model. This expansion enables the LCM to simulate the detailed three-dimensional distribution of the LFP within the cortex. We used the LCM to simulate LFPs within the visual cortex under different conditions of visual stimulation. The LCM reproduced the key features of LFPs observed in electrophysiological experiments. We conclude that the LCM is a potentially useful tool to investigate the underlying mechanism of LFPs.
Despite the current debate about the computational role of experimentally observed precise spike patterns, it is still theoretically unclear under which conditions and how they may emerge in neural circuits. Here, we study spiking neural networks with non-additive dendritic interactions that were recently uncovered in single-neuron experiments. We show that supra-additive dendritic interactions enable the persistent propagation of synchronous activity already in purely random networks without superimposed structures and explain the mechanism underlying it. This study adds a novel perspective on the dynamics of networks with nonlinear interactions in general and presents a new viable mechanism for the occurrence of patterns of precisely timed spikes in recurrent networks.
Most nerve cells in neural circuits communicate by sending and receiving short stereotyped electrical pulses called action potentials or spikes. Recent neurophysiological experiments found that under certain conditions the neuronal dendrites (branched projections of the neuron that transmit inputs from other neurons to the cell body (soma)) process input spikes in a nonlinear way: If the inputs arrive within a time window of a few milliseconds, the dendrite can actively generate a dendritic spike that propagates to the neuronal soma and leads to a nonlinearly amplified response. This response is temporally highly precise. Here we consider an analytically tractable model of spiking neural circuits and study the impact of such dendritic nonlinearities on network activity. We find that synchronous spiking activity may robustly propagate through the network, even if it exhibits purely random connectivity without additionally superimposed structures. Such propagation may contribute to the generation of spike patterns that are currently discussed to encode information about internal states and external stimuli in neural circuits.
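A minimal caricature of this mechanism: in a random network, a neuron's response is linear in the number of coincident inputs unless that number exceeds a dendritic threshold, in which case a dendritic spike produces a fixed amplified response. The parameters below (network size, thresholds, amplitudes) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P_CONN = 1000, 0.02   # network size and random connection probability
EPSP = 1.0               # response per input without a dendritic spike
SOMA_THRESH = 5.0        # summed response needed to fire
DEND_THRESH = 3          # coincident inputs that trigger a dendritic spike
DEND_AMP = 8.0           # amplified response after a dendritic spike

adjacency = rng.random((N, N)) < P_CONN

def next_group(g, supra_additive):
    """Number of neurons firing in response to a synchronous group of size g."""
    senders = rng.choice(N, size=g, replace=False)
    coincident = adjacency[:, senders].sum(axis=1)
    if supra_additive:
        response = np.where(coincident >= DEND_THRESH, DEND_AMP, coincident * EPSP)
    else:
        response = coincident * EPSP
    return int(np.sum(response >= SOMA_THRESH))

for supra in (True, False):
    g = 100
    for _ in range(6):
        g = next_group(g, supra)
    # with supra-additive summation the synchronous group persists (here it
    # saturates); with purely linear summation it dies out
    print(("supra-additive" if supra else "linear"), "summation -> final group:", g)
```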
Accurate timing of action potentials is required for neurons in auditory brainstem nuclei to encode the frequency and phase of incoming sound stimuli. Many such neurons express “high threshold” Kv3-family channels that are required for firing at high rates (>∼200 Hz). Kv3 channels are expressed in gradients along the medial-lateral tonotopic axis of the nuclei. Numerical simulations of auditory brainstem neurons were used to calculate the input-output relations of ensembles of 1–50 neurons, stimulated at rates between 100 and 1500 Hz. Individual neurons with different levels of potassium currents differ in their ability to follow specific rates of stimulation but all perform poorly when the stimulus rate is greater than the maximal firing rate of the neurons. The temporal accuracy of the combined synaptic output of an ensemble is, however, enhanced by the presence of gradients in Kv3 channel levels over that measured when neurons express uniform levels of channels. Surprisingly, at high rates of stimulation, temporal accuracy is also enhanced by the occurrence of random spontaneous activity, such as is normally observed in the absence of sound stimulation. For any pattern of stimulation, however, greatest accuracy is observed when, in the presence of spontaneous activity, the levels of potassium conductance in all of the neurons are adjusted to that found in the subset of neurons that respond better than their neighbors. This optimization of response by adjusting the K+ conductance occurs for stimulus patterns containing either single or multiple frequencies in the phase-locking range. The findings suggest that gradients of channel expression are required for normal auditory processing and that changes in levels of potassium currents across the nuclei, by mechanisms such as protein phosphorylation and rapid changes in channel synthesis, adapt the nuclei to the ongoing auditory environment.
In order to detect the nature and location of a sound stimulus, neurons in the central auditory system have to fire at very high rates with extreme temporal precision. Specifically, they have to be able to follow changes in an auditory stimulus at rates of up to 2000 Hz or more and to lock their action potentials to the stimuli with a precision of only a few microseconds. An individual neuron, however, cannot fire at such high rates, and the intrinsic electrical properties of neurons, such as the relative refractory period that follows each action potential, severely limit the accuracy of timing at high rates. The intrinsic excitability of neurons is governed by the potassium channels that they express. It has been found in auditory brainstem nuclei that there exist gradients of these channels such that each neuron typically has a different number of channels than its neighbors. In this study, computational models based on measurements in auditory neurons demonstrate that, in the presence of random spontaneous activity such as is normally observed in auditory neurons, rapid adjustments of levels of potassium current within neurons along the gradient are required to allow the ensemble to transmit accurate timing information. The findings suggest that regulation of potassium channels within gradients is an integral component of auditory processing.
Conductance-based equations for electrically active cells form one of the most widely studied mathematical frameworks in computational biology. This framework, as expressed through a set of differential equations by Hodgkin and Huxley, synthesizes the impact of ionic currents on a cell's voltage—and the highly nonlinear impact of that voltage back on the currents themselves—into the rapid push and pull of the action potential. Later studies confirmed that these cellular dynamics are orchestrated by individual ion channels, whose conformational changes regulate the conductance of each ionic current. Thus, kinetic equations familiar from physical chemistry are the natural setting for describing conductances; for small-to-moderate numbers of channels, these will predict fluctuations in conductances and stochasticity in the resulting action potentials. At first glance, the kinetic equations provide a far more complex (and higher-dimensional) description than the original Hodgkin-Huxley equations or their counterparts. This has prompted more than a decade of efforts to capture channel fluctuations with noise terms added to the equations of Hodgkin-Huxley type. Many of these approaches, while intuitively appealing, produce quantitative errors when compared to kinetic equations; others, as only very recently demonstrated, are both accurate and relatively simple. We review what works, what doesn't, and why, seeking to build a bridge to well-established results for the deterministic equations of Hodgkin-Huxley type as well as to more modern models of ion channel dynamics. As such, we hope that this review will speed emerging studies of how channel noise modulates electrophysiological dynamics and function. We supply user-friendly MATLAB simulation code of these stochastic versions of the Hodgkin-Huxley equations on the ModelDB website (accession number 138950) and http://www.amath.washington.edu/~etsb/tutorials.html.
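As a minimal illustration of why channel number matters in the first place, the following sketch runs an exact (Gillespie-type) stochastic simulation of a generic population of independent two-state channels, showing open-fraction fluctuations shrinking roughly as 1/√N. This is a generic kinetic-scheme toy with invented rates, not one of the specific stochastic Hodgkin-Huxley algorithms compared in the review.

```python
import numpy as np

rng = np.random.default_rng(3)
ALPHA, BETA = 1.0, 1.0   # opening/closing rates per channel (1/ms), illustrative

def open_fraction_samples(n_channels, t_end=100.0):
    """Exact (Gillespie) simulation of n independent two-state channels;
    returns the open fraction sampled at every transition event."""
    n_open = n_channels // 2
    t, samples = 0.0, []
    while t < t_end:
        rate_open = ALPHA * (n_channels - n_open)   # closed -> open transitions
        rate_close = BETA * n_open                  # open -> closed transitions
        total = rate_open + rate_close
        t += rng.exponential(1.0 / total)           # waiting time to next event
        if rng.random() < rate_open / total:
            n_open += 1
        else:
            n_open -= 1
        samples.append(n_open / n_channels)
    return np.array(samples)

for n in (10, 100, 1000):
    s = open_fraction_samples(n)
    print(f"N={n:4d}: mean open fraction {s.mean():.3f}, fluctuation sd {s.std():.4f}")
# The sd shrinks roughly as 1/sqrt(N): channel noise matters most for
# small membrane patches containing few channels.
```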
Many sensory or cognitive events are associated with dynamic current modulations in cortical neurons. This raises an urgent demand for tractable model approaches addressing the merits and limits of potential encoding strategies. Yet, current theoretical approaches addressing the response to mean- and variance-encoded stimuli rarely provide complete response functions for both modes of encoding in the presence of correlated noise. Here, we investigate the neuronal population response to dynamical modifications of the mean or variance of the synaptic bombardment using an alternative threshold model framework. For both the mean and the variance channel, we provide explicit expressions for the linear and non-linear frequency response functions in the presence of correlated noise, and use them to derive the population rate response to step-like stimuli. For mean-encoded signals, we find that the complete response function depends only on the temporal width of the input correlation function, but not on other functional specifics. Furthermore, we show that both mean- and variance-encoded signals can relay high-frequency inputs, and in both schemes step-like changes can be detected instantaneously. Finally, we obtain the pairwise spike correlation function and the spike triggered average from the linear mean-evoked response function. These results provide a maximally tractable limiting case that complements and extends previous results obtained in the integrate and fire framework.
Sensory stimuli in our environment are represented in the brain as input current changes to neurons. For example, a periodic bar pattern in the visual field leads to periodic current modulations in the visual cortex. Therefore, models describing the ability of neurons to represent incoming stimuli can offer important clues about how sensory stimuli are processed by the brain. As anyone who has used an old-fashioned radio can attest, there is not just one way but multiple ways to encode a signal, e.g. the familiar AM and FM channels. But what are the potential encoding channels in the cortex? A signal could modify the neuronal input current in two distinct ways: it could act either on the mean or the variance of the current. Using a minimal model framework, which can reproduce many features of neuronal activity, we find that both encoding schemes could be equally potent in transmitting slow and fast signals. This allows us to describe how input signals of any functional form give rise to collective firing rate changes in populations of neurons.
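A toy Monte Carlo version of the two channels: take a large population of threshold units driven by Gaussian input and step either the mean or the standard deviation of that input. Both steps shift the population rate within a single time step, consistent with the instantaneous detection described above. The threshold model here is an illustrative stand-in, not the paper's analytical framework.

```python
import numpy as np

rng = np.random.default_rng(4)
N, THETA = 100000, 1.0   # population size and firing threshold

def population_rate(mu, sigma):
    """Fraction of units whose noisy input currently exceeds the threshold."""
    inputs = mu + sigma * rng.standard_normal(N)
    return np.mean(inputs > THETA)

base = population_rate(mu=0.0, sigma=0.5)
mean_step = population_rate(mu=0.3, sigma=0.5)   # step in the mean channel
var_step = population_rate(mu=0.0, sigma=0.8)    # step in the variance channel
print(f"baseline rate:       {base:.3f}")
print(f"after mean step:     {mean_step:.3f}")
print(f"after variance step: {var_step:.3f}")
# Both encoding channels move the population rate in the very time step in
# which the input statistics change, i.e. step-like changes are detected
# instantaneously in this toy model.
```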
Spectro-temporal receptive fields (STRFs) have been widely used as linear approximations to the signal transform from sound spectrograms to neural responses along the auditory pathway. Their dependence on statistical attributes of the stimuli, such as sound intensity, is usually explained by nonlinear mechanisms and models. Here, we apply an efficient coding principle which has been successfully used to understand receptive fields in early stages of visual processing, in order to provide a computational understanding of the STRFs. According to this principle, STRFs result from an optimal tradeoff between maximizing the sensory information the brain receives, and minimizing the cost of the neural activities required to represent and transmit this information. Both terms depend on the statistical properties of the sensory inputs and the noise that corrupts them. The STRFs should therefore depend on the input power spectrum and the signal-to-noise ratio, which is assumed to increase with input intensity. We analytically derive the optimal STRFs when signal and noise are approximated as Gaussians. Under the constraint that they should be spectro-temporally local, the STRFs are predicted to adapt from being band-pass to low-pass filters as the input intensity reduces, or the input correlation becomes longer range in sound frequency or time. These predictions qualitatively match physiological observations. Our prediction as to how the STRFs should be determined by the input power spectrum could readily be tested, since this spectrum depends on the stimulus ensemble. The potential and limitations of the efficient coding principle are discussed.
Spectro-temporal receptive fields (STRFs) have been widely used as linear approximations of the signal transform from sound spectrograms to neural responses along the auditory pathway. Their dependence on the ensemble of input stimuli has usually been examined mechanistically as a possibly complex nonlinear process. We propose that the STRFs and their dependence on the input ensemble can be understood by an efficient coding principle, according to which the responses of the encoding neurons report the maximum amount of information about the sensory input, subject to limits on the neural cost in representing and transmitting information. This proposal is inspired by the success of the same principle in accounting for receptive fields in the early stages of the visual pathway and their adaptation to input statistics. The principle can account for the STRFs that have been observed, and the way they change with sound intensity. Further, it predicts how the STRFs should change with input correlations, an issue that has not been extensively investigated. In sum, our study provides a computational understanding of the neural transformations of auditory inputs, and makes testable predictions for future experiments.
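The qualitative band-pass to low-pass adaptation can be reproduced with a standard efficient-coding construction that combines a Wiener-style noise-suppression stage with a whitening stage. Note this is a common textbook form in the spirit of the Gaussian analysis above, not the paper's exact derivation, and the spectra below are invented.

```python
import numpy as np

freqs = np.linspace(0.01, 10.0, 1000)   # frequency axis (arbitrary units)
S = 1.0 / freqs**2                      # smooth, 1/f^2-like input power spectrum

def filter_gain(noise_power):
    """Wiener smoothing (suppress noise-dominated frequencies) followed by
    whitening (equalize output power) -- a common efficient-coding form."""
    wiener = S / (S + noise_power)
    whiten = 1.0 / np.sqrt(S + noise_power)
    return wiener * whiten

for label, noise in (("high SNR (intense input)", 1e-4),
                     ("low SNR (weak input)", 1.0)):
    g = filter_gain(noise)
    print(f"{label}: gain peaks at f = {freqs[np.argmax(g)]:.2f}")
# At high SNR the gain keeps rising with frequency (band-pass behaviour);
# at low SNR the peak moves to low frequencies (low-pass behaviour),
# mirroring the predicted adaptation of the STRFs to input intensity.
```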
Neurons spike when their membrane potential exceeds a threshold value. In central neurons, the spike threshold is not constant but depends on the stimulation. Thus, input-output properties of neurons depend both on the effect of presynaptic spikes on the membrane potential and on the dynamics of the spike threshold. Among the possible mechanisms that may modulate the threshold, one strong candidate is Na channel inactivation, because it specifically impacts spike initiation without affecting the membrane potential. We collected voltage-clamp data from the literature and we found, based on a theoretical criterion, that the properties of Na inactivation could indeed cause substantial threshold variability by itself. By analyzing simple neuron models with fast Na inactivation (one channel subtype), we found that the spike threshold is correlated with the mean membrane potential and negatively correlated with the preceding depolarization slope, consistent with experiments. We then analyzed the impact of threshold dynamics on synaptic integration. The difference between the postsynaptic potential (PSP) and the dynamic threshold in response to a presynaptic spike defines an effective PSP. When the neuron is sufficiently depolarized, this effective PSP is briefer than the PSP. This mechanism regulates the temporal window of synaptic integration in an adaptive way. Finally, we discuss the role of other potential mechanisms. Distal spike initiation, channel noise and Na activation dynamics cannot account for the observed negative slope-threshold relationship, while adaptive conductances (e.g. K+) and Na inactivation can. We conclude that Na inactivation is a metabolically efficient mechanism to control the temporal resolution of synaptic integration.
Neurons spike when their combined inputs exceed a threshold value, but recent experimental findings have shown that this value also depends on the inputs. Thus, to understand how neurons respond to input spikes, it is important to know how inputs modify the spike threshold. Spikes are generated by sodium channels, which inactivate when the neuron is depolarized, raising the threshold for spike initiation. We found that inactivation properties of sodium channels could indeed cause substantial threshold variability in central neurons. We then analyzed in models the implications of this form of threshold modulation on neuronal function. We found that this mechanism makes neurons more sensitive to coincident spikes and provides them with an energetically efficient form of gain control.
Spike-timing dependent plasticity (STDP), a widespread synaptic modification mechanism, is sensitive to correlations between presynaptic spike trains and it generates competition among synapses. However, STDP has an inherent instability because strong synapses are more likely to be strengthened than weak ones, causing them to grow in strength until some biophysical limit is reached. Through simulations and analytic calculations, we show that a small temporal shift in the STDP window that causes synchronous, or nearly synchronous, pre- and postsynaptic action potentials to induce long-term depression can stabilize synaptic strengths. Shifted STDP also stabilizes the postsynaptic firing rate and can implement both Hebbian and anti-Hebbian forms of competitive synaptic plasticity. Interestingly, the overall level of inhibition determines whether plasticity is Hebbian or anti-Hebbian. Even a random symmetric jitter of a few milliseconds in the STDP window can stabilize synaptic strengths while retaining these features. The same results hold for a shifted version of the more recent “triplet” model of STDP. Our results indicate that the detailed shape of the STDP window function near the transition from depression to potentiation is of the utmost importance in determining the consequences of STDP, suggesting that this region warrants further experimental study.
Synaptic plasticity is believed to be a fundamental mechanism of learning and memory. In spike-timing dependent synaptic plasticity (STDP), the temporal order of pre- and postsynaptic spiking across a synapse determines whether it is strengthened or weakened. STDP can induce competition between the different inputs synapsing onto a neuron, which is crucial for the formation of functional neuronal circuits. However, strong synaptic competition is often incompatible with inherent synaptic stability. Synaptic modification by STDP is controlled by a so-called temporal window function that determines how synaptic modification depends on spike timing. We show that a small shift, or random jitter, in the conventional temporal window function used for STDP that is compatible with the underlying molecular kinetics of STDP, can both stabilize synapses and maintain competition. The outcome of the competition is determined by the level of inhibitory input to the postsynaptic neuron. We conclude that the detailed shape of the temporal window function is critical in determining the functional consequences of STDP and thus deserves further experimental study.
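A sketch of such a shifted window function, with an illustrative 2 ms shift: spike pairs are evaluated by their post-minus-pre interval, and synchronous pairs now fall on the depression side.

```python
import numpy as np

A_PLUS, A_MINUS = 1.0, 1.0   # potentiation/depression amplitudes (illustrative)
TAU = 20.0                   # window time constant (ms)
SHIFT = 2.0                  # temporal shift of the window (ms), illustrative

def shifted_stdp(dt):
    """Weight change for a post-minus-pre spike interval dt (ms). The shift
    places synchronous pairs (dt ~ 0) on the depression side of the window."""
    x = dt - SHIFT
    return A_PLUS * np.exp(-x / TAU) if x > 0 else -A_MINUS * np.exp(x / TAU)

for dt in (-10.0, 0.0, 2.0, 10.0):
    print(f"dt = {dt:+5.1f} ms -> dw = {shifted_stdp(dt):+.3f}")
# Synchronous pairs (dt = 0) are now depressed. Strong synapses tend to
# drive near-synchronous postsynaptic spikes, so they are pushed back
# down, which is the stabilizing effect described above.
```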
In central neurons, the threshold for spike initiation can depend on the stimulus and varies between cells and between recording sites in a given cell, but it is unclear what mechanisms underlie this variability. Properties of ionic channels are likely to play a role in threshold modulation. We examined in models the influence of Na channel activation, inactivation, slow voltage-gated channels and synaptic conductances on spike threshold. We propose a threshold equation which quantifies the contribution of all these mechanisms. It provides an instantaneous time-varying value of the threshold, which applies to neurons with fluctuating inputs. We deduce a differential equation for the threshold, similar to the equations of gating variables in the Hodgkin-Huxley formalism, which describes how the spike threshold varies with the membrane potential, depending on channel properties. We find that spike threshold depends logarithmically on Na channel density, and that Na channel inactivation and K channels can dynamically modulate it in an adaptive way: the threshold increases with membrane potential and after every action potential. Our equation was validated with simulations of a previously published multicompartmental model of spike initiation. Finally, we observed that threshold variability in models depends crucially on the shape of the Na activation function near spike initiation (about −55 mV), while its parameters are adjusted near half-activation voltage (about −30 mV), which might explain why many models exhibit little threshold variability, contrary to experimental observations. We conclude that ionic channels can account for large variations in spike threshold.
Neurons communicate primarily with stereotypical electrical impulses, action potentials, which are fired when a threshold level of excitation is reached. This threshold varies between cells and over time as a function of previous stimulations, which has major functional implications for the integrative properties of neurons. Ionic channels are thought to play a central role in this modulation, but the precise relationship between their properties and the threshold is unclear. We examined this relationship in biophysical models and derived a formula which quantifies the contribution of various mechanisms. The originality of our approach is that it provides an instantaneous time-varying value for the threshold, which applies to the highly fluctuating regimes characterizing neurons in vivo. In particular, two known ionic mechanisms were found to make the threshold adapt to the membrane potential, thus providing the cell with a form of gain control.
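An illustrative adaptive-threshold model in the spirit of these results: the threshold relaxes, with its own time constant, towards a steady-state value that increases with membrane potential, so that depolarization and spiking raise the threshold. All constants below are invented for the sketch and are not the paper's fitted values.

```python
import numpy as np

DT, TAU_M, TAU_TH = 0.1, 10.0, 15.0     # time step and time constants (ms)
E_L, THETA0, ALPHA = -70.0, -55.0, 0.5  # rest, baseline threshold (mV), coupling

def simulate(i_ext, steps=3000):
    """Leaky integrator whose spike threshold relaxes towards a steady-state
    value that grows once the membrane is sufficiently depolarized."""
    v, theta = E_L, THETA0
    v_trace, th_trace, n_spikes = [], [], 0
    for _ in range(steps):
        v += DT / TAU_M * (E_L - v + i_ext)
        theta_inf = THETA0 + ALPHA * max(v - (E_L + 10.0), 0.0)
        theta += DT / TAU_TH * (theta_inf - theta)
        if v >= theta:
            v = E_L          # reset after a spike
            n_spikes += 1
        v_trace.append(v); th_trace.append(theta)
    return np.mean(v_trace), np.mean(th_trace), n_spikes

for i in (5.0, 20.0):
    mv, mth, n = simulate(i)
    print(f"I = {i:4.1f}: mean V {mv:+.1f} mV, mean threshold {mth:+.1f} mV, spikes {n}")
# Stronger depolarizing input raises the average threshold: the threshold
# adapts to the membrane potential, a simple form of gain control.
```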
Neurons display a wide range of intrinsic firing patterns. A particularly relevant pattern for neuronal signaling and synaptic plasticity is burst firing, the generation of clusters of action potentials with short interspike intervals. Besides ion-channel composition, dendritic morphology appears to be an important factor modulating firing pattern. However, the underlying mechanisms are poorly understood, and the impact of morphology on burst firing remains largely unknown. Dendritic morphology is not fixed but can undergo significant changes in many pathological conditions. Using computational models of neocortical pyramidal cells, we here show that not only the total length of the apical dendrite but also the topological structure of its branching pattern markedly influences inter- and intraburst spike intervals and even determines whether or not a cell exhibits burst firing. We found that there is only a range of dendritic sizes that supports burst firing, and that this range is modulated by dendritic topology. Either reducing or enlarging the dendritic tree, or merely modifying its topological structure without changing total dendritic length, can transform a cell's firing pattern from bursting to tonic firing. Interestingly, the results are largely independent of whether the cells are stimulated by current injection at the soma or by synapses distributed over the dendritic tree. By means of a novel measure called mean electrotonic path length, we show that the influence of dendritic morphology on burst firing is attributable to the effect both dendritic size and dendritic topology have, not on somatic input conductance, but on the average spatial extent of the dendritic tree and the spatiotemporal dynamics of the dendritic membrane potential. Our results suggest that alterations in size or topology of pyramidal cell morphology, such as observed in Alzheimer's disease, mental retardation, epilepsy, and chronic stress, could change neuronal burst firing and thus ultimately affect information processing and cognition.
Neurons possess highly branched extensions, called dendrites, which form characteristic tree-like structures. The morphology of these dendritic arborizations can undergo significant changes in many pathological conditions. It is still poorly understood, however, how alterations in dendritic morphology affect neuronal activity. Using computational models of pyramidal cells, we study the influence of dendritic tree size and branching structure on burst firing. Burst firing is the generation of two or more action potentials in close succession, a form of neuronal activity that is critically involved in neuronal signaling and synaptic plasticity. We found that there is only a range of dendritic tree sizes that supports burst firing, and that this range is modulated by the branching structure of the tree. We show that shortening as well as lengthening the dendritic tree, or even just modifying the pattern in which the branches in the tree are connected, can shift the cell's firing pattern from bursting to tonic firing, as a consequence of changes in the spatiotemporal dynamics of the dendritic membrane potential. Our results suggest that alterations in pyramidal cell morphology could, via their effect on burst firing, ultimately affect cognition.
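One plausible reading of the mean electrotonic path length measure — the average electrotonic (length-constant-normalized) distance from the soma to each dendritic terminal — can be computed on toy trees. The two invented trees below have identical total length but different topology, and correspondingly different mean electrotonic path lengths.

```python
def mean_electrotonic_path(tree, node="soma", depth=0.0):
    """Average electrotonic path length from the soma to each terminal.
    `tree` maps a node to a list of (child, electrotonic_length) pairs."""
    children = tree.get(node, [])
    if not children:                       # terminal segment
        return [depth]
    paths = []
    for child, length in children:
        paths += mean_electrotonic_path(tree, child, depth + length)
    return paths if node != "soma" else sum(paths) / len(paths)

# Two trees with identical total length (4 segments of 0.5 length
# constants each) but different branching topology:
symmetric = {"soma": [("a", 0.5), ("b", 0.5)],
             "a": [("a1", 0.5), ("a2", 0.5)]}
chain = {"soma": [("a", 0.5)], "a": [("b", 0.5)],
         "b": [("c", 0.5)], "c": [("d", 0.5)]}
print("symmetric tree:", mean_electrotonic_path(symmetric))
print("chain-like tree:", mean_electrotonic_path(chain))
# Same total dendritic length, but the chain-like topology yields a much
# larger mean electrotonic path, i.e. a greater average spatial extent.
```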
In the mammalian hippocampus, the dentate gyrus (DG) is characterized by sparse and powerful unidirectional projections to CA3 pyramidal cells, the so-called mossy fibers. Mossy fiber synapses appear to duplicate, in terms of the information they convey, what CA3 cells already receive from entorhinal cortex layer II cells, which project both to the dentate gyrus and to CA3. Computational models of episodic memory have hypothesized that the function of the mossy fibers is to enforce a new, well separated pattern of activity onto CA3 cells, to represent a new memory, prevailing over the interference produced by the traces of older memories already stored on CA3 recurrent collateral connections. Can this hypothesis apply also to spatial representations, as described by recent neurophysiological recordings in rats? To address this issue quantitatively, we estimate the amount of information DG can impart on a new CA3 pattern of spatial activity, using both mathematical analysis and computer simulations of a simplified model. We confirm that, also in the spatial case, the observed sparse connectivity and level of activity are most appropriate for driving memory storage – and not for initiating retrieval. Surprisingly, the model also indicates that even when DG codes just for space, much of the information it passes on to CA3 acquires a non-spatial and episodic character, akin to that of a random number generator. It is suggested that further hippocampal processing is required to make full spatial use of DG inputs.
The CA3 region at the core of the hippocampus, a structure crucial to memory formation, presents one striking anatomical feature. Its neurons receive many thousands of weak inputs from other sources, but only a few tens of very strong inputs from the neurons in the directly preceding region, the dentate gyrus. It had been proposed that such sparse connectivity helps the dentate gyrus to drive CA3 activity during the storage of new memories, but why it needs to be so sparse had remained unclear. Recent recordings of neuronal activity in the dentate gyrus (Leutgeb et al., 2007) show the firing maps of granule cells of rodents engaged in exploration: the few cells active in a given environment, about 3% of the total, present multiple firing fields. Following these findings, we could now construct a network model that addresses the question quantitatively. Both mathematical analysis and computer simulations of the model show that, although the memory system would still function otherwise, connections as sparse as those observed make it function optimally, in terms of the bits of information new memories contain. Much of this information, however, is encoded in a format that is difficult to read out, suggesting that other regions of the hippocampus, so far without a clear role, may contribute to decoding it.
Spontaneous retinal activity (known as “waves”) remodels synaptic connectivity to the lateral geniculate nucleus (LGN) during development. Analysis of retinal waves recorded with multielectrode arrays in mouse suggested that a cue for the segregation of functionally distinct (ON and OFF) retinal ganglion cells (RGCs) in the LGN may be a desynchronization in their firing, where ON cells precede OFF cells by one second. Using the recorded retinal waves as input, with two different modeling approaches we explore timing-based plasticity rules for the evolution of synaptic weights to identify key features underlying ON/OFF segregation. First, we analytically derive a linear model for the evolution of ON and OFF weights, to understand how synaptic plasticity rules extract input firing properties to guide segregation. Second, we simulate postsynaptic activity with a nonlinear integrate-and-fire model to compare findings with the linear model. We find that spike-time-dependent plasticity, which modifies synaptic weights based on millisecond-long timing and order of pre- and postsynaptic spikes, fails to segregate ON and OFF retinal inputs in the absence of normalization. Implementing homeostatic mechanisms results in segregation, but only with carefully tuned parameters. Furthermore, extending spike integration timescales to match the second-long input correlation timescales always leads to ON segregation because ON cells fire before OFF cells. We show that burst-time-dependent plasticity can robustly guide ON/OFF segregation in the LGN without normalization, by integrating pre- and postsynaptic bursts irrespective of their firing order and over second-long timescales. We predict that an LGN neuron will become ON- or OFF-responsive based on a local competition of the firing patterns of neighboring RGCs connecting to it. Finally, we demonstrate consistency with ON/OFF segregation in ferret, despite differences in the firing properties of retinal waves. Our model suggests that diverse input statistics of retinal waves can be robustly interpreted by a burst-based rule, which underlies retinogeniculate plasticity across different species.
Many central targets in the brain are involved in the processing of information from the outside world. Before information about the visual scene reaches the visual cortex, it is preprocessed in the retina and the lateral geniculate nucleus. Connections which relay this information between the different brain targets are not determined at birth, but undergo a developmental period during which they are guided by molecular cues to the correct locations, and refined by activity to the appropriate numbers and strengths. Before the onset of vision, spontaneous activity generated within the retina plays an important role in the remodeling of these connections. In a computational and theoretical model, we used recorded spontaneous retinal activity patterns with several plasticity rules at the retinogeniculate synapse to identify the key properties underlying the selective refinement of connections. Our model shows robust behavior when applied to both mouse and ferret data, demonstrating that a common plasticity rule across species may underlie synaptic refinements in the visual system driven by spontaneous retinal activity.
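The key property of the burst-based rule — sensitivity to the interval between pre- and postsynaptic bursts on a second-long timescale, irrespective of their order — can be sketched as follows. Amplitudes, time constants and the LTD offset are illustrative choices, with classic pairwise STDP shown for contrast.

```python
import numpy as np

A = 0.05       # maximal change per burst pair (illustrative)
TAU_B = 1.0    # burst-interaction timescale (s)

def btdp(dt_burst):
    """Burst-time-dependent plasticity: the change falls off with the absolute
    pre/post burst interval, irrespective of firing order; far-apart bursts
    depress (the -0.5 offset is an illustrative choice)."""
    return A * (np.exp(-abs(dt_burst) / TAU_B) - 0.5)

def stdp(dt_spike, a=0.05, tau=0.020):
    """Classic pairwise STDP for comparison: sign depends on spike order."""
    return a * np.sign(dt_spike) * np.exp(-abs(dt_spike) / tau)

for dt in (-1.0, -0.2, 0.2, 1.0):
    print(f"burst interval {dt:+.1f} s -> BTDP {btdp(dt):+.4f}, "
          f"STDP {stdp(dt):+.5f}")
# BTDP gives the same change for ON-before-OFF and OFF-before-ON intervals
# of equal size; STDP flips sign with order and is essentially blind to
# second-long delays.
```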
The extraction of accurate self-motion information from the visual world is a difficult problem that has been solved very efficiently by biological organisms utilizing non-linear processing. Previous bio-inspired models for motion detection based on a correlation mechanism have been dogged by issues that arise from their sensitivity to undesired properties of the image, such as contrast, which vary widely between images. Here we present a model with multiple levels of non-linear dynamic adaptive components based directly on the known or suspected responses of neurons within the visual motion pathway of the fly brain. By testing the model under realistic high-dynamic range conditions we show that the addition of these elements makes the motion detection model robust across a large variety of images, velocities and accelerations. Furthermore the performance of the entire system is more than the incremental improvements offered by the individual components, indicating beneficial non-linear interactions between processing stages. The algorithms underlying the model can be implemented in either digital or analog hardware, including neuromorphic analog VLSI, but defy an analytical solution due to their dynamic non-linear operation. This algorithm has potential applications in the development of miniature autonomous systems in defense and civilian roles, including robotics, miniature unmanned aerial vehicles and collision avoidance sensors.
Building artificial vision systems that work robustly in a variety of environments has been difficult, with systems often only performing well under restricted conditions. In contrast, animal vision operates effectively under extremely variable situations. Many attempts to emulate biological vision have met with limited success, often because multiple seemingly appropriate approximations to neural coding resulted in a compromised system. We have constructed a full model for motion processing in the insect visual pathway incorporating known or suspected elements in as much detail as possible. We have found that it is only once all elements are present that the system performs robustly, with reduction or removal of elements dramatically limiting performance. The implementation of this new algorithm could provide a very useful and robust velocity estimator for artificial navigation systems.
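The correlation mechanism such models build on is the classic Hassenstein-Reichardt elementary motion detector; a bare-bones version (without the adaptive non-linear stages described above) is sketched below, and it directly exposes the contrast sensitivity those stages are designed to remove.

```python
import numpy as np

DT, TAU = 1e-3, 0.035   # time step (s) and delay-filter time constant (s)

def lowpass(x):
    """First-order low-pass filter serving as the 'delay' branch."""
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + DT / TAU * (x[i - 1] - y[i - 1])
    return y

def reichardt(left, right):
    """Delay-and-correlate in mirrored pairs; the sign of the mean output
    encodes motion direction."""
    return lowpass(left) * right - lowpass(right) * left

t = np.arange(0.0, 1.0, DT)
for contrast in (1.0, 0.5):
    # a drifting sinusoidal grating sampled by two neighboring photoreceptors
    left = contrast * np.sin(2 * np.pi * 2.0 * t)
    right = contrast * np.sin(2 * np.pi * 2.0 * t - np.pi / 4)
    print(f"contrast {contrast}: mean output {reichardt(left, right).mean():+.4f}")
# Halving contrast quarters the response: the raw correlator confounds
# velocity with contrast, which is what the adaptive stages compensate for.
```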
The modulation of the sensitivity, or gain, of neural responses to input is an important component of neural computation. It has been shown that divisive gain modulation of neural responses can result from a stochastic shunting from balanced (mixed excitation and inhibition) background activity. This gain control scheme was developed and explored with static inputs, where the membrane and spike train statistics were stationary in time. However, input statistics, such as the firing rates of pre-synaptic neurons, are often dynamic, varying on timescales comparable to typical membrane time constants. Using a population density approach for integrate-and-fire neurons with dynamic and temporally rich inputs, we find that the same fluctuation-induced divisive gain modulation is operative for dynamic inputs driving nonequilibrium responses. Moreover, the degree of divisive scaling of the dynamic response is quantitatively the same as that of the steady-state response—thus, gain modulation via balanced conductance fluctuations generalizes in a straightforward way to a dynamic setting.
Many neural computations, including sensory and motor processing, require neurons to control their sensitivity (often termed ‘gain’) to stimuli. One common form of gain manipulation is divisive gain control, where the neural response to a specific stimulus is simply scaled by a constant. Most previous theoretical and experimental work on divisive gain control has assumed input statistics to be constant in time. However, realistic inputs can be highly time-varying, often with time-varying statistics, and divisive gain control remains to be extended to these cases. A widespread mechanism for divisive gain control for static inputs is through an increase in stimulus independent membrane fluctuations. We address the question of whether this divisive gain control scheme is indeed operative for time-varying inputs. Using simplified spiking neuron models, we employ accurate theoretical methods to estimate the dynamic neural response. We find that gain control via membrane fluctuations does indeed extend to the time-varying regime, and moreover, the degree of divisive scaling does not depend on the timescales of the driving input. This significantly increases the relevance of this form of divisive gain control for neural computations where input statistics change in time, as expected during normal sensory and motor behavior.
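The basic effect can be seen with a noisy threshold unit: stimulus-independent background fluctuations set the slope (gain) of the response curve, so increasing them scales responses divisively. This sketch illustrates the principle only; it is not the population density computation used in the paper.

```python
import numpy as np
from math import erf, sqrt

THETA = 1.0   # firing threshold (arbitrary units)

def rate(drive, sigma):
    """Firing probability of a unit whose input is the drive plus Gaussian
    background fluctuations of size sigma (stimulus-independent)."""
    return 0.5 * (1.0 + erf((drive - THETA) / (sigma * sqrt(2.0))))

drives = np.linspace(0.0, 2.0, 5)
for sigma in (0.3, 0.6):
    resp = " ".join(f"{rate(d, sigma):.2f}" for d in drives)
    print(f"sigma = {sigma}: rates at drives 0..2 -> {resp}")
# Doubling the background fluctuations halves the slope of the response
# curve around threshold (the gain is divided by two); the same scaling
# holds whether the drive is held static or varied in time.
```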
Recent data indicate that plasticity protocols have not only synapse-specific but also more widespread effects. In particular, in synaptic tagging and capture (STC), tagged synapses can capture plasticity-related proteins, synthesized in response to strong stimulation of other synapses. This leads to long-lasting modification of only weakly stimulated synapses. Here we present a biophysical model of synaptic plasticity in the hippocampus that incorporates several key results from experiments on STC. The model specifies a set of physical states in which a synapse can exist, together with transition rates that are affected by high- and low-frequency stimulation protocols. In contrast to most standard plasticity models, the model exhibits both early- and late-phase LTP/D, de-potentiation, and STC. As such, it provides a useful starting point for further theoretical work on the role of STC in learning and memory.
It is thought that the main biological mechanism of memory corresponds to long-lasting changes in the strengths, or weights, of synapses between neurons. The phenomenon of long-term synaptic weight change has been particularly well documented in the hippocampus, a brain region crucial for the formation of episodic memory. One important result that has emerged is that the duration of synaptic weight change depends on the stimulus used to induce it. In particular, a certain weak stimulus induces a change that lasts for around three hours, whilst stronger stimuli induce changes that last longer, in some cases as long as several months. Interestingly, if separate weak and strong stimuli are given in reasonably quick succession to different synapses of the same neuron, both synapses exhibit long-lasting change. Here we construct a model of synapses in the hippocampus that reproduces various data associated with this phenomenon. The model specifies a set of abstract physical states in which a synapse can exist, as well as probabilities for making transitions between these states. This paper provides a basis for further studies into the function of the described synaptic states in learning and memory.
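A minimal Markov-chain caricature of such a state-based account (states and rates invented for illustration, not the transition structure of the model above): a stimulated synapse sets a decaying tag, only strong stimulation triggers cell-wide plasticity-related proteins, and a synapse becomes persistently potentiated only while it holds a tag and proteins are available.

```python
import numpy as np

rng = np.random.default_rng(5)
DT = 1.0                  # time step (minutes)
TAG_DECAY = 1 / 180.0     # tag lifetime ~3 hours (per-minute rate)
PROTEIN_DECAY = 1 / 60.0  # protein availability decays faster
CONSOLIDATE = 1 / 30.0    # tag + protein -> persistent ('late') state

def simulate(weak_stimulated, strong_stimulated, t_end=600):
    """Two synapses on one neuron; proteins are a shared, cell-wide resource."""
    tag = np.array([weak_stimulated, strong_stimulated], dtype=float)
    protein = 1.0 if strong_stimulated else 0.0   # only strong stimulation triggers synthesis
    late = np.zeros(2, dtype=bool)
    for _ in range(int(t_end / DT)):
        for s in range(2):
            if tag[s] > 0 and protein > 0 and rng.random() < CONSOLIDATE * DT:
                late[s] = True                    # the tag captures proteins
            if rng.random() < TAG_DECAY * DT:
                tag[s] = 0.0
        if rng.random() < PROTEIN_DECAY * DT:
            protein = 0.0
    return late

both = np.array([simulate(True, True) for _ in range(500)])
weak_alone = np.array([simulate(True, False) for _ in range(500)])
print(f"P(late phase at weakly stimulated synapse), strong partner: {both[:, 0].mean():.2f}")
print(f"P(late phase at weakly stimulated synapse), alone:          {weak_alone[:, 0].mean():.2f}")
```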
Changes in synaptic efficacies need to be long-lasting in order to serve as a substrate for memory. Experimentally, synaptic plasticity exhibits phases covering the induction of long-term potentiation and depression (LTP/LTD) during the early phase of synaptic plasticity, the setting of synaptic tags, a trigger process for protein synthesis, and a slow transition leading to synaptic consolidation during the late phase of synaptic plasticity. We present a mathematical model that describes these different phases of synaptic plasticity. The model explains a large body of experimental data on synaptic tagging and capture, cross-tagging, and the late phases of LTP and LTD. Moreover, the model accounts for the dependence of LTP and LTD induction on voltage and presynaptic stimulation frequency. The stabilization of potentiated synapses during the transition from early to late LTP occurs by protein synthesis dynamics that are shared by groups of synapses. The functional consequence of this shared process is that previously stabilized patterns of strong or weak synapses onto the same postsynaptic neuron are well protected against later changes induced by LTP/LTD protocols at individual synapses.
Humans and animals learn by changing the strength of connections between neurons, a phenomenon called synaptic plasticity. These changes can be induced by rather short stimuli (lasting sometimes only a few seconds) but should then be stable for months or years in order to be useful for long-term memory. Experimentalists have shown that synapses undergo a sequence of steps that transforms the rapid change during the early phase of synaptic plasticity into a stable memory trace in the late phase. In this paper we introduce a model with a small number of equations that can describe the phenomena of induction of synaptic changes during the early phase of synaptic plasticity, the trigger process for protein synthesis, and the final stabilization. The model covers a broad range of experimental phenomena known as tagging experiments and makes testable predictions. The ability to model the stabilization of synapses is crucial to understand learning and memory processes in animals and humans and a necessary ingredient for any large-scale model of the brain.
There is evidence that biological synapses have a limited number of discrete weight states. Memory storage with such synapses behaves quite differently from synapses with unbounded, continuous weights, as old memories are automatically overwritten by new memories. Consequently, there has been substantial discussion about how this affects learning and storage capacity. In this paper, we calculate the storage capacity of discrete, bounded synapses in terms of Shannon information. We use this to optimize the learning rules and investigate how the maximum information capacity depends on the number of synapses, the number of synaptic states, and the coding sparseness. Below a certain critical number of synapses per neuron (comparable to numbers found in biology), we find that storage is similar to unbounded, continuous synapses. Hence, discrete synapses do not necessarily have lower storage capacity.
It is believed that the neural basis of learning and memory is change in the strength of synaptic connections between neurons. Much theoretical work on this topic assumes that the strength, or weight, of a synapse may vary continuously and be unbounded. More recent studies have considered synapses that have a limited number of discrete states. In dynamical models of such synapses, old memories are automatically overwritten by new memories, and it has been previously difficult to optimize performance using standard capacity measures, for stronger learning typically implies faster forgetting. Here, we propose an information theoretic measure of storage capacity of such forgetting systems, and use this to optimize the learning rules. We find that for parameters comparable to those found in biology, capacity of discrete synapses is similar to that of unbounded, continuous synapses, provided the number of synapses per neuron is limited. Our findings are relevant for experiments investigating the precise nature of synaptic changes during learning, and also pave the way for further work on building biologically realistic memory models.
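The overwriting behavior that motivates this capacity measure can be simulated directly: with binary synapses, each new memory rewrites a random fraction of the weights, so stronger learning stores recent memories better but erases older ones faster. Pattern sizes and update probabilities below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
N_SYN, N_MEM = 10000, 200   # synapses per neuron and number of stored memories

def signal_decay(p_update):
    """Store a sequence of random binary patterns; each synapse adopts the new
    pattern's value with probability p_update. Return the overlap between the
    final weights and memories of increasing age (0.5 = chance)."""
    w = rng.integers(0, 2, N_SYN)
    memories = rng.integers(0, 2, (N_MEM, N_SYN))
    for m in memories:
        flip = rng.random(N_SYN) < p_update
        w = np.where(flip, m, w)
    ages = [1, 10, 50, 100]
    return [np.mean(w == memories[N_MEM - age]) for age in ages]

for p in (0.5, 0.05):
    overlaps = signal_decay(p)
    print(f"p_update={p}: overlap at ages 1/10/50/100 = "
          + "/".join(f"{o:.3f}" for o in overlaps))
# Strong learning (large p_update) stores recent memories well but erases
# old ones quickly; weak learning forgets slowly but stores weakly -- the
# trade-off the information-theoretic capacity measure is built to resolve.
```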
N-Methyl-d-aspartic acid (NMDA) receptors are widely expressed in the brain and are critical for many forms of synaptic plasticity. Subtypes of the NMDA receptor NR2 subunit are differentially expressed during development; in the forebrain, the NR2B receptor is dominant early in development, and later both NR2A and NR2B are expressed. In heterologous expression systems, NR2A-containing receptors open more reliably and show much faster opening and closing kinetics than do NR2B-containing receptors. However, conflicting data, showing similar open probabilities, exist for receptors expressed in neurons. Similarly, studies of synaptic plasticity have produced divergent results, with some showing that only NR2A-containing receptors can drive long-term potentiation and others showing that either subtype is capable of driving potentiation. In order to address these conflicting results as well as open questions about the number and location of functional receptors in the synapse, we constructed a Monte Carlo model of glutamate release, diffusion, and binding to NMDA receptors and of receptor opening and closing as well as a model of the activation of calcium-calmodulin kinase II, an enzyme critical for induction of synaptic plasticity, by NMDA receptor-mediated calcium influx. Our results suggest that the conflicting data concerning receptor open probabilities can be resolved, with NR2A- and NR2B-containing receptors having very different opening probabilities. They also support the conclusion that receptors containing either subtype can drive long-term potentiation. We are also able to estimate the number of functional receptors at a synapse from experimental data. Finally, in our models, the opening of NR2B-containing receptors is highly dependent on the location of the receptor relative to the site of glutamate release whereas the opening of NR2A-containing receptors is not. These results help to clarify the previous findings and suggest future experiments to address open questions concerning NMDA receptor function.
Information processing in the brain is carried out by networks of neurons connected by synapses. Synapses can change strength, allowing these networks to adapt and learn, in a process known as synaptic plasticity. At a synapse, an electrical signal in one neuron is converted into a chemical signal, carried by a neurotransmitter, which is in turn converted into electrical and chemical signals in another neuron by specialized proteins called receptors. One such protein, the N-methyl-d-aspartic acid (NMDA) receptor, is particularly important for plasticity, due to its ability to detect the voltage of the cell receiving the neurotransmitter signal and to the fact that it allows calcium, an important signaling molecule, to enter the cell. Here we use computational modeling to investigate the role of one part of the NMDA receptor: the NR2 subunit. The subunit has various forms, and which of these forms are present in the NMDA receptor can strongly affect the kinetics and other properties of the receptor. We show that, along with changing the kinetics of the receptor, changing the NR2 subunit affects the reliability of the receptor, its ability to respond to large stimuli, and its spatial response properties. These results have implications for synaptic transmission and plasticity.
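As a rough illustration of how subunit-dependent kinetics translate into different open probabilities, the sketch below integrates a drastically simplified three-state gating scheme for a brief glutamate pulse. The scheme and every rate constant are invented placeholders chosen only to contrast "fast" (NR2A-like) and "slow" (NR2B-like) gating; they are far simpler than the Monte Carlo model described above and are not the paper's fitted values.

```python
# Simplified three-state scheme, Closed <-> Bound <-> Open, driven by
# an exponentially decaying glutamate pulse, integrated with Euler.
import numpy as np

def peak_open_prob(k_on, k_off, beta, alpha, dt=1e-5, t_end=0.2):
    glu0, tau_glu = 1e-3, 1.2e-3          # ~1 mM pulse, ~1.2 ms decay
    C, B, O, peak = 1.0, 0.0, 0.0, 0.0
    for step in range(int(t_end / dt)):
        glu = glu0 * np.exp(-step * dt / tau_glu)
        dC = k_off * B - k_on * glu * C   # unbinding vs. binding
        dO = beta * B - alpha * O         # channel opening vs. closing
        dB = -(dC + dO)                   # conservation: C + B + O = 1
        C, B, O = C + dt * dC, B + dt * dB, O + dt * dO
        peak = max(peak, O)
    return peak

# k_on in 1/(M*s); other rates in 1/s (illustrative assumptions).
nr2a_like = peak_open_prob(k_on=5e6, k_off=60.0, beta=300.0, alpha=150.0)
nr2b_like = peak_open_prob(k_on=5e6, k_off=30.0, beta=50.0, alpha=100.0)
print(f"peak open probability: NR2A-like {nr2a_like:.2f}, NR2B-like {nr2b_like:.2f}")
```

Even this caricature reproduces the qualitative point: with a faster, more open-favoring gating step, the NR2A-like receptor reaches a much higher peak open probability during the brief glutamate transient than the NR2B-like receptor.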
Reward-modulated spike-timing-dependent plasticity (STDP) has recently emerged as a candidate for a learning rule that could explain how behaviorally relevant adaptive changes in complex networks of spiking neurons could be achieved in a self-organizing manner through local synaptic plasticity. However, the capabilities and limitations of this learning rule could so far only be tested through computer simulations. This article provides tools for an analytic treatment of reward-modulated STDP, which allows us to predict under which conditions reward-modulated STDP will achieve a desired learning effect. These analytical results imply that neurons can learn through reward-modulated STDP to classify not only spatial but also temporal firing patterns of presynaptic neurons. They can also learn to respond to specific presynaptic firing patterns with particular spike patterns. Finally, the resulting learning theory predicts that even difficult credit-assignment problems, where it is very hard to tell which synaptic weights should be modified in order to increase the global reward for the system, can be solved in a self-organizing manner through reward-modulated STDP. This yields an explanation for a fundamental experimental result on biofeedback in monkeys by Fetz and Baker, in which monkeys were rewarded for increasing the firing rate of a particular cortical neuron and were able to solve this extremely difficult credit-assignment problem. Our model for this experiment relies on a combination of reward-modulated STDP with variable spontaneous firing activity. Hence, it also provides a possible functional explanation for trial-to-trial variability, which is characteristic of cortical networks of neurons but has no analogue in currently existing artificial computing systems. In addition, our model demonstrates that reward-modulated STDP can be applied to all synapses in a large recurrent neural network without endangering the stability of the network.
A major open problem in computational neuroscience is how learning, i.e., behaviorally relevant modifications in the central nervous system, can be explained on the basis of experimental data on synaptic plasticity. Spike-timing-dependent plasticity (STDP) is a rule for changes in the strength of an individual synapse that is supported by experimental data from a variety of species. However, it is not clear how this synaptic plasticity rule can produce meaningful modifications in networks of neurons. Only if one takes into account that the consolidation of synaptic plasticity requires a third signal, such as a change in the concentration of a neuromodulator (which might, for example, be related to rewards or expected rewards), can meaningful changes in the structure of networks of neurons occur. We provide in this article an analytical foundation for such reward-modulated versions of STDP that predicts when this type of synaptic plasticity can produce functionally relevant changes in networks of neurons. In particular, we show that seemingly inexplicable experimental data on biofeedback, where a monkey learned to increase the firing rate of an arbitrarily chosen neuron in the motor cortex, can be explained on the basis of this new learning theory.
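The core mechanism described above, standard STDP feeding an eligibility trace that is converted into a weight change only when a neuromodulatory reward signal arrives, can be sketched in a few lines. The trace-based update, the Poisson spike statistics, and all parameter values below are illustrative assumptions, and the reward here is drawn at random purely to expose the gating; in an actual task it would depend on the animal's or network's behavior.

```python
# Minimal single-synapse sketch of reward-modulated STDP: pre/post
# spike coincidences update an eligibility trace, and the weight
# changes only when a reward signal gates that trace.
import numpy as np

rng = np.random.default_rng(0)
dt, t_end = 1e-3, 10.0                    # time step and duration (s)
tau_stdp, tau_elig = 0.02, 0.5            # trace time constants (s)
a_plus, a_minus = 0.010, 0.012            # STDP amplitudes (a.u.)
w, elig, pre_trace, post_trace = 0.5, 0.0, 0.0, 0.0

for _ in range(int(t_end / dt)):
    pre = rng.random() < 10 * dt          # ~10 Hz Poisson pre spikes
    post = rng.random() < 10 * dt         # ~10 Hz Poisson post spikes
    pre_trace += -dt * pre_trace / tau_stdp + pre
    post_trace += -dt * post_trace / tau_stdp + post
    # STDP feeds the eligibility trace instead of the weight directly:
    elig += (-dt * elig / tau_elig
             + a_plus * pre_trace * post - a_minus * post_trace * pre)
    # Sparse reward, random here only to demonstrate the gating.
    reward = 1.0 if rng.random() < 0.5 * dt else 0.0
    w = float(np.clip(w + reward * elig, 0.0, 1.0))

print(f"final weight: {w:.3f}")
```

The key design point is the separation of timescales: the fast STDP traces record which spike pairings occurred, the slower eligibility trace holds that information until a delayed reward arrives, and only the product of reward and eligibility moves the weight, which is what makes credit assignment over delays possible.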
Computational modeling of neuronal morphology is a powerful tool for understanding developmental processes and structure-function relationships. We present a multifaceted approach based on stochastic sampling of morphological measures from digital reconstructions of real cells. We examined how dendritic elongation, branching, and taper are controlled by three morphometric determinants: Branch Order, Radius, and Path Distance from the soma. Virtual dendrites were simulated starting from 3,715 neuronal trees reconstructed in 16 different laboratories, including morphological classes as diverse as spinal motoneurons and dentate granule cells. Several emergent morphometrics were used to compare real and virtual trees. Relating model parameters to Branch Order best constrained the number of terminations for most morphological classes, except pyramidal cell apical trees, which were better described by a dependence on Path Distance. In contrast, bifurcation asymmetry was best constrained by Radius for apical trees, but by Path Distance for basal trees. All determinants showed similar performance in capturing total surface area, while surface area asymmetry was best determined by Path Distance. Grouping by other characteristics, such as size, asymmetry, arborizations, or animal species, showed smaller differences than those observed between apical and basal trees, pointing to the biological importance of this separation. Hybrid models using combinations of the determinants confirmed these trends and allowed a detailed characterization of morphological relations. The differential findings between morphological groups suggest different underlying developmental mechanisms. By comparing the effects of several morphometric determinants on the simulation of different neuronal classes, this approach sheds light on possible growth mechanism variations responsible for the observed neuronal diversity.
Neurons in the brain have a variety of complex arbor shapes that help determine both their interconnectivity and their functional roles. Molecular biology is beginning to uncover important details on the development of these tree-like structures, but how and why vastly different shapes arise is still largely unknown. We developed a novel set of computer models of branching in which measurements of real nerve cell structures, digitally traced from microscopic imaging, are resampled to create virtual trees. The rules that produce virtual trees most similar to the real data support specific hypotheses regarding development. Surprisingly, the arborizations that differed most in their optimal rules were found on opposite sides of the same type of neuron, namely the apical and basal trees of pyramidal cells. The details of the rules suggest that pyramidal cell trees may respond in unique and complex ways to their external environment. By better understanding how these trees are formed in the brain, we can learn more about their normal function and why they are often malformed in neurological diseases.
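The basic idea of growing virtual trees from statistics conditioned on a morphometric determinant can be illustrated with a toy recursion on Branch Order. The branching probabilities below are invented for illustration; the actual models sample such measures from the digitally reconstructed neurons.

```python
# Toy stochastic morphology sampling: at each branch order, a segment
# either bifurcates or terminates with an order-conditioned probability.
import random

random.seed(1)
P_BRANCH = (0.9, 0.7, 0.5, 0.3, 0.1)      # bifurcation prob. by order (assumed)

def n_tips(order=0):
    """Grow one virtual subtree; return its number of terminal tips."""
    p = P_BRANCH[min(order, len(P_BRANCH) - 1)]
    if random.random() < p:               # bifurcate into two daughters
        return n_tips(order + 1) + n_tips(order + 1)
    return 1                              # terminate this branch

trees = [n_tips() for _ in range(1000)]
print(f"mean terminal tips per virtual tree: {sum(trees) / len(trees):.2f}")
```

Swapping the conditioning variable, for example making the bifurcation probability a function of path distance from the soma or of branch radius instead of branch order, is exactly the kind of model comparison that distinguishes apical from basal trees in the study above.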
Chemical synapses transmit information via the release of neurotransmitter-filled vesicles from the presynaptic terminal. Using computational modeling, we predict that the limited availability of neurotransmitter resources, in combination with the spontaneous release of vesicles, limits the maximum degree of enhancement of synaptic transmission. This gives rise to an optimal tuning of the release probability that depends on the number of active zones. There is strong experimental evidence that astrocytes that enwrap synapses can modulate the probabilities of vesicle release through bidirectional signaling and hence regulate synaptic transmission. For low-fidelity hippocampal synapses, which typically have only one or two active zones, the predicted optimal values lie close to those determined by experimentally measured astrocytic feedback, suggesting that astrocytes optimize the synaptic transmission of information.
Release of chemical (neurotransmitter)-filled vesicles at neuronal junctions called synapses leads to transmission of information between neurons. In a successful synaptic transmission, a voltage spike (action potential) generated by a presynaptic neuron initiates neurotransmitter vesicle release and leads to a small current in the postsynaptic neuron. For many synapses in the central nervous system, the probability that a neurotransmitter vesicle is released in response to an action potential is conspicuously small, raising the question of whether transmission failures can in any way be advantageous. Apart from “induced” vesicle release (in response to an action potential), vesicles are also released asynchronously (in the absence of an action potential). An induced release probability that is too small samples the information poorly, as many of the incoming action potentials do not result in a postsynaptic current response. Maximizing induced release in order to maximize information transmission at a synapse, however, has the undesirable consequence of increasing asynchronous release; in addition, both types of release draw from the same neurotransmitter resource pool. A large overall release rate, comprising both induced and asynchronous vesicle release, can thus suppress synaptic transmission via either depletion of neurotransmitter resources or desensitization of postsynaptic receptors. In this paper, we propose that the competing dynamics of induced and asynchronous vesicle release give rise to an optimal release probability. Further, by comparing experimental data on astrocyte-enhanced synaptic transmission with simulations, we argue that synapses enwrapped by astrocytes operate close to our predicted optimum. This optimality is achieved through a closed-loop control circuitry that involves the presynaptic neuron and the synaptic astrocyte.
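The competition described above can be caricatured in a few lines: if asynchronous (noise) release grows with the induced release probability and both modes deplete the same vesicle pool, the information transmitted per time window peaks at an intermediate release probability. The binary-channel formulation, the assumed quadratic coupling of asynchronous release to the induced release probability, and every parameter value below are illustrative assumptions, not the paper's calibrated model.

```python
# Toy model of an optimal induced release probability p: induced
# release carries the signal, asynchronous release adds noise, and
# both draw on a shared, depletable vesicle pool.
import numpy as np

def binary_entropy(x):
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def mutual_info_bits(q_signal, q_noise, p_ap=0.5):
    # Channel: "was a vesicle released in this window?" given whether
    # an action potential arrived in the window.
    p_rel = p_ap * q_signal + (1 - p_ap) * q_noise
    return (binary_entropy(p_rel)
            - p_ap * binary_entropy(q_signal)
            - (1 - p_ap) * binary_entropy(q_noise))

p = np.linspace(0.01, 0.99, 99)           # induced release probability
tau_rec, rate_ap, c_async, window = 0.5, 10.0, 20.0, 0.01  # assumed
async_rate = c_async * p ** 2             # assumed coupling to p
availability = 1.0 / (1.0 + tau_rec * (p * rate_ap + async_rate))
q_s = p * availability                    # induced release succeeds
q_n = 1.0 - np.exp(-async_rate * availability * window)  # async noise
info = mutual_info_bits(q_s, q_n)
print(f"optimal induced release probability ~ {p[np.argmax(info)]:.2f}")
```

In this sketch the optimum arises because raising p beyond a point buys little extra signal (the depleted pool caps successful induced release) while steadily increasing the asynchronous noise floor, which is the qualitative trade-off the paper attributes to astrocyte-regulated synapses.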