This paper shows that the various computations underlying spatial cognition can be implemented using statistical inference in a single probabilistic model. Inference is implemented using a common set of ‘lower-level’ computations involving forward and backward inference over time. For example, to estimate where you are in a known environment, forward inference is used to optimally combine location estimates from path integration with those from sensory input. To decide which way to turn to reach a goal, forward inference is used to compute the likelihood of reaching that goal under each option. To work out which environment you are in, forward inference is used to compute the likelihood of sensory observations under the different hypotheses. For reaching sensory goals that require a chaining together of decisions, forward inference can be used to compute a state trajectory that will lead to that goal, and backward inference to refine the route and estimate control signals that produce the required trajectory. We propose that these computations are reflected in recent findings of pattern replay in the mammalian brain. Specifically, we propose that theta sequences reflect decision making, theta flickering reflects model selection, and remote replay reflects route and motor planning. We also propose a mapping of the above computational processes onto lateral and medial entorhinal cortex and hippocampus.
The ability of mammals to navigate is well studied, both behaviourally and in terms of the underlying neurophysiology. Navigation is also a well-studied topic in computational fields such as machine learning and signal processing. However, studies in computational neuroscience, which draw together these findings, have mainly focused on specific navigation tasks such as spatial localisation. In this paper, we propose a single probabilistic model which can support multiple tasks, from working out which environment you are in, to computing a sequence of motor commands that will take you to a sensory goal, such as being warm or viewing a particular object. We describe how these tasks can be implemented using a common set of lower-level algorithms that implement ‘forward and backward inference over time’. We relate these algorithms to recent findings in animal electrophysiology, where sequences of hippocampal cell activations are observed before, during or after a navigation task, and these sequences are played either forwards or backwards. Additionally, one function of the hippocampus that is preserved across mammals is that it integrates spatial and non-spatial information, and we propose how the forward and backward inference algorithms naturally map onto this architecture.
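The ‘forward inference’ used above to combine path integration with sensory input is the standard Bayesian filter recursion: predict with a motion model, then correct with a sensory likelihood. Below is a minimal sketch on a discretised ring of positions; the transition and cue probabilities are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states = 20                        # discretised positions on a ring
# Hypothetical motion model: path integration says "move +1 cell",
# with some slippage to neighbouring cells.
T = np.zeros((n_states, n_states))
for s in range(n_states):
    T[(s + 1) % n_states, s] = 0.8   # intended step
    T[s, s] = 0.1                    # undershoot
    T[(s + 2) % n_states, s] = 0.1   # overshoot

# Hypothetical sensory model: each position emits one of 4 cues,
# observed correctly with probability 0.7.
cues = rng.integers(0, 4, size=n_states)
def likelihood(obs):
    return np.where(cues == obs, 0.7, 0.3 / 3)

belief = np.full(n_states, 1.0 / n_states)   # uniform prior over position
true_pos = 5
for _ in range(15):
    true_pos = (true_pos + 1) % n_states
    obs = cues[true_pos] if rng.random() < 0.7 else int(rng.integers(0, 4))
    belief = T @ belief                      # predict (path integration)
    belief *= likelihood(obs)                # correct (sensory input)
    belief /= belief.sum()

print("estimated position:", int(np.argmax(belief)), "true:", true_pos)
```

The same forward recursion, run with the belief over environments rather than positions, gives the model-selection computation the abstract mentions.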
Most neurons in peripheral sensory pathways initially respond vigorously when a preferred stimulus is presented, but adapt as stimulation continues. It is unclear how this phenomenon affects stimulus coding in the later stages of sensory processing. Here, we show that a temporally sparse and reliable stimulus representation develops naturally in sequential stages of a sensory network with adapting neurons. As a modeling framework we employ a mean-field approach together with an adaptive population density treatment, accompanied by numerical simulations of spiking neural networks. We find that cellular adaptation plays a critical role in the dynamic reduction of the trial-by-trial variability of cortical spike responses by transiently suppressing self-generated fast fluctuations in the cortical balanced network. This provides an explanation for a widespread cortical phenomenon by a simple mechanism. We further show that in the insect olfactory system cellular adaptation is sufficient to explain the emergence of the temporally sparse and reliable stimulus representation in the mushroom body. Our results reveal a generic, biophysically plausible mechanism that can explain the emergence of a temporally sparse and reliable stimulus representation within a sequential processing architecture.
Many lines of evidence suggest that few spikes carry the relevant stimulus information at later stages of sensory processing. Yet mechanisms for the emergence of a robust and temporally sparse sensory representation remain elusive. Here, we introduce an idea in which a temporally sparse and reliable stimulus representation develops naturally in spiking networks. It combines principles of signal propagation with the commonly observed mechanism of neuronal firing rate adaptation. Using a stringent numerical and mathematical approach, we show how a dense rate code at the periphery translates into a temporally sparse representation in the cortical network. At the same time, it dynamically suppresses trial-by-trial variability, matching experimental observations in sensory cortices. Computational modelling of the insect olfactory pathway suggests that the same principle underlies the prominent example of temporally sparse coding in the mushroom body. Our results reveal a computational principle that relates neuronal firing rate adaptation to temporally sparse coding and variability suppression in nervous systems.
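The core mechanism, firing rate adaptation turning a sustained stimulus into a transient response, can be illustrated with a single leaky integrate-and-fire neuron carrying an adaptation current. This is a toy sketch with assumed parameters, not the paper's mean-field or network model.

```python
import numpy as np

def lif_adapt(I, dt=0.1, tau_m=10.0, tau_a=200.0, b=0.15, v_th=1.0):
    """Leaky integrate-and-fire neuron with a slow adaptation current.

    I: input drive per time step (array). Returns spike times (ms)."""
    v, a, spikes = 0.0, 0.0, []
    for k, i_ext in enumerate(I):
        v += dt / tau_m * (-v + i_ext - a)
        a += dt / tau_a * (-a)
        if v >= v_th:
            spikes.append(k * dt)
            v = 0.0
            a += b          # each spike increments the adaptation current
    return spikes

dt = 0.1
t = np.arange(0, 1000, dt)                       # 1 s, in ms
I = np.where((t > 100) & (t < 900), 1.5, 0.0)    # sustained step stimulus
spikes = np.array(lif_adapt(I, dt=dt))

early = np.sum((spikes > 100) & (spikes < 300))  # onset response
late = np.sum((spikes > 700) & (spikes < 900))   # adapted response
print("spikes in first 200 ms of stimulus:", early)
print("spikes in last 200 ms of stimulus:", late)
```

The dense onset response thins out as the adaptation current accumulates, leaving the temporally sparse representation described above.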
Cancellation of redundant information is a highly desirable feature of sensory systems, since it would potentially lead to a more efficient detection of novel information. However, biologically plausible mechanisms responsible for such selective cancellation, and especially those robust to realistic variations in the intensity of the redundant signals, are mostly unknown. In this work, we study, via in vivo experimental recordings and computational models, the behavior of a cerebellar-like circuit in the weakly electric fish which is known to perform cancellation of redundant stimuli. We experimentally observe contrast invariance in the cancellation of spatially and temporally redundant stimuli in such a system. Our model, which incorporates heterogeneously-delayed feedback, bursting dynamics and burst-induced STDP, is in agreement with our in vivo observations. In addition, the model gives insight on the activity of granule cells and parallel fibers involved in the feedback pathway, and provides a strong prediction on the parallel fiber potentiation time scale. Finally, our model predicts the existence of an optimal learning contrast of around 15%, a level commonly experienced by interacting fish.
The ability to cancel redundant information is an important feature of many sensory systems. Cancellation mechanisms in neural systems, however, are not well understood, especially when considering realistic conditions such as signals with different intensities. In this work, we study, employing experimental recordings and computational models, a cerebellar-like circuit in the brain of the weakly electric fish which is able to perform such a cancellation. We observe that in vivo recorded neurons in this circuit display a contrast-invariant cancellation of redundant stimuli. We employ a mathematical model to explain this phenomenon, and also to gain insight into aspects of the circuit's dynamics that have not been experimentally measured to date. Interestingly, our model predicts that time-averaged contrast levels of around 15%, which are commonly experienced by interacting fish, would shape the circuit to behave as observed experimentally.
Nicotine exerts its reinforcing action by stimulating nicotinic acetylcholine receptors (nAChRs) and boosting dopamine (DA) output from the ventral tegmental area (VTA). Recent data have led to a debate about the principal pathway of nicotine action: direct stimulation of the DAergic cells through nAChR activation, or disinhibition mediated through desensitization of nAChRs on GABAergic interneurons. We use a computational model of the VTA circuitry and nAChR function to shed light on this issue. Our model illustrates that the α4β2-containing nAChRs either on DA or GABA cells can mediate the acute effects of nicotine. We account for in vitro as well as in vivo data, and predict the conditions necessary for either direct stimulation or disinhibition to be at the origin of DA activity increases. We propose key experiments to disentangle the contribution of both mechanisms. We show that the rate of endogenous acetylcholine input crucially determines the evoked DA response for both mechanisms. Together our results delineate the mechanisms by which the VTA mediates the acute rewarding properties of nicotine and suggest an acetylcholine dependence hypothesis for nicotine reinforcement.
Nicotine is the major addictive substance in tobacco smoke. Nicotine exerts its control over neural circuits through nicotinic acetylcholine receptors that normally respond to endogenous acetylcholine. Activation of dopamine neurons in the mesolimbic dopaminergic circuits, which signal motivational properties of actions and stimuli, is at the heart of mediating nicotine reward and dependence. However, major questions have remained unsettled over the precise mechanisms by which nicotine usurps dopaminergic signaling: through receptor activation on dopamine neurons or through receptor desensitization on local inhibitory interneurons. Here we reconcile this debate by showing that both mechanisms are possible. Most notably we present a novel hypothesis suggesting that the mechanisms for nicotine action are state-dependent; they are controlled by the rate of the endogenous cholinergic input to the dopaminergic circuits.
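The competition between receptor activation and desensitization that underlies both candidate mechanisms can be caricatured with a simple kinetic scheme: activation follows agonist concentration quickly, while desensitization accumulates slowly and gates how many receptors can respond. All rate constants below are assumptions for illustration, not those of the paper's VTA model.

```python
import numpy as np

def simulate(agonist, dt=0.01, tau_d=50.0):
    """Schematic nAChR kinetics: fast activation, slow desensitization.

    agonist: concentration time course (arbitrary units).
    Returns arrays of receptor activation and desensitized fraction."""
    EC50_act, EC50_des = 1.0, 0.1      # desensitization at lower doses (assumed)
    d = 0.0
    act, des = [], []
    for c in agonist:
        a_inf = c / (c + EC50_act)         # fast activation (quasi-steady)
        d_inf = c / (c + EC50_des)         # steady-state desensitization
        d += dt / tau_d * (d_inf - d)      # slow desensitization dynamics
        act.append(a_inf * (1.0 - d))      # only sensitized receptors open
        des.append(d)
    return np.array(act), np.array(des)

dt = 0.01
t = np.arange(0, 300, dt)
agonist = np.where(t > 50, 1.0, 0.0)       # sustained nicotine application
act, des = simulate(agonist, dt=dt)

peak = act.max()
late = act[-1]
print(f"peak activation {peak:.2f}, late activation {late:.2f}")
```

Applied to receptors on DA cells this transient gives direct stimulation; applied to receptors on GABA cells, the late desensitized phase gives disinhibition, which is why both pathways are candidates.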
In this paper, we study the dynamics of a quadratic integrate-and-fire neuron, spiking in the gamma (30–100 Hz) range, coupled to a delta/theta frequency (1–8 Hz) neural oscillator. Using analytical and semianalytical methods, we were able to derive characteristic spiking times for the system in two distinct regimes (depending on parameter values): one regime where the gamma neuron is intrinsically oscillating in the absence of theta input, and a second one in which gamma spiking is directly gated by theta input, i.e., windows of gamma activity alternate with silence periods depending on the underlying theta phase. In the former case, we transform the equations such that the system becomes analogous to the Mathieu differential equation. By solving this equation, we can compute numerically the time to the first gamma spike, and then use singular perturbation theory to find successive spike times. On the other hand, in the excitable condition, we make direct use of singular perturbation theory to obtain an approximation of the time to first gamma spike, and then extend the result to calculate ensuing gamma spikes in a recursive fashion. We thereby give explicit formulas for the onset and offset of gamma spike bursts during a theta cycle, and provide an estimation of the total number of spikes per theta cycle both for excitable and oscillator regimes.
Oscillations; PING; Dynamical systems; Geometric singular perturbation theory; Blow-up method; Spike times; Theta-gamma rhythms; Type I neuron; SNIC bifurcation
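The theta-gated (‘excitable’) regime analysed in the abstract above can be reproduced numerically in a few lines: a quadratic integrate-and-fire neuron whose constant drive is subthreshold, plus a sinusoidal theta-band current. The parameter values (time constant, drive, reset voltages) are illustrative assumptions, not the values used in the paper's analysis.

```python
import numpy as np

def qif_theta(I0=-0.5, A=1.0, f_theta=6.0, dt=1e-4, T=1.0,
              v_peak=50.0, v_reset=-50.0):
    """Quadratic integrate-and-fire neuron driven by a theta-band current.

    tau * dv/dt = v**2 + I0 + A*sin(2*pi*f_theta*t). With I0 < 0 the
    neuron is excitable, so spiking is gated by the theta phase."""
    tau = 0.005                      # membrane time scale (s), assumed
    v, spikes = 0.0, []
    for k in range(int(T / dt)):
        t = k * dt
        I = I0 + A * np.sin(2 * np.pi * f_theta * t)
        v += dt / tau * (v * v + I)
        if v >= v_peak:
            spikes.append(t)
            v = v_reset
    return np.array(spikes)

spikes = qif_theta()
# assign each spike to a theta phase: spikes should cluster in the
# depolarising half of the cycle, with silence in the other half
phase = (spikes * 6.0) % 1.0
print("spikes per second:", len(spikes))
print("fraction of spikes in first half of theta cycle:",
      float(np.mean(phase < 0.5)))
```

Setting `I0 > 0` instead puts the model in the intrinsically oscillating regime treated via the Mathieu equation.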
Several theories propose that the cortex implements an internal model to explain, predict, and learn about sensory data, but the nature of this model is unclear. One condition that could be highly informative here is Charles Bonnet syndrome (CBS), where loss of vision leads to complex, vivid visual hallucinations of objects, people, and whole scenes. CBS could be taken as an indication that there is a generative model in the brain, specifically one that can synthesise rich, consistent visual representations even in the absence of actual visual input. The processes that lead to CBS are poorly understood. Here, we argue that a model recently introduced in machine learning, the deep Boltzmann machine (DBM), could capture the relevant aspects of (hypothetical) generative processing in the cortex. The DBM carries both the semantics of a probabilistic generative model and of a neural network. The latter allows us to model a concrete neural mechanism that could underlie CBS, namely, homeostatic regulation of neuronal activity. We show that homeostatic plasticity could serve to make the learnt internal model robust against, e.g., degradation of sensory input, but overcompensate in the case of CBS, leading to hallucinations. We demonstrate how a wide range of features of CBS can be explained in the model and suggest a potential role for the neuromodulator acetylcholine. This work constitutes the first concrete computational model of CBS and the first application of the DBM as a model in computational neuroscience. Our results lend further credence to the hypothesis of a generative model in the brain.
The cerebral cortex is central to many aspects of cognition and intelligence in humans and other mammals, but our scientific understanding of the computational principles underlying cortical processing is still limited. We might gain insights by considering visual hallucinations, specifically in a pathology known as Charles Bonnet syndrome, where patients suffering from visual impairment experience hallucinatory images that rival the vividness and complexity of normal seeing. Such generation of rich internal imagery could naturally be accounted for by theories that posit that the cortex implements an internal generative model of sensory input. Perception then could entail the synthesis of internal explanations that are evaluated by testing whether what they predict is consistent with actual sensory input. Here, we take an approach from artificial intelligence that is based on similar ideas, the deep Boltzmann machine, use it as a model of generative processing in the cortex, and examine various aspects of Charles Bonnet syndrome in computer simulations. In particular, we explain why the synthesis of internal explanations, which is so useful for perception, goes astray in the syndrome as neurons overcompensate for the lack of sensory input by increasing spontaneous activity.
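The proposed mechanism, homeostatic regulation overcompensating for lost input, can be captured in a toy model far simpler than a deep Boltzmann machine: a single sigmoidal unit whose bias is adjusted toward a target activity level. The drive, target, and learning-rate values below are assumptions for illustration.

```python
import numpy as np

def run(input_on, steps=20000, eta=0.01, target=0.1, b0=-6.0):
    """A sigmoidal unit whose bias is homeostatically regulated toward a
    target activity level. When sensory input is removed, the bias creeps
    up until the unit becomes spontaneously active, a toy analogue of the
    internally generated activity proposed to underlie the hallucinations."""
    b = b0
    rates = []
    for _ in range(steps):
        x = 2.0 if input_on else 0.0            # sensory drive (assumed)
        r = 1.0 / (1.0 + np.exp(-(x + b)))      # activity of the unit
        rates.append(r)
        b += eta * (target - r)                 # homeostatic regulation
    return rates

rates_deprived = run(input_on=False)
print(f"activity just after input loss: {rates_deprived[0]:.4f}")
print(f"activity after homeostatic adaptation: {rates_deprived[-1]:.4f}")
```

In the full DBM the same creep restores activity in the hidden layers, which then synthesise structured imagery rather than a bare rate.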
Although long-term addicts explicitly want to quit, they find themselves powerless to resist drugs, despite knowing that drug-taking may be a harmful course of action. Such inconsistency between the explicit knowledge of negative consequences and the compulsive behavioral patterns represents a cognitive/behavioral conflict that is a central characteristic of addiction. Neurobiologically, differential cue-induced activity in distinct striatal subregions, as well as the dopamine connectivity spiraling from ventral striatal regions to the dorsal regions, play critical roles in compulsive drug seeking. However, the functional mechanism that integrates these neuropharmacological observations with the above-mentioned cognitive/behavioral conflict is unknown. Here we provide a formal computational explanation for the drug-induced cognitive inconsistency that is apparent in the addicts' “self-described mistake”. We show that addictive drugs gradually produce a motivational bias toward drug-seeking at low-level habitual decision processes, despite the low abstract cognitive valuation of this behavior. This pathology emerges within the hierarchical reinforcement learning framework when chronic exposure to the drug pharmacologically produces pathologically persistent phasic dopamine signals. Thereby the drug hijacks the dopaminergic spirals that cascade the reinforcement signals down the ventro-dorsal cortico-striatal hierarchy. Neurobiologically, our theory accounts for rapid development of drug cue-elicited dopamine efflux in the ventral striatum and a delayed response in the dorsal striatum. Our theory also shows how this response pattern depends critically on the dopamine spiraling circuitry. Behaviorally, our framework explains gradual insensitivity of drug-seeking to drug-associated punishments, the blocking phenomenon for drug outcomes, and the persistent preference for drugs over natural rewards by addicts.
The model suggests testable predictions and beyond that, sets the stage for a view of addiction as a pathology of hierarchical decision-making processes. This view is complementary to the traditional interpretation of addiction as interaction between habitual and goal-directed decision systems.
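One simple formalisation of a drug-induced motivational bias, in the spirit of the non-compensable dopamine signal of Redish's 2004 model and a deliberate simplification of the hierarchical framework described above, is a TD learner whose prediction error cannot fall below a drug-induced floor. All numbers are illustrative.

```python
def learn(reward, punishment, drug, trials=500, alpha=0.1, D=0.05):
    """TD value learning for a single action, where drug consumption adds
    a non-compensable dopamine surge D to the prediction error."""
    V = 0.0
    for _ in range(trials):
        delta = reward - punishment - V
        if drug:
            delta = max(delta, 0.0) + D   # dopamine signal cannot go negative
        V += alpha * delta
    return V

V_natural = learn(reward=1.0, punishment=0.8, drug=False)
V_drug = learn(reward=1.0, punishment=0.8, drug=True)
print(f"learned value, natural reward with punishment: {V_natural:.2f}")
print(f"learned value, drug reward with punishment:    {V_drug:.2f}")
```

The natural value settles at the net outcome, whereas the drug value keeps growing and remains insensitive to the punishment, mirroring the behavioural signature described above; the hierarchical model distributes this bias down the ventro-dorsal spiral.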
The functional significance of correlations between action potentials of neurons is still a matter of vivid debate. In particular, it is presently unclear how much synchrony is caused by afferent synchronized events and how much is intrinsic due to the connectivity structure of cortex. The available analytical approaches based on the diffusion approximation do not allow one to model spike synchrony, preventing a thorough analysis. Here we theoretically investigate to what extent common synaptic afferents and synchronized inputs each contribute to correlated spiking on a fine temporal scale between pairs of neurons. We employ direct simulation and extend earlier analytical methods based on the diffusion approximation to pulse-coupling, allowing us to introduce precisely timed correlations in the spiking activity of the synaptic afferents. We investigate the transmission of correlated synaptic input currents by pairs of integrate-and-fire model neurons, so that the same input covariance can be realized by common inputs or by spiking synchrony. We identify two distinct regimes: In the limit of low correlation, linear perturbation theory accurately determines the correlation transmission coefficient, which is typically smaller than unity, but increases sensitively even for weakly synchronous inputs. In the limit of high input correlation, in the presence of synchrony, a qualitatively new picture arises. As the non-linear neuronal response becomes dominant, the output correlation becomes higher than the total correlation in the input. A transmission coefficient larger than unity is a direct consequence of non-linear neural processing in the presence of noise, elucidating how synchrony-coded signals benefit from these generic properties present in cortical networks.
Whether spike timing conveys information in cortical networks or whether the firing rate alone is sufficient is a matter of controversial debate, touching the fundamental question of how the brain processes, stores, and conveys information. If the firing rate alone is the decisive signal used in the brain, correlations between action potentials are just an epiphenomenon of cortical connectivity, where pairs of neurons share a considerable fraction of common afferents. Due to membrane leakage, small synaptic amplitudes and the non-linear threshold, nerve cells exhibit lossy transmission of correlation originating from shared synaptic inputs. However, the membrane potential of cortical neurons often displays non-Gaussian fluctuations, caused by synchronized synaptic inputs. Moreover, synchronously active neurons have been found to reflect behavior in primates. In this work we therefore contrast the transmission of correlation due to shared afferents and due to synchronously arriving synaptic impulses for leaky neuron models. We not only find that neurons are highly sensitive to synchronous afferents, but that they can suppress noise on signals transmitted by synchrony, a computational advantage over rate signals.
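The ‘lossy transmission of correlation’ by a thresholding non-linearity can be demonstrated with a dichotomized-Gaussian stand-in for a pair of neurons sharing afferents; this is a deliberately simplified model, not the leaky integrate-and-fire neurons analysed in the paper, and the threshold value is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def output_correlation(rho_in, threshold=1.5, n=200_000):
    """Two threshold units driven by correlated Gaussian inputs
    (a stand-in for shared synaptic afferents). Returns the
    correlation coefficient of their binary spike outputs."""
    shared = rng.standard_normal(n)                # common afferent drive
    x1 = np.sqrt(rho_in) * shared + np.sqrt(1 - rho_in) * rng.standard_normal(n)
    x2 = np.sqrt(rho_in) * shared + np.sqrt(1 - rho_in) * rng.standard_normal(n)
    s1 = (x1 > threshold).astype(float)            # binary spike / no-spike
    s2 = (x2 > threshold).astype(float)
    return np.corrcoef(s1, s2)[0, 1]

for rho in (0.1, 0.3, 0.5):
    print(f"input correlation {rho:.1f} -> output correlation "
          f"{output_correlation(rho):.3f}")
```

The output correlation stays well below the input correlation, illustrating the sub-unity transmission coefficient for shared-input (Gaussian) correlations; reproducing the super-unity transmission for synchronous pulse packets requires the pulse-coupled model of the paper.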
The basal ganglia is a brain region critically involved in reinforcement learning and motor control. Synaptic plasticity in the striatum of the basal ganglia is a cellular mechanism implicated in learning and neuronal information processing. Therefore, understanding how different spatio-temporal patterns of synaptic input select for different types of plasticity is key to understanding learning mechanisms. In striatal medium spiny projection neurons (MSPN), both long term potentiation (LTP) and long term depression (LTD) require an elevation in intracellular calcium concentration; however, it is unknown how the post-synaptic neuron discriminates between different patterns of calcium influx. Using computer modeling, we investigate the hypothesis that the temporal pattern of stimulation can select for either endocannabinoid production (for LTD) or protein kinase C (PKC) activation (for LTP) in striatal MSPNs. We implement a stochastic model of the post-synaptic signaling pathways in a dendrite with one or more diffusionally coupled spines. The model is validated by comparison to experiments measuring endocannabinoid-dependent depolarization-induced suppression of inhibition. Using the validated model, simulations demonstrate that theta burst stimulation, which produces LTP, increases the activation of PKC as compared to 20 Hz stimulation, which produces LTD. The model prediction that PKC activation is required for theta burst LTP is confirmed experimentally. Using the ratio of PKC to endocannabinoid production as an index of plasticity direction, model simulations demonstrate that LTP exhibits spine level spatial specificity, whereas LTD is more diffuse. These results suggest that spatio-temporal control of striatal information processing employs these Gq coupled pathways.
Change in the strength of connections between brain cells in the basal ganglia is a mechanism implicated in learning and information processing. Learning to associate a sensory input or motor action with reward likely causes certain patterns of input to strengthen connections, a phenomenon known as long term potentiation (LTP), and other patterns of input to weaken those connections, known as long term depression (LTD). Both LTP and LTD require elevations in calcium, and a critical question is whether different patterns of input cause different patterns of calcium dynamics or activate different downstream molecules. To address this issue we develop a spatial, computational model of the signaling pathways in a dendrite with multiple spines. Model simulations show that stimulation patterns that produce LTP experimentally activate more protein kinase C than stimulation patterns that produce LTD. We experimentally confirm the model prediction that protein kinase C is required for LTP. The model also predicts that protein kinase C exhibits spatial specificity while endocannabinoids do not.
Systemic administration of nicotine increases dopaminergic (DA) neuron firing in the ventral tegmental area (VTA), which is thought to underlie nicotine reward. Here, we report that the medial prefrontal cortex (mPFC) plays a critical role in nicotine-induced excitation of VTA DA neurons. In chloral hydrate-anesthetized rats, extracellular single-unit recordings showed that VTA DA neurons exhibited two types of firing responses to systemic nicotine. After nicotine injection, the neurons with type-I response showed a biphasic early inhibition and later excitation, whereas the neurons with type-II response showed a monophasic excitation. The neurons with type-I, but not type-II, response exhibited pronounced slow oscillations (SO) in firing. Pharmacological or structural mPFC inactivation abolished SO and prevented systemic nicotine-induced excitation in the neurons with type-I, but not type-II, response, suggesting that these VTA DA neurons are functionally coupled to the mPFC and nicotine increases firing rate in these neurons in part through the mPFC. Systemic nicotine also increased the firing rate and SO in mPFC pyramidal neurons. mPFC infusion of a non-α7 nAChR antagonist mecamylamine blocked the excitatory effect of systemic nicotine on the VTA DA neurons with type-I response, but mPFC infusion of nicotine failed to excite these neurons. These results suggest that nAChR activation in the mPFC is necessary, but not sufficient, for systemic nicotine-induced excitation of VTA neurons. Finally, systemic injection of bicuculline prevented nicotine-induced firing alterations in the neurons with type-I response. We propose that the mPFC plays a critical role in systemic nicotine-induced excitation of VTA DA neurons.
nicotine; prefrontal cortex; ventral tegmental area; dopamine neuron; in vivo recording; slow oscillation
Local supra-linear summation of excitatory inputs occurring in pyramidal cell dendrites, the so-called dendritic spikes, results in independent spiking dendritic sub-units, which turn pyramidal neurons into two-layer neural networks capable of computing linearly non-separable functions, such as the exclusive OR. Other neuron classes, such as interneurons, may possess only a few independent dendritic sub-units, or only passive dendrites where input summation is purely sub-linear, and where dendritic sub-units are only saturating. To determine if such neurons can also compute linearly non-separable functions, we enumerate, for a given parameter range, the Boolean functions implementable by a binary neuron model with a linear sub-unit and either a single spiking or a saturating dendritic sub-unit. We then analytically generalize these numerical results to an arbitrary number of non-linear sub-units. First, we show that a single non-linear dendritic sub-unit, in addition to the somatic non-linearity, is sufficient to compute linearly non-separable functions. Second, we analytically prove that, with a sufficient number of saturating dendritic sub-units, a neuron can compute all functions computable with purely excitatory inputs. Third, we show that these linearly non-separable functions can be implemented with at least two strategies: one where a dendritic sub-unit is sufficient to trigger a somatic spike; another where somatic spiking requires the cooperation of multiple dendritic sub-units. We formally prove that implementing the latter architecture is possible with both types of dendritic sub-units whereas the former is only possible with spiking dendrites. Finally, we show how linearly non-separable functions can be computed by a generic two-compartment biophysical model and a realistic neuron model of the cerebellar stellate cell interneuron. 
Taken together, our results demonstrate that passive dendrites are sufficient to enable neurons to compute linearly non-separable functions.
Classical views on single neuron computation treat dendrites as mere collectors of inputs that are forwarded to the soma, summed linearly, and cause a spike output if the sum is sufficiently large. Such a single neuron model can only compute linearly separable input-output functions, representing a small fraction of all possible functions. Recent experimental findings show that in certain pyramidal cells excitatory inputs can be supra-linearly integrated within a dendritic branch, turning this branch into a spiking dendritic sub-unit. Neurons containing many of these dendritic sub-units can compute both linearly separable and linearly non-separable functions. Other neuron types, however, have dendrites which do not spike because the required voltage-gated channels are absent. These dendrites nevertheless sum excitatory inputs sub-linearly, turning branches into saturating sub-units. We wanted to test whether this type of non-linear summation is sufficient for a single neuron to compute linearly non-separable functions. Using a combination of Boolean algebra and biophysical modeling, we show that a neuron with a single non-linear dendritic sub-unit, whether spiking or saturating, is able to compute linearly non-separable functions. Thus, in principle, any neuron with a dendritic tree, even a passive one, can compute linearly non-separable functions.
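The claim that saturating sub-units suffice for linearly non-separable computation can be checked directly on a small example. The function (x1 OR x2) AND (x3 OR x4) is positive (monotone) yet linearly non-separable, and two saturating dendritic sub-units that cooperate at the soma implement it exactly; this mirrors the ‘cooperation of multiple sub-units’ strategy described above, with illustrative saturation and threshold values.

```python
from itertools import product

def neuron(x, saturation=1.0, theta=2.0):
    """Two saturating dendritic sub-units feeding a somatic threshold.
    Sub-unit 1 pools x1, x2; sub-unit 2 pools x3, x4. All weights are
    excitatory; saturation caps each sub-unit's contribution."""
    d1 = min(x[0] + x[1], saturation)
    d2 = min(x[2] + x[3], saturation)
    return int(d1 + d2 >= theta)

def target(x):
    # (x1 OR x2) AND (x3 OR x4): positive, yet linearly non-separable
    return int((x[0] or x[1]) and (x[2] or x[3]))

for x in product((0, 1), repeat=4):
    assert neuron(x) == target(x)
print("neuron implements (x1 v x2) ^ (x3 v x4) on all 16 input patterns")
```

Non-separability is easy to verify by hand: any linear rule would need w1 + w3 >= theta but 2*w1 < theta and 2*w3 < theta, which is contradictory; the saturation step is what evades this constraint.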
Spike timing-dependent plasticity (STDP) modifies synaptic strengths based on timing information available locally at each synapse. Despite this, it induces global structures within a recurrently connected network. We study such structures both through simulations and by analyzing the effects of STDP on pair-wise interactions of neurons. We show how conventional STDP acts as a loop-eliminating mechanism and organizes neurons into in- and out-hubs. Loop-elimination increases when depression dominates and turns into loop-generation when potentiation dominates. STDP with a shifted temporal window such that coincident spikes cause depression enhances recurrent connections and functions as a strict buffering mechanism that maintains a roughly constant average firing rate. STDP with the opposite temporal shift functions as a loop eliminator at low rates and as a potent loop generator at higher rates. In general, studying pairwise interactions of neurons provides important insights about the structures that STDP can produce in large networks.
The connectivity structure in neural networks reflects, at least in part, the long-term effects of synaptic plasticity mechanisms that underlie learning and memory. In one of the most widespread such mechanisms, spike-timing dependent plasticity (STDP), the temporal order of pre- and postsynaptic spiking across a synapse determines whether it is strengthened or weakened. Therefore, the synapses are modified solely based on local information through STDP. However, STDP can give rise to a variety of global connectivity structures in an interconnected neural network. Here, we provide an analytical framework that can predict the global structures that arise from STDP in such a network. The analytical technique we develop is actually quite simple, and involves the study of two interconnected neurons receiving inputs from their surrounding network. Following analytical calculations for a variety of different STDP models, we test and verify all our predictions through full network simulations. More importantly, this analytical tool will allow other researchers to predict the structures that arise from other types of STDP in a network.
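The local character of STDP is easy to see in code: the net weight change across a synapse depends only on the relative pre/post spike times. Below is a minimal pair-based additive rule; the window shapes and amplitudes are generic textbook assumptions, not a specific model from the paper.

```python
import numpy as np

def stdp_dw(pre, post, A_plus=0.01, A_minus=0.012, tau=20.0):
    """Total weight change from an additive pair-based STDP rule with
    exponential windows: pre-before-post potentiates, post-before-pre
    depresses. Spike times in ms."""
    dw = 0.0
    for tp in pre:
        for tq in post:
            dt = tq - tp                  # post minus pre
            if dt > 0:
                dw += A_plus * np.exp(-dt / tau)
            elif dt < 0:
                dw -= A_minus * np.exp(dt / tau)
    return dw

rng = np.random.default_rng(3)
pre = np.sort(rng.uniform(0, 1000, 50))
post_after = pre + 5.0       # post reliably follows pre by 5 ms
post_before = pre - 5.0      # post reliably precedes pre by 5 ms

print(f"pre -> post ordering: dw = {stdp_dw(pre, post_after):+.3f}")
print(f"post -> pre ordering: dw = {stdp_dw(pre, post_before):+.3f}")
```

The pairwise analysis in the paper asks what this rule does to the two directed weights between two neurons embedded in a network; the sign asymmetry shown here is what eliminates loops when depression dominates.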
Working memory (WM) requires selective information gating, active information maintenance, and rapid active updating. Hence, performing a WM task requires rapid and controlled transitions between neural persistent activity and the resting state. We propose that changes in correlations in neural activity provide a mechanism for the required WM operations. As a proof of principle, we implement sustained activity and WM in recurrently coupled spiking networks with neurons receiving excitatory random background activity, where background correlations are induced by a common noise source. We first characterize how the level of background correlations controls the stability of the persistent state. With sufficiently high correlations, the sustained state becomes practically unstable, so it cannot be initiated by a transient stimulus. We exploit this in WM models implementing the delayed match-to-sample task by flexibly modulating the correlation level at different phases of the task. The modulation sets the network in different working regimes: more disposed to gate in a signal or to clear the memory. We examine how the correlations affect the ability of the network to perform the task when distractors are present. We show that in a winner-take-all version of the model, where two populations cross-inhibit, correlations make the distractor blocking robust. In a version of the model where no cross-inhibition is present, we show that appropriate modulation of correlation levels is sufficient to block distractor access while leaving the relevant memory trace intact. These findings can form the basis for a new paradigm about how correlations are flexibly controlled by cortical circuits to execute WM operations.
correlations; background activity; working memory; spiking neural network; persistent activity
Data assimilation is a valuable tool in the study of any complex system, where measurements are incomplete, uncertain, or both. It enables the user to take advantage of all available information including experimental measurements and short-term model forecasts of a system. Although data assimilation has been used to study other biological systems, the study of the sleep-wake regulatory network has yet to benefit from this toolset. We present a data assimilation framework based on the unscented Kalman filter (UKF) for combining sparse measurements together with a relatively high-dimensional nonlinear computational model to estimate the state of a model of the sleep-wake regulatory system. We demonstrate with simulation studies that a few noisy variables can be used to accurately reconstruct the remaining hidden variables. We introduce a metric for ranking relative partial observability of computational models, within the UKF framework, that allows us to choose the optimal variables for measurement and also provides a methodology for optimizing framework parameters such as UKF covariance inflation. In addition, we demonstrate a parameter estimation method that allows us to track non-stationary model parameters and accommodate slow dynamics not included in the UKF filter model. Finally, we show that we can even use observed discretized sleep-state, which is not one of the model variables, to reconstruct model state and estimate unknown parameters. Sleep is implicated in many neurological disorders from epilepsy to schizophrenia, but simultaneous observation of the many brain components that regulate this behavior is difficult. We anticipate that this data assimilation framework will enable better understanding of the detailed interactions governing sleep and wake behavior and provide for better, more targeted, therapies.
Mathematical models are developed to better understand interactions between components of a system that together govern the overall behavior. Mathematical models of sleep have helped to elucidate the neuronal cell groups that are involved in promoting sleep and wake behavior and the transitions between them. However, to take full advantage of these models one must be able to estimate the value of all included variables accurately. Data assimilation refers to methods that allow the user to combine noisy measurements of just a few system variables with the mathematical model of that system to estimate all variables, including those originally inaccessible for measurement. Using these techniques, we show that we can reconstruct the unmeasured variables and parameters of a mathematical model of the sleep-wake network. These reconstructed estimates can then be used to better understand the underlying neuronal behavior that results in sleep and wake activity. Because sleep is implicated in a wide array of neurological disorders from epilepsy to schizophrenia, we anticipate that this framework will enable better understanding of the link between sleep and the rest of the brain and provide for better, more targeted therapies.
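The core operation here, combining a model forecast with a sparse noisy measurement to recover hidden variables, can be illustrated with an ordinary linear Kalman filter; the UKF extends the same predict/update cycle to nonlinear models. The toy system below (constant-velocity motion observed only through noisy position) is an illustrative stand-in, not the sleep-wake model:

```python
import random

def kalman_track(n_steps=200, dt=0.1, r=0.5, seed=1):
    """Constant-velocity model: observe noisy position only,
    estimate the hidden velocity via a linear Kalman filter."""
    rng = random.Random(seed)
    p_true, v_true = 0.0, 1.0          # ground truth
    x = [0.0, 0.0]                     # state estimate [position, velocity]
    P = [[1.0, 0.0], [0.0, 1.0]]       # deliberately uncertain start
    q = 1e-6                           # small process noise
    for _ in range(n_steps):
        p_true += v_true * dt
        z = p_true + rng.gauss(0.0, r)  # noisy position measurement
        # predict: x <- F x, P <- F P F^T + Q, with F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # update with observation matrix H = [1, 0]
        S = P[0][0] + r * r
        K = [P[0][0] / S, P[1][0] / S]
        innov = z - x[0]
        x = [x[0] + K[0] * innov, x[1] + K[1] * innov]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return x, P

x, P = kalman_track()
print(x[1], P[1][1])  # hidden-velocity estimate and its posterior variance
```

Although velocity is never measured, its estimate converges to the true value and its posterior variance collapses, which is the same reconstruction-of-hidden-variables effect the framework above exploits.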
The dynamics of circadian rhythms must adapt to day-length changes between summer and winter. It has been observed experimentally, however, that the dynamics of individual neurons of the suprachiasmatic nucleus (SCN) do not change with the seasons. Rather, the seasonal adaptation of the circadian clock is hypothesized to be a consequence of changes in the intercellular dynamics, which leads to a phase distribution of electrical activity of SCN neurons that is narrower in winter and broader during summer. Yet to understand this complex intercellular dynamics, a more thorough understanding of the impact of the network structure formed by the SCN neurons is needed. To that effect, we propose a mathematical model for the dynamics of the SCN neuronal architecture in which the structure of the network plays a pivotal role. Using our model we show that the fraction of long-range cell-to-cell connections and the seasonal changes in the daily rhythms may be tightly related. In particular, simulations of the proposed mathematical model indicate that the fraction of long-range connections between the cells adjusts the phase distribution and consequently the length of the behavioral activity as follows: dense long-range connections during winter lead to a narrow activity phase, while rare long-range connections during summer lead to a broad activity phase. Our model is also able to account for the experimental observations indicating a larger light-induced phase-shift of the circadian clock during winter, which we show to be a consequence of higher synchronization between neurons. Our model thus provides evidence that the variations in the seasonal dynamics of circadian clocks can in part also be understood and regulated by the plasticity of the SCN network structure.
Circadian clocks drive the temporal coordination of internal biological processes, which in turn determine daily rhythms in physiology and behavior in the most diverse organisms. In mammals, the 24-hour timing clock resides in the suprachiasmatic nucleus (SCN) of the hypothalamus. The SCN is a network of interconnected neurons that serves as a robust self-sustained circadian pacemaker. The electrical activity of these neurons and their synchronization with the 24-hour cycle are established via the environmental day and night cycles. Apart from daily luminance changes, mammals are exposed to seasonal day length changes as well. Remarkably, it has been shown experimentally that the seasonal adaptations to different photoperiods are related to the modifications of the neuronal activity of the SCN due to the plasticity of the network. In our paper, by developing a mathematical model of the SCN architecture, we explore in depth the role of the structure of this important neuronal network. We show that the redistribution of the neuronal activity during winter and summer can in part be explained by structural changes of the network. Interestingly, the alterations of the electrical activity patterns can be related to small-world properties of our proposed SCN network.
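The hypothesized link between long-range connections and synchrony can be sketched with a generic network of identical sine-coupled phase oscillators on a ring with random shortcuts. This toy model and all of its parameters are illustrative assumptions, far simpler than the SCN model described above:

```python
import math
import random

def ring_with_shortcuts(n, k, n_long, rng):
    """Each node is coupled to its k nearest neighbours on a ring,
    plus n_long random long-range (shortcut) connections."""
    nbrs = [set() for _ in range(n)]
    for i in range(n):
        for d in range(1, k + 1):
            nbrs[i].add((i + d) % n)
            nbrs[i].add((i - d) % n)
    added = 0
    while added < n_long:
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j and j not in nbrs[i]:
            nbrs[i].add(j)
            nbrs[j].add(i)
            added += 1
    return nbrs

def order_parameter(theta):
    n = len(theta)
    c = sum(math.cos(t) for t in theta) / n
    s = sum(math.sin(t) for t in theta) / n
    return math.hypot(c, s)   # 1 = fully synchronized, near 0 = incoherent

def simulate(n_long, n=50, k=2, coupling=1.0, t_end=50.0, dt=0.05, seed=3):
    """Identical sine-coupled phase oscillators, Euler integration."""
    rng = random.Random(seed)
    nbrs = ring_with_shortcuts(n, k, n_long, rng)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(int(t_end / dt)):
        theta = [theta[i] + dt * coupling / len(nbrs[i])
                 * sum(math.sin(theta[j] - theta[i]) for j in nbrs[i])
                 for i in range(n)]
    return order_parameter(theta)

r_ring, r_shortcut = simulate(n_long=0), simulate(n_long=40)
print(r_ring, r_shortcut)
```

Comparing the final order parameter with and without shortcuts gives a quick handle on how the density of long-range links shapes phase coherence in such a network.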
Neocortical pyramidal neurons (PNs) receive thousands of excitatory synaptic contacts on their basal dendrites. Some act as classical driver inputs while others are thought to modulate PN responses based on sensory or behavioral context, but the biophysical mechanisms that mediate classical-contextual interactions in these dendrites remain poorly understood. We hypothesized that if two excitatory pathways bias their synaptic projections towards proximal vs. distal ends of the basal branches, the very different local spike thresholds and attenuation factors for inputs near and far from the soma might provide the basis for a classical-contextual functional asymmetry. Supporting this possibility, we found both in compartmental models and electrophysiological recordings in brain slices that the responses of basal dendrites to spatially separated inputs are indeed strongly asymmetric. Distal excitation lowers the local spike threshold for more proximal inputs, while having little effect on peak responses at the soma. In contrast, proximal excitation lowers the threshold, but also substantially increases the gain of distally-driven responses. Our findings support the view that PN basal dendrites possess significant analog computing capabilities, and suggest that the diverse forms of nonlinear response modulation seen in the neocortex, including uni-modal, cross-modal, and attentional effects, could depend in part on pathway-specific biases in the spatial distribution of excitatory synaptic contacts onto PN basal dendritic arbors.
Pyramidal neurons (PNs) are the principal neurons of the cerebral cortex and therefore lie at the heart of the brain's higher sensory, motor, affective, memory, and executive functions. But how do they work? In particular, how do they manage interactions between the classical “driver” inputs that give rise to their basic response properties, and “contextual” inputs that nonlinearly modulate those responses? It is known that PNs are contacted by thousands of excitatory synapses scattered about their dendrites, but despite decades of research, the “rules” that govern how inputs at different locations in the dendritic tree combine to influence the cell's firing rate remain poorly understood. We show here that two excitatory inputs contacting the same dendrite interact in an asymmetric nonlinear way that depends on their absolute and relative locations, where the resulting spectrum of location-dependent synaptic interactions constitutes a previously unknown form of spatial analog computation. In addition to suggesting a possible substrate for classical-contextual interactions in PN dendrites, our results imply that the computing functions of cortical circuits can only be fully understood when the detailed map of synaptic connectivity – the cortical connectome – is known down to the subdendritic level.
Cortical computations are critically dependent on interactions between pyramidal neurons (PNs) and a menagerie of inhibitory interneuron types. A key feature distinguishing interneuron types is the spatial distribution of their synaptic contacts onto PNs, but the location-dependent effects of inhibition are mostly unknown, especially under conditions involving active dendritic responses. We studied the effect of somatic vs. dendritic inhibition on local spike generation in basal dendrites of layer 5 PNs both in neocortical slices and in simple and detailed compartmental models, with equivalent results: somatic inhibition divisively suppressed the amplitude of dendritic spikes recorded at the soma while minimally affecting dendritic spike thresholds. In contrast, distal dendritic inhibition raised dendritic spike thresholds while minimally affecting their amplitudes. On-the-path dendritic inhibition modulated both the gain and threshold of dendritic spikes depending on its distance from the spike initiation zone. Our findings suggest that cortical circuits could assign different mixtures of gain vs. threshold inhibition to different neural pathways, and thus tailor their local computations, by managing their relative activation of soma- vs. dendrite-targeting interneurons.
Establishing how inhibitory neurons shape the computing functions of neural circuits is crucial to understanding both normal function and dysfunction in the human brain. It has been known for over a century that different classes of inhibitory interneurons project to different sub-regions of the neurons they contact – some primarily target cell bodies, others the dendrites, still others the axon. It remains poorly understood, however, how these different projection patterns influence synaptic integration in the target neuron populations. By providing new data from intracellular recordings in brain slices, and a simple but powerful model of the location-dependent effects of inhibition on dendritic spike generation, our study (1) demonstrates the importance of the absolute and relative locations of excitatory and inhibitory inputs to pyramidal neurons, the principal cells of the cerebral cortex, and (2) helps to establish a more solid theoretical understanding of these complex integrative phenomena. As high resolution mapping of the cortical “connectome” becomes available in the coming years, our work will be helpful in interpreting the computing functions of cortical tissue both at the single neuron and circuit levels.
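The contrast between divisive (gain) and subtractive (threshold) inhibition described above can be caricatured with a toy input-output curve; the sigmoidal form and every parameter below are illustrative assumptions, not quantities fitted to the recordings:

```python
import math

def dendritic_response(exc, soma_inh=0.0, distal_inh=0.0,
                       gain=1.0, threshold=1.0, slope=0.1):
    """Toy sigmoidal dendrite-to-soma I/O curve.

    Somatic inhibition acts divisively (scales the gain down);
    distal dendritic inhibition acts subtractively (shifts the
    spike threshold to the right).
    """
    eff_gain = gain / (1.0 + soma_inh)       # divisive suppression
    eff_threshold = threshold + distal_inh   # rightward threshold shift
    return eff_gain / (1.0 + math.exp(-(exc - eff_threshold) / slope))

control = dendritic_response(2.0)
somatic = dendritic_response(2.0, soma_inh=1.0)    # amplitude halved
distal = dendritic_response(2.0, distal_inh=2.0)   # pushed below threshold
```

For a suprathreshold excitatory drive, somatic inhibition scales the response down without moving the threshold, while distal inhibition silences the same drive by moving the threshold past it, which is the qualitative dissociation reported above.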
Pain caused by nerve injury (i.e. neuropathic pain) is associated with development of neuronal hyperexcitability at several points along the pain pathway. Within primary afferents, numerous injury-induced changes have been identified but it remains unclear which molecular changes are necessary and sufficient to explain cellular hyperexcitability. To investigate this, we built computational models that reproduce the switch from a normal spiking pattern characterized by a single spike at the onset of depolarization to a neuropathic one characterized by repetitive spiking throughout depolarization. Parameter changes that were sufficient to switch the spiking pattern also enabled membrane potential oscillations and bursting, suggesting that all three pathological changes are mechanistically linked. Dynamical analysis confirmed this prediction by showing that excitability changes co-develop when the nonlinear mechanism responsible for spike initiation switches from a quasi-separatrix-crossing to a subcritical Hopf bifurcation. This switch stems from biophysical changes that bias competition between oppositely directed fast- and slow-activating conductances operating at subthreshold potentials. Competition between activation and inactivation of a single conductance can be similarly biased with equivalent consequences for excitability. “Bias” can arise from a multitude of molecular changes occurring alone or in combination; in the latter case, changes can add or offset one another. Thus, our results identify pathological change in the nonlinear interaction between processes affecting spike initiation as the critical determinant of how simple injury-induced changes at the molecular level manifest complex excitability changes at the cellular level. 
We demonstrate that multiple distinct molecular changes are sufficient to produce neuropathic changes in excitability; however, given that nerve injury elicits numerous molecular changes that may be individually sufficient to alter spike initiation, our results argue that no single molecular change is necessary to produce neuropathic excitability. This deeper understanding of degenerate causal relationships has important implications for how we understand and treat neuropathic pain.
Neuropathic pain results from damage to the nervous system. Much is known about the multitude of molecular and cellular changes that are triggered by nerve injury (and which correlate with development of neuropathic pain), but little is understood about how those changes cause neuropathic pain. Rather than identifying what changes occur after nerve injury (which has already been the focus of countless studies), our study focuses on identifying which changes are functionally important. Specifically, we explain how certain molecular changes, acting alone or in combination, cause a triad of neuropathic changes in primary afferent excitability. Through computational modeling and nonlinear dynamical analysis, we demonstrate that the entire triad of excitability changes arises from a single switch in the nonlinear mechanism responsible for spike initiation. Going further, we demonstrate that many distinct molecular changes are sufficient to produce that switch but that no single molecular change is necessary if more than one sufficient change co-occurs after nerve injury, which appears to be the case. The issue becomes whether molecular changes combine to reach some tipping point whereupon cellular excitability is qualitatively altered. This highlights the importance of nonlinearities for neuropathic pain and the need for more computational pain research.
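The switch from a single onset spike to repetitive spiking as a subthreshold balance shifts can be illustrated with the generic FitzHugh-Nagumo model, in which the drive current I stands in for the net injury-induced bias. This is a textbook stand-in, not the afferent model analyzed in the paper:

```python
def count_spikes(I, t_end=300.0, dt=0.01):
    """Euler-integrate the FitzHugh-Nagumo model and count upward
    crossings of v = 1.0 (spikes). I is the constant drive current."""
    v, w = -1.2, -0.62   # near the resting state for I = 0
    spikes, above = 0, False
    for _ in range(int(t_end / dt)):
        dv = v - v ** 3 / 3.0 - w + I
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        v += dt * dv
        w += dt * dw
        if v > 1.0 and not above:
            spikes += 1
        above = v > 1.0
    return spikes

quiescent, repetitive = count_spikes(0.0), count_spikes(0.5)
print(quiescent, repetitive)  # quiescence vs repetitive spiking
```

Sweeping I across the bifurcation flips the cell from quiescence to sustained firing, a minimal analogue of the qualitative "tipping point" in excitability discussed above.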
The reshaping and decorrelation of similar activity patterns by neuronal networks can enhance their discriminability, storage, and retrieval. How can such networks learn to decorrelate new complex patterns, as they arise in the olfactory system? Using a computational network model for the dominant neural populations of the olfactory bulb we show that fundamental aspects of the adult neurogenesis observed in the olfactory bulb – the persistent addition of new inhibitory granule cells to the network, their activity-dependent survival, and the reciprocal character of their synapses with the principal mitral cells – are sufficient to restructure the network and to alter its encoding of odor stimuli adaptively so as to reduce the correlations between the bulbar representations of similar stimuli. The decorrelation is quite robust with respect to various types of perturbations of the reciprocity. The model parsimoniously captures the experimentally observed role of neurogenesis in perceptual learning and the enhanced response of young granule cells to novel stimuli. Moreover, it makes specific predictions for the type of odor enrichment that should be effective in enhancing the ability of animals to discriminate similar odor mixtures.
The olfactory bulb is one of only two brain regions in which new neurons are added persistently in substantial numbers even in adult animals. This leads to an ongoing turnover of interneurons, in particular of the inhibitory granule cells, which constitute the largest cell population of the olfactory bulb. The function of this adult neurogenesis in olfactory processing is only poorly understood. Experiments show that it contributes to perceptual learning. We present a basic computational model that is built on fundamental aspects of the granule cells and their connections with the excitatory mitral cells, which convey the olfactory information to higher brain areas. We show that neurogenesis can reshape the network connectivity in response to olfactory input so as to reduce the correlations between the bulbar representations of even highly similar stimuli. The neurogenetic adaptation of the stimulus representations provides a natural explanation of the perceptual learning and the different response of young and old granule cells to novel odors that have been observed in experiments. The model makes experimentally testable predictions for training protocols that enhance the discriminability of odor mixtures.
How stable synchrony in neuronal networks is sustained in the presence of conduction delays is an open question. The dynamic clamp was used to measure phase resetting curves (PRCs) for entorhinal cortical cells, and then to construct networks of two such neurons. PRCs were generally type I (all advances or all delays) or weakly type II with a small region at early phases with the opposite type of resetting. We used previously developed theoretical methods based on PRCs under the assumption of pulsatile coupling to predict the delays that synchronize these hybrid circuits. For excitatory coupling, synchrony was predicted and observed only with no delay and for delays greater than half a network period that cause each neuron to receive an input late in its firing cycle and almost immediately fire an action potential. Synchronization for these long delays was surprisingly tight and robust to the noise and heterogeneity inherent in a biological system. In contrast to excitatory coupling, inhibitory coupling led to antiphase for no delay, very short delays and delays close to a network period, but to near-synchrony for a wide range of relatively short delays. PRC-based methods show that conduction delays can stabilize synchrony in several ways, including neutralizing a discontinuity introduced by strong inhibition, favoring synchrony in the case of noisy bistability, and avoiding an initial destabilizing region of a weakly type II PRC. PRCs can identify optimal conduction delays favoring synchronization at a given frequency, and also predict robustness to noise and heterogeneity.
Individual oscillators, such as pendulum-based clocks and fireflies, can spontaneously organize into a coherent, synchronized entity with a common frequency. Neurons can oscillate under some circumstances, and can synchronize their firing both within and across brain regions. Synchronized assemblies of neurons are thought to underlie cognitive functions such as recognition, recall, perception and attention. Pathological synchrony can lead to epilepsy, tremor and other dynamical diseases, and synchronization is altered in most mental disorders. Biological neurons synchronize despite conduction delays, heterogeneous circuit composition, and noise. In biological experiments, we built simple networks in which two living neurons could interact via a computer in real time. The computer precisely controlled the nature of the connectivity and the length of the communication delays. We characterized the synchronization tendencies of individual, isolated oscillators by measuring how much a single input delivered by the computer transiently shortened or lengthened the cycle period of the oscillation. We then used this information to correctly predict the strong dependence of the coordination pattern of the firing of the component neurons on the length of the communication delays. Upon this foundation, we can begin to build a theory of the basic principles of synchronization in more complex brain circuits.
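The logic of predicting delay-dependent firing patterns from a PRC can be sketched with two pulse-coupled phase oscillators. The PRC below is an invented type I (all-advance) curve, so only the qualitative outcome should be read into it; with this particular curve, a long delay pulls the pair into synchrony while a short one pushes it toward antiphase:

```python
import math

def prc(phase, eps=0.05):
    """Illustrative type I PRC: pure advances, largest at mid-cycle."""
    return eps * (1.0 - math.cos(2.0 * math.pi * phase)) / 2.0

def simulate_pair(delay, t_end=150.0, dt=0.001, period=1.0):
    """Two pulse-coupled phase oscillators with a conduction delay.
    Returns the final circular phase difference (0 = synchrony,
    0.5 = antiphase)."""
    phase = [0.0, 0.37]      # start well out of phase
    pending = []             # (arrival_time, target) pulses in flight
    t = 0.0
    while t < t_end:
        t += dt
        for i in (0, 1):
            phase[i] += dt / period
            if phase[i] >= 1.0:          # fire, reset, send delayed pulse
                phase[i] -= 1.0
                pending.append((t + delay, 1 - i))
        arrived = [p for p in pending if p[0] <= t]
        pending = [p for p in pending if p[0] > t]
        for _, target in arrived:
            phase[target] = min(phase[target] + prc(phase[target]), 1.0)
    diff = abs(phase[0] - phase[1])
    return min(diff, 1.0 - diff)

d_long, d_short = simulate_pair(delay=0.7), simulate_pair(delay=0.3)
print(d_long, d_short)  # long delay: near 0; short delay: near 0.5
```

The same simulate-and-compare procedure, run with a measured PRC in place of the invented one, is how PRC-based theory yields testable predictions about which delays stabilize synchrony.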
Short-term presynaptic plasticity designates variations of the amplitude of synaptic information transfer whereby the amount of neurotransmitter released upon presynaptic stimulation changes over seconds as a function of the neuronal firing activity. While a consensus has emerged that the resulting decrease (depression) and/or increase (facilitation) of the synapse strength are crucial to neuronal computations, their modes of expression in vivo remain unclear. Recent experimental studies have reported that glial cells, particularly astrocytes in the hippocampus, are able to modulate short-term plasticity but the mechanism of such a modulation is poorly understood. Here, we investigate the characteristics of short-term plasticity modulation by astrocytes using a biophysically realistic computational model. Mean-field analysis of the model, supported by intensive numerical simulations, reveals that astrocytes may mediate counterintuitive effects. Depending on the expressed presynaptic signaling pathways, astrocytes may globally inhibit or potentiate the synapse: the amount of released neurotransmitter in the presence of the astrocyte is transiently smaller or larger than in its absence. But this global effect usually coexists with the opposite local effect on paired pulses: with release-decreasing astrocytes most paired pulses become facilitated, namely the amount of neurotransmitter released upon spike i+1 is larger than that at spike i, while paired-pulse depression becomes prominent under release-increasing astrocytes. Moreover, we show that the frequency of astrocytic intracellular Ca2+ oscillations controls the effects of the astrocyte on short-term synaptic plasticity. Our model explains several previously unexplained experimental observations, and uncovers astrocytic gliotransmission as a possible transient switch between short-term paired-pulse depression and facilitation.
This possibility has deep implications for the processing of neuronal spikes and the resulting information transfer at synapses.
Synaptic plasticity is the capacity of a preexisting connection between two neurons to change in strength as a function of neuronal activity. Because it is widely believed to underlie learning and memory, the elucidation of its constituting mechanisms is of crucial importance in many aspects of normal and pathological brain function. Short-term presynaptic plasticity refers to changes occurring over short time scales (milliseconds to seconds) that are mediated by frequency-dependent modifications of the amount of neurotransmitter released by presynaptic stimulation. Recent experiments have reported that glial cells, especially hippocampal astrocytes, can modulate short-term plasticity, but the mechanism of such modulation is poorly understood. Here, we explore a plausible form of modulation of short-term plasticity by astrocytes using a biophysically realistic computational model. Our analysis indicates that astrocytes could simultaneously affect synaptic release in two ways. First, they either decrease or increase the overall synaptic release of neurotransmitter. Second, for stimuli that are delivered as pairs within short intervals, they systematically increase or decrease the synaptic response to the second one. Hence, our model suggests that astrocytes could transiently trigger switches between paired-pulse depression and facilitation. This property explains several challenging experimental observations and has a deep impact on our understanding of synaptic information transfer.
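One concrete way to see how a shift in baseline release probability flips paired-pulse behavior is the standard Tsodyks-Markram synapse model; the parameter values below are generic choices, and the astrocyte enters only implicitly, as whatever shifts the release parameter U:

```python
import math

def paired_pulse_ratio(U, dt_ms, tau_fac=500.0, tau_dep=200.0):
    """Tsodyks-Markram synapse: amplitude ratio A2/A1 for a spike pair
    separated by dt_ms. The release variable u facilitates with time
    constant tau_fac; the resource variable x recovers with tau_dep;
    the amount released at each spike is u * x."""
    # first spike from rest: u1 = U, x1 = 1, so A1 = U
    a1 = U
    # u decays toward 0 between spikes, then jumps by U * (1 - u)
    u_before = U * math.exp(-dt_ms / tau_fac)
    u2 = u_before + U * (1.0 - u_before)
    # resources released at spike 1 recover toward 1 with tau_dep
    x2 = 1.0 - U * math.exp(-dt_ms / tau_dep)
    return (u2 * x2) / a1

low_u = paired_pulse_ratio(0.1, 50.0)   # facilitation: ratio > 1
high_u = paired_pulse_ratio(0.7, 50.0)  # depression: ratio < 1
print(low_u, high_u)
```

A low-release synapse facilitates (second response larger) while a high-release synapse depresses, so any mechanism that moves U up or down, such as the astrocytic modulation proposed above, can switch the synapse between the two regimes.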
Gamma rhythms (30–100 Hz) are an extensively studied synchronous brain state responsible for a number of sensory, memory, and motor processes. Experimental evidence suggests that fast-spiking interneurons are responsible for carrying the high frequency components of the rhythm, while regular-spiking pyramidal neurons fire sparsely. We propose that a combination of spike frequency adaptation and global inhibition may be responsible for this behavior. Excitatory neurons form several clusters that fire every few cycles of the fast oscillation. This is first shown in a detailed biophysical network model and then analyzed thoroughly in an idealized model. We exploit the fact that the timescale of adaptation is much slower than that of the other variables. Singular perturbation theory is used to derive an approximate periodic solution for a single spiking unit. This is then used to predict how the number of clusters arising spontaneously in the network relates to the adaptation time constant. We compare this to a complementary analysis that employs a weak coupling assumption to predict the first Fourier mode to destabilize from the incoherent state of an associated phase model as the external noise is reduced. Both approaches predict the same scaling of cluster number with respect to the adaptation time constant, which is corroborated in numerical simulations of the full system. Thus, we develop several testable predictions regarding the formation and characteristics of gamma rhythms with sparsely firing excitatory neurons.
Fast periodic synchronized neural spiking corresponds to a variety of functions in many different areas of the brain. Most theories and experiments suggest inhibitory neurons carry the regular rhythm while being driven by excitatory neurons that spike more sparsely in time. We suggest a simple mechanism for the low firing rate of excitatory cells – spike frequency adaptation. Combining this mechanism with strong global inhibition causes excitatory neurons to group their firing into several clusters and, thus, produce a high frequency global rhythm. We study this phenomenon in both a detailed biophysical and an idealized model that preserves these two basic mechanisms. Using analytical tools from dynamical systems theory, we examine why adaptation causes clustering. In fact, we show the number of clusters relates to a simple function of the adaptation time scale over a broad range of parameters. This allows us to develop several predictions regarding the formation of fast spiking rhythms in the brain.
Previous studies have shown that neurons within the vestibular nuclei (VN) can faithfully encode the time course of sensory input through changes in firing rate in vivo. However, studies performed in vitro have shown that these same VN neurons often display nonlinear synchronization (i.e. phase locking) in their spiking activity to the local maxima of sensory input, thereby severely limiting their capacity for faithful encoding of said input through changes in firing rate. We investigated this apparent discrepancy by studying the effects of in vivo conditions on VN neuron activity in vitro using a simple, physiologically based model of cellular dynamics. We found that membrane potential oscillations were evoked both in response to step and zap current injection for a wide range of channel conductance values. These oscillations gave rise to a resonance in the spiking activity that causes synchronization to sinusoidal current injection at frequencies below 25 Hz. We hypothesized that the apparent discrepancy between VN response dynamics measured under in vitro conditions (i.e., consistent with our modeling results) and the dynamics measured under in vivo conditions could be explained by an increase in trial-to-trial variability under in vivo vs. in vitro conditions. Accordingly, we mimicked more physiologically realistic conditions in our model by introducing a noise current to match the levels of resting discharge variability seen in vivo as quantified by the coefficient of variation (CV). While low noise intensities corresponding to CV values in the range 0.04–0.24 only eliminated synchronization for low (<8 Hz) frequency stimulation but not high (>12 Hz) frequency stimulation, higher noise intensities corresponding to CV values in the range 0.5–0.7 almost completely eliminated synchronization for all frequencies. Our results thus predict that, under natural (i.e.
in vivo) conditions, the vestibular system uses increased variability to promote fidelity of encoding by single neurons. This prediction can be tested experimentally in vitro.
The vestibular system senses the motion of the head in space and is vital for gaze stability, posture control, and the computation of spatial orientation during everyday life. The activities of single vestibular neurons recorded in the brains of awake behaving animals show that they can accurately transmit information about the time course of head motion, which is necessary for several behaviors such as the vestibulo-ocular reflex required for gaze stabilization. In contrast, this is not the case when the same neurons are recorded in isolation and sensory stimulation is mimicked experimentally. We investigated the cause of this discrepancy by studying how a mathematical model of vestibular neuron activity responds to mimics of sensory stimulation under different conditions. We found that the differences in the activities of vestibular neurons recorded in awake behaving animals and in isolation can be explained by the addition of synaptic noise, which in turn, increases the variability of action potential firing that is seen in more natural conditions. Our modeling results make a clear prediction that can be tested experimentally.
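Phase locking of spikes to a periodic stimulus is commonly quantified by vector strength, the mean resultant length of the spike phases. The toy below jitters perfectly locked spike times to mimic increased discharge variability; the spike times and jitter level are invented for illustration:

```python
import cmath
import math
import random

def vector_strength(spike_times, period):
    """Mean resultant length of spike phases relative to the stimulus:
    1 = perfect phase locking, values near 0 = no locking."""
    phases = [2.0 * math.pi * (t % period) / period for t in spike_times]
    return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

rng = random.Random(7)
period = 100.0  # ms, i.e. a 10 Hz stimulus
locked = [k * period + 12.0 for k in range(1000)]   # one spike per cycle
noisy = [t + rng.gauss(0.0, 30.0) for t in locked]  # jittered spike times
print(vector_strength(locked, period), vector_strength(noisy, period))
```

Adding timing variability sharply reduces vector strength, illustrating in miniature how increased trial-to-trial variability can abolish the phase locking that would otherwise limit faithful rate coding.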