Reinforcement Learning has greatly influenced models of conditioning, providing powerful explanations of acquired behaviour and underlying physiological observations. However, recent autoshaping experiments in rats, in which the form of Pavlovian conditioned responses (CRs) and the associated dopamine activity vary across individuals, have called into question the classical hypothesis that phasic dopamine activity corresponds to a reward prediction error-like signal arising from a classical Model-Free system and is necessary for Pavlovian conditioning. Over the course of Pavlovian conditioning using food as the unconditioned stimulus (US), some rats (sign-trackers) come to approach and engage the conditioned stimulus (CS) itself – a lever – more and more avidly, whereas other rats (goal-trackers) learn to approach the location of food delivery upon CS presentation. Importantly, although both sign-trackers and goal-trackers learn the CS-US association equally well, only in sign-trackers does phasic dopamine activity show classical reward prediction error-like bursts. Furthermore, neither the acquisition nor the expression of a goal-tracking CR is dopamine-dependent. Here we present a computational model that accounts for this individual variation. We show that a combination of a Model-Based system and a revised Model-Free system can account for the development of distinct CRs in rats. Moreover, we show that revising a classical Model-Free system to process stimuli individually, using factored representations, can explain why classical dopaminergic patterns are observed in some rats but not in others, depending on the CR they develop. In addition, the model accounts for other behavioural and pharmacological results obtained with the same, or similar, autoshaping procedures. Finally, the model yields a set of experimental predictions that may be verified in a modified experimental protocol.
We suggest that factored representations deserve further investigation in computational neuroscience studies.
Acquisition of responses towards full predictors of rewards, namely Pavlovian conditioning, has long been explained by reinforcement learning theory. This theory formalizes learning processes that, by attributing values to situations and actions, make it possible to direct behaviour towards rewarding objectives. Interestingly, the implied mechanisms rely on a reinforcement signal that parallels the activity of dopamine neurons in such experiments. However, recent studies have challenged the classical view that Pavlovian conditioning arises from a single process. When presented with a lever whose retraction preceded the delivery of food, some rats started to chew and bite the food magazine whereas others chewed and bit the lever, even though no interaction was necessary to obtain the food. These differences were also visible in brain activity and when the rats were tested with drugs, suggesting the coexistence of multiple learning systems. We present a computational model that extends the classical theory to account for these data, and from which we draw predictions that may be experimentally verified. Because it is inspired by mechanisms used to model instrumental behaviours, where actions are required to obtain rewards, and advanced Pavlovian behaviours (such as overexpectation and negative patterning), the model offers an entry point for modelling the strong interactions observed between Pavlovian and instrumental processes.
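As a concrete illustration of the classical account referred to above, here is a minimal sketch of tabular model-free temporal-difference learning over a Pavlovian trial. The state discretization, learning rate, and discount factor are illustrative assumptions, not the paper's model; the prediction error `delta` is the quantity classically likened to phasic dopamine.

```python
import numpy as np

def run_trials(n_trials=200, alpha=0.1, gamma=0.98):
    # States: 0 = inter-trial interval, 1 = CS (lever), 2 = US (food), then end.
    V = np.zeros(3)
    deltas = []  # reward prediction error at US delivery, one per trial
    for _ in range(n_trials):
        # Reward-free transitions ITI -> CS -> US
        for s, r, s_next in [(0, 0.0, 1), (1, 0.0, 2)]:
            delta = r + gamma * V[s_next] - V[s]
            V[s] += alpha * delta
        # Terminal transition: the US delivers reward r = 1
        delta = 1.0 - V[2]
        V[2] += alpha * delta
        deltas.append(delta)
    return V, deltas

V, deltas = run_trials()
# Over training, the error at the US shrinks and value propagates back to the CS.
```

Early in training the prediction error occurs at reward delivery; as the value of the CS state grows, the error at the US vanishes, which is the classical burst-transfer signature that, per the abstract, only sign-trackers exhibit.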
Neuronal signal integration and information processing in cortical neuronal networks critically depend on the organization of synaptic connectivity. Because of the challenges involved in measuring a large number of neurons, synaptic connectivity is difficult to determine experimentally. Current computational methods for estimating connectivity typically rely on the juxtaposition of experimentally available neurons and applying mathematical techniques to compute estimates of neural connectivity. However, since the number of available neurons is very limited, these connectivity estimates may be subject to large uncertainties. We use a morpho-density field approach applied to a vast ensemble of model-generated neurons. A morpho-density field (MDF) describes the distribution of neural mass in the space around the neural soma. The estimated axonal and dendritic MDFs are derived from 100,000 model neurons that are generated by a stochastic phenomenological model of neurite outgrowth. These MDFs are then used to estimate the connectivity between pairs of neurons as a function of their inter-soma displacement. Compared with other density-field methods, our approach to estimating synaptic connectivity uses fewer restricting assumptions and produces connectivity estimates with a lower standard deviation. An important requirement is that the model-generated neurons accurately reflect the morphology, and the variation in morphology, of the experimental neurons used for optimizing the model parameters. As such, the method remains subject to the uncertainties caused by the limited number of neurons in the experimental data set and by the quality of the model and the assumptions used in creating the MDFs and in estimating connectivity.
In summary, MDFs are a powerful tool for visualizing the spatial distribution of axonal and dendritic densities, for estimating the number of potential synapses between neurons with low standard deviation, and for obtaining a greater understanding of the relationship between neural morphology and network connectivity.
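The core density-field idea above can be sketched as an overlap integral: the expected number of potential synapses between two neurons is taken to be proportional to the spatial overlap of the presynaptic axonal MDF and the postsynaptic dendritic MDF at a given inter-soma displacement. The Gaussian fields, grid resolution, and proportionality constant `kappa` below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def gaussian_mdf(grid, center, sigma):
    # Isotropic Gaussian neural-mass density on a 3D voxel grid (arbitrary units).
    d2 = sum((g - c) ** 2 for g, c in zip(grid, center))
    return np.exp(-d2 / (2 * sigma ** 2))

def expected_contacts(displacement, sigma_axon=30.0, sigma_dend=20.0,
                      extent=100.0, n=41, kappa=1e-4):
    ax = np.linspace(-extent, extent, n)
    grid = np.meshgrid(ax, ax, ax, indexing="ij")
    voxel = (ax[1] - ax[0]) ** 3
    axon = gaussian_mdf(grid, (0.0, 0.0, 0.0), sigma_axon)   # presynaptic neuron at origin
    dend = gaussian_mdf(grid, displacement, sigma_dend)       # postsynaptic neuron displaced
    # Overlap integral: more shared neural mass -> more potential synapses.
    return kappa * np.sum(axon * dend) * voxel

near = expected_contacts((10.0, 0.0, 0.0))
far = expected_contacts((80.0, 0.0, 0.0))
# Expected contacts fall off smoothly with inter-soma distance.
```

Averaging such fields over many model-generated neurons, rather than over the few experimentally reconstructed ones, is what reduces the standard deviation of the estimate in the approach described above.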
The aging brain shows a progressive loss of neuropil, which is accompanied by subtle changes in neuronal plasticity, sensory learning and memory. Neurophysiologically, aging attenuates evoked responses—including the mismatch negativity (MMN). This is accompanied by a shift in cortical responsivity from sensory (posterior) regions to executive (anterior) regions, which has been interpreted as a compensatory response for cognitive decline. Theoretical neurobiology offers a simpler explanation for all of these effects—from a Bayesian perspective, as the brain is progressively optimized to model its world, its complexity will decrease. A corollary of this complexity reduction is an attenuation of Bayesian updating or sensory learning. Here we confirmed this hypothesis using magnetoencephalographic recordings of the mismatch negativity elicited in a large cohort of human subjects, in their third to ninth decade. Employing dynamic causal modeling to assay the synaptic mechanisms underlying these non-invasive recordings, we found a selective age-related attenuation of synaptic connectivity changes that underpin rapid sensory learning. In contrast, baseline synaptic connectivity strengths were consistently strong over the decades. Our findings suggest that the lifetime accrual of sensory experience optimizes functional brain architectures to enable efficient and generalizable predictions of the world.
While studies of aging are widely framed in terms of their demarcation of degenerative processes, the brain provides a unique opportunity to uncover the adaptive effects of getting older. Though it is intuitively reasonable that life-experience and wisdom should reside somewhere in the human cortex, these features have eluded neuroscientific explanation. The present study uses a “Bayesian Brain” framework to motivate an analysis of cortical circuit processing. From a Bayesian perspective, the brain represents a model of its environment and offers predictions about the world, while responding, through changes in synaptic strength, to novel interactions and experiences. We hypothesized that these predictive and updating processes are modified as we age, representing an optimization of neuronal architecture. Using novel sensory stimuli, we demonstrate that the synaptic connections of older brains resist trial-by-trial learning, providing a robust model of their sensory environment. These older brains are capable of processing a wider range of sensory inputs – they represent experienced generalists. We thus explain how, contrary to a singularly degenerative point of view, the neurobiological effects of aging may be understood, in sanguine terms, as adaptive and useful.
Information is encoded in neural circuits using both graded and action potentials, with conversion between the two occurring within single neurons and across successive processing layers. This conversion is accompanied by information loss and a drop in energy efficiency. We investigate the biophysical causes of this loss of information and efficiency by comparing spiking neuron models, containing stochastic voltage-gated Na+ and K+ channels, with generator potential and graded potential models lacking voltage-gated Na+ channels. We identify three causes of information loss in the generator potential that are by-products of action potential generation: (1) the voltage-gated Na+ channels necessary for action potential generation increase intrinsic noise; (2) these channels introduce non-linearities; and (3) the finite duration of the action potential creates a ‘footprint’ in the generator potential that obscures incoming signals. These three processes reduce information rates by ∼50% in generator potentials, to ∼3 times that of spike trains. Both generator potentials and graded potentials consume almost an order of magnitude less energy per second than spike trains. Because of their lower information rates, generator potentials are substantially less energy efficient than graded potentials. However, both are an order of magnitude more efficient than spike trains due to the higher energy costs and low information content of spikes, emphasizing that there is a two-fold cost of converting analogue to digital: information loss and cost inflation.
As in electronics, many of the brain's neural circuits convert continuous time signals into a discrete-time binary code. Although some neurons use only graded voltage signals, most convert these signals into discrete-time action potentials. Yet the costs and benefits associated with such a switch in signalling mechanism are largely unexplored. We investigate why the conversion of graded potentials to action potentials is accompanied by substantial information loss and how this changes energy efficiency. Action potentials are generated by a large cohort of noisy Na+ channels. We show that this channel noise and the added non-linearity of Na+ channels destroy input information provided by graded generator potentials. Furthermore, action potentials themselves cause information loss due to their finite widths because the neuron is oblivious to the input that is arriving during an action potential. Consequently, neurons with high firing rates lose a large amount of the information in their inputs. The additional cost incurred by voltage-gated Na+ channels also means that action potentials can encode less information per unit energy, proving metabolically inefficient, and suggesting penalisation of high firing rates in the nervous system.
Higher general intelligence attenuates age-associated cognitive decline and the risk of dementia. Thus, intelligence has been associated with cognitive reserve or resilience in normal aging. Neurophysiologically, intelligence is considered a complex capacity that depends on a global cognitive network rather than on isolated brain areas. An association of structural as well as functional brain network characteristics with intelligence has already been reported in young adults. We investigated the relationship between global structural brain network properties, general intelligence and age in a group of 43 cognitively healthy elderly individuals aged 60–85 years. Individuals were assessed cross-sectionally using the Wechsler Adult Intelligence Scale-Revised (WAIS-R) and diffusion-tensor imaging. Structural brain networks were reconstructed individually using deterministic tractography; global network properties (global efficiency, mean shortest path length, and clustering coefficient) were determined by graph theory and correlated with intelligence scores within both age groups. Network properties were significantly correlated with age, whereas no significant correlation with WAIS-R was observed. However, in a subgroup of 15 individuals aged 75 and above, the network properties were significantly correlated with WAIS-R. Our findings suggest that general intelligence and global properties of structural brain networks may not be generally associated in cognitively healthy elderly individuals. However, we provide first evidence of an association between global structural brain network properties and general intelligence in the advanced elderly. Intelligence might be affected by age-associated network deterioration only if a certain threshold of structural degeneration is exceeded. Thus, age-associated brain structural changes seem to be partially compensated by the network, and the range of this compensation might be a surrogate of cognitive reserve or brain resilience.
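The three global graph measures named above can be computed directly from an adjacency matrix. A minimal, self-contained sketch on a small unweighted toy graph (illustrative only; the study derives weighted networks from tractography):

```python
import numpy as np
from itertools import combinations

def shortest_path_lengths(adj):
    # All-pairs shortest paths by breadth-first search on a binary graph.
    n = len(adj)
    dist = np.full((n, n), np.inf)
    for s in range(n):
        dist[s, s] = 0
        frontier, d = [s], 0
        while frontier:
            d += 1
            nxt = []
            for u in frontier:
                for v in np.flatnonzero(adj[u]):
                    if dist[s, v] == np.inf:
                        dist[s, v] = d
                        nxt.append(v)
            frontier = nxt
    return dist

def global_efficiency(adj):
    # Mean inverse shortest path length over all node pairs.
    dist = shortest_path_lengths(adj)
    n = len(adj)
    return np.mean([1.0 / dist[i, j] for i, j in combinations(range(n), 2)])

def clustering_coefficient(adj):
    # Mean fraction of closed triangles around each node.
    coeffs = []
    for i in range(len(adj)):
        nb = np.flatnonzero(adj[i])
        k = len(nb)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = adj[np.ix_(nb, nb)].sum() / 2
        coeffs.append(2.0 * links / (k * (k - 1)))
    return float(np.mean(coeffs))

# A 5-node ring plus one chord (0-2):
A = np.zeros((5, 5), dtype=int)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]:
    A[i, j] = A[j, i] = 1
```

Mean shortest path length, the third measure, is simply the mean of the finite off-diagonal entries of `shortest_path_lengths(A)`.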
Correlated neuronal activity is a natural consequence of network connectivity and shared inputs to pairs of neurons, but the task-dependent modulation of correlations in relation to behavior also hints at a functional role. Correlations influence the gain of postsynaptic neurons, the amount of information encoded in the population activity and decoded by readout neurons, and synaptic plasticity. Further, correlations affect the power and spatial reach of extracellular signals such as the local field potential. A theory of correlated neuronal activity accounting for recurrent connectivity as well as fluctuating external sources is currently lacking. In particular, it is unclear how the recently found mechanism of active decorrelation by negative feedback on the population level affects the network response to externally applied correlated stimuli. Here, we present such an extension of the theory of correlations in stochastic binary networks. We show that (1) for homogeneous external input, the structure of correlations is mainly determined by the local recurrent connectivity, (2) homogeneous external inputs provide an additive, unspecific contribution to the correlations, (3) inhibitory feedback effectively decorrelates neuronal activity, even if neurons receive identical external inputs, and (4) identical synaptic input statistics to excitatory and to inhibitory cells increases intrinsically generated fluctuations and pairwise correlations. We further demonstrate how the accuracy of mean-field predictions can be improved by self-consistently including correlations. As a byproduct, we show that the cancellation of correlations between the summed inputs to pairs of neurons does not originate from the fast tracking of external input, but from the suppression of fluctuations on the population level by the local network.
This suppression is a necessary constraint, but not sufficient to determine the structure of correlations; specifically, the structure observed at finite network size differs from the prediction based on perfect tracking, even though perfect tracking implies suppression of population fluctuations.
The co-occurrence of action potentials of pairs of neurons within short time intervals has been known for a long time. Such synchronous events can appear time-locked to the behavior of an animal, and theoretical considerations also argue for a functional role of synchrony. Early theoretical work tried to explain correlated activity by neurons transmitting common fluctuations due to shared inputs. This, however, overestimates correlations. Recently, the recurrent connectivity of cortical networks was shown to be responsible for the observed low baseline correlations. Two different explanations were given: one argues that excitatory and inhibitory population activities closely follow the external inputs to the network, so that their effects on a pair of cells mutually cancel. The other relies on negative recurrent feedback to suppress fluctuations in the population activity, equivalent to small correlations. In a biological neuronal network one expects both external inputs and recurrence to affect correlated activity. The present work extends the theoretical framework of correlations to include both contributions and explains their qualitative differences. Moreover, the study shows that the arguments of fast tracking and recurrent feedback are not equivalent: only the latter correctly predicts the cell-type specific correlations.
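The shared-input account mentioned above can be illustrated with a toy simulation (not the binary-network theory itself): two signals receiving a common fluctuation with variance fraction `c` and private noise with fraction `1 - c` show a correlation close to `c`. It is exactly this kind of common fluctuation that, in the full theory, negative recurrent feedback suppresses.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200_000       # number of samples
c = 0.3           # variance fraction of the shared input

# Two input signals built from one shared and two private Gaussian sources.
shared = rng.standard_normal(T)
x1 = np.sqrt(c) * shared + np.sqrt(1 - c) * rng.standard_normal(T)
x2 = np.sqrt(c) * shared + np.sqrt(1 - c) * rng.standard_normal(T)

rho = np.corrcoef(x1, x2)[0, 1]
# rho is close to c: without decorrelating feedback, shared input
# translates directly into pairwise correlation.
```

As the text notes, naive feed-forward transmission of such common fluctuations overestimates the weak correlations actually observed in cortex.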
The role intrinsic statistical fluctuations play in creating avalanches – patterns of complex bursting activity with scale-free properties – is examined in leaky Markovian networks. Using this broad class of models, we develop a probabilistic approach that employs a potential energy landscape perspective coupled with a macroscopic description based on statistical thermodynamics. We identify six important thermodynamic quantities essential for characterizing system behavior as a function of network size: the internal potential energy, entropy, free potential energy, internal pressure, pressure, and bulk modulus. In agreement with classical phase transitions, these quantities evolve smoothly as a function of the network size until a critical value is reached. At that value, a discontinuity in pressure is observed that leads to a spike in the bulk modulus demarcating loss of thermodynamic robustness. We attribute this novel result to a reallocation of the ground states (global minima) of the system's stationary potential energy landscape caused by a noise-induced deformation of its topographic surface. Further analysis demonstrates that appreciable levels of intrinsic noise can cause avalanching, a complex mode of operation that dominates system dynamics at near-critical or subcritical network sizes. Illustrative examples are provided using an epidemiological model of bacterial infection, where avalanching has not been characterized before, and a previously studied model from computational neuroscience, where avalanching was erroneously attributed to specific neural architectures. The general methods developed here can be used to study the emergence of avalanching (and other complex phenomena) in many biological, physical and man-made interaction networks.
Networks of noisy interacting components arise in diverse scientific disciplines. Here, we develop a mathematical framework to study the underlying causes of a bursting phenomenon in network activity known as avalanching. As prototypical examples, we study a model of disease spreading in a population of individuals and a model of brain activity in a neural network. Although avalanching is well documented in neural networks, where it is thought to be crucial for learning, information processing, and memory, it has not been studied before in disease spreading. We employ tools originally used to analyze thermodynamic systems to argue that randomness in the actions of individual network components plays a fundamental role in avalanche formation. We show that avalanching is a spontaneous behavior, brought about by a phenomenon reminiscent of a phase transition in statistical mechanics, caused by increasing randomness as the network size decreases. Our work demonstrates that a previously suggested balanced feed-forward network structure is not necessary for neuronal avalanching. Instead, we attribute avalanching to a reallocation of the global minima of the network's stationary potential energy landscape, caused by a noise-induced deformation of its topographic surface.
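A minimal generator of the avalanching phenomenon discussed above is a critical branching process, in which each active unit triggers on average one successor. This sketch is an illustrative stand-in, not the leaky Markovian network model of the paper: at the critical branching ratio, cascade sizes span many scales, whereas subcritical dynamics die out quickly.

```python
import numpy as np

def avalanche_size(branching_ratio, rng, cap=10_000):
    # One cascade: each active unit spawns Poisson(branching_ratio) successors.
    size, active = 0, 1
    while active and size < cap:
        size += active
        active = rng.poisson(branching_ratio * active)
    return size

rng = np.random.default_rng(1)
critical = [avalanche_size(1.0, rng) for _ in range(5000)]     # scale-free regime
subcritical = [avalanche_size(0.5, rng) for _ in range(5000)]  # small, dying cascades
# Critical avalanches span many scales; subcritical ones stay small.
```

In the paper's framing, shrinking the network size increases intrinsic noise and pushes the system toward exactly this kind of broad, scale-free cascade statistics.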
The complex connectivity of the cerebral cortex suggests that inter-regional communication is a primary function. Using computational modeling, we show that anatomical connectivity may be a major determinant of global information flow in brain networks. A macaque brain network was implemented as a communication network in which signal units flowed between grey matter nodes along white matter paths. Compared to degree-matched surrogate networks, information flow on the macaque brain network was characterized by higher loss rates, faster transit times and lower throughput, suggesting that neural connectivity may be optimized for speed rather than fidelity. Much of global communication was mediated by a “rich club” of hub regions: a sub-graph comprised of high-degree nodes that are more densely interconnected with each other than predicted by chance. First, macaque communication patterns most closely resembled those observed for a synthetic rich club network, but were less similar to those seen in a synthetic small world network, suggesting that the former is a more fundamental feature of brain network topology. Second, rich club regions attracted the most signal traffic and likewise, connections between rich club regions carried more traffic than connections between non-rich club regions. Third, a number of rich club regions were significantly under-congested, suggesting that macaque connectivity actively shapes information flow, funneling traffic towards some nodes and away from others. Together, our results indicate a critical role of the rich club of hub nodes in dynamic aspects of global brain communication.
A fundamental question in systems neuroscience is how the structural connectivity of the cerebral cortex shapes global communication. Here, using computational modeling in conjunction with an anatomically realistic structural network, we show that cortico-cortical communication is constrained by high-level features of brain network topology. We find that neural network topology is configured in a way that prioritizes speed of information flow over reliability and total throughput. The defining characteristic of the information processing architecture of the network is a densely interconnected rich club of hub nodes. Namely, rich club nodes and connections between rich club nodes absorb the greatest proportion of total signal traffic. In addition, rich club connectivity appears to actively shape information flow, whereby signal traffic is biased towards some nodes and away from others. Finally, synthetic networks containing a rich club could almost perfectly reproduce the information flow patterns of the real anatomical network. Altogether, our data demonstrate that a central collective of highly interconnected hubs serves to facilitate cortico-cortical communication. By simulating communication on a static structural network we have revealed a dynamic aspect of the global information processing architecture and the critical role played by the rich club of hub nodes.
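The rich-club structure central to both passages above is quantified by the rich-club coefficient phi(k): the density of connections among nodes whose degree exceeds k. A minimal unnormalized sketch on a toy hub-and-spoke network (in practice phi(k) is normalized against degree-matched surrogate networks, as in the study):

```python
import numpy as np

def rich_club(adj, k):
    # Connection density among nodes with degree > k.
    deg = adj.sum(axis=0)
    rich = np.flatnonzero(deg > k)
    n = len(rich)
    if n < 2:
        return np.nan
    edges = adj[np.ix_(rich, rich)].sum() / 2
    return 2.0 * edges / (n * (n - 1))

# Toy network: a fully connected 4-node "club" (nodes 0-3), each hub also
# linked to two peripheral nodes (degrees: hubs 5, periphery 1).
A = np.zeros((12, 12), dtype=int)
for i in range(4):
    for j in range(i + 1, 4):
        A[i, j] = A[j, i] = 1
    for p in (4 + 2 * i, 5 + 2 * i):
        A[i, p] = A[p, i] = 1
```

Here the four hubs form a maximally dense sub-graph, so phi rises toward 1 as k climbs past the peripheral degrees, which is the signature of a rich club.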
Whether, when, how, and why increased complexity evolves in biological populations is a longstanding open question. In this work we combine a recently developed method for evolving virtual organisms with an information-theoretic metric of morphological complexity in order to investigate how the complexity of morphologies, which are evolved for locomotion, varies across different environments. We first demonstrate that selection for locomotion results in the evolution of organisms with morphologies that increase in complexity over evolutionary time beyond what would be expected due to random chance. This provides evidence that the observed increase in complexity is the result of a driven rather than a passive trend. In subsequent experiments we demonstrate that, when a cost of complexity is imposed, more complex morphologies evolve in complex environments than in a simple environment. This suggests that in some niches evolution may act to complexify the body plans of organisms, while in other niches selection favors simpler body plans.
The evolution of complexity, a central issue of evolutionary theory since Darwin's time, remains a controversial topic. One particular question of interest is how the complexity of an organism's body plan (morphology) is influenced by the complexity of the environment in which it evolved. Ideally, it would be desirable to perform investigations on living organisms in which environmental complexity is under experimental control, but our ability to do so in a limited timespan and in a controlled manner is severely constrained. In lieu of such studies, here we employ computer simulations capable of evolving the body plans of virtual organisms to investigate this question in silico. By evolving virtual organisms for locomotion in a variety of environments, we are able to demonstrate that selecting for locomotion causes more complex morphologies to evolve than would be expected solely due to random chance. Moreover, if increased complexity incurs a cost (as it is thought to do in biology), then more complex environments tend to lead to the evolution of more complex body plans than those that evolve in a simpler environment. This result supports the idea that the morphological complexity of organisms is influenced by the complexity of the environments in which they evolve.
Sensory information is encoded in the response of neuronal populations. How might this information be decoded by downstream neurons? Here we analyzed the responses of simultaneously recorded barrel cortex neurons to sinusoidal vibrations of varying amplitudes preceded by three adapting stimuli of 0, 6 and 12 µm in amplitude. Using the framework of signal detection theory, we quantified the performance of a linear decoder which sums the responses of neurons after applying an optimum set of weights. Optimum weights were found by the analytical solution that maximized the average signal-to-noise ratio based on Fisher linear discriminant analysis. This provided a biologically plausible decoder that took into account the neuronal variability, covariability, and signal correlations. The optimal decoder achieved consistent improvement in discrimination performance over simple pooling. Decorrelating neuronal responses by trial shuffling revealed that, unlike pooling, the performance of the optimal decoder was minimally affected by noise correlation. In the non-adapted state, noise correlation enhanced the performance of the optimal decoder for some populations. Under adaptation, however, noise correlation always degraded the performance of the optimal decoder. Nonetheless, sensory adaptation improved the performance of the optimal decoder mainly by increasing signal correlation more than noise correlation. Adaptation induced little systematic change in the relative direction of signal and noise. Thus, a decoder which was optimized under the non-adapted state generalized well across states of adaptation.
In the natural environment, animals are constantly exposed to sensory stimulation. A key question in systems neuroscience is how attributes of a sensory stimulus can be “read out” from the activity of a population of brain cells. We chose to investigate this question in the whisker-mediated touch system of rats because of its well-established anatomy and exquisite functionality. The whisker system is one of the major channels through which rodents acquire sensory information about their surrounding environment. The response properties of brain cells dynamically adjust to the prevailing diet of sensory stimulation, a process termed sensory adaptation. Here, we applied a biologically plausible scheme whereby different brain cells contribute to sensory readout with different weights. We established the set of weights that provide the optimal readout under different states of adaptation. The results yield an upper bound for the efficiency of coding sensory information. We found that the ability to decode sensory information improves with adaptation. However, a readout mechanism that does not adjust to the state of adaptation can still perform remarkably well.
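The weighted-sum readout described above has a closed-form optimum: Fisher's linear discriminant gives weights w = C^-1 (mu1 - mu0), which maximize the signal-to-noise ratio of the pooled response by accounting for the noise covariance C. The simulated population responses below are illustrative assumptions, not the recorded barrel cortex data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_trials = 10, 2000

# Mean responses to two stimulus amplitudes (hypothetical tuning).
mu0 = rng.uniform(2.0, 4.0, n_neurons)
mu1 = mu0 + rng.uniform(0.3, 0.8, n_neurons)          # signal difference
# Correlated noise: heterogeneous variances plus a common covariance term.
C = np.diag(rng.uniform(0.3, 1.2, n_neurons)) + 0.1
L = np.linalg.cholesky(C)
r0 = mu0 + rng.standard_normal((n_trials, n_neurons)) @ L.T
r1 = mu1 + rng.standard_normal((n_trials, n_neurons)) @ L.T

w_opt = np.linalg.solve(C, mu1 - mu0)   # Fisher-optimal weights
w_pool = np.ones(n_neurons)             # simple unweighted pooling

def accuracy(w):
    # Classify each trial by comparing the weighted sum to the class midpoint.
    theta = 0.5 * (w @ mu0 + w @ mu1)
    return 0.5 * (np.mean(r1 @ w > theta) + np.mean(r0 @ w <= theta))

# With heterogeneous, correlated noise, the optimal weights typically
# outperform simple pooling, mirroring the improvement reported above.
```

Trial shuffling, as used in the study, amounts to breaking the off-diagonal structure of `C` while keeping each neuron's marginal statistics, which is why it isolates the effect of noise correlations on the two decoders.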
Maximum entropy models are the least structured probability distributions that exactly reproduce a chosen set of statistics measured in an interacting network. Here we use this principle to construct probabilistic models which describe the correlated spiking activity of populations of up to 120 neurons in the salamander retina as it responds to natural movies. Already in groups as small as 10 neurons, interactions between spikes can no longer be regarded as small perturbations in an otherwise independent system; for 40 or more neurons pairwise interactions need to be supplemented by a global interaction that controls the distribution of synchrony in the population. Here we show that such “K-pairwise” models—being systematic extensions of the previously used pairwise Ising models—provide an excellent account of the data. We explore the properties of the neural vocabulary by: 1) estimating its entropy, which constrains the population's capacity to represent visual information; 2) classifying activity patterns into a small set of metastable collective modes; 3) showing that the neural codeword ensembles are extremely inhomogeneous; 4) demonstrating that the state of individual neurons is highly predictable from the rest of the population, enabling error correction.
Sensory neurons encode information about the world into sequences of spiking and silence. Multi-electrode array recordings have enabled us to move from single units to measuring the responses of many neurons simultaneously, and thus to ask questions about how populations of neurons as a whole represent their input signals. Here we build on previous work that has shown that in the salamander retina, pairs of retinal ganglion cells are only weakly correlated, yet the population spiking activity exhibits large departures from a model where the neurons would be independent. We analyze data from more than a hundred salamander retinal ganglion cells and characterize their collective response using maximum entropy models of statistical physics. With these models in hand, we can put bounds on the amount of information encoded by the neural population, constructively demonstrate that the code has error correcting redundancy, and advance two hypotheses about the neural code: that collective states of the network could carry stimulus information, and that the distribution of neural activity patterns has very nontrivial statistical properties, possibly related to critical systems in statistical physics.
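The pairwise Ising model underlying the K-pairwise extension above can be sketched for a very small population by exhaustive enumeration: gradient ascent on the log-likelihood adjusts the fields h and couplings J until the model reproduces the target means and pairwise correlations. This moment-matching sketch is illustrative only: the synthetic target moments stand in for recorded data, and fits to 100+ neurons require Monte Carlo methods rather than enumeration.

```python
import numpy as np
from itertools import product

n = 5
STATES = np.array(list(product([0, 1], repeat=n)), dtype=float)  # all 2^n patterns

def model_moments(h, J):
    # Boltzmann distribution p(s) ~ exp(h.s + sum_{i<j} J_ij s_i s_j)
    E = STATES @ h + np.sum((STATES @ np.triu(J, 1)) * STATES, axis=1)
    p = np.exp(E - E.max())
    p /= p.sum()
    mean = p @ STATES                         # <s_i>
    corr = (STATES * p[:, None]).T @ STATES   # <s_i s_j>
    return mean, corr

# Target moments, generated from a known model in lieu of real data.
rng = np.random.default_rng(3)
h_true = rng.normal(0, 0.5, n)
J_true = np.triu(rng.normal(0, 0.5, (n, n)), 1)
target_mean, target_corr = model_moments(h_true, J_true)

# Gradient ascent on the (concave) log-likelihood: each parameter moves
# toward matching its measured moment.
h, J = np.zeros(n), np.zeros((n, n))
for _ in range(20_000):
    m, c = model_moments(h, J)
    h += 0.2 * (target_mean - m)
    J += 0.2 * np.triu(target_corr - c, 1)
```

The K-pairwise models described above add one further set of constraints of the same kind: the distribution of the summed population activity K = sum_i s_i.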
Functional neuroimaging research provides detailed observations of the response patterns that natural sounds (e.g. human voices and speech, animal cries, environmental sounds) evoke in the human brain. The computational and representational mechanisms underlying these observations, however, remain largely unknown. Here we combine high spatial resolution (3 and 7 Tesla) functional magnetic resonance imaging (fMRI) with computational modeling to reveal how natural sounds are represented in the human brain. We compare competing models of sound representations and select the model that most accurately predicts fMRI response patterns to natural sounds. Our results show that the cortical encoding of natural sounds entails the formation of multiple representations of sound spectrograms with different degrees of spectral and temporal resolution. The cortex derives these multi-resolution representations through frequency-specific neural processing channels and through the combined analysis of the spectral and temporal modulations in the spectrogram. Furthermore, our findings suggest that a spectral-temporal resolution trade-off may govern the modulation tuning of neuronal populations throughout the auditory cortex. Specifically, our fMRI results suggest that neuronal populations in posterior/dorsal auditory regions preferably encode coarse spectral information with high temporal precision. Vice versa, neuronal populations in anterior/ventral auditory regions preferably encode fine-grained spectral information with low temporal precision. We propose that such a multi-resolution analysis may be crucially relevant for flexible and behaviorally-relevant sound processing and may constitute one of the computational underpinnings of functional specialization in auditory cortex.
How does the human brain analyze natural sounds? Previous functional neuroimaging research could only describe the response patterns that sounds evoke in the human brain at the level of preferential regional activations. A comprehensive account of the neural basis of human hearing, however, requires deriving computational models that are able to provide quantitative predictions of brain responses to natural sounds. Here, we make a significant step in this direction by combining functional magnetic resonance imaging (fMRI) with computational modeling. We compare competing computational models of sound representations and select the model that most accurately predicts the measured fMRI response patterns. The computational models describe the processing of three relevant properties of natural sounds: frequency, temporal modulations and spectral modulations. We find that a model that represents spectral and temporal modulations jointly and in a frequency-dependent fashion provides the best account of fMRI responses, and that the functional specialization of auditory cortical fields can be partially accounted for by their modulation tuning. Our results provide insight into how natural sounds are encoded in human auditory cortex, and our methodological approach constitutes an advance in the way this question can be addressed in future studies.
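The spectral and temporal modulations referred to above can be extracted as the 2D Fourier power of a spectrogram: one axis gives temporal modulation (Hz), the other spectral modulation (cycles per unit of the spectral axis). The synthetic "moving ripple" stimulus below is an illustrative assumption, not the natural-sound stimuli of the study.

```python
import numpy as np

def modulation_spectrum(spec, dt, df):
    # spec: (n_freq, n_time) spectrogram; returns 2D modulation power
    # and the spectral/temporal modulation axes.
    M = np.fft.fftshift(np.fft.fft2(spec - spec.mean()))
    power = np.abs(M) ** 2
    spec_mod = np.fft.fftshift(np.fft.fftfreq(spec.shape[0], d=df))
    temp_mod = np.fft.fftshift(np.fft.fftfreq(spec.shape[1], d=dt))
    return power, spec_mod, temp_mod

# A moving ripple: spectral modulation 0.5 cyc/unit, temporal modulation 8 Hz.
x = np.arange(64)[:, None] * 0.125      # spectral axis, df = 0.125 units
t = np.arange(200)[None, :] * 0.005     # 1 s of time, dt = 5 ms
S = np.cos(2 * np.pi * (0.5 * x + 8.0 * t))
power, sm, tm = modulation_spectrum(S, dt=0.005, df=0.125)
i, j = np.unravel_index(np.argmax(power), power.shape)
# The modulation spectrum peaks at |0.5| cyc/unit and |8| Hz.
```

A joint, frequency-dependent representation of this kind, rather than frequency or modulation alone, is what the winning model in the comparison above encodes.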
The cable equation is an appropriate framework for modeling electrical neural signaling that takes place at a timescale at which the ionic concentrations vary little. However, in neural tissue there are also key dynamic processes that occur at longer timescales. For example, extended periods of intense neural signaling may cause the local extracellular K+-concentration to increase by several millimolar. The clearance of this excess K+ depends partly on diffusion in the extracellular space, partly on local uptake by astrocytes, and partly on intracellular transport (spatial buffering) within astrocytes. These processes, which take place on the timescale of seconds, demand a mathematical description able to account for the spatiotemporal variations in ion concentrations as well as the subsequent effects of these variations on the membrane potential. Here, we present a general electrodiffusive formalism for modeling ion concentration dynamics in a one-dimensional geometry, including both the intra- and extracellular domains. Based on the Nernst-Planck equations, this formalism ensures that the membrane potential and ion concentrations are mutually consistent, ensures global particle/charge conservation, and accounts for diffusion and concentration-dependent variations in resistivity. We apply the formalism to a model of astrocytes exchanging ions with the extracellular space. The simulations show that K+-removal from high-concentration regions is driven by a local depolarization of the astrocyte membrane, which concertedly (i) increases the local astrocytic uptake of K+, (ii) suppresses extracellular transport of K+, (iii) increases axial transport of K+ within astrocytes, and (iv) facilitates astrocytic release of K+ in regions where the extracellular concentration is low. Together, these mechanisms seem to provide a robust regulatory scheme for shielding the extracellular space from excess K+.
When neurons generate electrical signals they release potassium ions (K+) into the extracellular space. During periods of intense neural activity, the local extracellular K+ may increase drastically. If it becomes too high, it can lead to neural dysfunction. Astrocytes (a type of glial cell) are involved in preventing this from happening. Astrocytes can take up excess K+, transport it intracellularly, and release it in regions where the concentration is lower. This process is called spatial buffering, and a full mechanistic understanding of it is currently lacking. The aim of this work is twofold: First, we develop a formalism for modeling ion concentration dynamics in the intra- and extracellular space. The formalism is general, and could be used to simulate many cellular processes. It accounts for ion transport due to diffusion (along concentration gradients) as well as electrical migration (along voltage gradients). It extends previous, related formalisms, which have focused only on intracellular dynamics. Second, we apply the formalism to model how astrocytes exchange ions with the extracellular space. We conclude that the membrane mechanisms possessed by astrocytes seem optimal for shielding the extracellular space from excess K+, and provide a full mechanistic description of the spatial (K+) buffering process.
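Both summaries build on the Nernst-Planck description of electrodiffusion, in which the flux of each ion species combines a diffusive term (down the concentration gradient) and an electrical-migration term (along the voltage gradient). A minimal one-dimensional sketch, with illustrative parameter values rather than those used in the paper:

```python
F = 96485.0   # Faraday constant (C/mol)
R = 8.314     # gas constant (J/(mol K))
T = 310.0     # temperature (K)

def nernst_planck_flux(D, z, c, dc_dx, dv_dx):
    """1-D Nernst-Planck flux density (mol / (m^2 s)) for one ion species:
    a diffusive term down the concentration gradient plus an
    electrical-migration term along the voltage gradient."""
    diffusion = -D * dc_dx
    migration = -D * z * F / (R * T) * c * dv_dx
    return diffusion + migration

# Illustrative K+ example (dilute-solution diffusion constant, valence +1).
D_K = 1.96e-9  # m^2/s
# With excess K+ ahead (positive concentration gradient) and no electric
# field, the flux is negative: K+ moves away from the high-concentration
# region, as in extracellular clearance.
flux = nernst_planck_flux(D_K, +1, c=3.0, dc_dx=10.0, dv_dx=0.0)
```

In the full formalism this flux law is applied in both the intra- and extracellular domains and coupled to the membrane potential, which is what enforces the consistency and conservation properties the abstract highlights.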
Standard theories of decision-making involving delayed outcomes predict that people should defer a punishment, whilst advancing a reward. In some cases, such as pain, people seem to prefer to expedite punishment, implying that its anticipation carries a cost, often conceptualized as ‘dread’. Despite empirical support for the existence of dread, whether and how it depends on prospective delay is unknown. Furthermore, it is unclear whether dread represents a stable component of value, or is modulated by biases such as framing effects. Here, we examine choices made between different numbers of painful shocks to be delivered faithfully at different time points up to 15 minutes in the future, as well as choices between hypothetical painful dental appointments at time points of up to approximately eight months in the future, to test alternative models for how future pain is disvalued. We show that future pain initially becomes increasingly aversive with increasing delay, but does so at a decreasing rate. This is consistent with a value model in which moment-by-moment dread increases up to the time of expected pain, such that dread becomes equivalent to the discounted expectation of pain. For a minority of individuals pain has maximum negative value at intermediate delay, suggesting that the dread function may itself be prospectively discounted in time. Framing an outcome as relief reduces the overall preference to expedite pain, which can be parameterized by reducing the rate of the dread-discounting function. Our data support an account of disvaluation for primary punishments such as pain, which differs fundamentally from existing models applied to financial punishments, in which dread exerts a powerful but time-dependent influence over choice.
People often prefer to ‘get pain out of the way’, treating pain in the future as more significant than pain now. One explanation, termed ‘dread’, is that anticipating pain is unpleasant or disadvantageous, rather like pain itself. Human brain imaging studies support the existence of dread, though it is unknown whether and how dread depends on the timing of future pain. We address this question by offering people decisions between moderately painful stimuli, and separately between imagined painful dental appointments occurring at different time points in the future, and use their choices to estimate dread. We show that future pain initially becomes more unpleasant when it is delayed, but as pain is moved further into the future, the effect of delay decreases. This is consistent with dread increasing as anticipated pain draws nearer, which is then combined with a general (and opposing) tendency to down-weight the significance of future events. We also show that dread can be attenuated by describing pain in terms of relief from an imagined even more severe pain. These observations reveal important principles about how people estimate the value of anticipated pain – relevant to a diverse range of human emotion and behavior.
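The value model described above — discounted pain plus accumulated dread, where dread at each moment equals the discounted expectation of the upcoming pain — can be written as a toy function. The parameter values are illustrative, not fitted values from the study:

```python
def disvalue(pain, delay, gamma=0.9, alpha=0.5, beta=1.0):
    """Toy disvaluation of `pain` units delivered `delay` steps from now.
    gamma: exponential delay discount; alpha: weight of accumulated dread;
    beta: prospective discounting of the dread itself (beta = 1: none).
    At each moment t before delivery, dread equals the discounted
    expectation of the upcoming pain, gamma**(delay - t) * pain."""
    discounted_pain = pain * gamma ** delay
    dread = sum(beta ** t * gamma ** (delay - t) * pain
                for t in range(delay + 1))
    return -(discounted_pain + alpha * dread)
```

With beta = 1 the disvalue of future pain grows with delay at a decreasing rate, matching the majority pattern; with beta < 1 (dread itself prospectively discounted) aversion peaks at an intermediate delay, as reported for a minority of individuals.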
Experimental data from neuroscience suggest that a substantial amount of knowledge is stored in the brain in the form of probability distributions over network states and trajectories of network states. We provide a theoretical foundation for this hypothesis by showing that even very detailed models for cortical microcircuits, with data-based diverse nonlinear neurons and synapses, have a stationary distribution of network states and trajectories of network states to which they converge exponentially fast from any initial state. We demonstrate that this convergence holds in spite of the non-reversibility of the stochastic dynamics of cortical microcircuits. We further show that, in the presence of background network oscillations, separate stationary distributions emerge for different phases of the oscillation, in accordance with experimentally reported phase-specific codes. We complement these theoretical results by computer simulations that investigate resulting computation times for typical probabilistic inference tasks on these internally stored distributions, such as marginalization or marginal maximum-a-posteriori estimation. Furthermore, we show that the inherent stochastic dynamics of generic cortical microcircuits enables them to quickly generate approximate solutions to difficult constraint satisfaction problems, where stored knowledge and current inputs jointly constrain possible solutions. This provides a powerful new computing paradigm for networks of spiking neurons, that also throws new light on how networks of neurons in the brain could carry out complex computational tasks such as prediction, imagination, memory recall and problem solving.
The brain has not only the capability to process sensory input, but it can also produce predictions, imaginations, and solve problems that combine learned knowledge with information about a new scenario. But although these more complex information processing capabilities lie at the heart of human intelligence, we still do not know how they are organized and implemented in the brain. Numerous studies in cognitive science and neuroscience conclude that many of these processes involve probabilistic inference. This suggests that neuronal circuits in the brain process information in the form of probability distributions, but we are missing insight into how complex distributions could be represented and stored in large and diverse networks of neurons in the brain. We prove in this article that realistic cortical microcircuit models can store complex probabilistic knowledge by embodying probability distributions in their inherent stochastic dynamics – yielding a knowledge representation in which typical probabilistic inference problems such as marginalization become straightforward readout tasks. We show that in cortical microcircuit models such computations can be performed satisfactorily within a few hundred milliseconds. Furthermore, we demonstrate how internally stored distributions can be programmed in a simple manner to endow a neural circuit with powerful problem solving capabilities.
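The claim that marginalization becomes a "straightforward readout task" can be illustrated with the smallest possible sampling network: two binary units whose stochastic updates perform Gibbs sampling from a Boltzmann distribution, with the marginal read out as the fraction of time a unit is active. The network and its parameters are invented for illustration:

```python
import math
import random

def gibbs_marginal(b1, b2, w, steps=20000, seed=0):
    """Estimate P(x1 = 1) under p(x) ∝ exp(b1*x1 + b2*x2 + w*x1*x2)
    by letting the two units update stochastically in turn (Gibbs
    sampling) and reading out how often unit 1 is active."""
    rng = random.Random(seed)
    x1 = x2 = 0
    active = 0
    for _ in range(steps):
        x1 = 1 if rng.random() < 1 / (1 + math.exp(-(b1 + w * x2))) else 0
        x2 = 1 if rng.random() < 1 / (1 + math.exp(-(b2 + w * x1))) else 0
        active += x1
    return active / steps

def exact_marginal(b1, b2, w):
    """Ground truth by enumerating the four network states."""
    weights = {(x1, x2): math.exp(b1 * x1 + b2 * x2 + w * x1 * x2)
               for x1 in (0, 1) for x2 in (0, 1)}
    z = sum(weights.values())
    return sum(v for (x1, _), v in weights.items() if x1 == 1) / z

estimate = gibbs_marginal(0.5, -0.3, 1.0)
truth = exact_marginal(0.5, -0.3, 1.0)
```

The readout requires no extra computation beyond counting activity — which is the sense in which a network whose stationary distribution encodes the stored knowledge makes marginalization trivial.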
Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single unit properties as widely used population code models (e.g. tuning curves, Poisson distributed spike trains), balanced networks are orders of magnitudes more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated.
Two observations about the cortex have puzzled and fascinated neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks representing information reliably and with a small number of spikes. To achieve such efficiency, spikes of individual neurons must communicate prediction errors about a common population-level signal, automatically resulting in balanced excitation and inhibition and highly variable neural responses. We illustrate our approach by focusing on the implementation of linear dynamical systems. Among other things, this allows us to construct a network of spiking neurons that can integrate input signals, yet is robust against many perturbations. Most importantly, our approach shows that neural variability cannot be equated to noise. Despite exhibiting the same single unit properties as other widely used network models, our balanced networks are orders of magnitudes more reliable. Our results suggest that the precision of cortical representations has been strongly underestimated.
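The central spike rule in these abstracts — a neuron fires only if its spike improves the population readout — can be caricatured with two units whose spikes push a leaky readout up or down. All constants below are illustrative:

```python
def track(signal, w=0.1, leak=0.9):
    """Greedy efficient-coding caricature: a '+' unit and a '-' unit each
    spike only when their spike moves the decaying readout x_hat closer
    to the target signal x."""
    x_hat, readout = 0.0, []
    for x in signal:
        x_hat *= leak  # readout decays between spikes
        for kick in (+w, -w):
            # spike iff it reduces the representation error
            if abs(x - (x_hat + kick)) < abs(x - x_hat):
                x_hat += kick
        readout.append(x_hat)
    return readout

# Track a constant target: the readout climbs to the signal and is then
# maintained by just enough spikes to offset the leak each step.
trace = track([1.0] * 100)
```

Even this caricature shows the key property: spike timing is dictated by the momentary representation error, not by an independent noise source, so the variable-looking spike train still yields a precise readout.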
It is well known that even under identical task conditions, there is a tremendous amount of trial-to-trial variability in both brain activity and behavioral output. Thus far, the vast majority of event-related potential (ERP) studies investigating the relationship between trial-to-trial fluctuations in brain activity and behavioral performance have only tested a monotonic relationship between them. However, it was recently found that across-trial variability can correlate with behavioral performance independent of trial-averaged activity. This finding predicts a U- or inverted-U-shaped relationship between trial-to-trial brain activity and behavioral output, depending on whether larger brain variability is associated with better or worse behavior, respectively. Using a visual stimulus detection task, we provide evidence from human electrocorticography (ECoG) for an inverted-U brain-behavior relationship: When the raw fluctuation in broadband ECoG activity is closer to the across-trial mean, hit rate is higher and reaction times faster. Importantly, we show that this relationship is present not only in the post-stimulus task-evoked brain activity, but also in the pre-stimulus spontaneous brain activity, suggesting anticipatory brain dynamics. Our findings are consistent with the presence of stochastic noise in the brain. They further support attractor network theories, which postulate that the brain settles into a more confined state space under task performance, and proximity to the targeted trajectory is associated with better performance.
The human brain is notoriously “noisy”. Even with identical physical sensory inputs and task demands, brain responses and behavioral output vary tremendously from trial to trial. Such brain and behavioral variability and the relationship between them have been the focus of intense neuroscience research for decades. Traditionally, it is thought that the relationship between trial-to-trial brain activity and behavioral performance is monotonic: the highest or lowest brain activity levels are associated with the best behavioral performance. Using invasive recordings in neurosurgical patients, we demonstrate an inverted-U relationship between brain and behavioral variability. Under such a relationship, moderate brain activity is associated with the best performance, while both very low and very high brain activity levels are predictive of compromised performance. These results have significant implications for our understanding of brain functioning. They further support recent theoretical frameworks that view the brain as an active nonlinear dynamical system instead of a passive signal-processing device.
The mismatch negativity (MMN) is an event related potential evoked by violations of regularity. Here, we present a model of the underlying neuronal dynamics based upon the idea that auditory cortex continuously updates a generative model to predict its sensory inputs. The MMN is then modelled as the superposition of the electric fields evoked by neuronal activity reporting prediction errors. The process by which auditory cortex generates predictions and resolves prediction errors was simulated using generalised (Bayesian) filtering – a biologically plausible scheme for probabilistic inference on the hidden states of hierarchical dynamical models. The resulting scheme generates realistic MMN waveforms, explains the qualitative effects of deviant probability and magnitude on the MMN – in terms of latency and amplitude – and makes quantitative predictions about the interactions between deviant probability and magnitude. This work advances a formal understanding of the MMN and – more generally – illustrates the potential for developing computationally informed dynamic causal models of empirical electromagnetic responses.
Computational neuroimaging enables quantitative inferences about underlying mechanisms from non-invasive measures of brain activity. Ultimately, we would like to understand these mechanisms not only in terms of physiology but also in terms of computation. So far, this has not been addressed by mathematical models of neuroimaging data (e.g., dynamic causal models), which have rather focused on ever more detailed inferences about physiology. Here we present the first instance of a dynamic causal model that explains electrophysiological data in terms of computation rather than physiology. Concretely, we predict the mismatch negativity – an event-related potential elicited by regularity violation – from the dynamics of perceptual inference as prescribed by the free energy principle. The resulting model explains the waveform of the mismatch negativity and some of its phenomenological properties at a level of precision that has not been attempted before. This highlights the potential of neurocomputational dynamic causal models to enable inferences about neurocomputational mechanisms from neuroimaging data.
Medically-induced coma is a drug-induced state of profound brain inactivation and unconsciousness used to treat refractory intracranial hypertension and to manage treatment-resistant epilepsy. The state of coma is achieved by continually monitoring the patient's brain activity with an electroencephalogram (EEG) and manually titrating the anesthetic infusion rate to maintain a specified level of burst suppression, an EEG marker of profound brain inactivation in which bursts of electrical activity alternate with periods of quiescence or suppression. Medical coma is often required for several days, making accurate manual control infeasible. A more rational approach would be to implement a brain-machine interface (BMI) that monitors the EEG and adjusts the anesthetic infusion rate in real time to maintain the specified target level of burst suppression. We used a stochastic control framework to develop a BMI to control medically-induced coma in a rodent model. The BMI controlled an EEG-guided closed-loop infusion of the anesthetic propofol to maintain precisely specified dynamic target levels of burst suppression. We used as the control signal the burst suppression probability (BSP), the brain's instantaneous probability of being in the suppressed state. We characterized the EEG response to propofol using a two-dimensional linear compartment model and estimated the model parameters specific to each animal prior to initiating control. We derived a recursive Bayesian binary filter algorithm to compute the BSP from the EEG and controllers using a linear-quadratic-regulator and a model-predictive control strategy. Both controllers used the estimated BSP as feedback. The BMI accurately controlled burst suppression in individual rodents across dynamic target trajectories, and enabled prompt transitions between target levels while avoiding both undershoot and overshoot.
The median performance error for the BMI was 3.6%, the median bias was -1.4% and the overall posterior probability of reliable control was 1 (95% Bayesian credibility interval of [0.87, 1.0]). A BMI can maintain reliable and accurate real-time control of medically-induced coma in a rodent model suggesting this strategy could be applied in patient care.
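The closed loop described above can be caricatured with a first-order linear plant standing in for the BSP response to propofol and a simple proportional-integral controller standing in for the paper's LQR and model-predictive controllers. All numbers are invented, not fitted parameters:

```python
def run_closed_loop(target=0.8, steps=200, a=0.95, b=0.05, kp=2.0, ki=0.1):
    """Toy closed loop: plant bsp_{n+1} = a*bsp_n + b*u_n, where u_n is a
    non-negative infusion rate set by PI feedback on the BSP error."""
    bsp, integral = 0.0, 0.0
    for _ in range(steps):
        error = target - bsp
        integral += error
        u = max(0.0, kp * error + ki * integral)  # infusion cannot be negative
        bsp = a * bsp + b * u
    return bsp

final_bsp = run_closed_loop()
```

The integral term is what removes steady-state bias, mirroring why pure proportional control of an infusion would undershoot a constant target; the paper's controllers additionally exploit the identified plant model to shape transitions between target levels.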
Brain-machine interfaces (BMI) for closed-loop control of anesthesia have the potential to enable fully automated and precise control of brain states in patients requiring anesthesia care. Medically-induced coma is one such drug-induced state in which the brain is profoundly inactivated and unconscious and the electroencephalogram (EEG) pattern consists of bursts of electrical activity alternating with periods of suppression, termed burst suppression. Medical coma is induced to treat refractory intracranial hypertension and uncontrollable seizures. The state of coma is often required for days, making accurate manual control infeasible. We develop a BMI that can automatically and precisely control the level of burst suppression in real time in individual rodents. The BMI consists of novel estimation and control algorithms that take as input the EEG activity, estimate the burst suppression level based on this activity, and use this estimate as feedback to control the drug infusion rate in real time. The BMI maintains precise control and promptly changes the level of burst suppression while avoiding overshoot or undershoot. Our work demonstrates the feasibility of automatic, reliable, and accurate control of medical coma that can provide considerable therapeutic benefits.
The brain must dynamically integrate, coordinate, and respond to internal and external stimuli across multiple time scales. Non-invasive measurements of brain activity with fMRI have greatly advanced our understanding of the large-scale functional organization supporting these fundamental features of brain function. Conclusions from previous resting-state fMRI investigations were based upon static descriptions of functional connectivity (FC), and only recently studies have begun to capitalize on the wealth of information contained within the temporal features of spontaneous BOLD FC. Emerging evidence suggests that dynamic FC metrics may index changes in macroscopic neural activity patterns underlying critical aspects of cognition and behavior, though limitations with regard to analysis and interpretation remain. Here, we review recent findings, methodological considerations, neural and behavioral correlates, and future directions in the emerging field of dynamic FC investigations.
Keywords: Functional connectivity; Resting state; Dynamics; Spontaneous activity; Functional MRI (fMRI); Fluctuations
The manner in which different distributions of synaptic weights onto cortical neurons shape their spiking activity remains an open question. To characterize a homogeneous neuronal population, we use the master equation for generalized leaky integrate-and-fire neurons with shot-noise synapses. We develop fast semi-analytic numerical methods to solve this equation for either current or conductance synapses, with and without synaptic depression. We show that its solutions match simulations of equivalent neuronal networks better than those of the Fokker-Planck equation and we compute bounds on the network response to non-instantaneous synapses. We apply these methods to study different synaptic weight distributions in feed-forward networks. We characterize the synaptic amplitude distributions using a set of measures, called tail weight numbers, designed to quantify the preponderance of very strong synapses. Even when synaptic amplitude distributions are matched for both total current and average synaptic weight, distributions with sparse but strong synapses produce higher responses for small inputs, leading to a larger operating range. Furthermore, despite their small number, such synapses enable the network to respond faster and with more stability in the face of external fluctuations.
Neurons communicate via action potentials. Typically, depolarizations caused by presynaptic firing are small, such that many synaptic inputs are necessary to exceed the firing threshold. This is the assumption made by standard mathematical approaches such as the Fokker-Planck formalism. However, in some cases the synaptic weight can be large: on occasion, a single input is capable of exceeding threshold. Although this phenomenon can be studied with computational simulations, these can be impractical for large-scale brain simulations or suffer from insufficient knowledge of the relevant parameters. Improving upon the standard Fokker-Planck approach, we develop a hybrid approach combining semi-analytical and computational methods into an efficient technique for analyzing the effect that rare and large synaptic weights can have on neural network activity. Our method has both neurobiological and methodological implications. Sparse but powerful synapses provide networks with response celerity, enhanced bandwidth and stability, even when the networks are matched for average input. We introduce a measure characterizing this response. Furthermore, our method can characterize the sub-threshold membrane potential distribution and spiking statistics of very large networks built from tens to hundreds of distinct but internally homogeneous neuronal populations throughout the brain.
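The effect of rare but strong synapses can be reproduced with the brute-force Monte Carlo approach that the semi-analytic method is designed to avoid at scale: an integrate-and-fire neuron driven by Poisson shot noise, comparing a sparse-strong input with a dense-weak input matched for mean drive. All parameters are illustrative:

```python
import math
import random

def lif_output_rate(w, rate_in, T=20.0, dt=1e-4, tau=0.02, v_th=1.0, seed=1):
    """Leaky integrate-and-fire neuron driven by Poisson shot noise:
    each presynaptic event makes the voltage jump by w; the neuron
    fires and resets when the voltage reaches v_th."""
    rng = random.Random(seed)
    v, n_spikes, steps = 0.0, 0, int(T / dt)
    decay = math.exp(-dt / tau)
    for _ in range(steps):
        v *= decay
        # at most one input event per step (adequate: rate_in * dt <= 0.5)
        if rng.random() < rate_in * dt:
            v += w
        if v >= v_th:
            n_spikes += 1
            v = 0.0
    return n_spikes / T

# Matched mean drive (w * rate_in = 25/s, mean voltage ~0.5 = subthreshold):
strong_sparse = lif_output_rate(w=1.25, rate_in=20.0)   # each event suffices
weak_dense = lif_output_rate(w=0.005, rate_in=5000.0)   # fluctuations tiny
```

Despite identical average input, the sparse-strong configuration responds at tens of spikes per second while the dense-weak one stays silent — the "higher responses for small inputs" effect that the tail weight numbers are designed to quantify.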
We show, for the first time, that in cortical areas such as the insular, orbitofrontal, and lateral prefrontal cortices, there is signal-dependent noise in the fMRI blood-oxygen level dependent (BOLD) time series, with the variance of the noise increasing approximately linearly with the square of the signal. Classical Granger causal models are based on autoregressive models with time-invariant covariance structure, and thus do not take this signal-dependent noise into account. To address this limitation, here we describe a Granger causal model with signal-dependent noise, and a novel likelihood-ratio test for causal inferences. We apply this approach to the data from an fMRI study to investigate the source of the top-down attentional control of taste intensity and taste pleasantness processing. The Granger causality with signal-dependent noise analysis reveals effects not identified by classical Granger causal analysis. In particular, there is a top-down effect from the posterior lateral prefrontal cortex to the insular taste cortex during attention to intensity but not to pleasantness, and there is a top-down effect from the anterior and posterior lateral prefrontal cortex to the orbitofrontal cortex during attention to pleasantness but not to intensity. In addition, there is stronger forward effective connectivity from the insular taste cortex to the orbitofrontal cortex during attention to pleasantness than during attention to intensity. These findings indicate the importance of explicitly modeling signal-dependent noise in functional neuroimaging, and reveal some of the processes involved in a biased activation theory of selective attention.
We show that in cortical areas such as the insular, orbitofrontal, and lateral prefrontal cortex, the variation of the blood-oxygen level dependent (BOLD) time series across trials measured with functional magnetic resonance imaging (fMRI) increases with the magnitude of the signal. We describe a new method of measuring causal effects with Granger causality that takes into account this signal-dependent noise. We show in a functional neuroimaging investigation with the new method that there is a causal influence from the anterior lateral prefrontal cortex that during attention to the pleasantness of taste stimuli increases the response of the orbitofrontal cortex to the taste; and there is a causal influence from the posterior lateral prefrontal cortex to the insular taste cortex during attention to the intensity of taste stimuli. This shows how part of the circuitry involved in the effects of selective attention on the pleasantness and intensity of stimuli operates in the brain.
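For readers unfamiliar with the baseline being extended: classical Granger causality asks whether adding x's past to an autoregressive model of y shrinks the residual variance — under exactly the constant-variance assumption that the paper relaxes. A minimal one-lag sketch on synthetic data (the signal-dependent-noise model and its likelihood-ratio test are not reproduced here):

```python
import random

def granger_ratio(x, y):
    """Classical one-lag Granger statistic: restricted vs full residual
    sum of squares for predicting y. Values well above 1 mean x's past
    helps predict y. Constant noise variance is assumed throughout --
    the assumption that the signal-dependent-noise model relaxes."""
    yt, ylag, xlag = y[1:], y[:-1], x[:-1]
    # restricted model: y_t ~ b * y_{t-1}
    b = sum(a * c for a, c in zip(ylag, yt)) / sum(a * a for a in ylag)
    rss_r = sum((c - b * a) ** 2 for a, c in zip(ylag, yt))
    # full model: y_t ~ b1 * y_{t-1} + b2 * x_{t-1} (2x2 normal equations)
    syy = sum(a * a for a in ylag)
    sxx = sum(a * a for a in xlag)
    sxy = sum(a * c for a, c in zip(ylag, xlag))
    r1 = sum(a * c for a, c in zip(ylag, yt))
    r2 = sum(a * c for a, c in zip(xlag, yt))
    det = syy * sxx - sxy * sxy
    b1 = (r1 * sxx - r2 * sxy) / det
    b2 = (syy * r2 - sxy * r1) / det
    rss_f = sum((c - b1 * a - b2 * d) ** 2
                for a, d, c in zip(ylag, xlag, yt))
    return rss_r / rss_f

# Synthetic system in which x drives y but not the other way around.
rng = random.Random(0)
x, y = [0.0], [0.0]
for _ in range(2000):
    x.append(0.5 * x[-1] + rng.gauss(0, 1))
    y.append(0.5 * y[-1] + 0.4 * x[-2] + rng.gauss(0, 1))

forward = granger_ratio(x, y)   # clearly above 1: x helps predict y
backward = granger_ratio(y, x)  # near 1: y does not help predict x
```

When the noise variance instead scales with the signal, as in the BOLD data described above, this constant-variance statistic can mislead — which is the motivation for the paper's extended model.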
As a person learns a new skill, distinct synapses, brain regions, and circuits are engaged and change over time. In this paper, we develop methods to examine patterns of correlated activity across a large set of brain regions. Our goal is to identify properties that enable robust learning of a motor skill. We measure brain activity during motor sequencing and characterize network properties based on coherent activity between brain regions. Using recently developed algorithms to detect time-evolving communities, we find that the complex reconfiguration patterns of the brain's putative functional modules that control learning can be described parsimoniously by the combined presence of two components: a relatively stiff temporal core, composed primarily of sensorimotor and visual regions whose connectivity changes little in time, and a flexible temporal periphery, composed primarily of multimodal association regions whose connectivity changes frequently. The separation between temporal core and periphery changes over the course of training and, importantly, is a good predictor of individual differences in learning success. The core of dynamically stiff regions exhibits dense connectivity, which is consistent with notions of core-periphery organization established previously in social networks. Our results demonstrate that core-periphery organization provides an insightful way to understand how putative functional modules are linked. This, in turn, enables the prediction of fundamental human capacities, including the production of complex goal-directed behavior.
When someone learns a new skill, his/her brain dynamically alters individual synapses, regional activity, and larger-scale circuits. In this paper, we capture some of these dynamics by measuring and characterizing patterns of coherent brain activity during the learning of a motor skill. We extract time-evolving communities from these patterns and find that a temporal core that is composed primarily of primary sensorimotor and visual regions reconfigures little over time, whereas a periphery that is composed primarily of multimodal association regions reconfigures frequently. The core consists of densely connected nodes, and the periphery consists of sparsely connected nodes. Individual participants with a larger separation between core and periphery learn better in subsequent training sessions than individuals with a smaller separation. Conceptually, core-periphery organization provides a framework in which to understand how putative functional modules are linked. This, in turn, enables the prediction of fundamental human capacities, including the production of complex goal-directed behavior.
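One common way to quantify the stiff-core/flexible-periphery distinction is node "flexibility": how often a region changes its community allegiance between consecutive time windows. A sketch with invented community labels:

```python
def flexibility(labels):
    """Fraction of consecutive time windows in which a node's community
    assignment changes (0 = perfectly stiff, 1 = maximally flexible)."""
    changes = sum(1 for a, b in zip(labels, labels[1:]) if a != b)
    return changes / (len(labels) - 1)

# A stiff sensorimotor-like node vs. a flexible association-like node
# (community labels per time window are invented for illustration):
core_node = flexibility(["motor", "motor", "motor", "motor", "motor"])
periphery_node = flexibility(["motor", "visual", "default", "visual", "motor"])
```

In the studies summarized above, such per-node dynamic statistics are computed from time-evolving community detection on measured coherence networks, and their distribution across nodes is what separates the temporal core from the periphery.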
The transcriptome of the brain changes during development, reflecting processes that determine functional specialization of brain regions. We analyzed gene expression, measured using in situ hybridization across the full developing mouse brain, to quantify functional specialization of brain regions. Surprisingly, we found that during the time that the brain becomes anatomically regionalized in early development, transcriptional specialization actually decreases, reaching a low, “neurotypic” point around birth. This decrease of specialization is brain-wide, and mainly due to biological processes involved in constructing brain circuitry. Regional specialization rises again during post-natal development. This effect is largely due to specialization of plasticity and neural activity processes. Post-natal specialization is particularly significant in the cerebellum, whose expression signature becomes increasingly different from other brain regions. When comparing mouse and human expression patterns, the cerebellar post-natal specialization is also observed in human, but the regionalization of expression in the human thalamus and cortex follows a strikingly different profile than in mouse.
Brain development is one of the most complex biological processes, orchestrated by the precisely timed and coordinated expression of thousands of genes. As the brain develops, specific regions are formed, their structure and function reflected in unique sets of expressed genes. Regional gene expression profiles determine the basic properties of neural systems: controlling how the brain develops from embryo to adult, maintaining the well-being of the system, adapting the brain following experience and carrying out specific regional functions. Here we investigate the temporal dynamics of changes in regional gene expression patterns throughout mouse brain development. We identify a neurotypic phase around the time of birth, in which patterns of gene expression become more homogeneous across the brain, creating an ‘hourglass’-shaped expression divergence profile. We characterize the biological processes, genes and brain regions responsible for this pattern, and also compare mouse neurodevelopmental expression patterns with parallel data from human, finding striking similarities and differences between the two species.
Humans interact with the environment through sensory and motor acts. Some of these interactions require synchronization among two or more individuals. Multiple-trial designs, which we have used in past work to study interbrain synchronization in the course of joint action, constrain the range of observable interactions. To overcome the limitations of multiple-trial designs, we conducted single-trial analyses of electroencephalography (EEG) signals recorded from eight pairs of guitarists engaged in musical improvisation. We identified hyper-brain networks based on a complex interplay of different frequencies. The intra-brain connections primarily involved higher frequencies (e.g., beta), whereas inter-brain connections primarily operated at lower frequencies (e.g., delta and theta). The topology of hyper-brain networks was frequency-dependent, with a tendency to become more regular at higher frequencies. We also found hyper-brain modules that included nodes (i.e., EEG electrodes) from both brains. Some of the observed network properties were related to musical roles during improvisation. Our findings replicate and extend earlier work and point to mechanisms that enable individuals to engage in temporally coordinated joint action.