Results 1-25 (328)

author:("Sporns, Olaf")
1.  Influence of Wiring Cost on the Large-Scale Architecture of Human Cortical Connectivity 
PLoS Computational Biology  2014;10(4):e1003557.
In the past two decades some fundamental properties of cortical connectivity have been discovered: small-world structure, pronounced hierarchical and modular organisation, and strong core and rich-club structures. A common assumption when interpreting results of this kind is that the observed structural properties are present to enable the brain's function. However, the brain is also embedded into the limited space of the skull and its wiring has associated developmental and metabolic costs. These basic physical and economic aspects place separate, often conflicting, constraints on the brain's connectivity, which must be characterised in order to understand the true relationship between brain structure and function. To address this challenge, here we ask which aspects of the structural organisation of the brain are conserved, and to what extent, if we preserve specific spatial and topological properties of the brain but otherwise randomise its connectivity. We perform a comparative analysis of a connectivity map of the cortical connectome at both high and low resolutions, utilising three different types of surrogate networks: spatially unconstrained (‘random’), connection length preserving (‘spatial’), and connection length optimised (‘reduced’) surrogates. We find that unconstrained randomisation markedly diminishes all investigated architectural properties of cortical connectivity. By contrast, spatial and reduced surrogates largely preserve most properties and, interestingly, often more so in the reduced surrogates. Specifically, our results suggest that the cortical network is less tightly integrated than its spatial constraints would allow, but more strongly segregated than its spatial constraints would necessitate. We additionally find that hierarchical organisation and rich-club structure of the cortical connectivity are largely preserved in spatial and reduced surrogates and hence may be partially attributable to cortical wiring constraints.
In contrast, the high modularity and strong s-core of the high-resolution cortical network are significantly stronger than in the surrogates, underlining their potential functional relevance in the brain.
Author Summary
Macroscopic regions in the grey matter of the human brain are intricately connected by white-matter pathways, forming the extremely complex network of the brain. Analysing this brain network may provide insights into how anatomy enables brain function and, ultimately, cognition and consciousness. Various important principles of organization have indeed been consistently identified in the brain's structural connectivity, such as a small-world and modular architecture. However, it is currently unclear which of these principles are functionally relevant, and which are merely the consequence of more basic constraints of the brain, such as its three-dimensional spatial embedding into the limited volume of the skull or the high metabolic cost of long-range connections. In this paper, we model which aspects of the structural organization of the brain are affected by its wiring constraints, by assessing how far these aspects are preserved in brain-like networks with varying spatial wiring constraints. We find that all investigated features of brain organization also appear in spatially constrained networks, but we also discover that several of the features are more pronounced in the brain than its wiring constraints alone would necessitate. These findings suggest the functional relevance of the ‘over-expressed’ properties of brain architecture.
PMCID: PMC3974635  PMID: 24699277
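The spatially unconstrained (‘random’) surrogates described above can be illustrated with a standard degree-preserving double-edge-swap randomisation. This is a generic sketch of that class of surrogate, not the authors' exact procedure; the function names are ours.

```python
import random

def degrees(edges):
    """Degree of every node in an undirected edge list."""
    d = {}
    for u, v in edges:
        d[u] = d.get(u, 0) + 1
        d[v] = d.get(v, 0) + 1
    return d

def random_surrogate(edges, n_swaps=1000, seed=0):
    """Degree-preserving randomisation via double-edge swaps.

    Replaces edges (a, b), (c, d) with (a, d), (c, b) whenever the swap
    creates no self-loop or duplicate edge; node degrees never change,
    while any spatial structure in the wiring is destroyed.
    """
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    present = {frozenset(e) for e in edges}
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue  # swap would create a self-loop
        new1, new2 = frozenset((a, d)), frozenset((c, b))
        if new1 in present or new2 in present:
            continue  # swap would create a duplicate edge
        present -= {frozenset((a, b)), frozenset((c, d))}
        present |= {new1, new2}
        edges[i], edges[j] = (a, d), (c, b)
    return edges

# degrees before and after randomisation are identical
ring = [(k, (k + 1) % 8) for k in range(8)]
assert degrees(random_surrogate(ring)) == degrees(ring)
```

The ‘spatial’ and ‘reduced’ surrogates of the paper additionally constrain or optimise connection lengths; that step is not sketched here.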
2.  Spatiotemporal Computations of an Excitable and Plastic Brain: Neuronal Plasticity Leads to Noise-Robust and Noise-Constructive Computations 
PLoS Computational Biology  2014;10(3):e1003512.
It is a long-established fact that neuronal plasticity plays a central role in generating neural function and computation. Nevertheless, no unifying account exists of how neurons in a recurrent cortical network learn to compute on temporally and spatially extended stimuli, even though such stimuli constitute the norm, rather than the exception, of the brain's input. Here, we introduce a geometric theory of learning spatiotemporal computations through neuronal plasticity. To that end, we rigorously formulate the problem of neural representations as a relation in space between stimulus-induced neural activity and the asymptotic dynamics of excitable cortical networks. Backed up by computer simulations and numerical analysis, we show that two canonical and widespread forms of neuronal plasticity, spike-timing-dependent synaptic plasticity and intrinsic plasticity, are both necessary for creating neural representations such that these computations become realizable. Interestingly, the effects of these forms of plasticity on the emerging neural code relate to properties necessary for both combating and utilizing noise. The neural dynamics also exhibits features of the most likely stimulus in the network's spontaneous activity. These properties of the spatiotemporal neural code resulting from plasticity are grounded in nature, which further consolidates the biological relevance of our findings.
Author Summary
The world is not perceived as a chain of segmented sensory still lifes. Instead, it appears that the brain is capable of integrating the temporal dependencies of the incoming sensory stream with the spatial aspects of that input, and of putting the resulting whole to use in order to reach a coherent and causally sound image of our physical surroundings, and to act within it. These spatiotemporal computations are made possible through a cluster of local and coexisting adaptation mechanisms known collectively as neuronal plasticity. While this role is widely known and supported by experimental evidence, no unifying theory exists of how the brain, through the interaction of plasticity mechanisms, comes to represent spatiotemporal computations in its spatiotemporal activity. In this paper, we aim at such a theory. We develop a rigorous mathematical formalism of spatiotemporal representations within the input-driven dynamics of cortical networks. We demonstrate that the interaction of two of the most common plasticity mechanisms, intrinsic and synaptic plasticity, leads to representations that allow for spatiotemporal computations. We also show that these representations are structured to tolerate noise and to even benefit from it.
PMCID: PMC3961183  PMID: 24651447
3.  Dynamic Alignment Models for Neural Coding 
PLoS Computational Biology  2014;10(3):e1003508.
Recently, there have been remarkable advances in modeling the relationships between the sensory environment, neuronal responses, and behavior. However, most models cannot encompass variable stimulus-response relationships such as varying response latencies and state or context dependence of the neural code. Here, we consider response modeling as a dynamic alignment problem and model stimulus and response jointly by a mixed pair hidden Markov model (MPH). In MPHs, multiple stimulus-response relationships (e.g., receptive fields) are represented by different states or groups of states in a Markov chain. Each stimulus-response relationship features temporal flexibility, allowing modeling of variable response latencies, including noisy ones. We derive algorithms for learning MPH parameters and for inference of spike response probabilities. We show that some linear-nonlinear Poisson cascade (LNP) models are a special case of MPHs. We demonstrate the efficiency and usefulness of MPHs in simulations of both jittered and switching spike responses to white noise and natural stimuli. Furthermore, we apply MPHs to extracellular single and multi-unit data recorded in cortical brain areas of singing birds to showcase a novel method for estimating response lag distributions. MPHs allow simultaneous estimation of receptive fields, latency statistics, and hidden state dynamics and so can help to uncover complex stimulus-response relationships that are subject to variable timing and involve diverse neural codes.
Author Summary
The brain computes using electrical discharges of nerve cells, so-called spikes. Specific sensory stimuli, for instance, tones, often lead to specific spiking patterns. The same is true for behavior: specific motor actions are generated by specific spiking patterns. The relationship between neural activity and stimuli or motor actions can be difficult to infer, because of dynamic dependencies and hidden nonlinearities. For instance, in a freely behaving animal a neuron could exhibit variable levels of sensory and motor involvements depending on the state of the animal and on current motor plans—a situation that cannot be accounted for by many existing models. Here we present a new type of model that is specifically designed to cope with such changing regularities. We outline the mathematical framework and show, through computer simulations and application to recorded neural data, how MPHs can advance our understanding of stimulus-response relationships.
PMCID: PMC3952821  PMID: 24625448
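At the core of hidden-Markov-style models such as the MPH is the forward algorithm for computing sequence likelihoods. As an illustration only, here is a minimal scaled forward pass for a plain discrete HMM; the MPH itself jointly models stimulus-response pairs and is considerably more involved.

```python
import math

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of a symbol sequence under a discrete HMM,
    computed with the scaled forward algorithm.

    pi[i]  : initial probability of state i
    A[i][j]: transition probability i -> j
    B[i][o]: probability that state i emits symbol o
    obs    : list of observed symbol indices
    """
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    loglik = 0.0
    for t in range(len(obs)):
        if t > 0:
            o = obs[t]
            alpha = [B[i][o] * sum(alpha[j] * A[j][i] for j in range(n))
                     for i in range(n)]
        s = sum(alpha)          # rescale to avoid numerical underflow
        loglik += math.log(s)
        alpha = [a / s for a in alpha]
    return loglik

# a deterministic two-state HMM assigns probability 1 to 0,1,0,...
assert abs(forward_loglik([1.0, 0.0],
                          [[0.0, 1.0], [1.0, 0.0]],
                          [[1.0, 0.0], [0.0, 1.0]],
                          [0, 1, 0])) < 1e-12
```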
4.  Optimal Recall from Bounded Metaplastic Synapses: Predicting Functional Adaptations in Hippocampal Area CA3 
PLoS Computational Biology  2014;10(2):e1003489.
A venerable history of classical work on autoassociative memory has significantly shaped our understanding of several features of the hippocampus, and most prominently of its CA3 area, in relation to memory storage and retrieval. However, existing theories of hippocampal memory processing ignore a key biological constraint affecting memory storage in neural circuits: the bounded dynamical range of synapses. Recent treatments based on the notion of metaplasticity provide a powerful model for individual bounded synapses; however, their implications both for how well the hippocampus can retrieve memories and for the neural dynamics associated with that retrieval are unknown. Here, we develop a theoretical framework for memory storage and recall with bounded synapses. We formulate the recall of a previously stored pattern from a noisy recall cue and limited-capacity (and therefore lossy) synapses as a probabilistic inference problem, and derive neural dynamics that implement approximate inference algorithms to solve this problem efficiently. In particular, for binary synapses with metaplastic states, we demonstrate for the first time that memories can be efficiently read out with biologically plausible network dynamics that are completely constrained by the synaptic plasticity rule and by the statistics of the stored patterns and of the recall cue. Our theory organises into a coherent framework a wide range of existing data about the regulation of excitability, feedback inhibition, and network oscillations in area CA3, and makes novel and directly testable predictions that can guide future experiments.
Author Summary
Memory is central to nervous system function and has been a particular focus for studies of the hippocampus. However, despite many clues, we understand little about how memory storage and retrieval is implemented in neural circuits. In particular, while many previous studies considered the amount of information that can be stored in synaptic connections under biological constraints on the dynamic range of synapses, how much of this information can be successfully recovered by neural dynamics during memory retrieval remains unclear. Here, we use a top-down approach to address this question: we assume memories are laid down in bounded synapses by biologically relevant plasticity rules and then derive from first principles how the neural circuit should behave during recall in order to retrieve these memories most efficiently. We show that the resulting recall dynamics are consistent with a wide variety of properties of hippocampal area CA3, across a range of biophysical levels – from synapses, through neurons, to circuits. Furthermore, our approach allows us to make novel and experimentally testable predictions about the link between the structure, dynamics, and function of CA3 circuitry.
PMCID: PMC3937414  PMID: 24586137
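The classical autoassociative recall that this line of work builds on can be sketched in a few lines: Hebbian outer-product weights store ±1 patterns, and iterated threshold updates clean up a noisy cue. This is the textbook baseline only, not the probabilistic-inference dynamics with bounded metaplastic synapses derived in the paper.

```python
def hopfield_recall(patterns, cue, steps=5):
    """Classical autoassociative recall: Hebbian outer-product weights
    store +/-1 patterns; synchronous sign updates clean up a noisy cue."""
    n = len(cue)
    W = [[0 if i == j else sum(p[i] * p[j] for p in patterns)
          for j in range(n)] for i in range(n)]
    x = list(cue)
    for _ in range(steps):
        x = [1 if sum(W[i][j] * x[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return x

# a one-bit-corrupted cue is restored to the stored pattern
p = [1, -1, 1, -1, 1, -1]
assert hopfield_recall([p], [-1] + p[1:]) == p
```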
5.  Selectivity and Sparseness in Randomly Connected Balanced Networks 
PLoS ONE  2014;9(2):e89992.
Neurons in sensory cortex show stimulus selectivity and sparse population response, even in cases where no strong functionally specific structure in connectivity can be detected. This raises the question whether selectivity and sparseness can be generated and maintained in randomly connected networks. We consider a recurrent network of excitatory and inhibitory spiking neurons with random connectivity, driven by random projections from an input layer of stimulus selective neurons. In this architecture, the stimulus-to-stimulus and neuron-to-neuron modulation of total synaptic input is weak compared to the mean input. Surprisingly, we show that in the balanced state the network can still support high stimulus selectivity and sparse population response. In the balanced state, strong synapses amplify the variation in synaptic input and recurrent inhibition cancels the mean. Functional specificity in connectivity emerges due to the inhomogeneity caused by the generative statistical rule used to build the network. We further elucidate the underlying mechanism and evaluate the effects of model parameters on population sparseness and stimulus selectivity. We also investigate the network response to mixtures of stimuli and show that a balanced state with unselective inhibition can be achieved with densely connected input to the inhibitory population. Balanced networks exhibit the “paradoxical” effect: an increase in excitatory drive to inhibition leads to a decreased inhibitory population firing rate. We compare and contrast the selectivity and sparseness generated by the balanced network with those of randomly connected unbalanced networks. Finally, we discuss our results in light of experiments.
PMCID: PMC3933683  PMID: 24587172
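Population sparseness of the kind analysed here is often quantified with the Treves-Rolls measure; the paper may use a different definition, so treat this as an illustrative sketch.

```python
def population_sparseness(rates):
    """Treves-Rolls sparseness a = <r>^2 / <r^2> of a set of firing
    rates: a = 1 when all neurons fire alike (dense response), and
    a = 1/N when a single neuron carries the whole response (maximally
    sparse)."""
    n = len(rates)
    mean_r = sum(rates) / n
    mean_r2 = sum(r * r for r in rates) / n
    return mean_r ** 2 / mean_r2 if mean_r2 > 0 else 0.0

# uniform population -> dense (1.0); one active neuron out of ten -> 0.1
assert abs(population_sparseness([1.0] * 10) - 1.0) < 1e-12
assert abs(population_sparseness([10.0] + [0.0] * 9) - 0.1) < 1e-12
```

The same formula applied to one neuron's rates across stimuli gives a lifetime selectivity measure: low values indicate a neuron driven by few stimuli.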
6.  Modelling Individual Differences in the Form of Pavlovian Conditioned Approach Responses: A Dual Learning Systems Approach with Factored Representations 
PLoS Computational Biology  2014;10(2):e1003466.
Reinforcement Learning has greatly influenced models of conditioning, providing powerful explanations of acquired behaviour and underlying physiological observations. However, in recent autoshaping experiments in rats, variation in the form of Pavlovian conditioned responses (CRs) and in the associated dopamine activity has called into question the classical hypothesis that phasic dopamine activity corresponds to a reward prediction error-like signal arising from a classical Model-Free system, necessary for Pavlovian conditioning. Over the course of Pavlovian conditioning using food as the unconditioned stimulus (US), some rats (sign-trackers) come to approach and engage the conditioned stimulus (CS) itself – a lever – more and more avidly, whereas other rats (goal-trackers) learn to approach the location of food delivery upon CS presentation. Importantly, although both sign-trackers and goal-trackers learn the CS-US association equally well, only in sign-trackers does phasic dopamine activity show classical reward prediction error-like bursts. Furthermore, neither the acquisition nor the expression of a goal-tracking CR is dopamine-dependent. Here we present a computational model that can account for such individual variations. We show that a combination of a Model-Based system and a revised Model-Free system can account for the development of distinct CRs in rats. Moreover, we show that revising a classical Model-Free system to individually process stimuli by using factored representations can explain why classical dopaminergic patterns may be observed for some rats and not for others depending on the CR they develop. In addition, the model can account for other behavioural and pharmacological results obtained using the same, or similar, autoshaping procedures. Finally, the model makes it possible to draw a set of experimental predictions that may be verified in a modified experimental protocol.
We suggest that further investigation of factored representations in computational neuroscience studies may be useful.
Author Summary
Acquisition of responses towards full predictors of rewards, namely Pavlovian conditioning, has long been explained using reinforcement learning theory. This theory formalizes learning processes that, by attributing values to situations and actions, make it possible to direct behaviours towards rewarding objectives. Interestingly, the implied mechanisms rely on a reinforcement signal that parallels the activity of dopamine neurons in such experiments. However, recent studies challenged the classical view of explaining Pavlovian conditioning with a single process. When presented with a lever whose retraction preceded the delivery of food, some rats started to chew and bite the food magazine whereas others chewed and bit the lever, even though no interaction was necessary to get the food. These differences were also visible in brain activity and when rats were tested with drugs, suggesting the coexistence of multiple systems. We present a computational model that extends the classical theory to account for these data. Interestingly, we can draw predictions from this model that may be experimentally verified. Inspired by mechanisms used to model instrumental behaviours, where actions are required to obtain rewards, and advanced Pavlovian behaviours (such as overexpectation and negative patterning), the model offers an entry point for modelling the strong interactions observed between Pavlovian and instrumental processes.
PMCID: PMC3923662  PMID: 24550719
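The reward prediction error-like signal attributed to phasic dopamine is, in classical Model-Free accounts, the temporal-difference error. A minimal tabular sketch (the state names and parameter values are hypothetical, and this is the textbook rule rather than the revised, factored system of the paper):

```python
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One temporal-difference update of a tabular value function.
    The returned `delta`, the reward prediction error, is the quantity
    classically compared to phasic dopamine activity."""
    delta = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)
    V[s] = V.get(s, 0.0) + alpha * delta
    return delta

# early in conditioning the US is unpredicted, so delta is large...
V = {}
assert abs(td_update(V, 'CS', 1.0, 'end') - 1.0) < 1e-12
# ...and it shrinks as the value of the CS grows towards the reward value
assert td_update(V, 'CS', 1.0, 'end') < 1.0
```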
7.  A Morpho-Density Approach to Estimating Neural Connectivity 
PLoS ONE  2014;9(1):e86526.
Neuronal signal integration and information processing in cortical neuronal networks critically depend on the organization of synaptic connectivity. Because of the challenges involved in measuring a large number of neurons, synaptic connectivity is difficult to determine experimentally. Current computational methods for estimating connectivity typically rely on juxtaposing experimentally available neurons and applying mathematical techniques to compute estimates of neural connectivity. However, since the number of available neurons is very limited, these connectivity estimates may be subject to large uncertainties. We use a morpho-density field approach applied to a vast ensemble of model-generated neurons. A morpho-density field (MDF) describes the distribution of neural mass in the space around the neural soma. The estimated axonal and dendritic MDFs are derived from 100,000 model neurons that are generated by a stochastic phenomenological model of neurite outgrowth. These MDFs are then used to estimate the connectivity between pairs of neurons as a function of their inter-soma displacement. Compared with other density-field methods, our approach to estimating synaptic connectivity uses fewer restricting assumptions and produces connectivity estimates with a lower standard deviation. An important requirement is that the model-generated neurons accurately reflect the morphology, and the variation in morphology, of the experimental neurons used for optimizing the model parameters. As such, the method remains subject to the uncertainties caused by the limited number of neurons in the experimental data set and by the quality of the model and the assumptions used in creating the MDFs and in calculating connectivity estimates.
In summary, MDFs are a powerful tool for visualizing the spatial distribution of axonal and dendritic densities, for estimating the number of potential synapses between neurons with low standard deviation, and for obtaining a greater understanding of the relationship between neural morphology and network connectivity.
PMCID: PMC3906031  PMID: 24489738
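The density-field idea can be illustrated with a toy closed form: if the axonal and dendritic morpho-density fields were isotropic 3D Gaussians, their overlap integral would itself be Gaussian in the inter-soma distance. The paper's MDFs are estimated from 100,000 model neurons and are not Gaussian; this sketch only shows the overlap-integral principle, with all parameter values invented.

```python
import math

def expected_contacts(d, sigma_axon, sigma_dend,
                      mass_axon=1.0, mass_dend=1.0):
    """Expected axo-dendritic overlap for two neurons whose (toy)
    morpho-density fields are isotropic 3D Gaussians of the given
    widths, as a function of inter-soma distance d. The overlap
    integral of two Gaussians is Gaussian in d with variance
    sigma_axon^2 + sigma_dend^2, scaled by the total neurite masses."""
    s2 = sigma_axon ** 2 + sigma_dend ** 2
    norm = (2.0 * math.pi * s2) ** -1.5
    return mass_axon * mass_dend * norm * math.exp(-d * d / (2.0 * s2))

# expected overlap decays monotonically with inter-soma distance
assert expected_contacts(0.0, 50.0, 80.0) > expected_contacts(200.0, 50.0, 80.0) > 0.0
```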
8.  The Brain Ages Optimally to Model Its Environment: Evidence from Sensory Learning over the Adult Lifespan 
PLoS Computational Biology  2014;10(1):e1003422.
The aging brain shows a progressive loss of neuropil, which is accompanied by subtle changes in neuronal plasticity, sensory learning and memory. Neurophysiologically, aging attenuates evoked responses—including the mismatch negativity (MMN). This is accompanied by a shift in cortical responsivity from sensory (posterior) regions to executive (anterior) regions, which has been interpreted as a compensatory response for cognitive decline. Theoretical neurobiology offers a simpler explanation for all of these effects—from a Bayesian perspective, as the brain is progressively optimized to model its world, its complexity will decrease. A corollary of this complexity reduction is an attenuation of Bayesian updating or sensory learning. Here we confirmed this hypothesis using magnetoencephalographic recordings of the mismatch negativity elicited in a large cohort of human subjects, in their third to ninth decade. Employing dynamic causal modeling to assay the synaptic mechanisms underlying these non-invasive recordings, we found a selective age-related attenuation of synaptic connectivity changes that underpin rapid sensory learning. In contrast, baseline synaptic connectivity strengths were consistently strong over the decades. Our findings suggest that the lifetime accrual of sensory experience optimizes functional brain architectures to enable efficient and generalizable predictions of the world.
Author Summary
While studies of aging are widely framed in terms of their demarcation of degenerative processes, the brain provides a unique opportunity to uncover the adaptive effects of getting older. Though it is intuitively reasonable that life-experience and wisdom should reside somewhere in the human cortex, these features have eluded neuroscientific explanation. The present study utilizes a “Bayesian Brain” framework to motivate an analysis of cortical circuit processing. From a Bayesian perspective, the brain represents a model of its environment and offers predictions about the world, while responding, through changing synaptic strengths, to novel interactions and experiences. We hypothesized that these predictive and updating processes are modified as we age, representing an optimization of neuronal architecture. Using novel sensory stimuli, we demonstrate that synaptic connections of older brains resist trial-by-trial learning to provide a robust model of their sensory environment. These older brains are capable of processing a wider range of sensory inputs – representing experienced generalists. We thus explain how, contrary to a singularly degenerative point-of-view, aging neurobiological effects may be understood, in sanguine terms, as adaptive and useful.
PMCID: PMC3900375  PMID: 24465195
9.  Consequences of Converting Graded to Action Potentials upon Neural Information Coding and Energy Efficiency 
PLoS Computational Biology  2014;10(1):e1003439.
Information is encoded in neural circuits using both graded and action potentials, converting between them within single neurons and successive processing layers. This conversion is accompanied by information loss and a drop in energy efficiency. We investigate the biophysical causes of this loss of information and efficiency by comparing spiking neuron models, containing stochastic voltage-gated Na+ and K+ channels, with generator potential and graded potential models lacking voltage-gated Na+ channels. We identify three causes of information loss in the generator potential that are the by-product of action potential generation: (1) the voltage-gated Na+ channels necessary for action potential generation increase intrinsic noise, (2) they introduce non-linearities, and (3) the finite duration of the action potential creates a ‘footprint’ in the generator potential that obscures incoming signals. These three processes reduce information rates by ∼50% in generator potentials, to ∼3 times that of spike trains. Both generator potentials and graded potentials consume almost an order of magnitude less energy per second than spike trains. Because of their lower information rates, generator potentials are substantially less energy efficient than graded potentials. However, both are an order of magnitude more efficient than spike trains due to the higher energy costs and low information content of spikes, emphasizing that there is a two-fold cost of converting analogue to digital: information loss and cost inflation.
Author Summary
As in electronics, many of the brain's neural circuits convert continuous time signals into a discrete-time binary code. Although some neurons use only graded voltage signals, most convert these signals into discrete-time action potentials. Yet the costs and benefits associated with such a switch in signalling mechanism are largely unexplored. We investigate why the conversion of graded potentials to action potentials is accompanied by substantial information loss and how this changes energy efficiency. Action potentials are generated by a large cohort of noisy Na+ channels. We show that this channel noise and the added non-linearity of Na+ channels destroy input information provided by graded generator potentials. Furthermore, action potentials themselves cause information loss due to their finite widths because the neuron is oblivious to the input that is arriving during an action potential. Consequently, neurons with high firing rates lose a large amount of the information in their inputs. The additional cost incurred by voltage-gated Na+ channels also means that action potentials can encode less information per unit energy, proving metabolically inefficient, and suggesting penalisation of high firing rates in the nervous system.
PMCID: PMC3900385  PMID: 24465197
10.  Association of Structural Global Brain Network Properties with Intelligence in Normal Aging 
PLoS ONE  2014;9(1):e86258.
Higher general intelligence attenuates age-associated cognitive decline and the risk of dementia. Thus, intelligence has been associated with cognitive reserve or resilience in normal aging. Neurophysiologically, intelligence is considered a complex capacity that is dependent on a global cognitive network rather than on isolated brain areas. An association of structural as well as functional brain network characteristics with intelligence has already been reported in young adults. We investigated the relationship between global structural brain network properties, general intelligence and age in a group of 43 cognitively healthy elderly individuals aged 60–85 years. Individuals were assessed cross-sectionally using the Wechsler Adult Intelligence Scale-Revised (WAIS-R) and diffusion-tensor imaging. Structural brain networks were reconstructed individually using deterministic tractography; global network properties (global efficiency, mean shortest path length, and clustering coefficient) were determined by graph theory and correlated with intelligence scores. Network properties were significantly correlated with age, whereas no significant correlation with WAIS-R was observed. However, in a subgroup of 15 individuals aged 75 and above, the network properties were significantly correlated with WAIS-R. Our findings suggest that general intelligence and global properties of structural brain networks may not be generally associated in cognitively healthy elderly. However, we provide first evidence of an association between global structural brain network properties and general intelligence in the advanced elderly. Intelligence might be affected by age-associated network deterioration only if a certain threshold of structural degeneration is exceeded. Thus, age-associated brain structural changes seem to be partially compensated by the network, and the range of this compensation might be a surrogate of cognitive reserve or brain resilience.
PMCID: PMC3899224  PMID: 24465994
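The three global network properties used in the study (global efficiency, mean shortest path length, clustering coefficient) have standard graph-theoretic definitions; a minimal pure-Python sketch for an unweighted, connected graph (the study computes these on weighted tractography networks, so this is illustrative only):

```python
from collections import deque

def bfs_dists(adj, src):
    """Hop distances from src in an unweighted graph {node: set(nbrs)}."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def global_metrics(adj):
    """Global efficiency, characteristic path length, and mean
    clustering coefficient of an undirected, connected graph."""
    nodes = list(adj)
    n = len(nodes)
    eff = plen = 0.0
    for u in nodes:
        d = bfs_dists(adj, u)
        for v in nodes:
            if v != u:
                eff += 1.0 / d[v]
                plen += d[v]
    pairs = n * (n - 1)
    cc = 0.0
    for u in nodes:
        nb = list(adj[u])
        k = len(nb)
        if k < 2:
            continue  # clustering undefined for degree < 2; counted as 0
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nb[j] in adj[nb[i]])
        cc += 2.0 * links / (k * (k - 1))
    return eff / pairs, plen / pairs, cc / n

# in a triangle every pair is one hop apart and fully clustered
assert global_metrics({0: {1, 2}, 1: {0, 2}, 2: {0, 1}}) == (1.0, 1.0, 1.0)
```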
11.  The Correlation Structure of Local Neuronal Networks Intrinsically Results from Recurrent Dynamics 
PLoS Computational Biology  2014;10(1):e1003428.
Correlated neuronal activity is a natural consequence of network connectivity and shared inputs to pairs of neurons, but the task-dependent modulation of correlations in relation to behavior also hints at a functional role. Correlations influence the gain of postsynaptic neurons, the amount of information encoded in the population activity and decoded by readout neurons, and synaptic plasticity. Further, they affect the power and spatial reach of extracellular signals like the local field potential. A theory of correlated neuronal activity accounting for recurrent connectivity as well as fluctuating external sources is currently lacking. In particular, it is unclear how the recently found mechanism of active decorrelation by negative feedback on the population level affects the network response to externally applied correlated stimuli. Here, we present such an extension of the theory of correlations in stochastic binary networks. We show that (1) for homogeneous external input, the structure of correlations is mainly determined by the local recurrent connectivity, (2) homogeneous external inputs provide an additive, unspecific contribution to the correlations, (3) inhibitory feedback effectively decorrelates neuronal activity, even if neurons receive identical external inputs, and (4) identical synaptic input statistics to excitatory and to inhibitory cells increase intrinsically generated fluctuations and pairwise correlations. We further demonstrate how the accuracy of mean-field predictions can be improved by self-consistently including correlations. As a byproduct, we show that the cancellation of correlations between the summed inputs to pairs of neurons does not originate from the fast tracking of external input, but from the suppression of fluctuations on the population level by the local network.
This suppression is a necessary constraint, but not sufficient to determine the structure of correlations; specifically, the structure observed at finite network size differs from the prediction based on perfect tracking, even though perfect tracking implies suppression of population fluctuations.
Author Summary
The co-occurrence of action potentials of pairs of neurons within short time intervals has been known for a long time. Such synchronous events can appear time-locked to the behavior of an animal, and theoretical considerations also argue for a functional role of synchrony. Early theoretical work tried to explain correlated activity by neurons transmitting common fluctuations due to shared inputs; this, however, overestimates correlations. Recently, the recurrent connectivity of cortical networks was shown to be responsible for the observed low baseline correlations. Two different explanations were given: one argues that excitatory and inhibitory population activities closely follow the external inputs to the network, so that their effects on a pair of cells mutually cancel; another relies on negative recurrent feedback to suppress fluctuations in the population activity, equivalent to small correlations. In a biological neuronal network one expects both external inputs and recurrence to affect correlated activity. The present work extends the theoretical framework of correlations to include both contributions and explains their qualitative differences. Moreover, the study shows that the arguments of fast tracking and recurrent feedback are not equivalent: only the latter correctly predicts the cell-type-specific correlations.
PMCID: PMC3894226  PMID: 24453955
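The pairwise correlations at issue here are, operationally, population-averaged Pearson coefficients between spike trains; a minimal sketch for binned spike counts (the paper works analytically with binary network statistics, so this only illustrates the measured quantity):

```python
def mean_pairwise_corr(spike_trains):
    """Population-averaged Pearson correlation between spike trains,
    given as equal-length lists of binned spike counts (one per neuron)."""
    def corr(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
        sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
        sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
        return cov / (sx * sy)
    m = len(spike_trains)
    vals = [corr(spike_trains[i], spike_trains[j])
            for i in range(m) for j in range(i + 1, m)]
    return sum(vals) / len(vals)

# two identical trains are perfectly correlated
a = [0, 1, 0, 1, 0, 1]
assert abs(mean_pairwise_corr([a, list(a)]) - 1.0) < 1e-12
```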
12.  Intrinsic Noise Induces Critical Behavior in Leaky Markovian Networks Leading to Avalanching 
PLoS Computational Biology  2014;10(1):e1003411.
The role that intrinsic statistical fluctuations play in creating avalanches – patterns of complex bursting activity with scale-free properties – is examined in leaky Markovian networks. Using this broad class of models, we develop a probabilistic approach that employs a potential energy landscape perspective coupled with a macroscopic description based on statistical thermodynamics. We identify six thermodynamic quantities essential for characterizing system behavior as a function of network size: the internal potential energy, entropy, free potential energy, internal pressure, pressure, and bulk modulus. In agreement with classical phase transitions, these quantities evolve smoothly as a function of the network size until a critical value is reached. At that value, a discontinuity in pressure is observed that leads to a spike in the bulk modulus demarcating loss of thermodynamic robustness. We attribute this novel result to a reallocation of the ground states (global minima) of the system's stationary potential energy landscape caused by a noise-induced deformation of its topographic surface. Further analysis demonstrates that appreciable levels of intrinsic noise can cause avalanching, a complex mode of operation that dominates system dynamics at near-critical or subcritical network sizes. Illustrative examples are provided using an epidemiological model of bacterial infection, where avalanching has not been characterized before, and a previously studied model of computational neuroscience, where avalanching was erroneously attributed to specific neural architectures. The general methods developed here can be used to study the emergence of avalanching (and other complex phenomena) in many biological, physical and man-made interaction networks.
Author Summary
Networks of noisy interacting components arise in diverse scientific disciplines. Here, we develop a mathematical framework to study the underlying causes of a bursting phenomenon in network activity known as avalanching. As prototypical examples, we study a model of disease spreading in a population of individuals and a model of brain activity in a neural network. Although avalanching is well-documented in neural networks, where it is thought to be crucial for learning, information processing, and memory, it has not been studied before in disease spreading. We employ tools originally used to analyze thermodynamic systems to argue that randomness in the actions of individual network components plays a fundamental role in avalanche formation. We show that avalanching is a spontaneous behavior, brought about by a phenomenon reminiscent of a phase transition in statistical mechanics, caused by increasing randomness as the network size decreases. Our work demonstrates that a previously suggested balanced feed-forward network structure is not necessary for neuronal avalanching. Instead, we attribute avalanching to a reallocation of the global minima of the network's stationary potential energy landscape, caused by a noise-induced deformation of its topographic surface.
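The operational definition of an avalanche commonly used in this literature, a maximal run of consecutive active time bins whose size is the total activity in the run, can be sketched as follows. The Poisson activity trace is a stand-in for network activity, not the paper's Markovian model.

```python
import numpy as np

rng = np.random.default_rng(4)
activity = rng.poisson(0.5, 10_000)    # toy trace: active units per time bin

# An avalanche is a maximal run of nonzero-activity bins; its size is the
# summed activity over that run.
sizes, current = [], 0
for a in activity:
    if a > 0:
        current += a                   # extend the ongoing avalanche
    elif current > 0:
        sizes.append(current)          # a zero bin terminates the run
        current = 0
if current > 0:                        # close a run ending at the trace edge
    sizes.append(current)
```

Every unit of activity falls into exactly one run, so the avalanche sizes partition the total activity; their empirical size distribution is what is then examined for scale-free (power-law) behavior.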
PMCID: PMC3886886  PMID: 24415927
13.  Communication Efficiency and Congestion of Signal Traffic in Large-Scale Brain Networks 
PLoS Computational Biology  2014;10(1):e1003427.
The complex connectivity of the cerebral cortex suggests that inter-regional communication is a primary function. Using computational modeling, we show that anatomical connectivity may be a major determinant for global information flow in brain networks. A macaque brain network was implemented as a communication network in which signal units flowed between grey matter nodes along white matter paths. Compared to degree-matched surrogate networks, information flow on the macaque brain network was characterized by higher loss rates, faster transit times and lower throughput, suggesting that neural connectivity may be optimized for speed rather than fidelity. Much of global communication was mediated by a “rich club” of hub regions: a sub-graph comprised of high-degree nodes that are more densely interconnected with each other than predicted by chance. First, macaque communication patterns most closely resembled those observed for a synthetic rich club network, but were less similar to those seen in a synthetic small world network, suggesting that the former is a more fundamental feature of brain network topology. Second, rich club regions attracted the most signal traffic and likewise, connections between rich club regions carried more traffic than connections between non-rich club regions. Third, a number of rich club regions were significantly under-congested, suggesting that macaque connectivity actively shapes information flow, funneling traffic towards some nodes and away from others. Together, our results indicate a critical role of the rich club of hub nodes in dynamic aspects of global brain communication.
Author Summary
A fundamental question in systems neuroscience is how the structural connectivity of the cerebral cortex shapes global communication. Here, using computational modeling in conjunction with an anatomically realistic structural network, we show that cortico-cortical communication is constrained by high-level features of brain network topology. We find that neural network topology is configured in a way that prioritizes speed of information flow over reliability and total throughput. The defining characteristic of the information processing architecture of the network is a densely interconnected rich club of hub nodes. Specifically, rich club nodes and connections between rich club nodes absorb the greatest proportion of total signal traffic. In addition, rich club connectivity appears to actively shape information flow, whereby signal traffic is biased towards some nodes and away from others. Finally, synthetic networks containing a rich club could almost perfectly reproduce the information flow patterns of the real anatomical network. Altogether, our data demonstrate that a central collective of highly interconnected hubs serves to facilitate cortico-cortical communication. By simulating communication on a static structural network we have revealed a dynamic aspect of the global information processing architecture and the critical role played by the rich club of hub nodes.
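How hub regions come to carry most of the traffic can be sketched on a toy communication network: route one signal unit between every ordered node pair along shortest paths and count intermediate hops per node. The six-node graph below is invented for illustration, not the macaque connectome; nodes 0 and 1 form a small "core" bridging the periphery.

```python
from collections import deque

# Toy network: core nodes {0, 1} bridge two peripheral pairs {2, 3} and {4, 5}.
edges = [(0, 1), (0, 2), (0, 3), (1, 4), (1, 5), (2, 3), (4, 5)]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def shortest_path(src, dst):
    """Breadth-first search; returns one shortest path src -> dst."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            break
        for w in adj[u]:
            if w not in prev:
                prev[w] = u
                q.append(w)
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]

# Route one signal unit per ordered pair; count visits to intermediate nodes.
traffic = {n: 0 for n in adj}
for s in adj:
    for t in adj:
        if s != t:
            for n in shortest_path(s, t)[1:-1]:   # intermediate hops only
                traffic[n] += 1
```

On this graph all cross-periphery traffic funnels through the core, so nodes 0 and 1 absorb every intermediate hop, a miniature version of the rich-club traffic concentration described above.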
PMCID: PMC3886893  PMID: 24415931
14.  Environmental Influence on the Evolution of Morphological Complexity in Machines 
PLoS Computational Biology  2014;10(1):e1003399.
Whether, when, how, and why increased complexity evolves in biological populations is a longstanding open question. In this work we combine a recently developed method for evolving virtual organisms with an information-theoretic metric of morphological complexity in order to investigate how the complexity of morphologies, which are evolved for locomotion, varies across different environments. We first demonstrate that selection for locomotion results in the evolution of organisms with morphologies that increase in complexity over evolutionary time beyond what would be expected due to random chance. This provides evidence that the increase in complexity observed is a result of a driven rather than a passive trend. In subsequent experiments we demonstrate that, when a cost of complexity is imposed, more complex morphologies evolve in complex environments than in a simple environment. This suggests that in some niches, evolution may act to complexify the body plans of organisms while in other niches selection favors simpler body plans.
Author Summary
The evolution of complexity, a central issue of evolutionary theory since Darwin's time, remains a controversial topic. One particular question of interest is how the complexity of an organism's body plan (morphology) is influenced by the complexity of the environment in which it evolved. Ideally, it would be desirable to perform investigations on living organisms in which environmental complexity is under experimental control, but our ability to do so in a limited timespan and in a controlled manner is severely constrained. In lieu of such studies, here we employ computer simulations capable of evolving the body plans of virtual organisms to investigate this question in silico. By evolving virtual organisms for locomotion in a variety of environments, we are able to demonstrate that selecting for locomotion causes more complex morphologies to evolve than would be expected solely due to random chance. Moreover, if increased complexity incurs a cost (as it is thought to do in biology), then more complex environments tend to lead to the evolution of more complex body plans than those that evolve in a simpler environment. This result supports the idea that the morphological complexity of organisms is influenced by the complexity of the environments in which they evolve.
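One common information-theoretic complexity measure of the kind referred to above is the Shannon entropy of the distribution of morphological attributes: a body plan drawing on many distinct part types scores higher than one built from a single repeated part. The part-type labels below are invented for illustration and are not the paper's actual metric.

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Entropy in bits of the empirical distribution over `symbols`."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A hypothetical body plan of one repeated part vs. one with varied parts:
uniform_body = ["cube"] * 8
varied_body = ["cube", "cylinder", "joint", "cube",
               "sphere", "joint", "cylinder", "sphere"]

h_uniform = shannon_entropy(uniform_body)   # single part type: 0 bits
h_varied = shannon_entropy(varied_body)     # four equiprobable types: 2 bits
```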
PMCID: PMC3879106  PMID: 24391483
15.  Population Decoding in Rat Barrel Cortex: Optimizing the Linear Readout of Correlated Population Responses 
PLoS Computational Biology  2014;10(1):e1003415.
Sensory information is encoded in the response of neuronal populations. How might this information be decoded by downstream neurons? Here we analyzed the responses of simultaneously recorded barrel cortex neurons to sinusoidal vibrations of varying amplitudes preceded by three adapting stimuli of 0, 6 and 12 µm in amplitude. Using the framework of signal detection theory, we quantified the performance of a linear decoder which sums the responses of neurons after applying an optimum set of weights. Optimum weights were found by the analytical solution that maximized the average signal-to-noise ratio based on Fisher linear discriminant analysis. This provided a biologically plausible decoder that took into account the neuronal variability, covariability, and signal correlations. The optimal decoder achieved consistent improvement in discrimination performance over simple pooling. Decorrelating neuronal responses by trial shuffling revealed that, unlike pooling, the performance of the optimal decoder was minimally affected by noise correlation. In the non-adapted state, noise correlation enhanced the performance of the optimal decoder for some populations. Under adaptation, however, noise correlation always degraded the performance of the optimal decoder. Nonetheless, sensory adaptation improved the performance of the optimal decoder mainly by increasing signal correlation more than noise correlation. Adaptation induced little systematic change in the relative direction of signal and noise. Thus, a decoder which was optimized under the non-adapted state generalized well across states of adaptation.
Author Summary
In the natural environment, animals are constantly exposed to sensory stimulation. A key question in systems neuroscience is how attributes of a sensory stimulus can be “read out” from the activity of a population of brain cells. We chose to investigate this question in the whisker-mediated touch system of rats because of its well-established anatomy and exquisite functionality. The whisker system is one of the major channels through which rodents acquire sensory information about their surrounding environment. The response properties of brain cells dynamically adjust to the prevailing diet of sensory stimulation, a process termed sensory adaptation. Here, we applied a biologically plausible scheme whereby different brain cells contribute to sensory readout with different weights. We established the set of weights that provide the optimal readout under different states of adaptation. The results yield an upper bound for the efficiency of coding sensory information. We found that the ability to decode sensory information improves with adaptation. However, a readout mechanism that does not adjust to the state of adaptation can still perform remarkably well.
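The contrast above between simple pooling and an optimal weighted readout can be sketched for two toy neurons with correlated Gaussian noise. The Fisher linear discriminant weights are w = C⁻¹(μ₁ − μ₀); the means and covariance below are assumptions for illustration (the paper's populations were recorded, not simulated).

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 5000

# Toy population responses to two stimulus amplitudes: two neurons with
# different mean responses and correlated ("noise-correlated") variability.
mu0, mu1 = np.array([1.0, 1.0]), np.array([2.0, 1.4])
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
r0 = rng.multivariate_normal(mu0, cov, n_trials)
r1 = rng.multivariate_normal(mu1, cov, n_trials)

def snr(w):
    """Signal-to-noise ratio of the weighted-sum readout w . r."""
    s0, s1 = r0 @ w, r1 @ w
    return (s1.mean() - s0.mean()) ** 2 / (0.5 * (s0.var() + s1.var()))

w_pool = np.ones(2)                      # simple pooling: equal weights
w_opt = np.linalg.solve(cov, mu1 - mu0)  # Fisher linear discriminant weights
```

Because w_opt takes the covariance into account, it can even assign a negative weight to a neuron whose mean barely changes, using it to cancel shared noise; pooling cannot, which is why the optimal readout is less sensitive to noise correlations.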
PMCID: PMC3879135  PMID: 24391487
16.  Searching for Collective Behavior in a Large Network of Sensory Neurons 
PLoS Computational Biology  2014;10(1):e1003408.
Maximum entropy models are the least structured probability distributions that exactly reproduce a chosen set of statistics measured in an interacting network. Here we use this principle to construct probabilistic models which describe the correlated spiking activity of populations of up to 120 neurons in the salamander retina as it responds to natural movies. Already in groups as small as 10 neurons, interactions between spikes can no longer be regarded as small perturbations in an otherwise independent system; for 40 or more neurons pairwise interactions need to be supplemented by a global interaction that controls the distribution of synchrony in the population. Here we show that such “K-pairwise” models—being systematic extensions of the previously used pairwise Ising models—provide an excellent account of the data. We explore the properties of the neural vocabulary by: 1) estimating its entropy, which constrains the population's capacity to represent visual information; 2) classifying activity patterns into a small set of metastable collective modes; 3) showing that the neural codeword ensembles are extremely inhomogeneous; 4) demonstrating that the state of individual neurons is highly predictable from the rest of the population, providing a capacity for error correction.
Author Summary
Sensory neurons encode information about the world into sequences of spiking and silence. Multi-electrode array recordings have enabled us to move from single units to measuring the responses of many neurons simultaneously, and thus to ask questions about how populations of neurons as a whole represent their input signals. Here we build on previous work that has shown that in the salamander retina, pairs of retinal ganglion cells are only weakly correlated, yet the population spiking activity exhibits large departures from a model where the neurons would be independent. We analyze data from more than a hundred salamander retinal ganglion cells and characterize their collective response using maximum entropy models of statistical physics. With these models in hand, we can put bounds on the amount of information encoded by the neural population, constructively demonstrate that the code has error correcting redundancy, and advance two hypotheses about the neural code: that collective states of the network could carry stimulus information, and that the distribution of neural activity patterns has very nontrivial statistical properties, possibly related to critical systems in statistical physics.
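A minimal pairwise maximum entropy ("Ising") fit for three binary neurons, small enough to enumerate all states exactly, illustrates the principle above. The target statistics are invented, and the moment-matching gradient ascent is a generic fitting scheme, not the authors' large-scale method.

```python
import itertools
import numpy as np

n = 3
states = np.array(list(itertools.product([0, 1], repeat=n)), float)
pairs = [(0, 1), (0, 2), (1, 2)]

def model_stats(h, J):
    """Exact means and pair means under P(s) proportional to
    exp(h . s + sum_ij J_ij s_i s_j), by enumerating all 2^n states."""
    E = states @ h + sum(J[k] * states[:, i] * states[:, j]
                         for k, (i, j) in enumerate(pairs))
    p = np.exp(E)
    p /= p.sum()
    means = p @ states
    pair_means = np.array([p @ (states[:, i] * states[:, j])
                           for i, j in pairs])
    return means, pair_means

# Target statistics, as if measured from spike trains (assumed values):
target_m = np.array([0.20, 0.30, 0.25])   # firing probabilities
target_p = np.array([0.10, 0.06, 0.09])   # pairwise coincidence probabilities

# Moment matching: gradient ascent on the concave log-likelihood.
h, J = np.zeros(n), np.zeros(len(pairs))
for _ in range(20_000):
    m, pm = model_stats(h, J)
    h += 0.5 * (target_m - m)
    J += 0.5 * (target_p - pm)

m, pm = model_stats(h, J)   # fitted model reproduces the chosen statistics
```

For 100+ neurons exact enumeration is impossible, which is why the paper relies on sampling-based methods; the fixed point, however, is the same: a model that matches the measured statistics and is otherwise maximally unstructured.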
PMCID: PMC3879139  PMID: 24391485
17.  Encoding of Natural Sounds at Multiple Spectral and Temporal Resolutions in the Human Auditory Cortex 
PLoS Computational Biology  2014;10(1):e1003412.
Functional neuroimaging research provides detailed observations of the response patterns that natural sounds (e.g. human voices and speech, animal cries, environmental sounds) evoke in the human brain. The computational and representational mechanisms underlying these observations, however, remain largely unknown. Here we combine high spatial resolution (3 and 7 Tesla) functional magnetic resonance imaging (fMRI) with computational modeling to reveal how natural sounds are represented in the human brain. We compare competing models of sound representations and select the model that most accurately predicts fMRI response patterns to natural sounds. Our results show that the cortical encoding of natural sounds entails the formation of multiple representations of sound spectrograms with different degrees of spectral and temporal resolution. The cortex derives these multi-resolution representations through frequency-specific neural processing channels and through the combined analysis of the spectral and temporal modulations in the spectrogram. Furthermore, our findings suggest that a spectral-temporal resolution trade-off may govern the modulation tuning of neuronal populations throughout the auditory cortex. Specifically, our fMRI results suggest that neuronal populations in posterior/dorsal auditory regions preferably encode coarse spectral information with high temporal precision. Vice versa, neuronal populations in anterior/ventral auditory regions preferably encode fine-grained spectral information with low temporal precision. We propose that such a multi-resolution analysis may be crucially relevant for flexible and behaviorally relevant sound processing and may constitute one of the computational underpinnings of functional specialization in auditory cortex.
Author Summary
How does the human brain analyze natural sounds? Previous functional neuroimaging research could only describe the response patterns that sounds evoke in the human brain at the level of preferential regional activations. A comprehensive account of the neural basis of human hearing, however, requires deriving computational models that are able to provide quantitative predictions of brain responses to natural sounds. Here, we make a significant step in this direction by combining functional magnetic resonance imaging (fMRI) with computational modeling. We compare competing computational models of sound representations and select the model that most accurately predicts the measured fMRI response patterns. The computational models describe the processing of three relevant properties of natural sounds: frequency, temporal modulations and spectral modulations. We find that a model that represents spectral and temporal modulations jointly and in a frequency-dependent fashion provides the best account of fMRI responses and that the functional specialization of auditory cortical fields can be partially accounted for by their modulation tuning. Our results provide insights on how natural sounds are encoded in human auditory cortex and our methodological approach constitutes an advance in the way this question can be addressed in future studies.
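The spectral and temporal modulation decomposition that such encoding models build on can be sketched with a 2-D Fourier transform of a toy spectrogram. The ripple stimulus and array sizes below are invented; real models apply a bank of localized modulation filters rather than a single global FFT.

```python
import numpy as np

# Toy "spectrogram": 64 frequency channels x 128 time frames containing a
# spectro-temporal ripple (2 cycles across frequency, 4 cycles across time).
n_f, n_t = 64, 128
f = np.arange(n_f)[:, None]
t = np.arange(n_t)[None, :]
spec = np.cos(2 * np.pi * 2 * f / n_f) * np.cos(2 * np.pi * 4 * t / n_t)

# The 2-D Fourier amplitude of the spectrogram gives its energy as a joint
# function of spectral modulation (rows) and temporal modulation (columns).
mod = np.abs(np.fft.fft2(spec))
spec_idx, temp_idx = np.unravel_index(np.argmax(mod), mod.shape)
```

The peak lands at the ripple's modulation rates (2 cycles along frequency, 4 along time, up to FFT mirror symmetry), which is the kind of joint spectro-temporal tuning the competing models assign to neuronal populations.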
PMCID: PMC3879146  PMID: 24391486
18.  Electrodiffusive Model for Astrocytic and Neuronal Ion Concentration Dynamics 
PLoS Computational Biology  2013;9(12):e1003386.
The cable equation is a proper framework for modeling electrical neural signaling that takes place at a timescale at which the ionic concentrations vary little. However, in neural tissue there are also key dynamic processes that occur at longer timescales. For example, extended periods of intense neural signaling may cause the local extracellular K+-concentration to increase by several millimolar. The clearance of this excess K+ depends partly on diffusion in the extracellular space, partly on local uptake by astrocytes, and partly on intracellular transport (spatial buffering) within astrocytes. These processes, which take place on the time scale of seconds, demand a mathematical description able to account for the spatiotemporal variations in ion concentrations as well as the subsequent effects of these variations on the membrane potential. Here, we present a general electrodiffusive formalism for modeling ion concentration dynamics in a one-dimensional geometry, including both the intra- and extracellular domains. Based on the Nernst-Planck equations, this formalism ensures that the membrane potential and ion concentrations are consistent with one another, ensures global particle/charge conservation, and accounts for diffusion and concentration-dependent variations in resistivity. We apply the formalism to a model of astrocytes exchanging ions with the extracellular space. The simulations show that K+-removal from high-concentration regions is driven by a local depolarization of the astrocyte membrane, which concertedly (i) increases the local astrocytic uptake of K+, (ii) suppresses extracellular transport of K+, (iii) increases axial transport of K+ within astrocytes, and (iv) facilitates astrocytic release of K+ in regions where the extracellular concentration is low. Together, these mechanisms seem to provide a robust regulatory scheme for shielding the extracellular space from excess K+.
Author Summary
When neurons generate electrical signals they release potassium ions (K+) into the extracellular space. During periods of intense neural activity, the local extracellular K+ may increase drastically. If it becomes too high, it can lead to neural dysfunction. Astrocytes (a type of glial cell) are involved in preventing this from happening. Astrocytes can take up excess K+, transport it intracellularly, and release it in regions where the concentration is lower. This process is called spatial buffering, and a full mechanistic understanding of it is currently lacking. The aim of this work is twofold: First, we develop a formalism for modeling ion concentration dynamics in the intra- and extracellular space. The formalism is general, and could be used to simulate many cellular processes. It accounts for ion transport due to diffusion (along concentration gradients) as well as electrical migration (along voltage gradients). It extends previous, related formalisms, which have focused only on intracellular dynamics. Secondly, we apply the formalism to model how astrocytes exchange ions with the extracellular space. We conclude that the membrane mechanisms possessed by astrocytes seem optimal for shielding the extracellular space from excess K+, and provide a full mechanistic description of the spatial (K+) buffering process.
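The Nernst-Planck flux underlying the formalism combines diffusion along the concentration gradient with electrical migration along the voltage gradient: j = -D (dc/dx + (zF/RT) c dV/dx). A sketch with standard constants for K+ but an invented concentration bump and voltage profile:

```python
import numpy as np

# SI constants; D is a typical K+ diffusion coefficient, z its valence.
D, z, F, R, T = 1.96e-9, 1, 96485.0, 8.314, 298.0

x = np.linspace(0, 1e-4, 101)                    # 100 um, 1 um resolution
c = 3.0 + 7.0 * np.exp(-((x - 5e-5) / 1e-5)**2)  # mol/m^3: local K+ excess
V = -0.07 + 0.01 * (x / x[-1])                   # V: assumed voltage gradient

dcdx = np.gradient(c, x)
dVdx = np.gradient(V, x)
j_diff = -D * dcdx                               # diffusive component
j_drift = -D * (z * F / (R * T)) * c * dVdx      # electrical migration
j_total = j_diff + j_drift                       # Nernst-Planck flux, mol/(m^2 s)
```

At the concentration peak the diffusive term vanishes while migration does not, illustrating why a voltage gradient (e.g. from local membrane depolarization) can move K+ even where concentration gradients cannot.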
PMCID: PMC3868551  PMID: 24367247
19.  Dread and the Disvalue of Future Pain 
PLoS Computational Biology  2013;9(11):e1003335.
Standard theories of decision-making involving delayed outcomes predict that people should defer a punishment, whilst advancing a reward. In some cases, such as pain, people seem to prefer to expedite punishment, implying that its anticipation carries a cost, often conceptualized as ‘dread’. Despite empirical support for the existence of dread, whether and how it depends on prospective delay is unknown. Furthermore, it is unclear whether dread represents a stable component of value, or is modulated by biases such as framing effects. Here, we examine choices made between different numbers of painful shocks to be delivered faithfully at different time points up to 15 minutes in the future, as well as choices between hypothetical painful dental appointments at time points of up to approximately eight months in the future, to test alternative models for how future pain is disvalued. We show that future pain initially becomes increasingly aversive with increasing delay, but does so at a decreasing rate. This is consistent with a value model in which moment-by-moment dread increases up to the time of expected pain, such that dread becomes equivalent to the discounted expectation of pain. For a minority of individuals pain has maximum negative value at intermediate delay, suggesting that the dread function may itself be prospectively discounted in time. Framing an outcome as relief reduces the overall preference to expedite pain, which can be parameterized by reducing the rate of the dread-discounting function. Our data support an account of disvaluation for primary punishments such as pain, which differs fundamentally from existing models applied to financial punishments, in which dread exerts a powerful but time-dependent influence over choice.
Author Summary
People often prefer to ‘get pain out of the way’, treating pain in the future as more significant than pain now. One explanation, termed ‘dread’, is that anticipating pain is unpleasant or disadvantageous, rather like pain itself. Human brain imaging studies support the existence of dread, though it is unknown whether and how dread depends on the timing of future pain. We address this question by offering people decisions between moderately painful stimuli, and separately between imagined painful dental appointments occurring at different time points in the future, and use their choices to estimate dread. We show that future pain initially becomes more unpleasant when it is delayed, but as pain is moved further into the future, the effect of delay decreases. This is consistent with dread increasing as anticipated pain draws nearer, which is then combined with a general (and opposing) tendency to down-weight the significance of future events. We also show that dread can be attenuated by describing pain in terms of relief from an imagined even more severe pain. These observations reveal important principles about how people estimate the value of anticipated pain – relevant to a diverse range of human emotion and behavior.
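One member of the model family compared in the paper, dread as the discounted anticipation of pain with the dread function itself prospectively discounted, can be sketched as a simple sum. All parameter values below are illustrative assumptions, not fitted estimates from the study.

```python
def disvalue(delay, pain=1.0, gamma=0.95, beta=0.8):
    """Disvalue of `pain` delivered after `delay` steps: the discounted
    pain itself plus dread accrued at each future moment of anticipation,
    with the dread function prospectively discounted by `beta`."""
    discounted_pain = pain * gamma**delay
    dread = sum(beta**tau * pain * gamma**(delay - tau)
                for tau in range(delay))
    return discounted_pain + dread

curve = [disvalue(T) for T in range(60)]
peak = max(range(60), key=lambda T: curve[T])
```

Because dread accumulates with delay while ordinary discounting shrinks the pain term, the disvalue first rises and then falls, placing the maximum negative value at an intermediate delay, the non-monotonic pattern reported for a minority of individuals.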
PMCID: PMC3836706  PMID: 24277999
20.  Stochastic Computations in Cortical Microcircuit Models 
PLoS Computational Biology  2013;9(11):e1003311.
Experimental data from neuroscience suggest that a substantial amount of knowledge is stored in the brain in the form of probability distributions over network states and trajectories of network states. We provide a theoretical foundation for this hypothesis by showing that even very detailed models for cortical microcircuits, with data-based diverse nonlinear neurons and synapses, have a stationary distribution of network states and trajectories of network states to which they converge exponentially fast from any initial state. We demonstrate that this convergence holds in spite of the non-reversibility of the stochastic dynamics of cortical microcircuits. We further show that, in the presence of background network oscillations, separate stationary distributions emerge for different phases of the oscillation, in accordance with experimentally reported phase-specific codes. We complement these theoretical results by computer simulations that investigate resulting computation times for typical probabilistic inference tasks on these internally stored distributions, such as marginalization or marginal maximum-a-posteriori estimation. Furthermore, we show that the inherent stochastic dynamics of generic cortical microcircuits enables them to quickly generate approximate solutions to difficult constraint satisfaction problems, where stored knowledge and current inputs jointly constrain possible solutions. This provides a powerful new computing paradigm for networks of spiking neurons, that also throws new light on how networks of neurons in the brain could carry out complex computational tasks such as prediction, imagination, memory recall and problem solving.
Author Summary
The brain has not only the capability to process sensory input, but it can also produce predictions, imaginations, and solve problems that combine learned knowledge with information about a new scenario. But although these more complex information processing capabilities lie at the heart of human intelligence, we still do not know how they are organized and implemented in the brain. Numerous studies in cognitive science and neuroscience conclude that many of these processes involve probabilistic inference. This suggests that neuronal circuits in the brain process information in the form of probability distributions, but we are missing insight into how complex distributions could be represented and stored in large and diverse networks of neurons in the brain. We prove in this article that realistic cortical microcircuit models can store complex probabilistic knowledge by embodying probability distributions in their inherent stochastic dynamics – yielding a knowledge representation in which typical probabilistic inference problems such as marginalization become straightforward readout tasks. We show that in cortical microcircuit models such computations can be performed satisfactorily within a few hundred milliseconds. Furthermore, we demonstrate how internally stored distributions can be programmed in a simple manner to endow a neural circuit with powerful problem solving capabilities.
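The claim that marginalization becomes a straightforward readout can be illustrated with an assumed joint distribution over three binary variables: if a circuit's stochastic dynamics visit network states with the stationary probabilities, a downstream reader estimates a marginal simply by counting. The distribution below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed stationary distribution over all states of three binary variables.
states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
p = np.array([0.18, 0.02, 0.10, 0.10, 0.05, 0.15, 0.05, 0.35])

# Stand-in for the network's sampling dynamics: draw states from p, then
# estimate the marginal P(c = 1) by counting how often c was active.
idx = rng.choice(len(states), size=50_000, p=p)
p_c1_sampled = np.mean([states[i][2] for i in idx])
p_c1_exact = sum(pi for s, pi in zip(states, p) if s[2] == 1)
```

The sampled estimate converges to the exact marginal at the usual 1/sqrt(N) rate, which is why inference time in such a scheme trades off directly against readout precision.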
PMCID: PMC3828141  PMID: 24244126
21.  Predictive Coding of Dynamical Variables in Balanced Spiking Networks 
PLoS Computational Biology  2013;9(11):e1003258.
Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single unit properties as widely used population code models (e.g. tuning curves, Poisson distributed spike trains), balanced networks are orders of magnitude more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated.
Author Summary
Two observations about the cortex have puzzled and fascinated neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks representing information reliably and with a small number of spikes. To achieve such efficiency, spikes of individual neurons must communicate prediction errors about a common population-level signal, automatically resulting in balanced excitation and inhibition and highly variable neural responses. We illustrate our approach by focusing on the implementation of linear dynamical systems. Among other things, this allows us to construct a network of spiking neurons that can integrate input signals, yet is robust against many perturbations. Most importantly, our approach shows that neural variability cannot be equated to noise. Despite exhibiting the same single unit properties as other widely used network models, our balanced networks are orders of magnitude more reliable. Our results suggest that the precision of cortical representations has been strongly underestimated.
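The greedy "spike only if it improves the representation" rule at the heart of this framework can be sketched for a one-dimensional signal tracked by two neurons with opposite readout kernels: a neuron fires exactly when its spike would reduce the squared readout error. All constants are assumed toy values.

```python
import numpy as np

dt, lam = 1e-3, 10.0                     # time step (s), readout decay rate
w = np.array([0.1, -0.1])                # decoding weights: a +/- neuron pair
T = 2000
x = np.sin(np.linspace(0, 4 * np.pi, T))  # target dynamical variable

xhat, est = 0.0, np.zeros(T)
spikes = np.zeros((2, T))
for t in range(T):
    xhat += -lam * xhat * dt             # leaky decay of the linear readout
    for i in range(2):
        # Greedy rule: spike iff it decreases (x - xhat)^2, which reduces
        # to the projected error exceeding half the weight's own norm.
        if w[i] * (x[t] - xhat) > w[i]**2 / 2:
            xhat += w[i]
            spikes[i, t] = 1
    est[t] = xhat

err = np.mean((x - est)**2)              # readout tracks the signal closely
```

The readout error stays bounded by roughly half a spike's weight, so the decoded estimate follows the signal tightly even though individual spike times look irregular, the sense in which variability here is not noise.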
PMCID: PMC3828152  PMID: 24244113
22.  Average Is Optimal: An Inverted-U Relationship between Trial-to-Trial Brain Activity and Behavioral Performance 
PLoS Computational Biology  2013;9(11):e1003348.
It is well known that even under identical task conditions, there is a tremendous amount of trial-to-trial variability in both brain activity and behavioral output. Thus far the vast majority of event-related potential (ERP) studies investigating the relationship between trial-to-trial fluctuations in brain activity and behavioral performance have only tested a monotonic relationship between them. However, it was recently found that across-trial variability can correlate with behavioral performance independent of trial-averaged activity. This finding predicts a U- or inverted-U- shaped relationship between trial-to-trial brain activity and behavioral output, depending on whether larger brain variability is associated with better or worse behavior, respectively. Using a visual stimulus detection task, we provide evidence from human electrocorticography (ECoG) for an inverted-U brain-behavior relationship: When the raw fluctuation in broadband ECoG activity is closer to the across-trial mean, hit rate is higher and reaction times faster. Importantly, we show that this relationship is present not only in the post-stimulus task-evoked brain activity, but also in the pre-stimulus spontaneous brain activity, suggesting anticipatory brain dynamics. Our findings are consistent with the presence of stochastic noise in the brain. They further support attractor network theories, which postulate that the brain settles into a more confined state space under task performance, and proximity to the targeted trajectory is associated with better performance.
Author Summary
The human brain is notoriously “noisy”. Even with identical physical sensory inputs and task demands, brain responses and behavioral output vary tremendously from trial to trial. Such brain and behavioral variability and the relationship between them have been the focus of intense neuroscience research for decades. Traditionally, it is thought that the relationship between trial-to-trial brain activity and behavioral performance is monotonic: the highest or lowest brain activity levels are associated with the best behavioral performance. Using invasive recordings in neurosurgical patients, we demonstrate an inverted-U relationship between brain and behavioral variability. Under such a relationship, moderate brain activity is associated with the best performance, while both very low and very high brain activity levels are predictive of compromised performance. These results have significant implications for our understanding of brain functioning. They further support recent theoretical frameworks that view the brain as an active nonlinear dynamical system instead of a passive signal-processing device.
PMCID: PMC3820514  PMID: 24244146
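The core analysis, grouping trials by their distance from the across-trial mean and comparing performance across groups, can be sketched with synthetic data. The quantile binning and bin count below are illustrative choices, not the study's exact pipeline.

```python
import numpy as np

def performance_by_deviation(activity, performance, n_bins=3):
    """Bin trials by |activity - across-trial mean| and return the mean
    performance per bin; an inverted-U predicts decreasing values as the
    deviation from the mean grows."""
    dev = np.abs(activity - activity.mean())
    edges = np.quantile(dev, np.linspace(0.0, 1.0, n_bins + 1))
    edges[-1] += 1e-9                        # make the top edge inclusive
    idx = np.digitize(dev, edges) - 1
    return np.array([performance[idx == b].mean() for b in range(n_bins)])
```

On synthetic trials whose performance peaks at the mean activity level, the bin means decrease monotonically with deviation, which is the signature of the inverted-U reported above.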
23.  A Neurocomputational Model of the Mismatch Negativity 
PLoS Computational Biology  2013;9(11):e1003288.
The mismatch negativity (MMN) is an event-related potential evoked by violations of regularity. Here, we present a model of the underlying neuronal dynamics based upon the idea that auditory cortex continuously updates a generative model to predict its sensory inputs. The MMN is then modelled as the superposition of the electric fields evoked by neuronal activity reporting prediction errors. The process by which auditory cortex generates predictions and resolves prediction errors was simulated using generalised (Bayesian) filtering – a biologically plausible scheme for probabilistic inference on the hidden states of hierarchical dynamical models. The resulting scheme generates realistic MMN waveforms, explains the qualitative effects of deviant probability and magnitude on the MMN – in terms of latency and amplitude – and makes quantitative predictions about the interactions between deviant probability and magnitude.  This work advances a formal understanding of the MMN and – more generally – illustrates the potential for developing computationally informed dynamic causal models of empirical electromagnetic responses.
Author Summary
Computational neuroimaging enables quantitative inferences from non-invasive measures of brain activity on the underlying mechanisms. Ultimately, we would like to understand these mechanisms not only in terms of physiology but also in terms of computation. So far, this has not been addressed by mathematical models of neuroimaging data (e.g., dynamic causal models), which have rather focused on ever more detailed inferences about physiology. Here we present the first instance of a dynamic causal model that explains electrophysiological data in terms of computation rather than physiology. Concretely, we predict the mismatch negativity – an event-related potential elicited by regularity violation – from the dynamics of perceptual inference as prescribed by the free energy principle. The resulting model explains the waveform of the mismatch negativity and some of its phenomenological properties at a level of precision that has not been attempted before. This highlights the potential of neurocomputational dynamic causal models to enable inferences from neuroimaging data on neurocomputational mechanisms.
PMCID: PMC3820518  PMID: 24244118
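The qualitative logic, a generative model whose prediction errors surge at regularity violations, can be illustrated with a scalar Kalman filter run over an oddball tone sequence. This is a deliberately minimal stand-in for the generalised Bayesian filtering scheme described above; the drift and observation-noise variances are assumed toy values.

```python
import numpy as np

def kalman_prediction_errors(tones, q=1e-4, r=0.05):
    """Scalar Kalman filter treating each tone as a noisy observation of a
    slowly drifting hidden 'standard'; returns the absolute prediction error
    per tone, a crude stand-in for trial-wise MMN amplitude."""
    x, p = tones[0], 1.0          # belief about the standard and its variance
    errs = np.zeros(len(tones))
    for t, y in enumerate(tones):
        p += q                    # predict: the hidden standard drifts slowly
        e = y - x                 # prediction error on the incoming tone
        k = p / (p + r)           # Kalman gain
        x += k * e                # update the belief toward the observation
        p *= 1.0 - k
        errs[t] = abs(e)
    return errs
```

Rare deviants yield large errors because the belief has converged on the standard; making deviants more frequent pulls the belief toward them and shrinks the error, mirroring the deviant-probability effect discussed above.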
24.  A Brain-Machine Interface for Control of Medically-Induced Coma 
PLoS Computational Biology  2013;9(10):e1003284.
Medically-induced coma is a drug-induced state of profound brain inactivation and unconsciousness used to treat refractory intracranial hypertension and to manage treatment-resistant epilepsy. The state of coma is achieved by continually monitoring the patient's brain activity with an electroencephalogram (EEG) and manually titrating the anesthetic infusion rate to maintain a specified level of burst suppression, an EEG marker of profound brain inactivation in which bursts of electrical activity alternate with periods of quiescence or suppression. Medical coma is often required for several days, making continual manual titration impractical. A more rational approach would be to implement a brain-machine interface (BMI) that monitors the EEG and adjusts the anesthetic infusion rate in real time to maintain the specified target level of burst suppression. We used a stochastic control framework to develop a BMI to control medically-induced coma in a rodent model. The BMI controlled an EEG-guided closed-loop infusion of the anesthetic propofol to maintain precisely specified dynamic target levels of burst suppression. We used as the control signal the burst suppression probability (BSP), the brain's instantaneous probability of being in the suppressed state. We characterized the EEG response to propofol using a two-dimensional linear compartment model and estimated the model parameters specific to each animal prior to initiating control. We derived a recursive Bayesian binary filter algorithm to compute the BSP from the EEG, and designed controllers using a linear-quadratic-regulator and a model-predictive control strategy. Both controllers used the estimated BSP as feedback. The BMI accurately controlled burst suppression in individual rodents across dynamic target trajectories, and enabled prompt transitions between target levels while avoiding both undershoot and overshoot. 
The median performance error for the BMI was 3.6%, the median bias was -1.4% and the overall posterior probability of reliable control was 1 (95% Bayesian credibility interval of [0.87, 1.0]). A BMI can maintain reliable and accurate real-time control of medically-induced coma in a rodent model suggesting this strategy could be applied in patient care.
Author Summary
Brain-machine interfaces (BMI) for closed-loop control of anesthesia have the potential to enable fully automated and precise control of brain states in patients requiring anesthesia care. Medically-induced coma is one such drug-induced state in which the brain is profoundly inactivated and unconscious and the electroencephalogram (EEG) pattern consists of bursts of electrical activity alternating with periods of suppression, termed burst suppression. Medical coma is induced to treat refractory intracranial hypertension and uncontrollable seizures. The state of coma is often required for days, making accurate manual control infeasible. We develop a BMI that can automatically and precisely control the level of burst suppression in real time in individual rodents. The BMI consists of novel estimation and control algorithms that take as input the EEG activity, estimate the burst suppression level based on this activity, and use this estimate as feedback to control the drug infusion rate in real time. The BMI maintains precise control and promptly changes the level of burst suppression while avoiding overshoot or undershoot. Our work demonstrates the feasibility of automatic reliable and accurate control of medical coma that can provide considerable therapeutic benefits.
PMCID: PMC3814408  PMID: 24204231
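The closed-loop logic can be sketched with a deliberately simplified plant and controller: a one-compartment drug model, a sigmoidal mapping from effect-site level to BSP, and a proportional-integral controller standing in for the paper's two-compartment model, Bayesian BSP estimator, and LQR/MPC controllers. All rate constants and gains below are assumed toy values.

```python
import numpy as np

def simulate_closed_loop(target, kp=2.0, ki=0.5, ke=0.5, dt=0.1):
    """Toy closed loop: the effect-site drug level x decays at rate ke and is
    driven by the infusion u; BSP is a sigmoid of x; a PI controller adjusts
    u to hold BSP at the (possibly time-varying) target level."""
    x, integ = 0.0, 0.0
    bsp = np.zeros(len(target))
    for t in range(len(target)):
        p = 1.0 / (1.0 + np.exp(-(x - 2.0)))   # BSP from effect-site level
        bsp[t] = p
        err = target[t] - p
        integ += err * dt
        u = max(0.0, kp * err + ki * integ)    # infusion rate cannot go negative
        x += dt * (-ke * x + u)                # one-compartment kinetics (Euler)
    return bsp
```

Even this toy loop drives BSP from baseline to a constant target and holds it there; the paper's estimator-plus-optimal-controller design addresses what this sketch ignores, namely that BSP must itself be inferred from noisy binary EEG events.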
25.  Dynamic functional connectivity: Promise, issues, and interpretations 
NeuroImage  2013;80. doi:10.1016/j.neuroimage.2013.05.079.
The brain must dynamically integrate, coordinate, and respond to internal and external stimuli across multiple time scales. Non-invasive measurements of brain activity with fMRI have greatly advanced our understanding of the large-scale functional organization supporting these fundamental features of brain function. Conclusions from previous resting-state fMRI investigations were based upon static descriptions of functional connectivity (FC), and only recently have studies begun to capitalize on the wealth of information contained within the temporal features of spontaneous BOLD FC. Emerging evidence suggests that dynamic FC metrics may index changes in macroscopic neural activity patterns underlying critical aspects of cognition and behavior, though limitations with regard to analysis and interpretation remain. Here, we review recent findings, methodological considerations, neural and behavioral correlates, and future directions in the emerging field of dynamic FC investigations.
PMCID: PMC3807588  PMID: 23707587
Functional connectivity; Resting state; Dynamics; Spontaneous activity; Functional MRI (fMRI); Fluctuations
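A minimal version of the most common dynamic-FC metric, sliding-window correlation, can be sketched as follows; window length and step size are exactly the kind of analysis choices the review flags as open issues, and the values used here are arbitrary.

```python
import numpy as np

def sliding_window_fc(ts, window, step=1):
    """Sliding-window functional connectivity: turn a (time x regions) BOLD
    array into one Pearson correlation matrix per window position."""
    n_t = ts.shape[0]
    starts = range(0, n_t - window + 1, step)
    # np.corrcoef treats rows as variables, hence the transpose per window
    return np.stack([np.corrcoef(ts[s:s + window].T) for s in starts])
```

On a synthetic pair of regions whose coupling flips sign halfway through the scan, the windowed estimates recover the change that a single static correlation over the full run would average away.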
