Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model—a minimal extension of the canonical linear-nonlinear model of a single neuron, to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.
In the sensory periphery, stimuli are represented by patterns of spikes and silences across a population of sensory neurons. Because the neurons form an interconnected network, the code cannot be understood by looking at single cells alone. Recent recordings in the retina have enabled us to study populations of a hundred or more neurons that carry the visual information into the brain, and thus build probabilistic models of the neural code. Here we present a minimal (maximum entropy) yet powerful extension of well-known linear/nonlinear models for independent neurons, to an interacting population. This model reproduces the behavior of single cells as well as the structure of correlations in neural spiking. Our model gives a much better prediction of the complete set of patterns of spiking and silence across a population of cells, allowing us to explore the properties of the stimulus-response mapping, and estimate the information transmission, in bits per second, that the population carries about the stimulus. Our results show that to understand the code, we need to shift our focus from reproducing single-cell properties (such as firing rates) towards understanding the total “vocabulary” of patterns emitted by the population, and that network correlations play a central role in shaping the code of large neural populations.
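To make the SDME form concrete, the following minimal sketch enumerates the conditional codeword distribution P(σ|s) ∝ exp(Σᵢ hᵢ(s)σᵢ + Σᵢ<ⱼ Jᵢⱼσᵢσⱼ) for a toy three-cell population. This is an illustrative assumption about the model class, not the fitted 100-cell model from the study; the filter values and couplings below are arbitrary numbers chosen for demonstration.

```python
import itertools
import math

def sdme_probs(h, J):
    """Exact codeword distribution of a toy stimulus-dependent maximum
    entropy (SDME) model:
    P(sigma | s) ∝ exp( sum_i h_i(s) sigma_i + sum_{i<j} J_ij sigma_i sigma_j )."""
    n = len(h)
    weights = {}
    for sigma in itertools.product((0, 1), repeat=n):
        e = sum(h[i] * sigma[i] for i in range(n))
        e += sum(J[i][j] * sigma[i] * sigma[j]
                 for i in range(n) for j in range(i + 1, n))
        weights[sigma] = math.exp(e)
    z = sum(weights.values())          # partition function
    return {s: w / z for s, w in weights.items()}

# Stimulus-dependent fields h_i(s): each cell filters the stimulus as in a
# linear-nonlinear model; the pairwise couplings J_ij are stimulus-independent.
stimulus = 0.8
filters = [1.0, -0.5, 0.3]                  # hypothetical linear filters
h = [f * stimulus - 1.0 for f in filters]   # negative bias keeps firing sparse
J = [[0.0, 0.4, 0.1],
     [0.0, 0.0, 0.2],
     [0.0, 0.0, 0.0]]
P = sdme_probs(h, J)
```

For a real population this exact enumeration is infeasible (2^100 codewords), which is precisely why the maximum entropy parametrization and approximate inference matter.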
Many sound sources can only be recognised from the pattern of sounds they emit, and not from the individual sound events that make up their emission sequences. Auditory scene analysis addresses the difficult task of interpreting the sound world in terms of an unknown number of discrete sound sources (causes) with possibly overlapping signals, and therefore of associating each event with the appropriate source. There are potentially many different ways in which incoming events can be assigned to different causes, which means that the auditory system has to choose between them. This problem has been studied for many years using the auditory streaming paradigm, and recently it has become apparent that instead of making one fixed perceptual decision, given sufficient time, auditory perception switches back and forth between the alternatives—a phenomenon known as perceptual bi- or multi-stability. We propose a new model of auditory scene analysis at the core of which is a process that seeks to discover predictable patterns in the ongoing sound sequence. Representations of predictable fragments are created on the fly, and are maintained, strengthened, or weakened on the basis of their predictive success and their conflict with other representations. Auditory perceptual organisation emerges spontaneously from the nature of the competition between these representations. We present detailed comparisons between the model simulations and data from an auditory streaming experiment, and show that the model accounts for many important findings, including: the emergence of, and switching between, alternative organisations; the influence of stimulus parameters on perceptual dominance, switching rate and perceptual phase durations; and the build-up of auditory streaming.
The principal contribution of the model is to show that a two-stage process of pattern discovery and competition between incompatible patterns can account for both the contents (perceptual organisations) and the dynamics of human perception in auditory streaming.
The sound waves produced by objects in the environment mix together before reaching the ears. Before we can make sense of an auditory scene, our brains must solve the puzzle of how to disassemble the sound waveform into groupings that correspond to the original source signals. How is this feat accomplished? We propose that the auditory system continually scans the structure of incoming signals in search of clues to indicate which pieces belong together. For instance, sound events may belong together if they have similar features, or form part of a clear temporal pattern. However, this process is complicated by our lack of knowledge of future events and the many possible ways in which even a simple sound sequence can be decomposed. The biological solution is multistability: one possible interpretation of a sound is perceived initially, which then gives way to another interpretation, and so on. We propose a model of auditory multistability, in which fragmentary descriptions of the signal compete and cooperate to explain the sound scene. We demonstrate, using simplified experimental stimuli, that the model can account for both the contents (perceptual organisations) and the dynamics of human perception in auditory streaming.
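The switching dynamics described above can be illustrated with a generic toy model of perceptual bistability (not the pattern-discovery model proposed in the paper): two percept representations compete through mutual inhibition, while slow adaptation erodes whichever representation currently dominates, producing spontaneous alternation. All parameter values are arbitrary choices for demonstration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def simulate(T=60000, dt=0.01):
    """Two percept representations compete via mutual inhibition; slow
    adaptation erodes the dominant one, producing spontaneous switching."""
    r = [0.6, 0.4]             # activity of the two interpretations
    a = [0.0, 0.0]             # slow adaptation variables
    beta, tau_a = 3.0, 20.0    # inhibition strength, adaptation time constant
    dominant = []
    for _ in range(T):
        new_r = []
        for i in (0, 1):
            inp = 1.0 - beta * r[1 - i] - a[i]        # drive minus inhibition
            new_r.append(r[i] + dt * (-r[i] + sigmoid(8.0 * (inp - 0.2))))
        r = new_r
        for i in (0, 1):
            a[i] += dt / tau_a * (-a[i] + 2.0 * r[i])  # adaptation tracks activity
        dominant.append(0 if r[0] > r[1] else 1)
    return dominant

dominant = simulate()
switches = sum(dominant[k] != dominant[k - 1] for k in range(1, len(dominant)))
```

In this sketch, the durations of the dominance phases are set by the adaptation time constant; in the proposed model, by contrast, switching emerges from the competition between discovered predictive patterns rather than from a fixed two-percept circuit.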
The formation of the complex network architecture of neural systems is subject to multiple structural and functional constraints. Two obvious but apparently contradictory constraints are low wiring cost and high processing efficiency, characterized by short overall wiring length and a small average number of processing steps, respectively. Growing evidence shows that neural network topology results from a trade-off between physical cost and functional value. However, the relationship between these competing constraints and complex topology is not well understood quantitatively. We explored this relationship systematically by reconstructing two known neural networks, Macaque cortical connectivity and C. elegans neuronal connections, from combinatorial optimization under wiring cost and processing efficiency constraints, governed by a control parameter, and comparing the reconstructed networks to the real networks. We found that in both neural systems, the reconstructed networks derived from the two constraints can reveal some important relations between the spatial layout of nodes and the topological connectivity, and match several properties of the real networks. The reconstructed and real networks had a similar modular organization over a broad range of the control parameter, resulting from spatial clustering of network nodes. Hubs emerged due to the competition of the two constraints, and their positions were close to, and partly coincided with, the real hubs over a range of parameter values. The degree of nodes was correlated with the density of nodes in their spatial neighborhood in both reconstructed and real networks. Generally, the rebuilt network matched a significant portion of real links, especially short-distance ones. These findings provide clear evidence to support the hypothesis of trade-off between multiple constraints on brain networks. The two constraints of wiring cost and processing efficiency, however, cannot explain all salient features in the real networks.
The discrepancy suggests that there are further relevant factors that are not yet captured here.
What are the essential relationships between fundamental physical constraints and the architecture of neural systems? Most existing investigations have considered a single constraint, either wiring cost or processing path efficiency, and little is known about how characteristic neural network features, such as the simultaneous existence of modules and hubs, are related to the constraints from multiple requirements. Here we emphasized the competition between the global wiring cost and an important functional requirement, path efficiency, as factors in forming Macaque cortical connectivity and C. elegans neuronal connections. By comparing real to reconstructed networks using optimization under multiple constraints, we found that several network features are related to the competition of these two constraints, in particular the simultaneous formation of network modules and hubs. However, not all the properties of the real networks could be attributed to these two constraints, suggesting that additional structural or functional requirements likely exist.
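The optimization scheme described above can be sketched with a toy version of the combined objective: minimize E = α·(wiring cost) + (1−α)·(average shortest-path length) over rewirings of a small graph. This is a simplified, illustrative stand-in (greedy rewiring on a 1-D layout with arbitrary parameters), not the reconstruction procedure applied to the real connectomes.

```python
import random

def wiring_length(edges):
    # toy 1-D layout: node i sits at position i, so wire length is |i - j|
    return sum(abs(i - j) for i, j in edges)

def avg_path_length(n, edges):
    adj = {i: set() for i in range(n)}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    total = 0
    for s in range(n):                      # BFS from every node
        dist, frontier = {s: 0}, [s]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        if len(dist) < n:
            return float('inf')             # disconnected graph
        total += sum(dist.values())
    return total / (n * (n - 1))

def energy(alpha, n, edges):
    apl = avg_path_length(n, edges)
    if apl == float('inf'):
        return float('inf')                 # forbid disconnection
    return alpha * wiring_length(edges) + (1 - alpha) * apl

def optimize(alpha, n=12, m=18, steps=3000, seed=1):
    """Greedily rewire to minimize E = alpha*cost + (1-alpha)*path length."""
    rng = random.Random(seed)
    edges = {(i, i + 1) for i in range(n - 1)}   # start from a connected chain
    while len(edges) < m:
        i, j = sorted(rng.sample(range(n), 2))
        edges.add((i, j))
    e_cur = energy(alpha, n, edges)
    for _ in range(steps):
        old = rng.choice(sorted(edges))
        i, j = sorted(rng.sample(range(n), 2))
        if (i, j) in edges:
            continue
        trial = (edges - {old}) | {(i, j)}
        e_new = energy(alpha, n, trial)
        if e_new < e_cur:
            edges, e_cur = trial, e_new
    return edges

cheap = optimize(alpha=1.0)   # pure wiring-cost minimization: short local links
fast = optimize(alpha=0.0)    # pure path-efficiency: long-range shortcuts
```

Sweeping α between these extremes is the analogue of the control parameter in the study: intermediate values trade local clustering against long-range shortcuts.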
Graph theoretical analysis has played a key role in characterizing global features of the topology of complex networks, describing diverse systems such as protein interactions, food webs, social relations and brain connectivity. How system elements communicate with each other depends not only on the structure of the network, but also on the nature of the system's dynamics which are constrained by the amount of knowledge and resources available for communication processes. Complementing widely used measures that capture efficiency under the assumption that communication preferentially follows shortest paths across the network (“routing”), we define analytic measures directed at characterizing network communication when signals flow in a random walk process (“diffusion”). The two dimensions of routing and diffusion efficiency define a morphospace for complex networks, with different network topologies characterized by different combinations of efficiency measures and thus occupying different regions of this space. We explore the relation of network topologies and efficiency measures by examining canonical network models, by evolving networks using a multi-objective optimization strategy, and by investigating real-world network data sets. Within the efficiency morphospace, specific aspects of network topology that differentially favor efficient communication for routing and diffusion processes are identified. Charting regions of the morphospace that are occupied by canonical, evolved or real networks allows inferences about the limits of communication efficiency imposed by connectivity and dynamics, as well as the underlying selection pressures that have shaped network topology.
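The two efficiency notions contrasted above can be illustrated on a toy graph: routing efficiency as the mean inverse shortest-path length, and diffusion efficiency as the mean inverse first-passage time of an unbiased random walk (here estimated by Monte Carlo simulation rather than analytically). The ring-plus-shortcut network and all parameters are hypothetical illustrations, not the data sets analyzed in the study.

```python
import random
from collections import deque

def shortest_paths(adj, s):
    dist, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def routing_efficiency(adj):
    """Mean inverse shortest-path length (communication via routing)."""
    n = len(adj)
    tot = 0.0
    for s in adj:
        d = shortest_paths(adj, s)
        tot += sum(1.0 / d[t] for t in adj if t != s)
    return tot / (n * (n - 1))

def diffusion_efficiency(adj, walks=300, seed=7):
    """Mean inverse first-passage time of a random walk (diffusion)."""
    rng = random.Random(seed)
    n = len(adj)
    tot = 0.0
    for s in adj:
        for t in adj:
            if s == t:
                continue
            steps = 0
            for _ in range(walks):
                u = s
                while u != t:
                    u = rng.choice(adj[u])   # unbiased step to a neighbor
                    steps += 1
            tot += walks / steps             # 1 / empirical mean passage time
    return tot / (n * (n - 1))

# a ring of 8 nodes with one shortcut (a hypothetical toy network)
adj = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
adj[0].append(4)
adj[4].append(0)

e_rout = routing_efficiency(adj)
e_diff = diffusion_efficiency(adj)
```

Because a random walk can never beat the shortest path, diffusion efficiency is bounded above by routing efficiency on any graph; where a topology sits between the two bounds is what locates it in the efficiency morphospace.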
Local supra-linear summation of excitatory inputs occurring in pyramidal cell dendrites, the so-called dendritic spikes, results in independent spiking dendritic sub-units, which turn pyramidal neurons into two-layer neural networks capable of computing linearly non-separable functions, such as the exclusive OR. Other neuron classes, such as interneurons, may possess only a few independent dendritic sub-units, or only passive dendrites where input summation is purely sub-linear, and where dendritic sub-units are only saturating. To determine if such neurons can also compute linearly non-separable functions, we enumerate, for a given parameter range, the Boolean functions implementable by a binary neuron model with a linear sub-unit and either a single spiking or a saturating dendritic sub-unit. We then analytically generalize these numerical results to an arbitrary number of non-linear sub-units. First, we show that a single non-linear dendritic sub-unit, in addition to the somatic non-linearity, is sufficient to compute linearly non-separable functions. Second, we analytically prove that, with a sufficient number of saturating dendritic sub-units, a neuron can compute all functions computable with purely excitatory inputs. Third, we show that these linearly non-separable functions can be implemented with at least two strategies: one where a dendritic sub-unit is sufficient to trigger a somatic spike; another where somatic spiking requires the cooperation of multiple dendritic sub-units. We formally prove that implementing the latter architecture is possible with both types of dendritic sub-units whereas the former is only possible with spiking dendrites. Finally, we show how linearly non-separable functions can be computed by a generic two-compartment biophysical model and a realistic neuron model of the cerebellar stellate cell interneuron. 
Taken together our results demonstrate that passive dendrites are sufficient to enable neurons to compute linearly non-separable functions.
Classical views on single neuron computation treat dendrites as mere collectors of inputs, which are forwarded to the soma for linear summation and cause a spike output if the sum is sufficiently large. Such a single neuron model can only compute linearly separable input-output functions, representing a small fraction of all possible functions. Recent experimental findings show that in certain pyramidal cells excitatory inputs can be supra-linearly integrated within a dendritic branch, turning this branch into a spiking dendritic sub-unit. Neurons containing many of these dendritic sub-units can compute both linearly separable and linearly non-separable functions. Nevertheless, other neuron types have dendrites which do not spike because the required voltage-gated channels are absent. However, these dendrites sum excitatory inputs sub-linearly, turning branches into saturating sub-units. We wanted to test if this last type of non-linear summation is sufficient for a single neuron to compute linearly non-separable functions. Using a combination of Boolean algebra and biophysical modeling, we show that a neuron with a single non-linear dendritic sub-unit, whether spiking or saturating, is able to compute linearly non-separable functions. Thus, in principle, any neuron with a dendritic tree, even passive, can compute linearly non-separable functions.
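The two implementation strategies can be made concrete with a minimal binary neuron sketch (illustrative parameters, not the paper's enumeration). A saturating sub-unit caps its summed input, so the soma needs the cooperation of several branches; a spiking sub-unit produces an all-or-none dendritic spike large enough that one branch alone fires the soma. Both toy neurons below compute a linearly non-separable positive Boolean function.

```python
from itertools import product

def saturating(v, cap=1.0):
    """Sub-linear summation: the branch output saturates at `cap`."""
    return min(v, cap)

def spiking(v, theta=2.0, amp=3.0):
    """Supra-linear summation: an all-or-none dendritic spike."""
    return amp if v >= theta else 0.0

def neuron(x, subunits, g, soma_theta):
    """Two-layer model: dendritic non-linearity g per branch, then a
    somatic threshold on the summed branch outputs."""
    drive = sum(g(sum(x[i] for i in branch)) for branch in subunits)
    return int(drive >= soma_theta)

# (x1 OR x2) AND (x3 OR x4): linearly non-separable. With saturating
# branches {x1,x2} and {x3,x4}, each branch caps at 1, so the soma
# (threshold 2) fires only when BOTH branches are active: cooperation.
sat_ok = all(
    neuron(x, [(0, 1), (2, 3)], saturating, 2.0)
    == int((x[0] or x[1]) and (x[2] or x[3]))
    for x in product((0, 1), repeat=4)
)

# (x1 AND x2) OR (x3 AND x4): with spiking branches, one dendritic
# spike (amplitude 3) already exceeds the somatic threshold.
spk_ok = all(
    neuron(x, [(0, 1), (2, 3)], spiking, 3.0)
    == int((x[0] and x[1]) or (x[2] and x[3]))
    for x in product((0, 1), repeat=4)
)
```

Note that both target functions are positive (monotone) and use only excitatory weights, consistent with the constraint analyzed in the paper; a non-monotone function such as XOR is excluded by excitation alone.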
As animals move through the world in search of resources, they change course in reaction to both external sensory cues and internally-generated programs. Elucidating the functional logic of complex search algorithms is challenging because the observable actions of the animal cannot be unambiguously assigned to externally- or internally-triggered events. We present a technique that addresses this challenge by assessing quantitatively the contribution of external stimuli and internal processes. We apply this technique to the analysis of rapid turns (“saccades”) of freely flying Drosophila melanogaster. We show that a single scalar feature computed from the visual stimulus experienced by the animal is sufficient to explain a majority (93%) of the turning decisions. We automatically estimate this scalar value from the observable trajectory, without any assumption regarding the sensory processing. A posteriori, we show that the estimated feature field is consistent with previous results measured in other experimental conditions. The remaining turning decisions, not explained by this feature of the visual input, may be attributed to a combination of deterministic processes based on unobservable internal states and purely stochastic behavior. We cannot distinguish these contributions using external observations alone, but we are able to provide a quantitative bound of their relative importance with respect to stimulus-triggered decisions. Our results suggest that comparatively few saccades in free-flying conditions are a result of an intrinsic spontaneous process, contrary to previous suggestions. We discuss how this technique could be generalized for use in other systems and employed as a tool for classifying effects into sensory, decision, and motor categories when used to analyze data from genetic behavioral screens.
Researchers have spent considerable effort studying how specific sensory stimuli elicit behavioral responses and how other behaviors may arise independent of external inputs in conditions of sensory deprivation. Yet an animal in its natural context, such as searching for food or mates, turns both in response to external stimuli and as a result of intrinsic, possibly stochastic, decisions. We show how to estimate the contribution of vision and internal causes to the observable behavior of freely flying Drosophila. We developed a dimensionality reduction scheme that finds a one-dimensional feature of the visual stimulus that best predicts turning decisions. This visual feature extraction is consistent with previous literature on visually elicited fly turning and predicts a large majority of turns in the tested environment. The rarity of stimulus-independent events suggests that fly behavior is more deterministic than previously suggested and that, more generally, animal search strategies may be dominated by responses to stimuli with only modest contributions from internal causes.
Contemporary computational accounts of instrumental conditioning have emphasized a role for a model-based system in which values are computed with reference to a rich model of the structure of the world, and a model-free system in which values are updated without encoding such structure. Much less studied is the possibility of a similar distinction operating at the level of Pavlovian conditioning. In the present study, we scanned human participants while they participated in a Pavlovian conditioning task with a simple structure while measuring activity in the human amygdala using a high-resolution fMRI protocol. After fitting a model-based algorithm and a variety of model-free algorithms to the fMRI data, we found evidence for the superiority of a model-based algorithm in accounting for activity in the amygdala compared to the model-free counterparts. These findings support an important role for model-based algorithms in describing the processes underpinning Pavlovian conditioning, as well as providing evidence of a role for the human amygdala in model-based inference.
A hot topic in the neurobiology of learning is the idea that there may be two distinct mechanisms for learning in the brain: a model-based learning system in which predictions are made with respect to a rich internal model of the learning environment, versus a “model-free” mechanism in which trial-and-error learning occurs without any rich internal representation of the world. While the focus in the literature to date has been on the role of these mechanisms in instrumental conditioning, almost nothing is known about whether more fundamental kinds of learning such as Pavlovian conditioning also involve model-based processes. Furthermore, nothing is known about the extent to which the amygdala, which is known to be a core structure for Pavlovian learning, contains neural signals consistent with a model-based mechanism. To address this question, we used a novel Pavlovian conditioning task and scanned human volunteers with a special high-resolution fMRI sequence that enabled us to obtain signals within the amygdala with over four times the resolution of conventional imaging protocols. Using this approach in combination with sophisticated computational analyses, we find evidence to suggest that the human amygdala is involved in model-based computations during Pavlovian conditioning.
The cerebellum is a brain structure traditionally associated with supervised learning. According to this theory, plasticity at the Parallel Fiber (PF) to Purkinje Cell (PC) synapses is guided by the Climbing fibers (CF), which encode an ‘error signal’. Purkinje cells have thus been modeled as perceptrons, learning input/output binary associations. At maximal capacity, a perceptron with excitatory weights expresses a large fraction of zero-weight synapses, in agreement with experimental findings. However, numerous experiments indicate that the firing rate of Purkinje cells varies in an analog, not binary, manner. In this paper, we study the perceptron with analog inputs and outputs. We show that the optimal input has a sparse binary distribution, in good agreement with the burst firing of the Granule cells. In addition, we show that the weight distribution consists of a large fraction of silent synapses, as in previously studied binary perceptron models, and as seen experimentally.
Learning properties of neuronal networks have been extensively studied using methods from statistical physics. However, most of these studies ignore a fundamental constraint in networks of real neurons: synapses are either excitatory or inhibitory, and cannot change sign during learning. Here, we characterize the optimal storage properties of an analog perceptron with excitatory synapses, as a simplified model for cerebellar Purkinje cells. The information storage capacity is shown to be optimized when inputs have a sparse binary distribution, while the weight distribution at maximal capacity consists of a large amount of zero-weight synapses. Both features are in agreement with electrophysiological data.
The mismatch negativity (MMN) is a differential brain response to violations of learned regularities. It has been used to demonstrate that the brain learns the statistical structure of its environment and predicts future sensory inputs. However, the algorithmic nature of these computations and the underlying neurobiological implementation remain controversial. This article introduces a mathematical framework with which competing ideas about the computational quantities indexed by MMN responses can be formalized and tested against single-trial EEG data. This framework was applied to five major theories of the MMN, comparing their ability to explain trial-by-trial changes in MMN amplitude. Three of these theories (predictive coding, model adjustment, and novelty detection) were formalized by linking the MMN to different manifestations of the same computational mechanism: approximate Bayesian inference according to the free-energy principle. We thereby propose a unifying view on three distinct theories of the MMN. The relative plausibility of each theory was assessed against empirical single-trial MMN amplitudes acquired from eight healthy volunteers in a roving oddball experiment. Models based on the free-energy principle provided more plausible explanations of trial-by-trial changes in MMN amplitude than models representing the two more traditional theories (change detection and adaptation). Our results suggest that the MMN reflects approximate Bayesian learning of sensory regularities, and that the MMN-generating process adjusts a probabilistic model of the environment according to prediction errors.
The ability to predict one's environment is crucial for adaptive and proactive behaviour. It requires learning a mental model that captures the environment's statistical regularities. A process of this sort is thought to be reflected by the mismatch negativity (MMN) potential, a non-invasive electrophysiological measure of the neural response to regularity violation by sensory stimuli. However, the exact computational processes reflected by the MMN remain a matter of debate. We developed a modelling framework in which competing hypotheses about these processes can be objectively compared by their ability to predict single-trial MMN amplitudes. We applied this framework to formalize five major MMN theories and propose a unifying view on three distinct theories which explain the MMN as a reflection of prediction errors, model adjustment, and novelty detection, respectively. We assessed our models of the five theories with EEG data from eight healthy volunteers. Our results are consistent with the idea that the MMN arises from prediction-error-driven adjustments of a probabilistic mental model of the environment.
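The core idea — that a learned probabilistic model makes regularity violations surprising, and that surprise shrinks as the regularity is relearned — can be sketched with a beta-Bernoulli learner on a binary tone sequence. This is a deliberately minimal illustration of trial-by-trial Bayesian surprise, not the single-trial models fitted to the EEG data.

```python
import math

def surprise_trace(seq, a0=1.0, b0=1.0):
    """Shannon surprise -log2 P(x_t | x_<t) under a beta-Bernoulli learner
    with prior Beta(a0, b0) over the probability of a deviant tone."""
    a, b = a0, b0
    out = []
    for x in seq:
        p1 = a / (a + b)                  # predictive probability of a '1'
        p = p1 if x == 1 else 1 - p1
        out.append(-math.log2(p))         # surprise of this observation
        a, b = a + x, b + (1 - x)         # Bayesian posterior update
    return out

# a toy oddball sequence: standards (0), one deviant (1), standards again
seq = [0] * 10 + [1] + [0] * 10
s = surprise_trace(seq)
```

In this sketch, the deviant at trial 11 produces the largest surprise, and surprise to the standards decays across trials as the model sharpens — the qualitative signature the free-energy-based MMN models formalize at the single-trial level.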
Recent advances in non-invasive neuroimaging have enabled the measurement of connections between distant regions in the living human brain, thus opening up a new field of research: Human connectomics. Different imaging modalities allow the mapping of structural connections (axonal fiber tracts) as well as functional connections (correlations in time series), and individual variations in these connections may be related to individual variations in behaviour and cognition. Connectivity analysis has already led to several important advances. Segregated brain regions may be identified by their unique patterns of connectivity, structural and functional connectivity may be compared to elucidate how dynamic interactions arise from the anatomical substrate, and the architecture of large-scale networks connecting sets of brain regions may be analyzed in detail. The combination of structural and functional connectivity has begun to reveal key patterns of human brain organization, such as the existence of distinct modules or sub-networks that become engaged in different cognitive tasks. Collectively, advances in human connectomics open up the possibility of studying how brain connections mediate regional brain function and thence behaviour.
Neuroimaging; Network; Neuroanatomy; Connectome; Diffusion Imaging; fMRI; Resting-State
The interest in saccadic IOR is fueled by the hypothesis that it serves a clear functional purpose in the selection of fixation points: the facilitation of foraging. In this study, we arrive at a different interpretation of saccadic IOR. First, we find that return saccades are performed much more often than expected from the statistical properties of saccades and saccade pairs. Second, we find that fixation durations before a saccade are modulated by the relative angle of the saccade, but return saccades show no sign of an additional temporal inhibition. Thus, we do not find temporal saccadic inhibition of return. Interestingly, we find that return locations are more salient, according to empirically measured saliency (locations that are fixated by many observers) as well as stimulus-dependent saliency (defined by image features), than regular fixation locations. These results, together with the finding that return saccades increase the match of individual trajectories with a grand total priority map, provide evidence that return saccades are part of a fixation-selection strategy that trades off exploration and exploitation.
Sometimes humans look at the same location twice. To appreciate the importance of this inconspicuous statement you have to consider that we move our eyes several billion (10⁹) times during our lives and that looking at something is a necessary condition to enable conscious visual awareness. Thus, understanding why and how we move our eyes provides a window into our mental life. Here we investigate one heavily discussed aspect of the human fixation-selection strategy: whether it inhibits returning to previously fixated locations. We analyze a large data set (more than 550,000 fixations from 235 subjects) and find that returning to previously fixated locations happens much more often than expected from the statistical properties of eye-movement trajectories. Furthermore, those locations that we return to are not ordinary – they are more salient than locations that we do not return to. Thus, the inconspicuous statement that we look at the same locations twice reveals an important aspect of our strategy to select fixation points: that we trade off exploring our environment against making sure that we have fully comprehended the relevant parts of our environment.
Diffusion tensor imaging (DTI) has been used for mapping the structural network of the human brain. The network can be constructed by choosing various brain regions as nodes and fiber tracts connecting those regions as links. The structural network generated from DTI data can be affected by noise in the scans and the choice of tractography algorithm. This study aimed to examine the effect of the number of seeds in tractography on the variance of structural networks. The variance of the network was characterized using an approach similar to the National Electrical Manufacturers Association (NEMA) standards for measurement of image noise. It was shown that the variance of the network is inversely related to the square root of seed density. Consequently, the number of seeds has a large impact on local characteristics and metrics of the network architecture. As the number of seeds increased, increased stability of structural network metrics was observed. However, more seeds can also lead to more spurious fibers and thus affect nodal degrees and edge weights, and proper thresholding may be necessary to create an appropriate weighted network. Because the variance of the network is also influenced by other imaging factors, further increase in the number of seeds has little effect in reducing the network variance. The selection of the seed number should be a balance between the network variance and computational effort.
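The reported inverse-square-root relation between network variance and seed density has a simple statistical intuition: if an edge weight is estimated as the fraction of seeded streamlines that reach a target region, its standard deviation scales as 1/√N for N seeds. The Monte Carlo sketch below illustrates this with a hypothetical connection probability; it is a statistical caricature, not a tractography simulation, and it ignores the other imaging noise sources that set the variance floor discussed above.

```python
import random
import statistics

def edge_weight_estimate(n_seeds, p_connect, rng):
    """Fraction of seeded streamlines that reach the target region,
    treating each streamline as an independent Bernoulli trial."""
    hits = sum(rng.random() < p_connect for _ in range(n_seeds))
    return hits / n_seeds

rng = random.Random(42)
sds = {}
for n in (100, 400, 1600):
    # repeat the estimate many times to measure its variability
    est = [edge_weight_estimate(n, 0.3, rng) for _ in range(2000)]
    sds[n] = statistics.pstdev(est)
```

Quadrupling the seed count roughly halves the standard deviation of the edge-weight estimate, which is why increasing seeds stabilizes network metrics only up to the point where seed-independent imaging noise dominates.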
Diffusion tensor imaging; Structural brain network; Tractography; Seeds; Variance
The information processing abilities of neural circuits arise from their synaptic connection patterns. Understanding the laws governing these connectivity patterns is essential for understanding brain function. The overall distribution of synaptic strengths of local excitatory connections in cortex and hippocampus is long-tailed, exhibiting a small number of synaptic connections of very large efficacy. At the same time, new synaptic connections are constantly being created and individual synaptic connection strengths show substantial fluctuations across time. It remains unclear through what mechanisms these properties of neural circuits arise and how they contribute to learning and memory. In this study we show that fundamental characteristics of excitatory synaptic connections in cortex and hippocampus can be explained as a consequence of self-organization in a recurrent network combining spike-timing-dependent plasticity (STDP), structural plasticity and different forms of homeostatic plasticity. In the network, associative synaptic plasticity in the form of STDP induces a rich-get-richer dynamics among synapses, while homeostatic mechanisms induce competition. Under distinctly different initial conditions, the ensuing self-organization produces long-tailed synaptic strength distributions matching experimental findings. We show that this self-organization can take place with a purely additive STDP mechanism and that multiplicative weight dynamics emerge as a consequence of network interactions. The observed patterns of fluctuation of synaptic strengths, including elimination and generation of synaptic connections and long-term persistence of strong connections, are consistent with the dynamics of dendritic spines found in rat hippocampus. Beyond this, the model predicts an approximately power-law scaling of the lifetimes of newly established synaptic connection strengths during development. 
Our results suggest that the combined action of multiple forms of neuronal plasticity plays an essential role in the formation and maintenance of cortical circuits.
The computations that brain circuits can perform depend on their wiring. While a wiring diagram is still out of reach for major brain structures such as the neocortex and hippocampus, data on the overall distribution of synaptic connection strengths and the temporal fluctuations of individual synapses have recently become available. Specifically, there exists a small population of very strong and stable synaptic connections, which may form the physiological substrate of life-long memories. This population coexists with a large and ever-changing population of much smaller and strongly fluctuating synaptic connections. So far it has remained unclear how these properties of networks in neocortex and hippocampus arise. Here we present a computational model that explains these fundamental properties of neural circuits as a consequence of network self-organization resulting from the combined action of different forms of neuronal plasticity. This self-organization is driven by a rich-get-richer effect induced by an associative synaptic learning mechanism which is kept in check by several homeostatic plasticity mechanisms stabilizing the network. The model highlights the role of self-organization in the formation of brain circuits and parsimoniously explains a range of recent findings about their fundamental properties.
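The rich-get-richer mechanism checked by homeostasis can be caricatured in a few lines: synapses receive additive potentiation with probability proportional to their current strength, a random synapse is depressed each step, pruned synapses are replaced by weak new ones (structural turnover), and normalization holds the total synaptic weight fixed. This is an illustrative abstraction with arbitrary parameters, not the spiking network model of the study.

```python
import random

def simulate(n=100, steps=20000, seed=3):
    """Toy self-organization: additive, associative potentiation (rich get
    richer) checked by depression, structural turnover, and normalization."""
    rng = random.Random(seed)
    w = [1.0] * n
    total = float(n)
    for _ in range(steps):
        # STDP-like potentiation: a strong synapse is more likely to drive
        # the postsynaptic cell and therefore to be potentiated again
        i = rng.choices(range(n), weights=w)[0]
        w[i] += 0.5
        # additive depression on a randomly chosen synapse
        j = rng.randrange(n)
        w[j] = max(0.0, w[j] - 0.3)
        # structural plasticity: a pruned synapse is replaced by a weak one
        if w[j] == 0.0:
            w[j] = 0.05
        # homeostatic normalization keeps the total synaptic weight fixed
        s = sum(w)
        w = [x * total / s for x in w]
    return sorted(w)

w = simulate()
top_share = sum(w[-len(w) // 10:]) / sum(w)   # weight held by strongest 10%
```

Even this stripped-down dynamics produces a long-tailed strength distribution in which a small minority of synapses carries a disproportionate share of the total weight, echoing the coexistence of a few strong, stable connections with many weak, fluctuating ones.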
Long-term memories are thought to depend upon the coordinated activation of a broad network of cortical and subcortical brain regions. However, the distributed nature of this representation has made it challenging to define the neural elements of the memory trace, and lesion and electrophysiological approaches provide only a narrow window into what is appreciated to be a much more global network. Here we used a global mapping approach to identify networks of brain regions activated following recall of long-term fear memories in mice. Analysis of Fos expression across 84 brain regions allowed us to identify regions that were co-active following memory recall. These analyses revealed that the functional organization of long-term fear memories depends on memory age and is altered in mutant mice that exhibit premature forgetting. Most importantly, these analyses indicate that long-term memory recall engages a network that has a distinct thalamic-hippocampal-cortical signature. This network is concurrently integrated and segregated and therefore has small-world properties, and contains hub-like regions in the prefrontal cortex and thalamus that may play privileged roles in memory expression.
Memory retrieval is thought to involve the coordinated activation of multiple regions of the brain, rather than localized activity in a specific region. In order to visualize networks of brain regions activated by recall of a fear memory in mice, we quantified expression of an activity-regulated gene (c-fos) that is induced by neural activity. This allowed us to identify collections of brain regions where Fos expression co-varies across mice, and presumably form components of a network that are co-active during recall of long-term fear memory. This analysis suggested that expression of a long-term fear memory is an emergent property of large scale neural network interactions. This network has a distinct thalamic-hippocampal-cortical signature and, like many real-world networks as well as other anatomical and functional brain networks, has small-world architecture with a subset of highly-connected hub nodes that may play more central roles in memory expression.
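The core of this co-activation analysis can be sketched in a few lines with made-up Fos counts (not the study's data): correlate regional activity across animals, threshold the correlations into a graph, and compute the clustering coefficient that enters a small-world analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_mice = 30, 12

# Hypothetical Fos counts: two modules of regions driven by shared
# latent factors, mimicking groups that covary across mice after recall.
f1, f2 = rng.normal(size=n_mice), rng.normal(size=n_mice)
fos = rng.normal(scale=0.5, size=(n_regions, n_mice))
fos[:15] += f1                # module 1 loads on factor 1
fos[15:] += f2                # module 2 loads on factor 2

corr = np.corrcoef(fos)       # region-by-region co-activation
adj = ((corr > 0.6) & ~np.eye(n_regions, dtype=bool)).astype(int)

# Clustering coefficient: fraction of a node's neighbor pairs that are
# themselves connected -- one ingredient of a small-world analysis.
deg = adj.sum(1)
closed = np.einsum('ij,jk,ki->i', adj, adj, adj)   # ordered closed triples
possible = deg * (deg - 1)
clustering = (closed / np.maximum(possible, 1)).mean()
```

Because the two modules are internally correlated, the thresholded graph is locally dense (high clustering); a full small-world analysis would additionally compare path lengths against degree-matched random graphs.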
Reciprocating exchange with other humans requires individuals to infer the intentions of their partners. Despite the importance of this ability in healthy cognition and its impact in disease, the dimensions employed and computations involved in such inferences are not clear. We used a computational theory-of-mind model to classify styles of interaction in 195 pairs of subjects playing a multi-round economic exchange game. This classification produces an estimate of a subject's depth-of-thought in the game (low, medium, high), a parameter that governs the richness of the models they build of their partner. Subjects in each category showed distinct neural correlates of learning signals associated with different depths-of-thought. The model also detected differences in depth-of-thought between two groups of healthy subjects: one playing patients with psychiatric disease and the other playing healthy controls. The neural response categories identified by this computational characterization of theory-of-mind may yield objective biomarkers useful in the identification and characterization of pathologies that perturb the capacity to model and interact with other humans.
Human social interactions are extraordinarily rich and complex. The ability to infer the intentions of others is essential for successful social interactions. Although most of our inferences about others are silent and subtle, traces of their effects can be found in the behavior we exhibit in various tasks, notably repeated economic exchange games. In this study, we use a computational model that uses an explicit form of other-modeling to classify styles of play in a large cohort of subjects engaging in such a game. We classify players according to their depth of recursive reasoning (depth-of-thought), finding three groups whose performance throughout the task differed according to several measures. Neuroimaging results based on the model classification show a differential neural response to depth-of-thought. The model also detected differences in depth-of-thought between two groups of healthy subjects: one playing patients with psychiatric disease and the other playing healthy controls. These results demonstrate the power of a quantitative approach to examining behavioral heterogeneity during social exchange, and may provide useful biomarkers to characterize mental disorders when the capacity to make inferences about others is impaired.
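To make the notion of classifying players by recursive depth-of-thought concrete, here is a deliberately simplified illustration — NOT the study's theory-of-mind model of the exchange game, but the classic p-beauty-contest game, where a level-0 player guesses around 50 and a level-k player best-responds to level-(k-1), predicting a guess near 50·p^k.

```python
import numpy as np

p = 2.0 / 3.0
levels = np.arange(4)                 # depths-of-thought 0..3
predicted = 50 * p ** levels          # model-predicted guess per depth

def classify(guess):
    """Assign an observed guess to the depth whose prediction is closest."""
    return int(np.argmin(np.abs(predicted - guess)))
```

The study's classification works analogously but on richer behavior: each depth-of-thought implies a different predicted pattern of play, and a subject is assigned to the depth whose predictions best fit their observed choices.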
Understanding the principles governing the dynamic coordination of functional brain networks remains an important unmet goal within neuroscience. How do distributed ensembles of neurons transiently coordinate their activity across a variety of spatial and temporal scales? While a complete mechanistic account of this process remains elusive, evidence suggests that neuronal oscillations may play a key role in this process, with different rhythms influencing both local computation and long-range communication. To investigate this question, we recorded multiple single unit and local field potential (LFP) activity from microelectrode arrays implanted bilaterally in macaque motor areas. Monkeys performed a delayed center-out reach task either manually using their natural arm (Manual Control, MC) or under direct neural control through a brain-machine interface (Brain Control, BC). In accord with prior work, we found that the spiking activity of individual neurons is coupled to multiple aspects of the ongoing motor beta rhythm (10–45 Hz) during both MC and BC, with neurons exhibiting a diversity of coupling preferences. However, here we show that for identified single neurons, this beta-to-rate mapping can change in a reversible and task-dependent way. For example, as beta power increases, a given neuron may increase spiking during MC but decrease spiking during BC, or exhibit a reversible shift in the preferred phase of firing. The within-task stability of coupling, combined with the reversible cross-task changes in coupling, suggest that task-dependent changes in the beta-to-rate mapping play a role in the transient functional reorganization of neural ensembles. 
We characterize the range of task-dependent changes in the mapping from beta amplitude, phase, and inter-hemispheric phase differences to the spike rates of an ensemble of simultaneously-recorded neurons, and discuss the potential implications that dynamic remapping from oscillatory activity to spike rate and timing may hold for models of computation and communication in distributed functional brain networks.
How is the functional role of a particular neuron established within an ensemble? The concept of a neural tuning curve – the mapping from input variables such as movement direction to output firing rate – has proven useful in investigating neural function. However, prior work shows that tuning curves are not fixed but may be remapped as a function of task demands – presumably via high-level mechanisms of cognitive control. How is this accomplished? Brain rhythms may play a causal role in this process, but the coupling of single cells to network activity remains poorly understood. We investigated the coupling between rhythmic beta activity and spiking as macaques performed two different tasks. This coupling can be described in terms of a function that maps oscillatory amplitude and phase to instantaneous spike rate. Similarly to direction tuning, this “internal” tuning curve also exhibits task-dependent changes. We characterize these changes across a large ensemble of simultaneously-recorded cells, and consider some of the neuro-computational implications presented by cross-level coupling between single cells and large-scale networks. In particular, relative to the slow time-scale of behavior, the observed beta-to-rate mappings may prove useful for modulating winner-take-all dynamics on intermediate time-scales and relative spike timing on fast time-scales.
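The "internal" tuning curve described above — a mapping from oscillatory phase to spike rate — can be estimated as sketched below. This is an idealized toy (a pure 20 Hz "beta" signal and a simulated phase-locked neuron with invented parameters), using an FFT-based analytic signal in place of a band-pass-plus-Hilbert pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, dur, f_beta = 1000, 20.0, 20.0        # Hz, s, Hz
t = np.arange(int(fs * dur)) / fs
lfp = np.cos(2 * np.pi * f_beta * t)      # idealized beta-band LFP

def analytic_signal(x):
    """Analytic signal via FFT (equivalent to a Hilbert transform)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0                # n is even here
    h[1:n // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h)

phase = np.angle(analytic_signal(lfp))

# Hypothetical phase-locked neuron: firing probability peaks at phase 0.
rate = 20.0 * (1 + 0.8 * np.cos(phase)) / fs
spikes = rng.random(len(t)) < rate

# Empirical phase-to-rate mapping: mean spike probability per phase bin.
bins = np.linspace(-np.pi, np.pi, 13)
idx = np.digitize(phase, bins) - 1
tuning = np.array([spikes[idx == k].mean() for k in range(12)])
centers = (bins[:-1] + bins[1:]) / 2
preferred = centers[np.argmax(tuning)]
```

A task-dependent remapping of the kind reported in the abstract would appear here as a shift of `preferred` (or an inversion of the amplitude dependence) for the same identified neuron across the two task conditions.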
The mammalian suprachiasmatic nuclei (SCN) contain thousands of neurons capable of generating near 24-h rhythms. When isolated from their network, SCN neurons exhibit a range of oscillatory phenotypes: sustained or damping oscillations, or arrhythmic patterns. The implications of this variability are unknown. Experimentally, we found that cells within SCN explants recover from pharmacologically-induced desynchrony by re-establishing rhythmicity and synchrony in waves, independent of their intrinsic circadian period. We therefore hypothesized that a cell's location within the network may also critically determine its resynchronization. To test this, we employed a deterministic, mechanistic model of circadian oscillators where we could independently control cell-intrinsic and network-connectivity parameters. We found that small changes in key parameters produced the full range of oscillatory phenotypes seen in biological cells, including similar distributions of period, amplitude and ability to cycle. The model also predicted that weaker oscillators could adjust their phase more readily than stronger oscillators. Using these model cells we explored potential biological consequences of their number and placement within the network. We found that the population synchronized to a higher degree when weak oscillators were at highly connected nodes within the network. A mathematically independent phase-amplitude model reproduced these findings. Thus, small differences in cell-intrinsic parameters contribute to large changes in the oscillatory ability of a cell, but the location of weak oscillators within the network also critically shapes the degree of synchronization for the population.
Circadian rhythms are daily, near 24-h oscillations in biological processes that nearly all organisms on Earth experience. Single cells contain a molecular clock that drives circadian rhythms in physiology and, when many cells synchronize in a population, daily behaviors. We hypothesized that small differences in intrinsic cellular properties allow for a diversity of circadian periods and amplitudes across cells. We observed circadian cells and their synchrony before, during, and after limiting communication between cells and then compared their intrinsic properties to their resynchronization behavior. We found that arrhythmic, weakly oscillating, and self-sustained circadian cells rejoined the rhythmic population independent of their cell-intrinsic oscillations. Using a mechanistic computational model of circadian cells, we found that resynchronization could be enhanced by including more weak oscillators or by placing weak oscillators at more connected nodes in the network. We conclude that intrinsic properties (e.g. oscillator weakness and responsiveness) and network structure (e.g. positions of weak oscillators) can independently buffer tissue rhythms from perturbations. This reveals how cellular and network properties impose rules on systems of circadian cells that must achieve synchrony from a desynchronized state, for example during perinatal development or when forced to overcome societal constraints on sleep-wake behavior, such as working early or late shifts.
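A drastically simplified phase-only sketch of the network idea (not the paper's mechanistic molecular model): circadian cells as coupled phase oscillators on a hub-and-spoke graph, with "weak" (more responsive) oscillators modeled as having a larger coupling gain and placed at the hubs. Synchrony is summarized by the Kuramoto order parameter R.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40
omega = 2 * np.pi / 24 * (1 + 0.05 * rng.normal(size=n))  # ~24 h periods

# Hub-and-spoke coupling: nodes 0..3 are hubs connected to everyone;
# the remaining nodes connect only to the hubs.
adj = np.zeros((n, n))
adj[:4, :] = adj[:, :4] = 1
np.fill_diagonal(adj, 0)

# "Weak" oscillators modeled here as more responsive to input (larger
# coupling gain) -- a simplification of the paper's mechanism -- and
# placed at the highly connected hub nodes.
gain = np.full(n, 0.05)
gain[:4] = 0.2

theta = rng.uniform(0, 2 * np.pi, n)      # fully desynchronized start
dt = 0.1
R0 = np.abs(np.exp(1j * theta).mean())    # initial order parameter
for _ in range(5000):
    coupling = (adj * np.sin(theta[None, :] - theta[:, None])).sum(1)
    theta += dt * (omega + gain * coupling / adj.sum(1))
R1 = np.abs(np.exp(1j * theta).mean())    # order parameter after coupling
```

Starting from random phases (R near 0 for large n), the coupled network relaxes to a synchronized state (R near 1); swapping the responsive oscillators from hubs to periphery is the kind of manipulation the paper uses to show that network position, not just intrinsic strength, shapes resynchronization.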
According to a prominent view of sensorimotor processing in primates, selection and specification of possible actions are not sequential operations. Rather, a decision for an action emerges from competition between different movement plans, which are specified and selected in parallel. For action choices which are based on ambiguous sensory input, the frontoparietal sensorimotor areas are considered part of the common underlying neural substrate for selection and specification of action. These areas have been shown capable of encoding alternative spatial motor goals in parallel during movement planning, and show signatures of competitive value-based selection among these goals. Since the same network is also involved in learning sensorimotor associations, competitive action selection (decision making) should not only be driven by the sensory evidence and expected reward in favor of either action, but also by the subject's learning history of different sensorimotor associations. Previous computational models of competitive neural decision making used predefined associations between sensory input and corresponding motor output. Such hard-wiring does not allow modeling of how decisions are influenced by sensorimotor learning or by changing reward contingencies. We present a dynamic neural field model which learns arbitrary sensorimotor associations with a reward-driven Hebbian learning algorithm. We show that the model accurately simulates the dynamics of action selection with different reward contingencies, as observed in monkey cortical recordings, and that it correctly predicted the pattern of choice errors in a control experiment. With our adaptive model we demonstrate how network plasticity, which is required for association learning and adaptation to new reward contingencies, can influence choice behavior. 
The field model provides an integrated and dynamic account for the operations of sensorimotor integration, working memory and action selection required for decision making in ambiguous choice situations.
Decision making requires the selection between alternative actions. It has been suggested that action selection is not separate from motor preparation of the according actions, but rather that the selection emerges from the competition between different movement plans. We expand on this idea, and ask how action selection mechanisms interact with the learning of new action choices. We present a neurodynamic model that provides an integrated account of action selection and the learning of sensorimotor associations. The model explains recent electrophysiological findings from monkeys' sensorimotor cortex, and correctly predicted a newly described characteristic pattern of their choice errors. Based on the model, we present a theory of how geometrical sensorimotor mapping rules can be learned by association without the need for an explicit representation of the transformation rule, and how the learning history of these associations can have a direct influence on later decision making.
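Stripped of the neural-field dynamics, the reward-driven Hebbian idea above can be sketched as follows: an arbitrary stimulus-to-action rule is learned by strengthening whichever stimulus-action pairing was just rewarded. The learning rate, rewards, and exploration scheme are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(4)
n_stim, n_act = 4, 4
target = np.array([2, 0, 3, 1])       # arbitrary stimulus -> action rule
W = np.zeros((n_act, n_stim))         # learned association weights

eta, eps = 0.5, 0.1
for trial in range(2000):
    s = rng.integers(n_stim)
    x = np.eye(n_stim)[s]             # one-hot stimulus input
    # Epsilon-greedy action selection over the learned associations.
    if rng.random() < eps:
        a = rng.integers(n_act)
    else:
        a = int(np.argmax(W @ x))
    reward = 1.0 if a == target[s] else -0.1
    y = np.eye(n_act)[a]
    # Reward-modulated Hebbian update: strengthen the chosen
    # stimulus-action pairing when rewarded, weaken it otherwise.
    W += eta * reward * np.outer(y, x)

learned = W.argmax(0)                 # best action per stimulus
```

After training, the greedy policy reproduces the arbitrary mapping; changing `target` mid-run would model a reversal of reward contingencies, with the adaptation speed governed by `eta` and the exploration rate.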
In the primary visual cortex of primates and carnivores, functional architecture can be characterized by maps of various stimulus features such as orientation preference (OP), ocular dominance (OD), and spatial frequency. It is a long-standing question in theoretical neuroscience whether the observed maps should be interpreted as optima of a specific energy functional that summarizes the design principles of cortical functional architecture. A rigorous evaluation of this optimization hypothesis is particularly demanded by recent evidence that the functional architecture of orientation columns precisely follows species invariant quantitative laws. Because it would be desirable to infer the form of such an optimization principle from the biological data, the optimization approach to explain cortical functional architecture raises the following questions: i) What are the genuine ground states of candidate energy functionals and how can they be calculated with precision and rigor? ii) How do differences in candidate optimization principles impact on the predicted map structure and conversely what can be learned about a hypothetical underlying optimization principle from observations on map structure? iii) Is there a way to analyze the coordinated organization of cortical maps predicted by optimization principles in general? To answer these questions we developed a general dynamical systems approach to the combined optimization of visual cortical maps of OP and another scalar feature such as OD or spatial frequency preference. From basic symmetry assumptions we obtain a comprehensive phenomenological classification of possible inter-map coupling energies and examine representative examples. We show that each individual coupling energy leads to a different class of OP solutions with different correlations among the maps such that inferences about the optimization principle from map layout appear viable. 
We systematically assess whether quantitative laws resembling experimental observations can result from the coordinated optimization of orientation columns with other feature maps.
Neurons in the visual cortex form spatial representations or maps of several stimulus features. How are different spatial representations of visual information coordinated in the brain? In this paper, we study the hypothesis that the coordinated organization of several visual cortical maps can be explained by joint optimization. Previous attempts to explain the spatial layout of functional maps in the visual cortex proposed specific optimization principles ad hoc. Here, we systematically analyze how optimization principles in a general class of models impact on the spatial layout of visual cortical maps. For each considered optimization principle we identify the corresponding optima and analyze their spatial layout. This directly demonstrates that by studying map layout and geometric inter-map correlations one can substantially constrain the underlying optimization principle. In particular, we study whether such optimization principles can lead to spatially complex patterns and to geometric correlations among cortical maps as observed in imaging experiments.
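A cartoon of the optimization setting in question (i): take two scalar "maps" on a grid, one representative product-type inter-map coupling energy, and verify that gradient descent monotonically lowers the joint energy. The real analysis uses a complex-valued orientation field and pattern-forming dynamics near threshold; this sketch only illustrates the energy-descent logic.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 32
o = rng.normal(size=(N, N))   # toy scalar map (e.g. ocular dominance)
z = rng.normal(size=(N, N))   # toy scalar map standing in for orientation

def grad_sq(f):
    """Discrete |gradient|^2 with periodic boundaries."""
    dx = np.roll(f, -1, 0) - f
    dy = np.roll(f, -1, 1) - f
    return dx ** 2 + dy ** 2

def energy(o, z, alpha=0.5):
    # Smoothness term for each map plus a product-type inter-map
    # coupling term (one representative of the general class of
    # coupling energies classified in the paper).
    return (grad_sq(o) + grad_sq(z) + alpha * o ** 2 * z ** 2).sum()

def descend(o, z, steps=200, lr=0.01, alpha=0.5):
    """Gradient descent on the joint energy."""
    for _ in range(steps):
        lap_o = sum(np.roll(o, s, ax) for s in (-1, 1) for ax in (0, 1)) - 4 * o
        lap_z = sum(np.roll(z, s, ax) for s in (-1, 1) for ax in (0, 1)) - 4 * z
        o = o + lr * (2 * lap_o - 2 * alpha * o * z ** 2)
        z = z + lr * (2 * lap_z - 2 * alpha * z * o ** 2)
    return o, z

E0 = energy(o, z)
o2, z2 = descend(o, z)
E1 = energy(o2, z2)
```

Different choices of the coupling term change which configurations of the two maps are low-energy — the sense in which, as the abstract argues, the layout and inter-map correlations of the optima carry a signature of the underlying optimization principle.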
In the juvenile brain, the synaptic architecture of the visual cortex remains in a state of flux for months after the natural onset of vision and the initial emergence of feature selectivity in visual cortical neurons. It is an attractive hypothesis that visual cortical architecture is shaped during this extended period of juvenile plasticity by the coordinated optimization of multiple visual cortical maps such as orientation preference (OP), ocular dominance (OD), spatial frequency, or direction preference. In part (I) of this study we introduced a class of analytically tractable coordinated optimization models and solved representative examples, in which a spatially complex organization of the OP map is induced by interactions between the maps. We found that these solutions near symmetry breaking threshold predict a highly ordered map layout. Here we examine the time course of the convergence towards attractor states and optima of these models. In particular, we determine the timescales on which map optimization takes place and how these timescales can be compared to those of visual cortical development and plasticity. We also assess whether our models exhibit biologically more realistic, spatially irregular solutions at a finite distance from threshold, when the spatial periodicities of the two maps are detuned and when considering more than 2 feature dimensions. We show that, although maps typically undergo substantial rearrangement, no other solutions than pinwheel crystals and stripes dominate in the emerging layouts. Pinwheel crystallization takes place on a rather short timescale and can also occur for detuned wavelengths of different maps. Our numerical results thus support the view that neither minimal energy states nor intermediate transient states of our coordinated optimization models successfully explain the architecture of the visual cortex. 
We discuss several alternative scenarios that may improve the agreement between model solutions and biological observations.
Neurons in the visual cortex of carnivores, primates and their close relatives form spatial representations or maps of multiple stimulus features. In part (I) of this study we theoretically predicted maps that are optima of a variety of optimization principles. When analyzing the joint optimization of two interacting maps we showed that for different optimization principles the resulting optima show a stereotyped, spatially perfectly periodic layout. Experimental maps, however, are much more irregular. In particular, in the case of orientation columns it was found that different species show apparently species-invariant statistics of point defects, so-called pinwheels. In this paper, we numerically investigate whether the spatial features of the stereotyped optima described in part (I) are expressed on biologically relevant timescales and whether other, spatially irregular, long-lived states emerge that better reproduce the experimentally observed statistical properties of orientation maps. Moreover, we explore whether the coordinated optimization of more than two maps can lead to spatially irregular optima.
While early and higher visual areas along the ventral visual pathway in the inferotemporal cortex are critical for the recognition of individual objects, the neural representation of human perception of complex global visual scenes remains under debate. Stroke patients with a selective deficit in the perception of a complex global Gestalt with intact recognition of individual objects – a deficit termed simultanagnosia – have greatly helped to address this question. Interestingly, simultanagnosia typically results from bilateral lesions of the temporo-parietal junction (TPJ). The present study aimed to verify the relevance of this area for human global Gestalt perception. We applied continuous theta-burst TMS either unilaterally (left or right) or bilaterally (simultaneously over both TPJs). Healthy subjects were presented with hierarchically organized visual stimuli that allowed parametric degradation of the object at the global level. Identification of the global Gestalt was significantly modulated only for the bilateral TPJ stimulation condition. Our results strengthen the view that global Gestalt perception in the human brain involves TPJ and is co-dependent on both hemispheres.
The response of a neuron to repeated somatic fluctuating current injections in vitro can elicit a reliable and precisely timed sequence of action potentials. The set of responses obtained across trials can also be interpreted as the response of an ensemble of similar neurons receiving the same input, with the precise spike times representing synchronous volleys that would be effective in driving postsynaptic neurons. To study the reproducibility of the output spike times for different conditions that might occur in vivo, we somatically injected aperiodic current waveforms into cortical neurons in vitro and systematically varied the amplitude and DC offset of the fluctuations. As the amplitude of the fluctuations was increased, reliability increased and the spike times remained stable over a wide range of values. However, at specific values called bifurcation points, large shifts in the spike times were obtained in response to small changes in the stimulus, resulting in multiple spike patterns that were revealed using an unsupervised classification method. Increasing the DC offset, which mimicked an overall increase in network background activity, also revealed bifurcation points and increased the reliability. Furthermore, the spike times shifted earlier with increasing offset. Although the reliability was reduced at bifurcation points, a theoretical analysis showed that the information about the stimulus time course was increased because each of the spike time patterns contained different information about the input.
Neurons respond with precise spike times to fluctuating current injections, leading to peaks in ensemble firing rate. The structure of these peaks, or spike events, provides a compact description of the neural response. We explore the consequences of precise spike times for neural coding in vivo, by investigating the spike event structure of virtual cell assemblies constructed in vitro. We incorporate diversity of electrophysiological response properties by varying the amplitude and offset of a common stimulus waveform injected in vitro. Across multiple trials, spike trains produce precise events in response to upswings in the stimulus, suggesting that such upswings in in vivo assemblies may effectively drive postsynaptic targets and transmit information about the stimulus. In simulations and in vitro, we identified bifurcations in the event structure as the amplitude or the offset was varied. Near bifurcation points, the neural response showed heightened sensitivity to intrinsic neural noise, leading to multiple competing response patterns, and enriching the representation of stimulus features by the ensemble output. The presence of bifurcations could therefore allow an ideal observer to extract more information about the stimulus. Our results suggest that event structure bifurcations may provide a mechanism for boosting information transmission in vivo.
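The central manipulation — varying the amplitude of a frozen fluctuating input and measuring spike-time reliability across trials — can be sketched with a generic leaky integrate-and-fire neuron. All parameters are invented; this is not the in vitro protocol itself, only the logic of the reliability measurement.

```python
import numpy as np

rng = np.random.default_rng(6)
dt, T = 0.001, 2.0
n_steps = int(T / dt)
common = rng.normal(size=n_steps)       # frozen fluctuating stimulus

def lif_trials(amp, offset, n_trials=20):
    """LIF responses to offset + amp * common, plus small per-trial
    intrinsic noise. Returns binned (0/1) spike trains, one per trial."""
    tau, v_th, v_reset = 0.02, 1.0, 0.0
    out = np.zeros((n_trials, n_steps))
    for k in range(n_trials):
        v = 0.0
        noise = 0.2 * rng.normal(size=n_steps)
        for i in range(n_steps):
            I = offset + amp * common[i] + noise[i]
            v += dt / tau * (-v + I)
            if v >= v_th:
                out[k, i] = 1.0
                v = v_reset
    return out

def reliability(trials, width=5):
    """Mean pairwise correlation of smoothed spike trains across trials."""
    kern = np.ones(width) / width
    sm = np.array([np.convolve(tr, kern, mode='same') for tr in trials])
    sm -= sm.mean(1, keepdims=True)
    norm = np.sqrt((sm ** 2).sum(1))
    c = (sm @ sm.T) / np.outer(norm, norm)
    return c[np.triu_indices(len(trials), 1)].mean()

r_low = reliability(lif_trials(amp=0.5, offset=1.2))
r_high = reliability(lif_trials(amp=3.0, offset=1.2))
```

Larger fluctuation amplitude pins the spike times to upswings of the common stimulus, raising the trial-to-trial correlation; sweeping `amp` or `offset` finely is how one would look for the bifurcation points at which the spike pattern reorganizes.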
The visual world is complex and continuously changing. Yet, our brain transforms patterns of light falling on our retina into a coherent percept within a few hundred milliseconds. Possibly, low-level neural responses already carry substantial information to facilitate rapid characterization of the visual input. Here, we computationally estimated low-level contrast responses to computer-generated naturalistic images, and tested whether spatial pooling of these responses could predict image similarity at the neural and behavioral level. Using EEG, we show that statistics derived from pooled responses explain a large amount of variance between single-image event-related potentials (ERPs) in individual subjects. Dissimilarity analysis on multi-electrode ERPs demonstrated that large differences between images in pooled response statistics are predictive of more dissimilar patterns of evoked activity, whereas images with little difference in statistics give rise to highly similar evoked activity patterns. In a separate behavioral experiment, images with large differences in statistics were judged as different categories, whereas images with small differences were confused. These findings suggest that statistics derived from low-level contrast responses can be extracted in early visual processing and can be relevant for rapid judgment of visual similarity. We compared our results with two other well-known contrast statistics: Fourier power spectra and higher-order properties of contrast distributions (skewness and kurtosis). Interestingly, whereas these statistics allow for accurate image categorization, they do not predict ERP response patterns or behavioral categorization confusions. These converging computational, neural and behavioral results suggest that statistics of pooled contrast responses contain information that corresponds with perceived visual similarity in a rapid, low-level categorization task.
Humans excel in rapid and accurate processing of visual scenes. However, it is unclear which computations allow the visual system to convert light hitting the retina into a coherent representation of visual input in a rapid and efficient way. Here we used simple, computer-generated image categories with low-level structure similar to that of natural scenes to test whether a model of early integration of low-level information can predict perceived category similarity. Specifically, we show that summarized (spatially pooled) responses of model neurons covering the entire visual field (the population response) to low-level properties of visual input (contrasts) can already be informative about differences in early visual evoked activity as well as behavioral confusions of these categories. These results suggest that low-level population responses can carry relevant information to estimate similarity of controlled images, and put forward the exciting hypothesis that the visual system may exploit these responses to rapidly process real natural scenes. We propose that the spatial pooling that allows for the extraction of this information may be a plausible first step in extracting scene gist to form a rapid impression of the visual input.
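The spatial-pooling step can be sketched directly: compute local RMS contrast in patches tiling the image, then summarize the pooled distribution by its mean and spread. The two summary numbers here are only stand-ins for the paper's contrast-energy and spatial-coherence parameters, and the two "categories" are toy images.

```python
import numpy as np

rng = np.random.default_rng(7)

def pooled_contrast_stats(img, patch=8):
    """Pool local RMS contrast over the image and summarize the pooled
    distribution by its mean and spread."""
    h, w = (s // patch * patch for s in img.shape)
    blocks = img[:h, :w].reshape(h // patch, patch, w // patch, patch)
    local = blocks.std(axis=(1, 3))        # RMS contrast per patch
    return local.mean(), local.std()

# Two toy "categories": homogeneous texture vs a sparse high-contrast edge.
texture = rng.normal(0.5, 0.15, size=(64, 64))
sparse = np.full((64, 64), 0.5)
sparse[:, 30:34] = 1.0

m_tex, s_tex = pooled_contrast_stats(texture)
m_sp, s_sp = pooled_contrast_stats(sparse)
```

The texture yields high mean pooled contrast that is uniform across space, while the sparse-edge image yields low mean contrast with a large relative spread — exactly the kind of difference in pooled statistics that, per the abstract, tracks both ERP dissimilarity and behavioral confusions.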
Neuroimaging research has largely focused on the identification of associations between brain activation and specific mental functions. Here we show that data mining techniques applied to a large database of neuroimaging results can be used to identify the conceptual structure of mental functions and their mapping to brain systems. This analysis confirms many current ideas regarding the neural organization of cognition, but also provides some new insights into the roles of particular brain systems in mental function. We further show that the same methods can be used to identify the relations between mental disorders. Finally, we show that these two approaches can be combined to empirically identify novel relations between mental disorders and mental functions via their common involvement of particular brain networks. This approach has the potential to discover novel endophenotypes for neuropsychiatric disorders and to better characterize the structure of these disorders and the relations between them.
One of the major challenges of neuroscience research is to integrate the results of the large number of published research studies in order to better understand how psychological functions are mapped onto brain systems. In this research, we take advantage of a large database of neuroimaging studies, along with text mining methods, to extract information about the topics that are found in the brain imaging literature and their mapping onto reported brain activation data. We also show that this method can be used to identify new relations between psychological functions and mental disorders, through their shared brain activity patterns. This work provides a new way to discover the underlying structure that relates brain function and mental processes.
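As a generic stand-in for the topic-extraction step (the study's actual method differs), here is plain non-negative matrix factorization with multiplicative updates applied to a hypothetical term-by-study count matrix: the factors W and H recover latent "topics" and their per-study loadings, which could then be related to activation coordinates.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical term-by-study counts with two latent "topics"
# (e.g. memory-related vs emotion-related vocabulary).
topics = np.array([[5, 4, 3, 0, 0, 0],        # topic 1 loads on terms 0-2
                   [0, 0, 0, 5, 4, 3]], float)  # topic 2 on terms 3-5
mix = rng.random((2, 40))                     # each study mixes the topics
X = topics.T @ mix + 0.01 * rng.random((6, 40))

# NMF via Lee-Seung multiplicative updates: X ~ W @ H with W, H >= 0.
k = 2
W = rng.random((6, k)) + 0.1
H = rng.random((k, 40)) + 0.1
for _ in range(300):
    H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-9)

err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

In the combined analysis the abstract describes, each column of H would be paired with that study's reported activations, so that topics (psychological functions or disorder terms) can be linked through the brain networks they jointly engage.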
Place cells in the hippocampus of higher mammals are critical for spatial navigation. Recent modeling clarifies how this may be achieved through the inputs that grid cells in the medial entorhinal cortex (MEC) provide to place cells. Grid cells exhibit hexagonal grid firing patterns across space in multiple spatial scales along the MEC dorsoventral axis. Signals from grid cells of multiple scales combine adaptively to activate place cells that represent much larger spaces than grid cells. But how do grid cells learn to fire at multiple positions that form a hexagonal grid, and with spatial scales that increase along the dorsoventral axis? In vitro recordings of medial entorhinal layer II stellate cells have revealed subthreshold membrane potential oscillations (MPOs) whose temporal periods, and time constants of excitatory postsynaptic potentials (EPSPs), both increase along this axis. Slower (faster) subthreshold MPOs and slower (faster) EPSPs correlate with larger (smaller) grid spacings and field widths. A self-organizing map neural model explains how the anatomical gradient of grid spatial scales can be learned by cells that respond more slowly along the gradient to their inputs from stripe cells of multiple scales, which perform linear velocity path integration. The model cells also exhibit MPO frequencies that covary with their response rates. The gradient in intrinsic rhythmicity is thus not compelling evidence for oscillatory interference as a mechanism of grid cell firing. A response rate gradient combined with input stripe cells that have normalized receptive fields can reproduce all known spatial and temporal properties of grid cells along the MEC dorsoventral axis. This spatial gradient mechanism is homologous to a gradient mechanism for temporal learning in the lateral entorhinal cortex and its hippocampal projections. 
Spatial and temporal representations may hereby arise from homologous mechanisms, thereby embodying a mechanistic “neural relativity” that may clarify how episodic memories are learned.
Spatial navigation is a critical competence of all higher mammals, and place cells in the hippocampus represent the large spaces in which they navigate. Recent modeling clarifies how this may occur via interactions between grid cells in the medial entorhinal cortex (MEC) and place cells. Grid cells exhibit hexagonal grid firing patterns across space and come in multiple spatial scales that increase along the dorsoventral axis of MEC. Signals from multiple scales of grid cells combine to activate place cells that represent much larger spaces than grid cells. This article shows how a gradient of cell response rates along the dorsoventral axis enables the learning of grid cells with the observed gradient of spatial scales as an animal navigates realistic trajectories. The observed gradient of grid cell membrane potential oscillation frequencies is shown to be a direct result of the gradient of response rates. This gradient mechanism for spatial learning is homologous to a gradient mechanism for temporal learning in the lateral entorhinal cortex and its hippocampal projections, thereby clarifying why both spatial and temporal representations are found in the entorhinal-hippocampal system.
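The stripe-cell-to-grid-cell combination at the heart of the model can be illustrated with a textbook construction (an illustration only, not the learning model itself): summing three stripe-like periodic inputs whose preferred directions differ by 60 degrees yields a hexagonal, grid-cell-like firing map.

```python
import numpy as np

spacing = 1.0
k = 2 * np.pi / spacing
angles = np.deg2rad([0.0, 60.0, 120.0])
ks = k * np.stack([np.cos(angles), np.sin(angles)], axis=1)  # wave vectors

def rate_at(p):
    """Rectified sum of the three stripe inputs at position p."""
    return max(float(sum(np.cos(np.dot(p, kv)) for kv in ks)), 0.0)

xs = np.linspace(-2, 2, 201)
grid = np.array([[rate_at(np.array([x, y])) for x in xs] for y in xs])

# The wave-vector set is invariant under a 60-degree rotation (up to
# sign, which cosine ignores), so the rate map has hexagonal symmetry.
c, s = np.cos(np.pi / 3), np.sin(np.pi / 3)
p = np.array([0.37, 0.81])
p_rot = np.array([c * p[0] - s * p[1], s * p[0] + c * p[1]])
```

In the model, the stripe spacings that a given cell combines depend on how quickly it responds to its inputs, so a dorsoventral gradient of response rates translates directly into the observed gradient of grid spacings.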