As a person learns a new skill, distinct synapses, brain regions, and circuits are engaged and change over time. In this paper, we develop methods to examine patterns of correlated activity across a large set of brain regions. Our goal is to identify properties that enable robust learning of a motor skill. We measure brain activity during motor sequencing and characterize network properties based on coherent activity between brain regions. Using recently developed algorithms to detect time-evolving communities, we find that the complex reconfiguration patterns of the brain's putative functional modules during learning can be described parsimoniously by the combination of a relatively stiff temporal core, composed primarily of sensorimotor and visual regions whose connectivity changes little in time, and a flexible temporal periphery, composed primarily of multimodal association regions whose connectivity changes frequently. The separation between temporal core and periphery changes over the course of training and, importantly, is a good predictor of individual differences in learning success. The core of dynamically stiff regions exhibits dense connectivity, which is consistent with notions of core-periphery organization established previously in social networks. Our results demonstrate that core-periphery organization provides an insightful way to understand how putative functional modules are linked. This, in turn, enables the prediction of fundamental human capacities, including the production of complex goal-directed behavior.
When someone learns a new skill, their brain dynamically alters individual synapses, regional activity, and larger-scale circuits. In this paper, we capture some of these dynamics by measuring and characterizing patterns of coherent brain activity during the learning of a motor skill. We extract time-evolving communities from these patterns and find that a temporal core that is composed primarily of primary sensorimotor and visual regions reconfigures little over time, whereas a periphery that is composed primarily of multimodal association regions reconfigures frequently. The core consists of densely connected nodes, and the periphery consists of sparsely connected nodes. Individual participants with a larger separation between core and periphery learn better in subsequent training sessions than individuals with a smaller separation. Conceptually, core-periphery organization provides a framework in which to understand how putative functional modules are linked. This, in turn, enables the prediction of fundamental human capacities, including the production of complex goal-directed behavior.
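The distinction between a "stiff" core and a "flexible" periphery can be made concrete with a flexibility score: the fraction of time windows in which a region changes its community assignment. The sketch below is illustrative only; the study's actual analysis relies on multilayer community-detection algorithms, which are not reproduced here.

```python
import numpy as np

def flexibility(assignments):
    """Fraction of consecutive time windows in which each node changes
    its community label (0 = stiff, core-like; 1 = maximally flexible).
    assignments: array of shape (n_windows, n_nodes)."""
    a = np.asarray(assignments)
    changes = a[1:] != a[:-1]        # label switches between adjacent windows
    return changes.mean(axis=0)

# Toy example: node 0 never switches community, node 1 switches every window
labels = np.array([[1, 1],
                   [1, 2],
                   [1, 1],
                   [1, 2]])
print(flexibility(labels))   # prints [0. 1.]
```

Regions with low flexibility would be assigned to the temporal core, and a core-periphery separation score could then be compared against learning outcomes.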
The transcriptome of the brain changes during development, reflecting processes that determine functional specialization of brain regions. We analyzed gene expression, measured using in situ hybridization across the full developing mouse brain, to quantify functional specialization of brain regions. Surprisingly, we found that during the time that the brain becomes anatomically regionalized in early development, transcriptional specialization actually decreases, reaching a low, “neurotypic” point around birth. This decrease in specialization is brain-wide and mainly due to biological processes involved in constructing brain circuitry. Regional specialization rises again during post-natal development. This effect is largely due to specialization of plasticity and neural activity processes. Post-natal specialization is particularly significant in the cerebellum, whose expression signature becomes increasingly different from that of other brain regions. When comparing mouse and human expression patterns, the cerebellar post-natal specialization is also observed in human, but the regionalization of expression in the human thalamus and cortex follows a strikingly different profile than in mouse.
Brain development is one of the most complex biological processes, orchestrated by the precisely timed and coordinated expression of thousands of genes. As the brain develops, specific regions are formed, their structure and function reflected in unique sets of expressed genes. Regional gene expression profiles determine the basic properties of neural systems: controlling how the brain develops from embryo to adult, maintaining the well-being of the system, adapting the brain following experience, and carrying out specific regional functions. Here we investigate the temporal dynamics of changes in regional gene expression patterns throughout mouse brain development. We identify a neurotypic phase around the time of birth, in which patterns of gene expression become more homogeneous across the brain, creating an ‘hourglass’-shaped expression divergence profile. We characterize the biological processes, genes and brain regions responsible for this pattern, and also compare mouse neurodevelopmental expression patterns with parallel data from human, finding striking similarities and differences between the two species.
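The divergence profile described above can be quantified as the mean pairwise distance between regional expression profiles at each developmental time point; the 'hourglass' then appears as a dip in this curve around birth. This is a minimal sketch, with Euclidean distance standing in for whatever divergence measure the authors actually used.

```python
import numpy as np

def regional_divergence(expr):
    """Mean pairwise Euclidean distance between regional expression
    profiles, giving one divergence value per developmental time point.
    expr: array of shape (n_timepoints, n_regions, n_genes)."""
    out = []
    for t in range(expr.shape[0]):
        x = expr[t]                                    # (n_regions, n_genes)
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        iu = np.triu_indices(x.shape[0], k=1)          # unique region pairs
        out.append(d[iu].mean())
    return np.array(out)
```

Applied to a developmental time series, a neurotypic phase would show up as a minimum of this curve near the time point corresponding to birth.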
Humans interact with the environment through sensory and motor acts. Some of these interactions require synchronization among two or more individuals. Multiple-trial designs, which we have used in past work to study interbrain synchronization in the course of joint action, constrain the range of observable interactions. To overcome the limitations of multiple-trial designs, we conducted single-trial analyses of electroencephalography (EEG) signals recorded from eight pairs of guitarists engaged in musical improvisation. We identified hyper-brain networks based on a complex interplay of different frequencies. The intra-brain connections primarily involved higher frequencies (e.g., beta), whereas inter-brain connections primarily operated at lower frequencies (e.g., delta and theta). The topology of hyper-brain networks was frequency-dependent, with a tendency to become more regular at higher frequencies. We also found hyper-brain modules that included nodes (i.e., EEG electrodes) from both brains. Some of the observed network properties were related to musical roles during improvisation. Our findings replicate and extend earlier work and point to mechanisms that enable individuals to engage in temporally coordinated joint action.
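Inter-brain coupling of the kind described above is commonly quantified with the phase-locking value (PLV) between pairs of electrodes. The sketch below is a generic PLV computation, not the authors' hyper-brain network pipeline, and the signals are synthetic stand-ins for EEG channels.

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value between two signals via the analytic signal:
    1 = perfectly phase-locked, near 0 = random relative phase."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

t = np.arange(0, 2, 1 / 250.0)               # 2 s at 250 Hz
theta_a = np.sin(2 * np.pi * 6 * t)          # 6 Hz rhythm in "brain A"
theta_b = np.sin(2 * np.pi * 6 * t + 0.8)    # same rhythm, constant lag
beta = np.sin(2 * np.pi * 21 * t)            # unrelated 21 Hz rhythm
```

Computing PLV for all electrode pairs, within and across brains and per frequency band, yields the weighted hyper-brain graphs on which modules and topology can then be analyzed.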
Recent research has demonstrated the feasibility of combining functional near-infrared spectroscopy (fNIRS) and graph theory approaches to explore the topological attributes of human brain networks. However, the test-retest (TRT) reliability of the application of graph metrics to these networks remains to be elucidated. Here, we used resting-state fNIRS and a graph-theoretical approach to systematically address TRT reliability as it applies to various features of human brain networks, including functional connectivity, global network metrics and regional nodal centrality metrics. Eighteen subjects participated in two resting-state fNIRS scan sessions held ∼20 min apart. Functional brain networks were constructed for each subject by computing temporal correlations on three types of hemoglobin concentration information (HbO, HbR, and HbT). This was followed by a graph-theoretical analysis, and then an intraclass correlation coefficient (ICC) was further applied to quantify the TRT reliability of each network metric. We observed that a large proportion of resting-state functional connections (∼90%) exhibited good reliability (0.6 < ICC < 0.74). For global and nodal measures, reliability was generally threshold-sensitive and varied among both network metrics and hemoglobin concentration signals. Specifically, the majority of global metrics exhibited fair to excellent reliability, with notably higher ICC values for the clustering coefficient (HbO: 0.76; HbR: 0.78; HbT: 0.53) and global efficiency (HbO: 0.76; HbR: 0.70; HbT: 0.78). Similarly, both nodal degree and efficiency measures also showed fair to excellent reliability across nodes (degree: 0.52∼0.84; efficiency: 0.50∼0.84); reliability was concordant across HbO, HbR and HbT and was significantly higher than that of nodal betweenness (0.28∼0.68). Together, our results suggest that most graph-theoretical network metrics derived from fNIRS are TRT reliable and can be used effectively for brain network research.
This study also provides important guidance on the choice of network metrics of interest for future applied research in developmental and clinical neuroscience.
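The ICC used above can be computed from a two-way ANOVA decomposition of the subjects-by-sessions data matrix. The sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single measure); the abstract does not state which ICC variant was used, so treat this as illustrative.

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measure.  x: array of shape (n_subjects, k_sessions)."""
    n, k = x.shape
    grand = x.mean()
    row_m = x.mean(axis=1)                     # per-subject means
    col_m = x.mean(axis=0)                     # per-session means
    ssr = k * ((row_m - grand) ** 2).sum()     # between-subjects
    ssc = n * ((col_m - grand) ** 2).sum()     # between-sessions
    sse = ((x - row_m[:, None] - col_m[None, :] + grand) ** 2).sum()
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfect agreement across sessions gives ICC = 1; values can be negative when between-session disagreement exceeds between-subject variance.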
Many biological systems perform computations on inputs that have very large dimensionality. Determining the relevant input combinations for a particular computation is often key to understanding its function. A common way to find the relevant input dimensions is to examine the difference in variance between the input distribution and the distribution of inputs associated with certain outputs. In systems neuroscience, the corresponding method is known as spike-triggered covariance (STC). This method has been highly successful in characterizing relevant input dimensions for neurons in a variety of sensory systems. So far, most studies used the STC method with weakly correlated Gaussian inputs. However, it is also important to use this method with inputs that have long range correlations typical of the natural sensory environment. In such cases, the stimulus covariance matrix has one (or more) outstanding eigenvalues that cannot be easily equalized because of sampling variability. Such outstanding modes interfere with analyses of statistical significance of candidate input dimensions that modulate neuronal outputs. In many cases, these modes obscure the significant dimensions. We show that the sensitivity of the STC method in the regime of strongly correlated inputs can be improved by an order of magnitude or more. This can be done by evaluating the significance of dimensions in the subspace orthogonal to the outstanding mode(s). Analyzing the responses of retinal ganglion cells probed with Gaussian noise, we find that taking into account outstanding modes is crucial for recovering relevant input dimensions for these neurons.
In many areas of computational biology, including the analyses of genetic mutations, protein stability and neural coding, as well as in economics, one of the most basic and important steps of data analysis is to find the relevant input dimensions for a particular task. In neural coding problems, the spike-triggered covariance (STC) method identifies relevant input dimensions by comparing the variance of the input distribution along different dimensions to the variance of inputs that elicited a neural response. While in theory the method can be applied to Gaussian stimuli with or without correlations, it has so far been used in studies with only weakly correlated stimuli. Here we show that to use STC with strongly correlated inputs, such as those typical of the natural sensory environment, one has to take into account that the covariance matrix of random samples from this distribution has a complex structure, with one or more outstanding modes. We use simulations on model neurons as well as an analysis of the responses of retinal neurons to demonstrate that taking the presence of these outstanding modes into account improves the sensitivity of the STC method by more than an order of magnitude.
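The proposed correction can be sketched as follows: estimate the raw stimulus covariance, drop its outstanding eigenmode(s), and evaluate spike-triggered covariance in the remaining subspace. This is an illustrative reimplementation, not the authors' code; the statistical significance testing of candidate dimensions is omitted.

```python
import numpy as np

def stc_in_subspace(stim, spikes, n_exclude=1):
    """Spike-triggered covariance evaluated in the subspace orthogonal
    to the outstanding eigenmode(s) of the raw stimulus covariance.
    stim: (T, d) stimulus samples; spikes: length-T spike counts."""
    c_prior = np.cov(stim, rowvar=False)
    evals, evecs = np.linalg.eigh(c_prior)     # eigenvalues ascending
    basis = evecs[:, :-n_exclude]              # drop the outstanding mode(s)
    s = stim @ basis                           # stimuli in reduced subspace
    dc = np.cov(s[spikes > 0], rowvar=False) - np.cov(s, rowvar=False)
    dvals, dvecs = np.linalg.eigh(dc)
    return dvals, basis @ dvecs                # candidate dims in full space

# Toy data: white noise plus a strong shared mode that dominates covariance
rng = np.random.default_rng(0)
stim = rng.normal(size=(500, 5)) + 3.0 * rng.normal(size=(500, 1))
spikes = (stim[:, 1] > 1.0).astype(int)
dvals, dims = stc_in_subspace(stim, spikes)
```

By construction, every returned candidate dimension is orthogonal to the excluded outstanding mode, so sampling noise along that mode can no longer mask the significant dimensions.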
The sparse coding hypothesis has enjoyed much success in predicting response properties of simple cells in primary visual cortex (V1) based solely on the statistics of natural scenes. In typical sparse coding models, model neuron activities and receptive fields are optimized to accurately represent input stimuli using the least amount of neural activity. As these networks develop to represent a given class of stimulus, the receptive fields are refined so that they capture the most important stimulus features. Intuitively, this is expected to result in sparser network activity over time. Recent experiments, however, show that stimulus-evoked activity in ferret V1 becomes less sparse during development, presenting an apparent challenge to the sparse coding hypothesis. Here we demonstrate that some sparse coding models, such as those employing homeostatic mechanisms on neural firing rates, can exhibit decreasing sparseness during learning, while still achieving good agreement with mature V1 receptive field shapes and a reasonably sparse mature network state. We conclude that observed developmental trends do not rule out sparseness as a principle of neural coding per se: a mature network can perform sparse coding even if sparseness decreases somewhat during development. To make comparisons between model and physiological receptive fields, we introduce a new nonparametric method for comparing receptive field shapes using image registration techniques.
The popular sparse coding theory posits that the receptive fields (RFs) of visual cortical neurons maximize the efficiency of the neural representation of natural images. Models implementing this idea typically minimize a combination of the error in reconstructing natural images from neural activities, and the average level of activity in the model neurons. In simulations, these models are presented with natural images and the RFs then develop so as to increase representation efficiency. After a long developmental period, the model RFs typically agree well with those observed experimentally in visual cortex. Since the models seek to minimize (for a given level of reconstruction error) the neural activity levels, the average levels of neural activity might be expected to decrease as the models develop. In the developing mammalian cortex, visual RFs are also modified during development, so the sparse coding hypothesis might appear to suggest that activity levels should decrease during development. Recent experiments with young ferrets show the opposite trend: mature animals tend to have more active visual cortices. Herein, we demonstrate that, depending on the models' initial conditions, some sparse coding models can exhibit increasing activity levels while learning the same types of RFs that are observed in visual cortex: the developmental data do not preclude sparse coding.
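Sparseness trends of the kind discussed above are often measured with the Treves-Rolls index, which is 0 for perfectly uniform activity and approaches 1 when a single unit carries all the activity. A minimal sketch (the papers' exact sparseness measure may differ):

```python
import numpy as np

def sparseness(rates):
    """Treves-Rolls sparseness of a vector of non-negative firing rates:
    S = (1 - <r>^2 / <r^2>) / (1 - 1/N), in [0, 1]."""
    r = np.asarray(rates, dtype=float)
    n = r.size
    return (1.0 - r.mean() ** 2 / np.mean(r ** 2)) / (1.0 - 1.0 / n)

print(sparseness([1.0, 1.0, 1.0, 1.0]))   # prints 0.0  (uniform activity)
print(sparseness([1.0, 0.0, 0.0, 0.0]))   # prints 1.0  (one-hot activity)
```

Tracking this index over simulated development makes it easy to check whether a given sparse coding model's activity becomes more or less sparse while its RFs mature.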
Hyperexcited states, including depolarization block and depolarized low amplitude membrane oscillations (DLAMOs), have been observed in neurons of the suprachiasmatic nuclei (SCN), the site of the central mammalian circadian (∼24-hour) clock. The causes and consequences of this hyperexcitation have not yet been determined. Here, we explore how individual ionic currents contribute to these hyperexcited states, and how hyperexcitation can then influence molecular circadian timekeeping within SCN neurons. We developed a mathematical model of the electrical activity of SCN neurons, and experimentally verified its prediction that DLAMOs depend on post-synaptic L-type calcium current. The model predicts that hyperexcited states cause high intracellular calcium concentrations, which could trigger transcription of clock genes. The model also predicts that circadian control of certain ionic currents can induce hyperexcited states. Putting it all together into an integrative model, we show how membrane potential and calcium concentration provide a fast feedback that can enhance rhythmicity of the intracellular circadian clock. This work puts forward a novel role for electrical activity in circadian timekeeping, and suggests that hyperexcited states provide a general mechanism for linking membrane electrical dynamics to transcription activation in the nucleus.
Daily rhythms in the behavior and physiology of mammals are coordinated by a group of neurons that constitute the central circadian (∼24-hour) clock. Clock neurons contain molecular feedback loops that lead to rhythmic expression of clock-related genes. Much progress has been made in the past two decades to understand the genetic basis of the molecular circadian clock. However, the relationship between the molecular clock and the primary output of clock neurons—their electrical activity—remains unclear. Here, we explore this relationship using computational modeling of an unusual electrical state that clock neurons enter at a certain time of day. We predict that this state causes high concentration of calcium ions inside clock neurons, which activates transcription of clock genes. We demonstrate that this additional feedback promotes 24-hour gene expression rhythms. Thus, we propose that electrical activity is not just an output of the clock, but also part of the core circadian timekeeping mechanism that plays an important role in health and disease.
Current models of embryological development focus on intracellular processes such as gene expression and protein networks, rather than on the complex relationship between subcellular processes and the collective cellular organization these processes support. We have explored this collective behavior in the context of neocortical development, by modeling the expansion of a small number of progenitor cells into a laminated cortex with layer and cell type specific projections. The developmental process is steered by a formal language analogous to genomic instructions, and takes place in a physically realistic three-dimensional environment. A common genome inserted into individual cells controls their individual behaviors, and thereby gives rise to collective developmental sequences in a biologically plausible manner. The simulation begins with a single progenitor cell containing the artificial genome. This progenitor then gives rise through a lineage of offspring to distinct populations of neuronal precursors that migrate to form the cortical laminae. The precursors differentiate by extending dendrites and axons, which reproduce the experimentally determined branching patterns of a number of different neuronal cell types observed in the cat visual cortex. This result is the first comprehensive demonstration of the principles of self-construction whereby the cortical architecture develops. In addition, our model makes several testable predictions concerning cell migration and branching mechanisms.
The proper operation of the brain depends on the correct developmental wiring of billions of neurons. Understanding this process of living self-construction is crucial not only for biological explanation and medical therapy, but could also provide an entirely new approach to industrial fabrication. We are approaching this problem through detailed simulation of cortical development. We have previously presented a software package that allows for simulation of cellular growth in a 3D space that respects physical forces and diffusion of substances, as well as an instruction language for specifying biologically plausible ‘genetic codes’. Here we apply this novel formalism to understanding the principles of cortical development in the context of multiple, spatially distributed agents that communicate only by local metabolic messages.
Understanding how seizures spread throughout the brain is an important problem in the treatment of epilepsy, especially for implantable devices that aim to avert focal seizures before they spread to, and overwhelm, the rest of the brain. This paper presents an analysis of the speed of propagation in a computational model of seizure-like activity in a 2-dimensional recurrent network of integrate-and-fire neurons containing both excitatory and inhibitory populations and having a difference of Gaussians connectivity structure, an approximation to that observed in cerebral cortex. In the same computational model network, alternative mechanisms are explored in order to simulate the range of seizure-like activity propagation speeds (0.1–100 mm/s) observed in two animal-slice-based models of epilepsy: (1) low extracellular ion concentration, which creates excess excitation, and (2) introduction of gamma-aminobutyric acid (GABA) antagonists, which reduce inhibition. Moreover, two alternative connection topologies are considered: excitation broader than inhibition, and inhibition broader than excitation. It was found that the empirically observed range of propagation velocities can be obtained for both connection topologies. For the case of the GABA antagonist model simulation, consistent with other studies, it was found that there is an effective threshold in the degree of inhibition below which waves begin to propagate. For the case of the low extracellular ion concentration model simulation, it was found that activity-dependent reductions in inhibition provide a potential explanation for the emergence of slowly propagating waves. This was simulated as a depression of inhibitory synapses, but it may also be achieved by other mechanisms. This work provides a localised network understanding of the propagation of seizures in 2-dimensional centre-surround networks that can be tested empirically.
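The difference-of-Gaussians connectivity used in such models can be sketched as a radial coupling kernel; with excitation narrower than inhibition it yields the classic centre-surround ('Mexican hat') profile, and swapping the widths gives the opposite topology. The parameter values here are illustrative, not the paper's.

```python
import numpy as np

def dog_weight(dist, a_e=1.0, s_e=0.5, a_i=0.6, s_i=1.5):
    """Difference-of-Gaussians coupling strength at distance `dist`:
    narrow excitation minus broad inhibition (centre-surround).
    Swap s_e and s_i for the 'inhibition narrower' topology."""
    e = a_e * np.exp(-dist ** 2 / (2 * s_e ** 2))
    i = a_i * np.exp(-dist ** 2 / (2 * s_i ** 2))
    return e - i
```

In a 2D network simulation this kernel, evaluated on inter-neuron distances, would populate the connectivity matrix; weakening the inhibitory term (as with GABA antagonists or synaptic depression) is what permits waves to begin propagating.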
In systems biology, questions concerning the molecular and cellular makeup of an organism are of utmost importance, especially when trying to understand how unreliable components—like genetic circuits, biochemical cascades, and ion channels, among others—enable reliable and adaptive behaviour. The repertoire and speed of biological computations are limited by thermodynamic or metabolic constraints: an example can be found in neurons, where fluctuations in biophysical states limit the information they can encode, with some 20–60% of the brain's total energy budget used for signalling purposes, either via action potentials or by synaptic transmission. Here, we consider the imperatives for neurons to optimise computational and metabolic efficiency, wherein benefits and costs trade off against each other in the context of self-organised and adaptive behaviour. In particular, we try to link information theoretic (variational) and thermodynamic (Helmholtz) free-energy formulations of neuronal processing and show how they are related in a fundamental way through a complexity minimisation lemma.
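For orientation, the textbook decomposition of variational free energy already exposes the complexity term that such a lemma operates on (this is the standard form, not the paper's specific derivation): with hidden states $s$, observations $o$, and an approximate posterior $q(s)$,

```latex
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o,s)\right]
  \;=\; \underbrace{\mathrm{KL}\!\left[\,q(s)\,\|\,p(s)\,\right]}_{\text{complexity}}
  \;-\; \underbrace{\mathbb{E}_{q(s)}\!\left[\ln p(o \mid s)\right]}_{\text{accuracy}}
```

Minimising the complexity term for a given accuracy reduces the information that must be physically encoded, which is where a connection to thermodynamic (Helmholtz) free energy and metabolic cost can enter.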
How does the brain integrate multiple sources of information to support normal sensorimotor and cognitive functions? To investigate this question we present an overall brain architecture (called “the dual intertwined rings architecture”) that relates the functional specialization of cortical networks to their spatial distribution over the cerebral cortex (or “corticotopy”). Recent results suggest that the resting state networks (RSNs) are organized into two large families: 1) a sensorimotor family that includes visual, somatic, and auditory areas and 2) a large association family that comprises parietal, temporal, and frontal regions and also includes the default mode network. We used two large databases of resting state fMRI data, from which we extracted 32 robust RSNs. We estimated: (1) the RSN functional roles by using a projection of the results on task based networks (TBNs) as referenced in large databases of fMRI activation studies; and (2) the relationship of the RSNs to the Brodmann areas. In both classifications, the 32 RSNs are organized into a remarkable architecture of two intertwined rings per hemisphere, and thus four rings linked by homotopic connections. The first ring forms a continuous ensemble and includes visual, somatic, and auditory cortices, with interspersed bimodal cortices (auditory-visual, visual-somatic and auditory-somatic, abbreviated as VSA ring). The second ring integrates distant parietal, temporal and frontal regions (PTF ring) through a network of association fiber tracts which closes the ring anatomically and ensures a functional continuity within the ring. The PTF ring relates association cortices specialized in attention, language and working memory to the networks involved in motivation and biological regulation and rhythms.
This “dual intertwined architecture” suggests a dual integrative process: the VSA ring performs fast real-time multimodal integration of sensorimotor information whereas the PTF ring performs multi-temporal integration (i.e., relates past, present, and future representations at different temporal scales).
Gain control is essential for the proper function of any sensory system. However, the precise mechanisms for achieving effective gain control in the brain are unknown. Based on our understanding of the existence and strength of connections in the insect olfactory system, we analyze the conditions that lead to controlled gain in a randomly connected network of excitatory and inhibitory neurons. We consider two scenarios for the variation of input into the system. In the first case, the intensity of the sensory input controls the input currents to a fixed proportion of neurons of the excitatory and inhibitory populations. In the second case, increasing intensity of the sensory stimulus will both recruit an increasing number of neurons that receive input and change the input current that they receive. Using a mean field approximation for the network activity we derive relationships between the parameters of the network that ensure that the overall level of activity of the excitatory population remains unchanged for increasing intensity of the external stimulation. We find that, first, the main parameters that regulate network gain are the probabilities of connections from the inhibitory population to the excitatory population and of the connections within the inhibitory population. Second, we show that strict gain control is not achievable in a random network in the second case, when the input recruits an increasing number of neurons. Finally, we confirm that the gain control conditions derived from the mean field approximation are valid in simulations of firing rate models and Hodgkin-Huxley conductance based models.
Neural networks in the brain can classify objects as being the same thing regardless of the stimulus intensity, which is referred to as gain control. This intensity invariance occurs during pattern recognition in any sensory modality. We evaluate whether it is possible to design stable neural circuits made of excitatory and inhibitory neurons that are capable of controlling the internal representation of a stimulus using network properties alone. Gain control is important because if the activity gets out of control, neurons can die or be damaged by hyper-excitation. It is known that one can control the internal representation by the saturating responses of neurons. However, we show that there also is a precise relationship of network parameters that can account for gain control regardless of the external stimulus without such saturation. The most important network parameters are the connections from the inhibitory population to the rest of the network. This is consistent with experimental findings. We also show that the connections from the excitatory to the inhibitory population do not play an important role in gain control, suggesting that they can be freed for encoding purposes without leaving the operating range of the network when levels of stimulation increase.
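The mean-field picture can be illustrated with a minimal threshold-linear excitatory-inhibitory rate model; this generic sketch (not the paper's derivation, with made-up weights) shows how the excitatory fixed point is set jointly by the inhibitory couplings.

```python
def steady_rates(I_e, I_i, w_ee, w_ei, w_ie, w_ii, steps=2000, dt=0.05):
    """Fixed point of a threshold-linear E-I rate model:
    tau*drE/dt = -rE + [w_ee*rE - w_ei*rI + I_e]_+
    tau*drI/dt = -rI + [w_ie*rE - w_ii*rI + I_i]_+  (tau = 1)."""
    rE = rI = 0.0
    relu = lambda x: max(x, 0.0)
    for _ in range(steps):
        rE += dt * (-rE + relu(w_ee * rE - w_ei * rI + I_e))
        rI += dt * (-rI + relu(w_ie * rE - w_ii * rI + I_i))
    return rE, rI

# With these illustrative weights the fixed point solves
# 0.5*rE + rI = 1 and 1.5*rI = rE, i.e. rE = 6/7, rI = 4/7.
rE, rI = steady_rates(1.0, 0.0, 0.5, 1.0, 1.0, 0.5)
```

In the paper's setting, the analogous fixed-point equations are written for connection probabilities in a random network, and the gain-control conditions are the parameter combinations (dominated by the I-to-E and I-to-I connections) that keep the excitatory rate flat as input intensity grows.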
Despite its century-old use, the interpretation of local field potentials (LFPs), the low-frequency part of electrical signals recorded in the brain, is still debated. In cortex the LFP appears to mainly stem from transmembrane neuronal currents following synaptic input, and obvious questions regarding the ‘locality’ of the LFP are: What is the size of the signal-generating region, i.e., the spatial reach, around a recording contact? How far does the LFP signal extend outside a synaptically activated neuronal population? And how do the answers depend on the temporal frequency of the LFP signal? Experimental inquiries have given conflicting results, and we here pursue a modeling approach based on a well-established biophysical forward-modeling scheme incorporating detailed reconstructed neuronal morphologies in precise calculations of population LFPs including thousands of neurons. The two key factors determining the frequency dependence of the LFP are the spatial decay of the single-neuron LFP contribution and the conversion of synaptic input correlations into correlations between single-neuron LFP contributions. Both factors are seen to give low-pass filtering of the LFP signal power. For uncorrelated input only the first factor is relevant, and here a modest reduction (<50%) in the spatial reach is observed for higher frequencies (>100 Hz) compared to the near-DC value. Much larger frequency-dependent effects are seen when populations of pyramidal neurons receive correlated and spatially asymmetric inputs: the low-frequency LFP power can here be an order of magnitude or more larger than at 60 Hz. Moreover, the low-frequency LFP components have larger spatial reach and extend further outside the active population than high-frequency components. Further, the spatial LFP profiles for such populations typically span the full vertical extent of the dendrites of neurons in the population.
Our numerical findings are backed up by an intuitive simplified model for the generation of population LFP.
The first recording of electrical potential from brain activity was reported as early as 1875, but the interpretation of the signal is still debated. To take full advantage of the new generation of microelectrodes with hundreds or even thousands of electrode contacts, an accurate quantitative link between what is measured and the underlying neural circuit activity is needed. Here we address the question of how the observed frequency dependence of recorded local field potentials (LFPs) should be interpreted. By use of a well-established biophysical modeling scheme, combined with detailed reconstructed neuronal morphologies, we find that correlations in the synaptic inputs onto a population of pyramidal cells may significantly boost the low-frequency components and affect the spatial profile of the generated LFP. We further find that these low-frequency components may be less ‘local’ than the high-frequency LFP components in the sense that (1) the size of signal-generation region of the LFP recorded at an electrode is larger and (2) the LFP generated by a synaptically activated population spreads further outside the population edge due to volume conduction.
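The forward-modeling scheme rests on volume conduction; in the simplest point-source approximation, the extracellular potential from a set of transmembrane currents is phi = sum_k I_k / (4*pi*sigma*r_k). The sketch below is this textbook formula in a homogeneous medium, not the detailed morphology-based scheme used in the paper.

```python
import numpy as np

def point_source_lfp(currents, src_pos, elec_pos, sigma=0.3):
    """Extracellular potential from point current sources in an infinite
    homogeneous volume conductor: phi = sum_k I_k / (4*pi*sigma*r_k).
    currents in A, positions in m, sigma in S/m -> phi in V."""
    phi = 0.0
    for I, p in zip(currents, src_pos):
        r = np.linalg.norm(np.asarray(elec_pos) - np.asarray(p))
        phi += I / (4 * np.pi * sigma * r)
    return phi
```

Because a neuron's current sinks and sources sum to zero, distant contributions nearly cancel; how far the population LFP reaches therefore depends strongly on how synaptic inputs, and hence the resulting current dipoles, are correlated across neurons.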
Identifying the structure and dynamics of synaptic interactions between neurons is the first step to understanding neural network dynamics. The presence of synaptic connections is traditionally inferred through the use of targeted stimulation and paired recordings or by post-hoc histology. More recently, causal network inference algorithms have been proposed to deduce connectivity directly from electrophysiological signals, such as extracellularly recorded spiking activity. However, these algorithms have usually not been validated on a neurophysiological data set for which the actual circuitry is known. Recent work has shown that traditional network inference algorithms based on linear models typically fail to identify the correct coupling of a small central pattern generating circuit in the stomatogastric ganglion of the crab Cancer borealis. In this work, we show that point process models of observed spike trains can guide inference of relative connectivity estimates that match the known physiological connectivity of the central pattern generator up to a choice of threshold. We elucidate the necessary steps to derive faithful connectivity estimates from a model that incorporates the spike train nature of the data. We then apply the model to measure changes in the effective connectivity pattern in response to two pharmacological interventions, which affect both intrinsic neural dynamics and synaptic transmission. Our results provide the first successful application of a network inference algorithm to a circuit for which the actual physiological synapses between neurons are known. The point process methodology presented here generalizes well to larger networks and can describe the statistics of neural populations. In general, we show that advanced statistical models allow for the characterization of effective network structure, deciphering underlying network dynamics and estimating information-processing capabilities.
To appreciate how neural circuits control behaviors, we must understand two things: first, how the neurons comprising the circuit are connected, and second, how neurons and their connections change after learning or in response to neuromodulators. Neuronal connectivity is difficult to determine experimentally, whereas neuronal activity can often be readily measured. We describe a statistical model to estimate circuit connectivity directly from measured activity patterns. We use the timing relationships between observed spikes to predict synaptic interactions between simultaneously observed neurons. The model estimate provides each predicted connection with a curve that represents how strongly, and at which temporal delays, one circuit element effectively influences another. These curves are analogous to synaptic interactions at the level of the membrane potential of biological neurons and share some of their features, such as being inhibitory or excitatory. We test our method on recordings from the pyloric circuit in the crab stomatogastric ganglion, a small circuit whose connectivity is completely known beforehand, and find that the predicted circuit matches the biological one, a result other techniques failed to achieve. In addition, we show that drug manipulations impacting the circuit are revealed by this technique. These results illustrate the utility of our analysis approach for inferring connections from neural spiking activity.
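The point-process idea behind this kind of inference can be illustrated with a minimal, self-contained sketch. All parameter values and variable names here are illustrative assumptions, not the authors' implementation: we simulate one neuron exciting another at a one-bin lag, then fit a Poisson GLM whose estimated coupling weight recovers the sign and rough magnitude of the interaction.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000  # number of time bins

# Toy circuit: neuron A fires sparsely and excites neuron B at a one-bin lag.
a = (rng.random(T) < 0.1).astype(float)
a_lag = np.concatenate(([0.0], a[:-1]))
b = rng.poisson(np.exp(-3.0 + 2.0 * a_lag))

# Design matrix: intercept plus the lagged spike train of A (coupling term).
X = np.column_stack([np.ones(T), a_lag])

# Fit the Poisson GLM (log link) by gradient ascent on the log-likelihood.
w = np.zeros(2)
for _ in range(3000):
    w += 1e-4 * (X.T @ (b - np.exp(X @ w)))

# w[0] estimates the baseline log-rate (about -3); w[1] the coupling (about +2).
print(w)
```

Because the Poisson log-likelihood is concave in the weights, plain gradient ascent suffices for this toy case; a real analysis would add spike-history terms for each neuron and regularize the filters.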
During the development of the topographic map from vertebrate retina to superior colliculus (SC), EphA receptors are expressed in a gradient along the nasotemporal retinal axis. Their ligands, ephrin-As, are expressed in a gradient along the rostrocaudal axis of the SC. Countergradients of ephrin-As in the retina and EphAs in the SC are also expressed. Disruption of any of these gradients leads to mapping errors. Gierer's (1981) model, which uses well-matched pairs of gradients and countergradients to establish the mapping, can account for the formation of wild type maps, but not the double maps found in EphA knock-in experiments. I show that these maps can be explained by models, such as Gierer's (1983), which have gradients and no countergradients, together with a powerful compensatory mechanism that helps to distribute connections evenly over the target region. However, this type of model cannot explain mapping errors found when the countergradients are knocked out partially. I examine the relative importance of countergradients as against compensatory mechanisms by generalising Gierer's (1983) model so that the strength of compensation is adjustable. Either matching gradients and countergradients alone or poorly matching gradients and countergradients together with a strong compensatory mechanism are sufficient to establish an ordered mapping. With a weaker compensatory mechanism, gradients without countergradients lead to a poorer map, but the addition of countergradients improves the mapping. This model produces the double maps in simulated EphA knock-in experiments and a map consistent with the Math5 knock-out phenotype. Simulations of a set of phenotypes from the literature substantiate the finding that countergradients and compensation can be traded off against each other to give similar maps. 
I conclude that a successful model of retinotopy should contain countergradients and some form of compensation mechanism, but not in the strong form put forward by Gierer.
Relational concepts play a central role in human perception and cognition, but little is known about how they are acquired. For example, how do we come to understand that physical force is a higher-order multiplicative relation between mass and acceleration, or that two circles are the same-shape in the same way that two squares are? A recent model of relational learning, DORA (Discovery of Relations by Analogy; Doumas, Hummel & Sandhofer, 2008), predicts that comparison and analogical mapping play a central role in the discovery and predication of novel higher-order relations. We report two experiments testing and confirming this prediction.
Small-World Networks (SWNs) represent a fundamental model for the comprehension of many complex man-made and biological networks. In the central nervous system, SWN models have been shown to fit well both anatomical and functional maps at the macroscopic level. However, the functional microscopic level, where the nodes of a network are represented by single neurons, is still poorly understood. At this level, although recent evidence suggests that functional connection graphs exhibit small-world organization, it is not known whether and how these maps, potentially distributed in multiple brain regions, change across different conditions, such as spontaneous and stimulus-evoked activities. We addressed these questions by analyzing the data from simultaneous multi-array extracellular recordings in three brain regions of rats, diversely involved in somatosensory information processing: the ventropostero-lateral thalamic nuclei, the primary somatosensory cortex and the centro-median thalamic nuclei. From both spike and Local Field Potential (LFP) recordings, we estimated the functional connection graphs by using the Normalized Compression Similarity for spikes and the Phase Synchrony for LFPs. Then, by using graph-theoretical statistics, we characterized the functional topology both during spontaneous activity and sensory stimulation. Our main results show that: (i) spikes and LFPs show SWN organization during spontaneous activity; (ii) after stimulation onset, while substantial functional graph reconfigurations occur both in spikes and LFPs, small-worldness is nonetheless preserved; (iii) the stimulus triggers a significant increase of inter-area LFP connections without modifying the topology of intra-area functional connections.
Finally, investigating computationally the functional substrate that supports the observed phenomena, we found that (iv) the fundamental concept of cell assemblies, transient groups of activating neurons, can be described by small-world networks. Our results suggest that activity of neurons from multiple areas of the rat somatosensory system contributes to the integration of local computations arising in distributed functional cell assemblies according to the principles of SWNs.
Cell assemblies (sequences of neuronal activations) seem to represent a functional unit of information processing. However, it remains unclear how groups of neurons may organize their activity during information processing, working as a sole functional unit. One prominent principle in complex network theory is covered by small-world networks, in which every node is easily reachable from every other and nodes are organized in highly dense clusters. Small-world networks have already been observed at large scales in human and primate brain areas, while their presence at the neuronal level remains unclear. The aim of this work was to investigate the possibility that functionally related neural populations, encompassing multiple brain regions, could be organized in small-world networks. We investigated the coherent neuronal activity among multiple rat brain regions involved in somatosensory information processing. We found that the recorded neuronal populations represented small-world networks and that these topologies were maintained during stimulations. Furthermore, by using simulations to explore the hidden substrates supporting the observed topological features, we inferred that small-world networks represent a plausible topology for cell assemblies. This work suggests that the coherent activity of neurons from multiple brain areas promotes the integration of local computations, the functional principle of small-world networks.
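The small-world statistics underlying analyses like these can be sketched in plain Python. The Watts-Strogatz construction and parameter values below are a generic textbook illustration (not the recorded data or the authors' pipeline): after mild random rewiring of a ring lattice, clustering stays high while the average path length drops sharply, the small-world signature.

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbours on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def rewire(adj, p, rng):
    """Watts-Strogatz step: move each edge to a random target with probability p."""
    n = len(adj)
    for u, v in [(u, v) for u in adj for v in adj[u] if u < v]:
        if rng.random() < p:
            w = rng.randrange(n)
            if w != u and w not in adj[u]:  # avoid self-loops and multi-edges
                adj[u].discard(v)
                adj[v].discard(u)
                adj[u].add(w)
                adj[w].add(u)
    return adj

def clustering(adj):
    """Mean local clustering coefficient."""
    cs = []
    for u, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            cs.append(0.0)
            continue
        links = sum(1 for v in nbrs for w in nbrs if v < w and w in adj[v])
        cs.append(2.0 * links / (k * (k - 1)))
    return sum(cs) / len(cs)

def avg_path_length(adj):
    """Mean shortest-path length over all reachable pairs, via BFS."""
    total = pairs = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

lattice = ring_lattice(200, 5)
sw = rewire(ring_lattice(200, 5), 0.1, random.Random(0))
c_lat, l_lat = clustering(lattice), avg_path_length(lattice)
c_sw, l_sw = clustering(sw), avg_path_length(sw)
print(c_lat, l_lat)  # regular lattice: high clustering, long paths
print(c_sw, l_sw)    # small world: clustering stays high, paths shorten
```

In empirical studies these two statistics are compared against degree-matched random graphs to compute a small-worldness index; the sketch keeps only the core comparison.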
Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed in experimental studies since the introduction of reverse correlation techniques. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here and optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex.
The statistics of our visual world are dominated by occlusions. Almost every image processed by our brain consists of mutually occluding objects, animals and plants. Our visual cortex is optimized through evolution and throughout our lifespan for such stimuli. Yet, the standard computational models of primary visual processing do not consider occlusions. In this study, we ask what effects visual occlusions may have on predicted response properties of simple cells, which are the first cortical processing units for images. Our results suggest that recently observed differences between experiments and predictions of the standard simple cell models can be attributed to occlusions. The most significant consequence of occlusions is the prediction of many cells sensitive to center-surround stimuli. Experimentally, large numbers of such cells have been observed since the introduction of new techniques (reverse correlation). Without occlusions, they are only obtained for specific settings, and none of the seminal studies (sparse coding, ICA) predicted such fields. In contrast, the new type of response naturally emerges as soon as occlusions are considered. In comparison with recent in vivo experiments we find that occlusive models are consistent with the high percentages of center-surround simple cells observed in macaque monkeys, ferrets and mice.
The learning mechanism in the hippocampus has almost universally been assumed to be Hebbian in nature, whereby individual neurons in an engram are joined together through synaptic weight increases that support facilitated recall of memories later. However, it is also widely known that Hebbian learning mechanisms impose significant capacity constraints, and are generally less computationally powerful than learning mechanisms that take advantage of error signals. We show that the differential phase relationships of hippocampal subfields within the overall theta rhythm enable a powerful form of error-driven learning, which results in significantly greater capacity, as shown in computer simulations. In one phase of the theta cycle, the bidirectional connectivity between CA1 and entorhinal cortex can be trained in an error-driven fashion to learn to effectively encode the cortical inputs in a compact and sparse form over CA1. In a subsequent portion of the theta cycle, the system attempts to recall an existing memory, via the pathway from entorhinal cortex to CA3 and CA1. Finally, the full theta cycle completes when a strong target encoding representation of the current input is imposed onto the CA1 via direct projections from entorhinal cortex. The difference between this target encoding and the attempted recall of the same representation on CA1 constitutes an error signal that can drive the learning of CA3 to CA1 synapses. This CA3 to CA1 pathway is critical for enabling full reinstatement of recalled hippocampal memories out in cortex. Taken together, these new learning dynamics enable a much more robust, high-capacity model of hippocampal learning than was available previously under the classical Hebbian model.
We present a novel hippocampal model based on the oscillatory dynamics of the theta rhythm, which enables the network to learn much more efficiently than the Hebbian form of learning that is widely assumed in most models. Specifically, two pathways, Tri-Synaptic and Mono-Synaptic, alternate in strength during theta oscillations to provide an alternation of encoding vs. recall bias in area CA1. The difference between these two states and the unaltered cortical input representation creates an error signal, which can drive powerful error-driven learning in both Tri-Synaptic and Mono-Synaptic pathways. Furthermore, the presence of these alternating modes of network behavior (encoding vs. recall) provides an intriguing target for future work examining how prefrontal control mechanisms can manipulate the behavior of the hippocampus.
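The capacity advantage of error-driven over Hebbian learning that motivates this model can be seen in a minimal sketch. The network below is a generic one-layer heteroassociator with illustrative sizes, not the hippocampal model itself: one-shot Hebbian outer-product storage fails well below the delta rule, whose (target minus recall) update is analogous to the theta-cycle error signal.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 50, 30  # units per layer; pattern count well above Hebbian capacity

# Random +/-1 input and target patterns for a one-layer heteroassociator.
X = rng.choice([-1.0, 1.0], size=(P, N))
Y = rng.choice([-1.0, 1.0], size=(P, N))

def recall_errors(W):
    """Number of output bits recalled with the wrong sign, over all patterns."""
    return int(np.sum(np.sign(X @ W.T) != Y))

# Hebbian learning: one-shot outer-product storage of all pattern pairs.
W_hebb = Y.T @ X / N

# Error-driven learning: the (target - recall) difference drives a
# delta-rule weight update, repeated over several passes.
W_delta = np.zeros((N, N))
for _ in range(200):
    for mu in range(P):
        err = Y[mu] - np.sign(W_delta @ X[mu])
        W_delta += 0.1 * np.outer(err, X[mu])

print(recall_errors(W_hebb), recall_errors(W_delta))
```

With P/N well above the classical Hebbian capacity limit, crosstalk corrupts many Hebbian recalls, while the error-driven network keeps training until its recalls match the targets.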
Psychogenic non-epileptic seizures (PNES) are paroxysmal behaviors that resemble epileptic seizures but lack abnormal electrical activity. Recent studies suggest aberrant functional connectivity involving specific brain regions in PNES. Little is known, however, about alterations of topological organization of whole-brain functional and structural connectivity networks in PNES. We constructed functional connectivity networks from resting-state functional MRI signal correlations and structural connectivity networks from diffusion tensor imaging tractography in 17 PNES patients and 20 healthy controls. Graph theoretical analysis was employed to compute network properties. Moreover, we investigated the relationship between functional and structural connectivity networks. We found that PNES patients exhibited altered small-worldness in both functional and structural networks, which shifted towards a more regular (lattice-like) organization; this shift could serve as a potential imaging biomarker for PNES. In addition, many regional characteristics were altered in the structural connectivity network, involving attention, sensorimotor, subcortical and default-mode networks. These regions with altered nodal characteristics likely reflect disease-specific pathophysiology in PNES. Importantly, the coupling strength of functional-structural connectivity was decreased and exhibited high sensitivity and specificity in differentiating PNES patients from healthy controls, suggesting that the decoupling of functional-structural connectivity might be an important characteristic reflecting the mechanisms of PNES. This is the first study to explore the altered topological organization in PNES combining functional and structural connectivity networks, providing a new way to understand the pathophysiological mechanisms of PNES.
In this paper, we describe a distributed coordination system that allows agents to seamlessly cooperate in problem solving by partially contributing to a problem solution and delegating the subproblems for which they do not have the required skills or knowledge to appropriate agents. The coordination mechanism relies on a dynamically built semantic overlay network that allows the agents to efficiently locate, even in very large unstructured networks, the necessary skills for a specific problem. Each agent performs partial contributions to the problem solution using a new distributed goal-directed version of the Graphplan algorithm. This new goal-directed version of the original Graphplan algorithm provides an efficient solution to the problem of "distraction", which most forward-chaining algorithms suffer from. We also discuss a set of heuristics to be used in the backward-search process of the planning algorithm in order to distribute this process amongst idle agents in an attempt to find a solution in less time. The evaluation results show that our approach is effective in building a scalable and efficient agent society capable of solving complex distributable problems.
Autism is a complex developmental disability characterized by deficits in social interaction, language skills, repetitive stereotyped behaviors and restricted interests. Although great heterogeneity exists, previous findings suggest that autism involves atypical brain connectivity patterns and disrupted small-world network properties. However, the organizational alterations in the autistic brain network are still poorly understood. We explored possible organizational alterations in 49 autistic children and 51 typically developing controls by investigating brain network metrics constructed upon cortical thickness correlations. Three modules were identified in controls, comprising cortical regions associated with executive/strategic, spatial/auditory/visual, and self-reference/episodic memory functions. Three modules with similar patterns were also found in autistic children. Compared with controls, autistic children demonstrate significantly reduced gross network modularity and a larger number of inter-module connections. However, the autistic brain network demonstrates increased intra- and inter-module connectivity in brain regions including the middle frontal gyrus, inferior parietal gyrus, and cingulate, suggesting an underlying compensatory mechanism associated with self-reference and episodic memory functions. Results also show increased correlation strength between regions inside the frontal lobe, as well as impaired correlation strength between frontotemporal and frontoparietal regions. This alteration of correlation strength may contribute to the organizational alteration of network structures in autistic brains.
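The gross network modularity compared between such groups is typically quantified with Newman's Q, the fraction of within-module edges minus its chance expectation given node degrees. A minimal sketch on a toy six-node network (illustrative, not the study's data):

```python
import numpy as np

def modularity(A, communities):
    """Newman's Q for an undirected adjacency matrix A and a node partition."""
    k = A.sum(axis=1)                      # node degrees
    two_m = k.sum()                        # twice the edge count
    c = np.asarray(communities)
    same = c[:, None] == c[None, :]        # same-module indicator
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

# Toy network: two 3-node cliques joined by a single bridge edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

q = modularity(A, [0, 0, 0, 1, 1, 1])
print(q)  # 5/14, about 0.357, for this clique-per-module partition
```

A reduced Q, as reported for the autistic group, means more edges cross module boundaries than this degree-preserving null model expects.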
Functional neuroimaging often generates large amounts of data on regions of interest. Such data can be addressed effectively with a widely used statistical technique based on measurement theory that has not yet been applied to neuroimaging. Confirmatory factor analysis is a convenient hypothesis-driven modeling environment that can be used to conduct formal statistical tests comparing alternative hypotheses regarding the elements of putative neuronal networks. In such models, measures of each activated region of interest are treated as indicators of an underlying latent construct that represents the contemporaneous activation of the elements in the network. As such, confirmatory factor analysis focuses analyses on the activation of hypothesized networks as a whole, improves statistical power by modeling measurement error, and provides a theory-based approach to data reduction with a robust statistical basis. This approach is illustrated using data on seven regions of interest in a hypothesized mesocorticostriatal reward system in a sample of 262 adult volunteers assessed during a card-guessing reward task. A latent construct reflecting contemporaneous activation of the reward system was found to be significantly associated with a latent construct measuring impulsivity, particularly in males.
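The latent-construct logic of confirmatory factor analysis can be sketched with a simulated one-factor model. The loadings and the closed-form triad estimator below are illustrative stand-ins for the maximum-likelihood fitting that SEM software performs; all values are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Latent construct (e.g. contemporaneous "network activation") and three
# observed indicators (ROIs) loading on it, each with independent error.
eta = rng.normal(size=n)
lam_true = np.array([0.8, 0.6, 0.7])                     # true factor loadings
X = eta[:, None] * lam_true + 0.5 * rng.normal(size=(n, 3))

S = np.cov(X, rowvar=False)

# One-factor identification via the classic triad identities:
# cov(x_i, x_j) = lambda_i * lambda_j, so e.g. lambda_1^2 = s12 * s13 / s23.
l1 = np.sqrt(S[0, 1] * S[0, 2] / S[1, 2])
l2 = np.sqrt(S[0, 1] * S[1, 2] / S[0, 2])
l3 = np.sqrt(S[0, 2] * S[1, 2] / S[0, 1])
print(l1, l2, l3)  # recovers approximately 0.8, 0.6, 0.7
```

Because only the covariance structure identifies the loadings, modeling the shared variance as a single latent factor is what lets the approach separate network activation from indicator-specific measurement error.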
confirmatory factor analysis; functional neuroimaging; measurement theory; mesocorticostriatal reward system; impulsivity; sex differences