The balance between maintenance of the stem cell state and terminal differentiation is influenced by the cellular environment. The switching between these states has long been understood as a transition between attractor states of a molecular network. In this picture, stochastic fluctuations are either suppressed or serve to trigger the transition, but they do not themselves determine the attractor states.
We present a novel mathematical concept in which stem cell and progenitor population dynamics are described as a probabilistic process that arises from cell proliferation and small fluctuations in the state of differentiation. These state fluctuations reflect random transitions between different activation patterns of the underlying regulatory network. Importantly, the associated noise amplitudes are state-dependent and set by the environment. Their variability determines the attractor states, and thus actually governs population dynamics. This model quantitatively reproduces the observed dynamics of differentiation and dedifferentiation in promyelocytic precursor cells.
Consequently, state-specific noise modulation by external signals can be instrumental in controlling stem cell and progenitor population dynamics. We propose follow-up experiments for quantifying the imprinting influence of the environment on cellular noise regulation.
Representing and analyzing complex networks remains a roadblock to creating dynamic network models of biological processes and pathways. The study of cell fate transitions can reveal much about the transcriptional regulatory programs that underlie these phenotypic changes and give rise to the coordinated patterns of expression change that we observe. The application of gene expression state space trajectories to capture cell fate transitions at the genome-wide level is one approach currently used in the literature. In this paper, we analyze the gene expression dataset of Huang et al. (2005), which follows the differentiation of promyelocytes into neutrophil-like cells in the presence of the inducers dimethyl sulfoxide and all-trans retinoic acid. Huang et al. (2005) build on the work of Kauffman (2004), who raised the attractor hypothesis: cells exist in an expression landscape, and their expression trajectories converge towards attractive sites in this landscape. We propose an alternative interpretation that explains this convergent behavior by recognizing that there are two types of processes participating in these cell fate transitions—core processes, which include the specific differentiation pathways of promyelocytes to neutrophils, and transient processes, which capture those pathways and responses specific to the inducer. Using functional enrichment analyses, specific biological examples, and an analysis of the trajectories and their core and transient components, we validate our hypothesis on the Huang et al. (2005) dataset.
Understanding how cells differentiate from one state to another is a fundamental problem in biology with implications for better understanding evolution, the development of complex organisms from a single fertilized egg, and the etiology of human disease. One way to view these processes is to examine cells as “complex adaptive systems” in which the state of all genes in a cell (more than 20,000 genes) determines that cell's “state” at a given point in time. In this view, differentiating cells move along a path in “state space” from one stable “attractor” to another. In a 2005 paper, Sui Huang and colleagues presented an experimental model in which they claimed to have evidence for such attractors and for the transitions between them. The problem with this approach is that although it is intuitively appealing, it lacks predictive power. Reanalyzing Huang's data, we demonstrate that there is an alternative interpretation that still allows for a state space description but has a greater ability to make testable predictions. Specifically, we show that these abstract state space trajectories can be mapped onto better-known pathways and represented as a “core” differentiation pathway plus “transient” processes that capture the effects of the treatments that initiate differentiation.
Biological networks, such as genetic regulatory networks, often contain positive and negative feedback loops that settle into dynamically stable patterns. Identifying these patterns, the so-called attractors, can provide important insights that help biologists understand the molecular mechanisms underlying many coordinated cellular processes such as cell division, differentiation, and homeostasis. Both synchronous and asynchronous Boolean networks have been used to simulate genetic regulatory networks and identify their attractors. Common methods for computing attractors either start from a randomly selected initial state or exhaustively search the state space of a network. However, the time complexity of these methods grows exponentially with the number and length of attractors. Here, we develop two algorithms to compute attractors in synchronous and asynchronous Boolean networks. For the synchronous scenario, we combine iterative methods with reduced ordered binary decision diagrams (ROBDDs) to obtain an improved attractor-computation algorithm. In the second algorithm, the attractors of the synchronous Boolean network are fed through asynchronous Boolean transition functions to derive the attractors of the asynchronous scenario. The proposed algorithms are implemented in a procedure called geneFAtt. Compared to existing tools such as genYsis, geneFAtt is significantly faster at computing attractors for empirical experimental systems.
The software package is available at https://sites.google.com/site/desheng619/download.
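As a point of reference for the cost such tools work to avoid, here is a minimal brute-force sketch (not geneFAtt's ROBDD-based algorithm) that finds the attractors of a small synchronous Boolean network by following every state to its eventual cycle; the two-gene toggle network below is an illustrative example:

```python
from itertools import product

def find_attractors(update, n):
    """Enumerate attractors of a synchronous Boolean network by
    exhaustively following every one of the 2**n states to its cycle."""
    attractors = set()
    for state in product((0, 1), repeat=n):
        seen = set()
        s = state
        while s not in seen:          # walk until a state repeats
            seen.add(s)
            s = update(s)
        cycle, t = [], s              # the repeated state lies on the cycle
        while True:
            cycle.append(t)
            t = update(t)
            if t == s:
                break
        attractors.add(frozenset(cycle))
    return attractors

# toy 2-gene mutual-inhibition network: x' = not y, y' = not x
toggle = lambda s: (1 - s[1], 1 - s[0])
print(len(find_attractors(toggle, 2)))  # two fixed points plus one 2-cycle: 3
```

For n genes this enumerates all 2^n states, which is exactly the exponential blow-up that symbolic BDD-based approaches are designed to sidestep.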
Stem cell differentiation and the maintenance of self-renewal are intrinsically complex processes requiring the coordinated dynamic expression of hundreds of genes and proteins in precise response to external signalling cues. Numerous recent reports have used both experimental and computational techniques to dissect this complexity. These reports suggest that the control of cell fate has both deterministic and stochastic elements: complex underlying regulatory networks define stable molecular ‘attractor’ states towards which individual cells are drawn over time, whereas stochastic fluctuations in gene and protein expression levels drive transitions between coexisting attractors, ensuring robustness at the population level.
Cell fate decisions reliably select a specific differentiation path from among the multiple possibilities that can arise through the complex interplay of high-dimensional genome activities. The coordinated action of thousands of genes in switching cell fate suggests the existence of stable attractors guiding the process. However, the origins of the intracellular mechanisms that create such a “cellular attractor” remain unknown. Here, we examined the collective behavior of genome-wide expression during neutrophil differentiation induced by two different stimuli, dimethyl sulfoxide (DMSO) and all-trans-retinoic acid (atRA). To overcome the difficulties of dealing with noise in single-gene expression, we grouped genes into ensembles and analyzed their expression dynamics in a correlation space defined by Pearson correlation and mutual information. The standard deviation of the correlation distributions of gene ensembles decreases as the ensemble size increases, following an inverse square-root law, both for ensembles chosen randomly from the whole genome and for ensembles ranked by expression variance across time. Choosing an ensemble size of 200 genes, we show that the two probability distributions of correlations of randomly selected genes for the atRA and DMSO responses overlap after 48 hours, defining the neutrophil attractor. Next, tracking the ranked ensembles' trajectories, we noticed that only certain ensembles, not all, fall into the attractor in a fractal-like manner. Removing these genome elements from the whole genome, for both the atRA and DMSO responses, destroys the attractor, providing evidence for the existence of specific genome elements (named the “genome vehicle”) responsible for the neutrophil attractor. Notably, within the genome vehicle, genes with low or moderate expression changes, which are often considered noisy and insignificant, are essential components for the creation of the neutrophil attractor.
Further investigations along with our findings might provide a comprehensive mechanistic view of cell fate decision.
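The inverse square-root narrowing of the correlation distributions can be illustrated on synthetic data (not the actual expression dataset; gene profiles are modeled here as a shared temporal signal plus independent Gaussian noise, and all sizes and parameters are illustrative):

```python
import math
import random

def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

random.seed(1)
T = 18                                       # time points
signal = [math.sin(t / 3) for t in range(T)]
# each "gene": shared signal + unit-variance Gaussian noise
genes = [[s + random.gauss(0, 1.0) for s in signal] for _ in range(2000)]

def corr_std(size, draws=300):
    """Std of the distribution of correlations between the mean profile
    of a random size-n gene ensemble and the underlying signal."""
    vals = []
    for _ in range(draws):
        ens = random.sample(genes, size)
        mean_prof = [sum(g[t] for g in ens) / size for t in range(T)]
        vals.append(pearson(mean_prof, signal))
    m = sum(vals) / len(vals)
    return math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))

print(corr_std(10) > corr_std(100))  # spread narrows as ensemble size grows
```

Averaging over n genes suppresses the noise term by a factor of roughly 1/√n, which is why the correlation distribution of larger ensembles is tighter.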
The GATA1-PU.1 genetic switch is a paradigmatic genetic switch that governs the differentiation of progenitor cells into two different fates, erythroid and myeloid. In a dynamical model, these fates or lineages correspond to stable attractors, and lineage commitment corresponds to choosing between the attractors. Small asymmetries and stochasticity, intrinsically present in all genetic switches, lead to the effect of delayed bifurcation, which changes the differentiation outcome according to the timing of the process and affects the proportion of erythroid versus myeloid cells. We consider a differentiation bifurcation scenario in which the bifurcation diagram is symmetry-broken as a result of asymmetry in external signaling. We show that the decision between two alternative cell fates in this structurally symmetric decision circuit can be biased depending on the speed at which the system is forced through the decision point. The parameter-sweeping speed can also reduce the effect of the asymmetry and produce a symmetric choice between attractors, or even invert which attractor is favored. This inversion may have important consequences for the immune system when the bias favors the attractor that gives rise to non-immune cells.
GATA1-PU.1 switch; differentiation; immune cells; pluripotent cells
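A minimal sketch of the bistable backbone of such a switch, assuming a symmetric mutual-inhibition circuit with Hill coefficient 2 and illustrative parameter values (the delayed-bifurcation effect discussed above would be probed by sweeping a parameter slowly through the decision point, which this sketch omits):

```python
def simulate(x, y, a=4.0, k=1.0, dt=0.01, steps=20000):
    """Euler integration of a symmetric mutual-inhibition toggle circuit:
    dx/dt = a/(1+y^2) - k*x,  dy/dt = a/(1+x^2) - k*y."""
    for _ in range(steps):
        dx = a / (1 + y ** 2) - k * x
        dy = a / (1 + x ** 2) - k * y
        x, y = x + dt * dx, y + dt * dy
    return x, y

# tiny initial asymmetries commit the cell to opposite fates
x1, y1 = simulate(1.1, 1.0)   # slight bias toward x -> x-high attractor
x2, y2 = simulate(1.0, 1.1)   # slight bias toward y -> y-high attractor
print(x1 > y1, x2 < y2)
```

With these parameters the symmetric fixed point (x = y ≈ 1.38) is unstable, so any initial imbalance, however small, resolves into one of the two asymmetric attractors, mirroring how noise and signaling asymmetries bias lineage choice.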
The developmental dynamics of multicellular organisms is a process that takes place in a multi-stable system in which each attractor state represents a cell type, and attractor transitions correspond to cell differentiation paths. This new understanding has revived the idea of a quasi-potential landscape, first proposed by Waddington as a metaphor. To describe development, one is interested in the ‘relative stabilities’ of N attractors (N > 2). Existing theories of state transitions between local minima on a potential landscape address the exit step of the transition between two attractors in pairwise-attractor systems, but do not offer the notion of a global potential function that relates more than two attractors to each other. Several ad hoc methods have been used in systems biology to compute a landscape in non-gradient systems, such as gene regulatory networks. Here we present an overview of currently available methods, discuss their limitations, and propose a new decomposition of vector fields that permits the computation of a quasi-potential function that is equivalent to the Freidlin–Wentzell potential but is not limited to two attractors. Several examples of the decomposition are given, and the significance of such a quasi-potential function is discussed.
multi-stable dynamical system; non-equilibrium dynamics; quasi-potential; state transition; epigenetic landscape; Freidlin–Wentzell theory
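Schematically, the decomposition sought has the standard Freidlin–Wentzell form (a sketch of the general construction, not the authors' specific algorithm):

```latex
\dot{x} = f(x), \qquad f(x) = -\nabla U(x) + r(x), \qquad \nabla U(x)\cdot r(x) = 0 ,
```

The orthogonality condition guarantees that $U$ is non-increasing along deterministic trajectories, and in the small-noise limit $\dot{x} = f(x) + \sqrt{\varepsilon}\,\xi(t)$ the stationary density scales as $p_{ss}(x) \asymp e^{-2U(x)/\varepsilon}$, so a single $U$ defined over the whole state space can rank the relative stabilities of all $N$ attractors at once, rather than only a pair.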
The gene regulatory circuit motif in which two opposing fate-determining transcription factors inhibit each other but activate themselves has been used in mathematical models of binary cell fate decisions in multipotent stem or progenitor cells. This simple circuit can generate multistability and explains the symmetric “poised” precursor state in which both factors are present in the cell at equal amounts as well as the resolution of this indeterminate state as the cell commits to either cell fate characterized by an asymmetric expression pattern of the two factors. This establishes the two alternative stable attractors that represent the two fate options. It has been debated whether cooperativity of molecular interactions is necessary to produce such multistability.
Here we take a general modeling approach and argue that this question is not relevant. We show that non-linearity can arise in two distinct models in which no explicit interaction between the two factors is assumed and that distinct chemical reaction kinetic formalisms can lead to the same (generic) dynamical system form. Moreover, we describe a novel type of bifurcation that produces a degenerate steady state that can explain the metastable state of indeterminacy prior to cell fate decision-making and is consistent with biological observations.
The general model presented here thus offers a novel principle for linking regulatory circuits with the state of indeterminacy characteristic of multipotent (stem) cells.
Cell lineage commitment and differentiation are governed by a complex gene regulatory network. Disruption of these processes by inappropriate regulatory signals and by mutational rewiring of the network can lead to tumorigenesis. Cancer cells often exhibit immature or embryonic traits and dysregulated developmental genes can act as oncogenes. However, the prevailing paradigm of somatic evolution and multi-step tumorigenesis, while useful in many instances, offers no logically coherent reason for why oncogenesis recapitulates ontogenesis. The formal concept of “cancer attractors”, derived from an integrative, complex systems approach to gene regulatory network may provide a natural explanation. Here we present the theory of attractors in gene network dynamics and review the concept of cell types as attractors. We argue that cancer cells are trapped in abnormal attractors and discuss this concept in the light of recent ideas in cancer biology, including cancer genomics and cancer stem cells, as well as the implications for differentiation therapy.
Hebbian cell assemblies provide a theoretical framework for the modeling of cognitive processes that grounds them in the underlying physiological neural circuits. Recently we have presented an extension of cell assemblies by operational components which makes it possible to model aspects of language, rules, and complex behaviour. In the present work we study the generation of syntactic sequences using operational cell assemblies timed by unspecific trigger signals. Syntactic patterns are implemented in terms of hetero-associative transition graphs in attractor networks, which cause a directed flow of activity through the neural state space. We provide parameter regimes that enable an unspecific excitatory control signal to switch reliably between attractors in accordance with the implemented syntactic rules. If several target attractors are possible in a given state, noise in the system in conjunction with a winner-takes-all mechanism can randomly choose a target. Disambiguation can also be guided by context signals or specific additional external signals. Given a permanently elevated level of external excitation, the model can enter an autonomous mode in which it generates temporal grammatical patterns continuously.
Cell assemblies; Attractor networks; Grammar; Language; Behaviour
Persistent activity states (attractors), observed in several neocortical areas after the removal of a sensory stimulus, are believed to be the neuronal basis of working memory. One of the possible mechanisms that can underlie persistent activity is recurrent excitation mediated by intracortical synaptic connections. A recent experimental study revealed that connections between pyramidal cells in prefrontal cortex exhibit various degrees of synaptic depression and facilitation. Here we analyze the effect of synaptic dynamics on the emergence and persistence of attractor states in interconnected neural networks. We show that different combinations of synaptic depression and facilitation result in qualitatively different network dynamics with respect to the emergence of the attractor states. This analysis raises the possibility that the framework of attractor neural networks can be extended to represent time-dependent stimuli.
Imagine driving a car when you hear the sentence, “Take the next left.” Immediately, auditory “left” neurons begin to fire. But to use this information a few seconds later when you reach the junction, these neurons should persist in their firing after the auditory stimulus has been removed. This persistent activity, believed to be the basis for working memory, is maintained by a recurrent neural network in memory-related cortical areas. Previous studies showed that a network can maintain memories of “which” stimulus was observed. It has recently been shown that synapses between excitatory cells in the prefrontal cortex, where persistent activity is often observed, exhibit activity-dependent dynamics. Different forms of synaptic dynamics such as depression and facilitation were observed. In this work, we use a mathematical model of a recurrent neural network to analyze the effect of introducing dynamic synapses in the context of persistent activity. We find that the initiation of persistent firing can depend on the duration of the input. These results open the possibility that recurrent neural networks can encode not only “which” stimulus was observed, but also for “how long.”
In contrast to the classical view of development as a preprogrammed and deterministic process, recent studies have demonstrated that stochastic perturbations of highly non-linear systems may underlie the emergence and stability of biological patterns. Herein, we address the question of whether noise contributes to the generation of the stereotypical temporal pattern in gene expression during flower development. We modeled the regulatory network of organ identity genes in the Arabidopsis thaliana flower as a stochastic system. This network has previously been shown to converge to ten fixed-point attractors, each with gene expression arrays that characterize inflorescence cells and primordial cells of sepals, petals, stamens, and carpels. The network used is binary, and the logical rules that govern its dynamics are grounded in experimental evidence. We introduced different levels of uncertainty in the updating rules of the network. Interestingly, for a noise level of around 0.5–10%, the system exhibited a sequence of transitions among attractors that mimics the sequence of gene activation configurations observed in real flowers. We also implemented the gene regulatory network as a continuous system using the Glass model of differential equations, which can be considered a first approximation to kinetic reaction equations but is not necessarily equivalent to the Boolean model. Interestingly, the Glass dynamics recover a temporal sequence of attractors that is qualitatively similar, although not identical, to that obtained using the Boolean model. Thus, time ordering in the emergence of cell-fate patterns is not an artifact of synchronous updating in the Boolean model. Therefore, our model provides a novel explanation for the emergence and robustness of the ubiquitous temporal pattern of floral organ specification.
It also constitutes a new approach to understanding morphogenesis, providing predictions on the population dynamics of cells with different genetic configurations during development.
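The noisy-update scheme can be sketched generically as follows, using a toy two-gene toggle network rather than the actual floral organ identity network; each gene's synchronously updated value is flipped with probability p, the noise level in the updating rules:

```python
import random

def noisy_step(state, rules, p):
    """Synchronous Boolean update in which each gene's new value is
    flipped with probability p (stochastic updating rules)."""
    nxt = tuple(rule(state) for rule in rules)
    return tuple(b ^ (random.random() < p) for b in nxt)

# toy toggle network with two fixed-point attractors, (0,1) and (1,0)
rules = (lambda s: 1 - s[1], lambda s: 1 - s[0])

random.seed(0)
state, visited = (0, 1), set()
for _ in range(5000):
    state = noisy_step(state, rules, p=0.05)
    if state in {(0, 1), (1, 0)}:
        visited.add(state)
print(sorted(visited))
```

In the deterministic limit (p = 0) the system stays in whichever attractor it starts in; the noise term is what generates a sequence of transitions among attractors, which in the flower model reproduces the observed temporal order of gene activation configurations.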
Resting state networks (RSNs) show a surprisingly coherent and robust spatiotemporal organization. Previous theoretical studies demonstrated that these patterns can be understood as emergent on the basis of the underlying neuroanatomical connectivity skeleton. Integrating biologically realistic, DTI/DSI (diffusion tensor imaging/diffusion spectrum imaging)-based neuroanatomical connectivity into a brain model of Ising spin dynamics, we found a system with multiple attractors, which can be studied analytically. The multistable attractor landscape thus defines a functionally meaningful dynamic repertoire of the brain network that is inherently present in the neuroanatomical connectivity. We demonstrate that the greater the entropy of the attractors, the richer the dynamical repertoire and, consequently, the greater the computational capability of the brain network. We therefore hypothesize that human brain connectivity developed a scale-free type of architecture in order to store a large number of different and flexibly accessible brain functions.
computational neuroscience; fMRI modeling; ongoing activity; resting state; connectivity matrix
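A small self-contained analogue of this setup can be sketched as follows (a random symmetric coupling matrix stands in for the DTI/DSI connectome, zero-temperature dynamics replace finite-temperature spin dynamics, and basin-occupancy entropy serves as one simple proxy for the "entropy of attractors"):

```python
import itertools
import math
import random

def attractor(J, state):
    """Zero-temperature asynchronous Ising dynamics: flip a spin only if
    that strictly lowers the energy; terminates at a local minimum."""
    s = list(state)
    n = len(s)
    changed = True
    while changed:
        changed = False
        for i in range(n):
            h = sum(J[i][j] * s[j] for j in range(n) if j != i)
            if h != 0 and (1 if h > 0 else -1) != s[i]:
                s[i] = 1 if h > 0 else -1
                changed = True
    return tuple(s)

random.seed(2)
n = 8
J = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        J[i][j] = J[j][i] = random.gauss(0, 1)   # random symmetric couplings

# map every spin state to its attractor and measure basin-occupancy entropy
basins = {}
for s0 in itertools.product((-1, 1), repeat=n):
    a = attractor(J, s0)
    basins[a] = basins.get(a, 0) + 1
total = 2 ** n
H = -sum(c / total * math.log2(c / total) for c in basins.values())
print(len(basins), round(H, 2))   # multiple attractors -> positive entropy
```

Because the dynamics are odd-symmetric, attractors come in ± pairs, so there are always at least two; a richer connectivity structure spreads the basins more evenly and raises H, the sense in which higher attractor entropy means a richer dynamical repertoire.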
The prefrontal cortex (PFC) plays a crucial role in flexible cognitive behavior by representing task-relevant information in its working memory. Working memory with sustained neural activity is described as a neural dynamical system composed of multiple attractors, each of which corresponds to an active state of a cell assembly representing a fragment of information. Recent studies have revealed that the PFC not only represents multiple sets of information but also switches between multiple representations and transforms one set of information into another depending on the given task context. This representational switching between different sets of information is possibly generated endogenously by flexible network dynamics, but the details of the underlying mechanisms are unclear. Here we propose a dynamically reorganizable attractor network model based on certain internal changes in synaptic connectivity, or short-term plasticity. We construct a network model based on a spiking neuron model with dynamical synapses, which can qualitatively reproduce the experimentally demonstrated representational switching in the PFC when a monkey was performing a goal-oriented action-planning task. The model holds multiple sets of information that are required for action planning before and after representational switching through reconfiguration of functional cell assemblies. Furthermore, we analyzed the population dynamics of this model with a mean-field model and show that the changes in the cell assemblies' configuration correspond to changes in the attractor structure, which can be viewed as a bifurcation process of the dynamical system. This dynamical reorganization of a neural network could be a key to uncovering the mechanism of flexible information processing in the PFC.
The prefrontal cortex plays a highly flexible role in various cognitive tasks e.g., decision making and action planning. Neurons in the prefrontal cortex exhibit flexible representation or selectivity for task relevant information and are involved in working memory with sustained activity, which can be modeled as attractor dynamics. Moreover, recent experiments revealed that prefrontal neurons not only represent parametric or discrete sets of information but also switch the representation and transform a set of information to another set in order to match the context of the required task. However, underlying mechanisms of this flexible representational switching are unknown. Here we propose a dynamically reorganizable attractor network model in which short-term modulation of the synaptic connections reconfigures the structure of neural attractors by assembly and disassembly of a network of cells to produce flexible attractor dynamics. On the basis of computer simulation as well as theoretical analysis, we showed that this model reproduced experimentally demonstrated representational switching, and that switching on certain characteristic axes defining neural dynamics well describes the essence of the representational switching. This model has the potential to provide unique insights about the flexible information representations and processing in the cortical network.
Coupling local, slowly adapting variables to an attractor network makes it possible to destabilize all attractors, turning them into attractor ruins. The resulting attractor relict network may show ongoing autonomous latching dynamics. We propose to use two generating functionals for the construction of attractor relict networks: a Hopfield energy functional generating a neural attractor network, and a functional based on information-theoretical principles, encoding the information content of the neural firing statistics, which induces latching transitions from one transiently stable attractor ruin to the next. We investigate the influence of stress, in the form of conflicting optimization targets, on the resulting dynamics. Objective-function stress is absent when the target level for the mean neural activity is identical for the two generating functionals; the resulting latching dynamics is then found to be regular. Objective-function stress is present when the respective target activity levels differ, inducing intermittent, bursting latching dynamics.
Tumor cells are considered to have an aberrant cell state, and some evidence indicates that different developmental states appear during tumorigenesis. Embryonic development and stem cell differentiation are ordered processes in which the sequence of events over time is highly conserved. The "cancer attractor" concept integrates normal developmental processes and tumorigenesis into a high-dimensional "cell state space" and provides a reasonable theoretical explanation of the relationship between these two biological processes. However, this relationship is hard to describe using existing experimental data; moreover, measuring different developmental states is also difficult.
Here, by applying a novel time-ordered linear model based on a co-bisector, which represents the joint direction of a series of vectors, we described the trajectory of a developmental process as a line and showed the different developmental states of tumor cells from a developmental-timescale perspective in a cell state space. This model was used to transform time-course developmental expression profiles of human ESCs and of normal mouse liver, ovary, and lung tissue into "cell developmental state lines". These cell state lines were then used to observe the developmental states of different tumors and their corresponding normal samples. Mouse liver and ovarian tumors showed differing degrees of similarity to early developmental stages. Similarly, human glioma cells and ovarian tumors became developmentally "younger".
The time-ordered linear model captures linearly projected developmental trajectories in a cell state space. It also reflects how gene expression tends to change over time from the developmental-timescale perspective, and our findings indicate that different developmental states arise during tumorigenesis in different tissues.
Randomly connected recurrent networks of excitatory groups of neurons can possess a multitude of attractor states. When the internal excitatory synapses of these networks are depressing, the attractor states can be destabilized with increasing input. This leads to an itinerancy, where with either repeated transient stimuli, or increasing duration of a single stimulus, the network activity advances through sequences of attractor states. We find that the resulting network state, which persists beyond stimulus offset, can encode the number of stimuli presented via a distributed representation of neural activity with non-monotonic tuning curves for most neurons. Increased duration of a single stimulus is encoded via different distributed representations, so unlike an integrator, the network distinguishes separate successive presentations of a short stimulus from a single presentation of a longer stimulus with equal total duration. Moreover, different amplitudes of stimulus cause new, distinct activity patterns, such that changes in stimulus number, duration and amplitude can be distinguished from each other. These properties of the network depend on dynamic depressing synapses, as they disappear if synapses are static. Thus, short-term synaptic depression allows a network to store separately the different dynamic properties of a spatially constant stimulus.
short-term plasticity; dynamic synapses; attractor networks; short-term memory; distributed coding; high-dimensional representation
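The destabilizing role of depression can be seen in a minimal Tsodyks–Markram-style resource sketch (illustrative parameters; depression only, no facilitation): under sustained presynaptic firing, the synaptic resource x is depleted, so the recurrent drive that sustains an attractor state collapses, allowing the network to advance to the next state.

```python
def depressing_drive(rate, U=0.5, tau_rec=0.8, dt=0.001, T=2.0):
    """Synaptic resource x recovers with time constant tau_rec and is
    depleted in proportion to the presynaptic rate; the effective
    recurrent drive is U * x * rate (illustrative parameters)."""
    x, drive = 1.0, []
    for _ in range(int(T / dt)):
        x += dt * ((1.0 - x) / tau_rec - U * x * rate)
        drive.append(U * x * rate)
    return drive

d = depressing_drive(rate=50.0)   # sustained high presynaptic firing
print(d[0] > d[-1])               # the drive sustaining the attractor collapses
```

With static synapses (x pinned at 1) the drive would stay constant and the attractor would persist indefinitely, which is why the sequence-stepping and stimulus-counting behavior above disappears without dynamic depression.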
An endogenous molecular-cellular network underlying both normal and abnormal functions is assumed to exist. This endogenous network forms a nonlinear stochastic dynamical system with many stable attractors in its functional landscape. Normal or abnormal robust states can be decided by this network in a manner similar to a neural network. In this context, cancer is hypothesized to be one of its robust intrinsic states.
This hypothesis implies that a nonlinear stochastic mathematical model of cancer is constructible based on available experimental data, and that its quantitative predictions are directly testable. Within such a model, the genesis and progression of cancer may be viewed as stochastic transitions between different attractors. It further suggests that progressions are not arbitrary. Other important issues in cancer, such as genetics versus epigenetics, the double-edged effect, and dormancy, are discussed in light of the present hypothesis. A different set of strategies for cancer prevention, cure, and care is therefore suggested.
In this paper we introduce a new method for quantifying differences between time series with an underlying limit cycle attractor, using live movement data, and apply it to gait data. Our intention is to identify gait pattern differences between diverse situations and classify them at the group and individual-subject levels. First we approximated the limit cycle attractors, from which three measures were calculated: δM quantifies the difference between two attractors (a measure of the difference between two movements), δD computes the difference between the two associated deviations of the state vector away from the attractor (a measure of the change in movement variation), and δF, a combination of the previous two, is an index of the overall change. As an application, we quantified these measures for walking on a treadmill under three different conditions: normal walking, dual-task walking, and walking with additional weights at the ankle. The new method successfully differentiated between the three walking conditions. Day-to-day repeatability, studied with repeated trials approximately one week apart, indicated excellent reliability for δM (ICCave > 0.73 with no differences across days; p > 0.05) and good reliability for δD (ICCave = 0.414 to 0.610 with no differences across days; p > 0.05). Based on its ability to detect differences between varying gait conditions and the good repeatability of the measures across days, the new method is recommended as an alternative to expensive and time-consuming techniques of gait classification assessment. In particular, it is an easy-to-use diagnostic tool to quantify clinical changes in neurological patients.
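On synthetic data, the attractor measures can be sketched as follows (the cycle model, noise levels, condition names, and the within/between comparison are illustrative assumptions, and the paper's combined index δF is omitted): the attractor is approximated as the phase-wise mean of time-normalized cycles, δM as an RMS difference between two attractors, and δD as an RMS difference between the associated deviation curves.

```python
import math
import random

def limit_cycle(cycles):
    """Approximate the attractor as the phase-wise mean of normalized cycles."""
    n = len(cycles[0])
    return [sum(c[k] for c in cycles) / len(cycles) for k in range(n)]

def deviation(cycles, attr):
    """Phase-wise RMS deviation of the state away from the attractor."""
    return [math.sqrt(sum((c[k] - attr[k]) ** 2 for c in cycles) / len(cycles))
            for k in range(len(attr))]

def delta(a, b):
    """RMS difference between two phase-indexed curves."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

random.seed(3)
N = 50   # samples per time-normalized gait cycle

def record(amp, noise, n_cycles=30):
    return [[amp * math.sin(2 * math.pi * k / N) + random.gauss(0, noise)
             for k in range(N)] for _ in range(n_cycles)]

normal = record(1.0, 0.05)
loaded = record(1.2, 0.10)   # e.g. ankle weights: larger amplitude, more variability

aN, aL = limit_cycle(normal), limit_cycle(loaded)
dM = delta(aN, aL)                                        # attractor difference
dD = delta(deviation(normal, aN), deviation(loaded, aL))  # variability difference
dM_within = delta(limit_cycle(normal[:15]), limit_cycle(normal[15:]))
print(dM > dM_within)   # conditions separate beyond within-condition noise
```

The within-condition split gives a baseline for δM under pure measurement noise, so a between-condition δM well above it indicates a genuine change of the movement attractor.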
The finding of a genome-wide oscillation in transcription that gates cells into S phase and coordinates mitochondrial and metabolic functions has altered our understanding of how the cell cycle is timed and how stable cellular phenotypes are maintained. Here we present the evidence and arguments in support of the idea that everything oscillates, and the rationale for viewing the cell as an attractor in which deterministic noise can be tuned by appropriate coupling among the many feedback loops, or regulons, that make up the transcriptional-respiratory attractor cycle (TRAC). The existence of this attractor also explains many of the dynamic macroscopic properties of the cell cycle, and the TRAC appears to be the timekeeping oscillator in both cell cycles and circadian rhythms. The path taken by this primordial oscillator in the course of differentiation or drug response may involve period-doubling behavior. Evidence for a relatively high-frequency timekeeping oscillator in yeast and mammalian cells comes from expression array analysis and GC-MS in the case of yeast, and primarily from macroscopic measures of phase response to perturbation in the case of mammalian cells. Low-amplitude, genome-wide oscillations, a ubiquitous but often unrecognized attribute of phenotype, could be a source of seemingly intractable biological noise in microarray and proteomic studies. These oscillations in transcript and protein levels, and the repeated cycles of synthesis and degradation they require, represent a high energy cost to the cell which must, from an evolutionary point of view, be recovered as essential information. We suggest that the information contained in this genome-wide oscillation is the dynamic code that organizes a stable phenotype from an otherwise passive genome.
Mining gene expression profiles has proven valuable for identifying signatures serving as surrogates of cancer phenotypes. However, the similarities of such signatures across different cancer types have not been strong enough to conclude that they represent a universal biological mechanism shared among multiple cancer types. Here we present a computational method for generating signatures using an iterative process that converges to one of several precise attractors defining signatures representing biomolecular events, such as cell transdifferentiation or the presence of an amplicon. By analyzing rich gene expression datasets from different cancer types, we identified several such biomolecular events, some of which are universally present in all tested cancer types in nearly identical form. Although the method is unsupervised, we show that it often leads to attractors with strong phenotypic associations. We present several such multi-cancer attractors, focusing on three that are prominent and sharply defined in all cases: a mesenchymal transition attractor strongly associated with tumor stage, a mitotic chromosomal instability attractor strongly associated with tumor grade, and a lymphocyte-specific attractor.
Cancer is known to be characterized by several unifying biological capabilities or “hallmarks.” However, attempts to computationally identify patterns, such as gene expression signatures, shared across many different cancer types have been largely unsuccessful. A typical approach has been to classify samples into mutually exclusive subtypes, each of which is characterized by a particular gene signature. Although occasional similarities of such signatures in different cancer types exist, these similarities have not been sufficiently strong to conclude that they reflect the same biological event. By contrast, we have developed a computational methodology that has identified some signatures of co-expressed genes exhibiting remarkable similarity across many different cancer types. These signatures appear as stable “attractors” of an iterative computational procedure that tends to collect mutually associated genes, so that its convergence can point to the core (“heart”) of the underlying biological co-expression mechanism. One of these “pan-cancer” attractors corresponds to a transdifferentiation of cancer cells empowering them with invasiveness and motility. Another represents a mitotic chromosomal instability of cancer cells. A third attractor is lymphocyte-specific.
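The iterative procedure described above can be sketched as follows. This is a hedged reimplementation on synthetic data, not the authors' code: the association measure (Pearson correlation), the weighting exponent, and the convergence test are all assumptions standing in for the paper's own choices.

```python
import numpy as np

def attractor_signature(expr, seed_idx, exponent=3.0, tol=1e-6, max_iter=100):
    """Iterative 'attractor' sketch: start from a seed gene, form a metagene
    as an association-weighted average of all genes, recompute associations,
    and iterate toward a fixed point. expr: genes x samples matrix."""
    metagene = expr[seed_idx]
    assoc = None
    for _ in range(max_iter):
        # Pearson correlation of every gene with the current metagene
        z = (expr - expr.mean(1, keepdims=True)) / expr.std(1, keepdims=True)
        m = (metagene - metagene.mean()) / metagene.std()
        assoc = z @ m / expr.shape[1]
        # sharpen the weighting so strongly associated genes dominate
        w = np.clip(assoc, 0, None) ** exponent
        new = w @ expr / w.sum()
        if np.linalg.norm(new - metagene) < tol * np.linalg.norm(metagene):
            return new, assoc
        metagene = new
    return metagene, assoc
```

On data containing one co-expressed module, the iteration tends to converge to that module's shared signal regardless of which module gene seeds it, which is the sense in which the signature is an "attractor" of the procedure.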
The orderly formation of the nervous system requires a multitude of complex, integrated and simultaneously occurring processes. Neural progenitor cells expand through proliferation, commit to different cell fates, exit the cell cycle, generate different neuronal and glial cell types, and new neurons migrate to specified areas and establish synaptic connections. Gestational and perinatal exposure to environmental toxicants, pharmacological agents and drugs of abuse produce immediate, persistent or late-onset alterations in behavioral, cognitive, sensory and/or motor functions. These alterations reflect the disruption of the underlying processes of CNS formation and development. To determine the neurotoxic mechanisms that underlie these deficits it is necessary to analyze and dissect the complex molecular processes that occur during the proliferation, neurogenesis and differentiation of cells. This symposium will provide a framework for understanding the orchestrated events of neurogenesis, the coordination of proliferation and cell fate specification by selected genes, and the effects of well-known neurotoxicants on neurogenesis in the retina, hippocampus and cerebellum. These three tissues share common developmental profiles, mediate diverse neuronal activities and function, and thus provide important substrates for analysis. This paper summarizes four invited talks that were presented at the 12th International Neurotoxicology Association meeting held in Jerusalem, Israel during the summer of 2009. Donald A. Fox described the structural and functional alterations following low-level gestational lead exposure in children and rodents that produced a supernormal electroretinogram and selective increases in neurogenesis and cell proliferation of late-born retinal neurons (rod photoreceptors and bipolar cells), but not Müller glia cells, in mice. 
Lisa Opanashuk discussed how dioxin (TCDD) binding to the aryl hydrocarbon receptor (AhR), a transcription factor that regulates xenobiotic-metabolizing enzymes and growth factors, increased granule cell formation and apoptosis in the developing mouse cerebellum. Alex Zharkovsky described how early postnatal lead exposure decreased cell proliferation, neurogenesis and gene expression in the dentate gyrus of the adult hippocampus, and the resultant behavioral effects. Bernard Weiss illustrated how environmental endocrine disruptors produced age- and gender-dependent alterations in synaptogenesis and cognitive behavior.
Keywords: retina; cerebellum; hippocampus; lead; dioxin; endocrine disruptors
It is well established that the variability of neural activity across trials, as measured by the Fano factor, is elevated above the value of one expected for a Poisson process. This fact poses limits on information encoding by the neural activity. However, a series of recent neurophysiological experiments has changed this traditional view. Single-cell recordings across a variety of species, brain areas, brain states and stimulus conditions demonstrate a remarkable reduction of neural variability when an external stimulus is applied and when attention is allocated towards a stimulus within a neuron's receptive field, suggesting an enhancement of information encoding. Using a heterogeneously connected neural network model whose dynamics exhibits multiple attractors, we demonstrate here how this variability reduction can arise from a network effect. In the spontaneous state, we show that the high degree of neural variability is mainly due to fluctuation-driven excursions from attractor to attractor. This occurs when, in parameter space, the network's working point lies near the bifurcation that allows multistable attractors. The application of an external excitatory drive by stimulation or attention stabilizes one specific attractor, eliminating the transitions between the different attractors and resulting in a net decrease in neural variability across trials. Importantly, non-responsive neurons also exhibit a reduction of variability. Finally, this reduced variability is found to arise from an increased regularity of the neural spike trains. In conclusion, these results suggest that the variability reduction under stimulation and attention is a property of neural circuits.
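The Fano factor and the attractor-switching intuition can be illustrated with a toy simulation (not the paper's spiking-network model): spontaneous activity is modeled as trial-to-trial switching between two attractor firing rates, which inflates the Fano factor above the Poisson value of one, while stimulation pins a single rate.

```python
import numpy as np

def fano_factor(counts):
    """Fano factor: variance of spike counts across trials / mean count."""
    return counts.var(ddof=1) / counts.mean()

rng = np.random.default_rng(1)
n_trials = 2000

# Spontaneous state (toy stand-in for the paper's mechanism): on each trial
# the network settles into one of two attractors with different firing
# rates, so counts are a mixture of two Poisson distributions.
rates_spont = rng.choice([5.0, 15.0], size=n_trials)
counts_spont = rng.poisson(rates_spont)

# Stimulated/attended state: one attractor is stabilized -> a single rate.
counts_stim = rng.poisson(10.0, size=n_trials)
```

The rate-mixture adds across-trial variance on top of the within-trial Poisson variance, so attractor switching alone is enough to raise the Fano factor well above one; removing the switching brings it back toward one.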
To understand how neurons encode information, neuroscientists record their firing activity while an animal executes a given task over many trials. Surprisingly, the neural response has been found to be highly variable, which a priori limits the encoding of information by these neurons. However, recent experiments have shown that this variability is reduced when the animal receives a stimulus or attends to a particular one, suggesting an enhancement of information encoding. One known cause of neural variability is that individual neurons receive an input that fluctuates around their firing threshold. We demonstrate here that all of these experimental results can arise naturally from the dynamics of a neural network. Using a realistic model, we show that neural variability during spontaneous activity is particularly high because input noise induces large fluctuations between multiple (but unstable) network states. With stimulation or attention, one particular network state is stabilized and fluctuations decrease, leading to a reduction in neural variability. In conclusion, our results suggest that the observed variability reduction is a property of the neural circuits of the brain.
Measures of local dynamic stability, such as the local divergence exponent (λs*), quantify how quickly small perturbations deviate from an attractor that defines the motion. When the governing equations of motion are unknown, an attractor can be reconstructed by defining an appropriate state space. However, state space definitions are not unique, and accepted methods for defining state spaces have not been established for biomechanical studies. This study first determined how different state space definitions affected λs* for the Lorenz attractor, for which exact theoretical values were known a priori. Values of λs* exhibited errors < 10% for 7 of the 9 state spaces tested. State spaces containing redundant information performed the poorest. To examine these effects in a biomechanical context, 20 healthy subjects performed a repetitive sawing-like task for 5 minutes before and after fatigue. Local stability of pre- and post-fatigue shoulder movements was compared for 6 different state space definitions. Here, λs* decreased post-fatigue for all 6 state spaces, and the differences were statistically significant for 3 of them. For state spaces defined using delay embedding, increasing the embedding dimension decreased λs* in both the Lorenz and experimental data. Overall, our findings suggest that direct numerical comparisons between studies that use different state space definitions should be made with caution. However, trends across experimental comparisons appear to persist. Biomechanical state spaces constructed using positions and velocities, or delay reconstruction of individual states, are likely to provide consistent results.
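Delay embedding and a crude local-divergence curve can be sketched as follows. This is an illustrative reconstruction in the spirit of Rosenstein-style estimators, with an assumed Theiler window to exclude temporally adjacent neighbors; it is not the study's analysis code.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Delay-embed a scalar series: row i is
    (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def divergence_curve(states, horizon, theiler=10):
    """Pair each state with its nearest neighbor (excluding temporally
    close points via a Theiler window) and track the mean log distance
    between the two trajectories as they evolve. The slope of the
    initial, roughly linear part of this curve estimates the local
    divergence exponent."""
    n = len(states) - horizon
    d = np.linalg.norm(states[:n, None] - states[None, :n], axis=-1)
    idx = np.arange(n)
    d[np.abs(idx[:, None] - idx[None, :]) <= theiler] = np.inf
    nn = d.argmin(axis=1)  # nearest neighbor of each point
    curve = np.empty(horizon)
    for k in range(horizon):
        gap = np.linalg.norm(states[idx + k] - states[nn + k], axis=-1)
        curve[k] = np.mean(np.log(gap + 1e-12))
    return curve
```

Different choices of `dim` and `tau` define different state spaces, which is exactly the degree of freedom whose effect on λs* the study quantifies.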
A fundamental problem in neuroscience is understanding how working memory—the ability to store information at intermediate timescales, like tens of seconds—is implemented in realistic neuronal networks. The most likely candidate mechanism is the attractor network, and a great deal of effort has gone toward investigating it theoretically. Yet, despite almost a quarter century of intense work, attractor networks are not fully understood. In particular, there are still two unanswered questions. First, how is it that attractor networks exhibit irregular firing, as is observed experimentally during working memory tasks? And second, how many memories can be stored under biologically realistic conditions? Here we answer both questions by studying an attractor neural network in which inhibition and excitation balance each other. Using mean-field analysis, we derive a three-variable description of attractor networks. From this description it follows that irregular firing can exist only if the number of neurons involved in a memory is large. The same mean-field analysis also shows that the number of memories that can be stored in a network scales with the number of excitatory connections, a result that has been suggested for simple models but never shown for realistic ones. Both of these predictions are verified using simulations with large networks of spiking neurons.
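The attractor-memory idea can be illustrated with a classical Hopfield network, a much-simplified stand-in for the balanced spiking network analyzed here: Hebbian weights store binary patterns as fixed points, and a corrupted cue relaxes back to the stored memory. In this idealized model, too, storage capacity grows with the number of connections per neuron.

```python
import numpy as np

def hopfield_weights(patterns):
    """Hebbian weights storing binary (+/-1) patterns as attractors;
    patterns has shape (n_patterns, n_neurons)."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)  # no self-connections
    return w

def recall(w, state, steps=10):
    """Synchronous threshold updates until the state settles
    into (or near) an attractor."""
    for _ in range(steps):
        state = np.where(w @ state >= 0, 1, -1)
    return state
```

Note that this toy network converges to a frozen fixed point, whereas the balanced network in the paper sustains memories with irregular, fluctuating firing; the sketch captures only the attractor aspect, not the irregularity result.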
A critical component of cognition is memory—the ability to store information, and to readily retrieve it on cue. Existing models postulate that recalled items are represented by self-sustained activity; that is, they are represented by activity that can exist in the absence of input. These models, however, are incomplete, in the sense that they do not explain two salient experimentally observed features of persistent activity: low firing rates and high neuronal variability. Here we propose a model that can explain both. The model makes two predictions: changes in synaptic weights during learning should be much smaller than the background weights, and the fraction of neurons selective for a memory should be above some threshold. Experimental confirmation of these predictions would provide strong support for the model, and constitute an important step toward a complete theory of memory storage and retrieval.