The balance between maintenance of the stem cell state and terminal differentiation is influenced by the cellular environment. The switching between these states has long been understood as a transition between attractor states of a molecular network. In this view, stochastic fluctuations are either suppressed or serve to trigger the transition, but they do not themselves determine the attractor states.
We present a novel mathematical concept in which stem cell and progenitor population dynamics are described as a probabilistic process that arises from cell proliferation and small fluctuations in the state of differentiation. These state fluctuations reflect random transitions between different activation patterns of the underlying regulatory network. Importantly, the associated noise amplitudes are state-dependent and set by the environment. Their variability determines the attractor states, and thus actually governs population dynamics. This model quantitatively reproduces the observed dynamics of differentiation and dedifferentiation in promyelocytic precursor cells.
Consequently, state-specific noise modulation by external signals can be instrumental in controlling stem cell and progenitor population dynamics. We propose follow-up experiments for quantifying the imprinting influence of the environment on cellular noise regulation.
Representing and analyzing complex networks remains a roadblock to creating dynamic network models of biological processes and pathways. The study of cell fate transitions can reveal much about the transcriptional regulatory programs that underlie these phenotypic changes and give rise to the coordinated patterns in expression changes that we observe. The application of gene expression state space trajectories to capture cell fate transitions at the genome-wide level is one approach currently used in the literature. In this paper, we analyze the gene expression dataset of Huang et al. (2005), which follows the differentiation of promyelocytes into neutrophil-like cells in the presence of the inducers dimethyl sulfoxide and all-trans retinoic acid. Huang et al. (2005) build on the work of Kauffman (2004), who proposed the attractor hypothesis, stating that cells exist in an expression landscape and their expression trajectories converge towards attractor states in this landscape. We propose an alternative interpretation that explains this convergent behavior by recognizing that there are two types of processes participating in these cell fate transitions—core processes that include the specific differentiation pathways of promyelocytes to neutrophils, and transient processes that capture those pathways and responses specific to the inducer. Using functional enrichment analyses, specific biological examples, and an analysis of the trajectories and their core and transient components, we validate our hypothesis using the Huang et al. (2005) dataset.
Understanding how cells differentiate from one state to another is a fundamental problem in biology with implications for better understanding evolution, the development of complex organisms from a single fertilized egg, and the etiology of human disease. One way to view these processes is to examine cells as “complex adaptive systems” where the state of all genes in a cell (more than 20,000 genes) determines that cell's “state” at a given point in time. In this view, differentiating cells move along a path in “state space” from one stable “attractor” to another. In a 2005 paper, Sui Huang and colleagues presented an experimental model in which they claimed to have evidence for such attractors and for the transitions between them. The problem with this approach is that although it is intuitively appealing, it lacks predictive power. Reanalyzing Huang's data, we demonstrate that there is an alternative interpretation that still allows for a state space description but has greater power to make testable predictions. Specifically, we show that these abstract state space trajectories can be mapped onto better-known pathways and represented as a “core” differentiation pathway and “transient” processes that capture the effects of the treatments that initiate differentiation.
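The core/transient decomposition lends itself to a simple numerical sketch. The following is a minimal illustration on synthetic data (the gene count, time points, and decay profile are assumptions for illustration, not the Huang et al. data): the core is taken as the component shared by the two inducer trajectories, and the transient as each inducer-specific remainder.

```python
import numpy as np

rng = np.random.default_rng(0)
G, T = 500, 10                                # hypothetical genes x time points
core_prog = np.outer(rng.normal(size=G), np.linspace(0.0, 1.0, T))  # shared program
decay = np.exp(-np.arange(T))                 # inducer-specific effects fade
X_dmso = core_prog + np.outer(rng.normal(size=G), decay)
X_atra = core_prog + np.outer(rng.normal(size=G), decay)

core = 0.5 * (X_dmso + X_atra)                # shared "core" component
trans_dmso = X_dmso - core                    # DMSO-specific transient
trans_atra = X_atra - core                    # atRA-specific transient

# The transient components shrink over time while the two trajectories converge,
# the signature of a common differentiation endpoint reached via inducer-specific routes.
t_norm = np.linalg.norm(trans_dmso, axis=0)
gap = np.linalg.norm(X_dmso - X_atra, axis=0)
print(t_norm[0] > t_norm[-1], gap[0] > gap[-1])
```

Under this construction the late time points are dominated by the core program, which is exactly the convergence in state space that the attractor interpretation would otherwise be invoked to explain.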
Stem cell differentiation and the maintenance of self-renewal are intrinsically complex processes requiring the coordinated dynamic expression of hundreds of genes and proteins in precise response to external signalling cues. Numerous recent reports have used both experimental and computational techniques to dissect this complexity. These reports suggest that the control of cell fate has both deterministic and stochastic elements: complex underlying regulatory networks define stable molecular ‘attractor’ states towards which individual cells are drawn over time, whereas stochastic fluctuations in gene and protein expression levels drive transitions between coexisting attractors, ensuring robustness at the population level.
Cell fate decisions remarkably generate a specific differentiation path among the multiple possibilities that can arise through the complex interplay of high-dimensional genome activities. The coordinated action of thousands of genes in switching cell fate indicates the existence of stable attractors guiding the process. However, the origins of the intracellular mechanisms that create a “cellular attractor” remain unknown. Here, we examined the collective behavior of genome-wide expression during neutrophil differentiation induced by two different stimuli, dimethyl sulfoxide (DMSO) and all-trans-retinoic acid (atRA). To overcome the difficulty of dealing with single-gene expression noise, we grouped genes into ensembles and analyzed their expression dynamics in a correlation space defined by Pearson correlation and mutual information. The standard deviation of the correlation distributions of gene ensembles decreases as the ensemble size increases, following an inverse square root law, both for ensembles chosen randomly from the whole genome and for ensembles ranked according to expression variance across time. Choosing an ensemble size of 200 genes, we show that the two probability distributions of correlations of randomly selected genes for the atRA and DMSO responses overlap after 48 hours, defining the neutrophil attractor. Next, tracking the ranked ensembles' trajectories, we noticed that only certain ensembles, not all, fall into the attractor in a fractal-like manner. Removing these genome elements from the whole genome, for both the atRA and DMSO responses, destroys the attractor, providing evidence for the existence of specific genome elements (named the “genome vehicle”) responsible for the neutrophil attractor. Notably, within the genome vehicle, genes with low or moderate expression changes, which are often considered noisy and insignificant, are essential components for the creation of the neutrophil attractor.
Further investigations along with our findings might provide a comprehensive mechanistic view of cell fate decision.
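The inverse square root law is, at its core, a central-limit effect, which can be checked with a quick simulation. The sketch below uses surrogate Gaussian profiles rather than the actual expression data (the gene and time-point counts are assumptions): the mean correlation of an ensemble with a reference profile has a spread across random ensembles that shrinks as one over the square root of the ensemble size.

```python
import numpy as np

rng = np.random.default_rng(1)
G, T = 5000, 18                         # surrogate genes x time points
expr = rng.normal(size=(G, T))
ref = rng.normal(size=T)                # a reference expression profile

# correlation of every gene's profile with the reference
gene_corr = np.array([np.corrcoef(expr[g], ref)[0, 1] for g in range(G)])

def ensemble_std(n, n_ensembles=2000):
    """Std across random ensembles of the ensemble-mean correlation."""
    means = [gene_corr[rng.choice(G, size=n, replace=False)].mean()
             for _ in range(n_ensembles)]
    return np.std(means)

s25, s100 = ensemble_std(25), ensemble_std(100)
print(s25 / s100)   # expected near sqrt(100 / 25) = 2
```

Quadrupling the ensemble size roughly halves the spread, matching the scaling reported above for both random and variance-ranked ensembles.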
Cell lineage commitment and differentiation are governed by a complex gene regulatory network. Disruption of these processes by inappropriate regulatory signals and by mutational rewiring of the network can lead to tumorigenesis. Cancer cells often exhibit immature or embryonic traits and dysregulated developmental genes can act as oncogenes. However, the prevailing paradigm of somatic evolution and multi-step tumorigenesis, while useful in many instances, offers no logically coherent reason for why oncogenesis recapitulates ontogenesis. The formal concept of “cancer attractors”, derived from an integrative, complex systems approach to gene regulatory networks, may provide a natural explanation. Here we present the theory of attractors in gene network dynamics and review the concept of cell types as attractors. We argue that cancer cells are trapped in abnormal attractors and discuss this concept in the light of recent ideas in cancer biology, including cancer genomics and cancer stem cells, as well as the implications for differentiation therapy.
Hebbian cell assemblies provide a theoretical framework for the modeling of cognitive processes that grounds them in the underlying physiological neural circuits. Recently we have presented an extension of cell assemblies by operational components which makes it possible to model aspects of language, rules, and complex behaviour. In the present work we study the generation of syntactic sequences using operational cell assemblies timed by unspecific trigger signals. Syntactic patterns are implemented in terms of hetero-associative transition graphs in attractor networks which cause a directed flow of activity through the neural state space. We provide regimes for parameters that enable an unspecific excitatory control signal to switch reliably between attractors in accordance with the implemented syntactic rules. If several target attractors are possible in a given state, noise in the system in conjunction with a winner-takes-all mechanism can randomly choose a target. Disambiguation can also be guided by context signals or specific additional external signals. Given a permanently elevated level of external excitation, the model can enter an autonomous mode, where it generates temporal grammatical patterns continuously.
Cell assemblies; Attractor networks; Grammar; Language; Behaviour
In contrast to the classical view of development as a preprogrammed and deterministic process, recent studies have demonstrated that stochastic perturbations of highly non-linear systems may underlie the emergence and stability of biological patterns. Herein, we address the question of whether noise contributes to the generation of the stereotypical temporal pattern in gene expression during flower development. We modeled the regulatory network of organ identity genes in the Arabidopsis thaliana flower as a stochastic system. This network has previously been shown to converge to ten fixed-point attractors, each with gene expression arrays that characterize inflorescence cells and primordial cells of sepals, petals, stamens, and carpels. The network used is binary, and the logical rules that govern its dynamics are grounded in experimental evidence. We introduced different levels of uncertainty in the updating rules of the network. Interestingly, for noise levels of around 0.5–10%, the system exhibited a sequence of transitions among attractors that mimics the sequence of gene activation configurations observed in real flowers. We also implemented the gene regulatory network as a continuous system using the Glass model of differential equations, which can be considered a first approximation of kinetic-reaction equations but is not necessarily equivalent to the Boolean model. Interestingly, the Glass dynamics recovers a temporal sequence of attractors that is qualitatively similar, although not identical, to that obtained using the Boolean model. Thus, time ordering in the emergence of cell-fate patterns is not an artifact of synchronous updating in the Boolean model. Therefore, our model provides a novel explanation for the emergence and robustness of the ubiquitous temporal pattern of floral organ specification.
It also constitutes a new approach to understanding morphogenesis, providing predictions on the population dynamics of cells with different genetic configurations during development.
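The mechanism at work here, noise in the updating rules inducing transitions among attractors, can be illustrated on a toy network far smaller than the floral one. The two-gene toggle below is a hypothetical network chosen for brevity, not the Arabidopsis model: it has two fixed-point attractors under deterministic synchronous updating, and flipping each updated value with a small probability lets trajectories hop between them.

```python
import random

random.seed(1)

def step(a, b, p=0.05):
    # deterministic rules of a toggle switch: A' = NOT B, B' = NOT A
    na, nb = int(not b), int(not a)
    # with probability p, each updated value is flipped (noise in the rule)
    if random.random() < p:
        na ^= 1
    if random.random() < p:
        nb ^= 1
    return na, nb

state = (1, 0)                      # start in one fixed-point attractor
visited = set()
for _ in range(10_000):
    state = step(*state)
    if state in {(1, 0), (0, 1)}:   # the two deterministic fixed points
        visited.add(state)
print(visited)                      # with noise, both attractors are visited
```

With p = 0 the trajectory stays in its initial attractor forever; with small p it samples both, which is the one-bit analogue of the noise-driven attractor sequence described above.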
Persistent activity states (attractors), observed in several neocortical areas after the removal of a sensory stimulus, are believed to be the neuronal basis of working memory. One of the possible mechanisms that can underlie persistent activity is recurrent excitation mediated by intracortical synaptic connections. A recent experimental study revealed that connections between pyramidal cells in prefrontal cortex exhibit various degrees of synaptic depression and facilitation. Here we analyze the effect of synaptic dynamics on the emergence and persistence of attractor states in interconnected neural networks. We show that different combinations of synaptic depression and facilitation result in qualitatively different network dynamics with respect to the emergence of the attractor states. This analysis raises the possibility that the framework of attractor neural networks can be extended to represent time-dependent stimuli.
Imagine driving a car when you hear the sentence, “Take the next left.” Immediately, auditory “left” neurons begin to fire. But to use this information a few seconds later when you reach the junction, these neurons should persist in their firing after the auditory stimulus has been removed. This persistent activity, believed to be the basis for working memory, is maintained by a recurrent neural network in memory-related cortical areas. Previous studies showed that a network can maintain memories of “which” stimulus was observed. It has recently been shown that synapses between excitatory cells in the prefrontal cortex, where persistent activity is often observed, exhibit activity-dependent dynamics. Different forms of synaptic dynamics such as depression and facilitation were observed. In this work, we use a mathematical model of a recurrent neural network to analyze the effect of introducing dynamic synapses in the context of persistent activity. We find that the initiation of persistent firing can depend on the duration of the input. These results open the possibility that recurrent neural networks can encode not only “which” stimulus was observed, but also for “how long.”
The gene regulatory circuit motif in which two opposing fate-determining transcription factors inhibit each other but activate themselves has been used in mathematical models of binary cell fate decisions in multipotent stem or progenitor cells. This simple circuit can generate multistability and explains the symmetric “poised” precursor state in which both factors are present in the cell at equal amounts as well as the resolution of this indeterminate state as the cell commits to either cell fate characterized by an asymmetric expression pattern of the two factors. This establishes the two alternative stable attractors that represent the two fate options. It has been debated whether cooperativity of molecular interactions is necessary to produce such multistability.
Here we take a general modeling approach and argue that this question is not relevant. We show that non-linearity can arise in two distinct models in which no explicit interaction between the two factors is assumed and that distinct chemical reaction kinetic formalisms can lead to the same (generic) dynamical system form. Moreover, we describe a novel type of bifurcation that produces a degenerate steady state that can explain the metastable state of indeterminacy prior to cell fate decision-making and is consistent with biological observations.
The general model presented here thus offers a novel principle for linking regulatory circuits with the state of indeterminacy characteristic of multipotent (stem) cells.
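For concreteness, the canonical circuit described above can be written down and integrated directly. The sketch below uses the conventional cooperative (Hill-type) rate laws with illustrative parameter values, i.e., exactly the standard formulation whose necessity the present work questions; it is included only to exhibit the tristability being discussed: the poised symmetric state plus the two committed asymmetric states.

```python
import numpy as np

def rhs(x, y, a=1.0, b=1.0, k=1.0, s=0.5, n=4):
    # self-activation plus mutual inhibition with Hill kinetics (illustrative params)
    dx = a * x**n / (s**n + x**n) + b * s**n / (s**n + y**n) - k * x
    dy = a * y**n / (s**n + y**n) + b * s**n / (s**n + x**n) - k * y
    return dx, dy

def integrate(x, y, dt=0.02, steps=10_000):
    for _ in range(steps):
        dx, dy = rhs(x, y)
        x, y = x + dt * dx, y + dt * dy      # simple Euler integration
    return x, y

committed = integrate(2.0, 0.1)   # resolves to the x-high / y-low attractor
poised = integrate(1.2, 1.2)      # symmetric start settles on the poised state
print(committed, poised)
```

With these symmetric parameters the diagonal is invariant and the dynamics along it reduce to dx/dt = 1 - x, so the poised state sits exactly at (1, 1); asymmetric initial conditions instead commit to one of the two lopsided attractors.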
Resting state networks (RSNs) show a surprisingly coherent and robust spatiotemporal organization. Previous theoretical studies demonstrated that these patterns can be understood as emergent on the basis of the underlying neuroanatomical connectivity skeleton. Integrating biologically realistic DTI/DSI (Diffusion Tensor Imaging/Diffusion Spectrum Imaging)-based neuroanatomical connectivity into a brain model of Ising spin dynamics, we found a system with multiple attractors, which can be studied analytically. The multistable attractor landscape thus defines a functionally meaningful dynamic repertoire of the brain network that is inherently present in the neuroanatomical connectivity. We demonstrate that the greater the entropy of the attractors, the richer the dynamical repertoire and, consequently, the greater the computational capability of the brain network. We therefore hypothesize that human brain connectivity developed a scale-free architecture in order to store a large number of different and flexibly accessible brain functions.
computational neuroscience; fMRI modeling; ongoing activity; resting state; connectivity matrix
The prefrontal cortex (PFC) plays a crucial role in flexible cognitive behavior by representing task-relevant information with its working memory. Working memory with sustained neural activity is described as a neural dynamical system composed of multiple attractors, each of which corresponds to an active state of a cell assembly, representing a fragment of information. Recent studies have revealed that the PFC not only represents multiple sets of information but also switches multiple representations and transforms a set of information to another set depending on a given task context. This representational switching between different sets of information is possibly generated endogenously by flexible network dynamics, but the details of the underlying mechanisms are unclear. Here we propose a dynamically reorganizable attractor network model based on certain internal changes in synaptic connectivity, or short-term plasticity. We construct a network model based on a spiking neuron model with dynamical synapses, which can qualitatively reproduce experimentally demonstrated representational switching in the PFC when a monkey was performing a goal-oriented action-planning task. The model holds multiple sets of information that are required for action planning before and after representational switching by reconfiguration of functional cell assemblies. Furthermore, we analyzed the population dynamics of this model with a mean-field model and showed that the changes in the configuration of cell assemblies correspond to changes in attractor structure that can be viewed as a bifurcation process of the dynamical system. This dynamical reorganization of a neural network could be a key to uncovering the mechanism of flexible information processing in the PFC.
The prefrontal cortex plays a highly flexible role in various cognitive tasks, e.g., decision making and action planning. Neurons in the prefrontal cortex exhibit flexible representation or selectivity for task-relevant information and are involved in working memory with sustained activity, which can be modeled as attractor dynamics. Moreover, recent experiments revealed that prefrontal neurons not only represent parametric or discrete sets of information but also switch the representation and transform a set of information to another set in order to match the context of the required task. However, the underlying mechanisms of this flexible representational switching are unknown. Here we propose a dynamically reorganizable attractor network model in which short-term modulation of the synaptic connections reconfigures the structure of neural attractors by assembly and disassembly of a network of cells to produce flexible attractor dynamics. On the basis of computer simulation as well as theoretical analysis, we showed that this model reproduced experimentally demonstrated representational switching, and that switching along certain characteristic axes defining the neural dynamics captures the essence of the representational switching. This model has the potential to provide unique insights into the flexible information representations and processing in the cortical network.
An endogenous molecular-cellular network for both normal and abnormal functions is assumed to exist. This endogenous network forms a nonlinear stochastic dynamical system, with many stable attractors in its functional landscape. Normal or abnormal robust states can be decided by this network in a manner similar to a neural network. In this context, cancer is hypothesized to be one of its robust intrinsic states.
This hypothesis implies that a nonlinear stochastic mathematical cancer model is constructible based on available experimental data and that its quantitative predictions are directly testable. Within such a model, the genesis and progression of cancer may be viewed as stochastic transitions between different attractors. It thus further suggests that progression is not arbitrary. Other important issues in cancer, such as genetics vs. epigenetics, the double-edged effect, and dormancy, are discussed in light of the present hypothesis. A different set of strategies for cancer prevention, cure, and care is therefore suggested.
Mining gene expression profiles has proven valuable for identifying signatures serving as surrogates of cancer phenotypes. However, the similarities of such signatures across different cancer types have not been strong enough to conclude that they represent a universal biological mechanism shared among multiple cancer types. Here we present a computational method for generating signatures using an iterative process that converges to one of several precise attractors defining signatures representing biomolecular events, such as cell transdifferentiation or the presence of an amplicon. By analyzing rich gene expression datasets from different cancer types, we identified several such biomolecular events, some of which are universally present in all tested cancer types in nearly identical form. Although the method is unsupervised, we show that it often leads to attractors with strong phenotypic associations. We present several such multi-cancer attractors, focusing on three that are prominent and sharply defined in all cases: a mesenchymal transition attractor strongly associated with tumor stage, a mitotic chromosomal instability attractor strongly associated with tumor grade, and a lymphocyte-specific attractor.
Cancer is known to be characterized by several unifying biological capabilities or “hallmarks.” However, attempts to computationally identify patterns, such as gene expression signatures, shared across many different cancer types have been largely unsuccessful. A typical approach has been to classify samples into mutually exclusive subtypes, each of which is characterized by a particular gene signature. Although occasional similarities of such signatures in different cancer types exist, these similarities have not been sufficiently strong to conclude that they reflect the same biological event. By contrast, we have developed a computational methodology that has identified some signatures of co-expressed genes exhibiting remarkable similarity across many different cancer types. These signatures appear as stable “attractors” of an iterative computational procedure that tends to collect mutually associated genes, so that its convergence can point to the core (“heart”) of the underlying biological co-expression mechanism. One of these “pan-cancer” attractors corresponds to a transdifferentiation of cancer cells empowering them with invasiveness and motility. Another represents a mitotic chromosomal instability of cancer cells. A third attractor is lymphocyte-specific.
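The iterative procedure can be sketched in a few lines. The version below is a simplified stand-in on synthetic two-module data (the correlation-based weights and exponent are illustrative choices; the published method's association measure may differ): each iteration re-weights genes by their association with the current metagene, and the fixed point it converges to is the "attractor" signature.

```python
import numpy as np

rng = np.random.default_rng(2)
S, G = 100, 60                       # samples x genes (synthetic)
fA, fB = rng.normal(size=S), rng.normal(size=S)
X = 0.5 * rng.normal(size=(G, S))    # background noise
X[:20] += fA                         # genes 0-19: co-expression module A
X[20:40] += fB                       # genes 20-39: co-expression module B

def attractor_signature(X, seed_gene, n_iter=50, power=3):
    """Iterate metagene <- association-weighted average until convergence."""
    m = X[seed_gene].copy()
    r = None
    for _ in range(n_iter):
        r = np.array([np.corrcoef(g, m)[0, 1] for g in X])
        w = np.clip(r, 0.0, None) ** power   # soft-thresholded weights
        m = w @ X / w.sum()
    return r                                 # gene associations with the attractor

r = attractor_signature(X, seed_gene=0)      # seeded inside module A
print(r[:20].mean(), r[20:40].mean())        # module A dominates the attractor
```

Seeding inside either module converges to that module's signature, illustrating why the procedure is unsupervised yet lands on sharply defined, mutually associated gene sets.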
It is well established that the variability of the neural activity across trials, as measured by the Fano factor, is elevated. This fact poses limits on information encoding by the neural activity. However, a series of recent neurophysiological experiments have changed this traditional view. Single cell recordings across a variety of species, brain areas, brain states and stimulus conditions demonstrate a remarkable reduction of the neural variability when an external stimulation is applied and when attention is allocated towards a stimulus within a neuron's receptive field, suggesting an enhancement of information encoding. Using a heterogeneously connected neural network model whose dynamics exhibits multiple attractors, we demonstrate here how this variability reduction can arise from a network effect. In the spontaneous state, we show that the high degree of neural variability is mainly due to fluctuation-driven excursions from attractor to attractor. This occurs when, in the parameter space, the network working point is around the bifurcation that allows multistable attractors. The application of an external excitatory drive by stimulation or attention stabilizes one specific attractor, eliminating in this way the transitions between the different attractors and resulting in a net decrease in neural variability over trials. Importantly, non-responsive neurons also exhibit a reduction of variability. Finally, this reduced variability is found to arise from an increased regularity of the neural spike trains. In conclusion, these results suggest that the variability reduction under stimulation and attention is a property of neural circuits.
To understand how neurons encode information, neuroscientists record their firing activity while the animal executes a given task for many trials. Surprisingly, it has been found that the neural response is highly variable, which a priori limits the encoding of information by these neurons. However, recent experiments have shown that this variability is reduced when the animal receives a stimulus or attends to a particular one, suggesting an enhancement of information encoding. It is known that a cause of neural variability resides in the fact that individual neurons receive an input which fluctuates around their firing threshold. We demonstrate here that all the experimental results can naturally arise from the dynamics of a neural network. Using a realistic model, we show that the neural variability during spontaneous activity is particularly high because input noise induces large fluctuations between multiple, but unstable, network states. With stimulation or attention, one particular network state is stabilized and fluctuations decrease, leading to a reduction in neural variability. In conclusion, our results suggest that the observed variability reduction is a property of the neural circuits of the brain.
Measures of local dynamic stability, such as the local divergence exponent (λs*), quantify how quickly small perturbations deviate from an attractor that defines the motion. When the governing equations of motion are unknown, an attractor can be reconstructed by defining an appropriate state space. However, state space definitions are not unique and accepted methods for defining state spaces have not been established for biomechanical studies. This study first determined how different state space definitions affected λs* for the Lorenz attractor, since exact theoretical values were known a priori. Values of λs* exhibited errors < 10% for 7 of the 9 state spaces tested. State spaces containing redundant information performed the poorest. To examine these effects in a biomechanical context, 20 healthy subjects performed a repetitive sawing-like task for 5 minutes before and after fatigue. Local stability of pre- and post-fatigue shoulder movements was compared for 6 different state space definitions. Here, λs* decreased post-fatigue for all 6 state spaces. Differences were statistically significant for 3 of these state spaces. For state spaces defined using delay embedding, increasing the embedding dimension decreased λs* in both the Lorenz and experimental data. Overall, our findings suggest that direct numerical comparisons between studies that use different state space definitions should be made with caution. However, trends across experimental comparisons appear to persist. Biomechanical state spaces constructed using positions and velocities, or delay reconstruction of individual states, are likely to provide consistent results.
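A minimal version of the reconstruction-plus-divergence computation looks as follows. This is a bare-bones Rosenstein-style sketch, not the authors' pipeline, run on the logistic map (whose largest Lyapunov exponent at r = 4 is ln 2 ≈ 0.693) rather than on the Lorenz system or movement data; the embedding dimension, delay, tracking horizon, and Theiler window are illustrative choices.

```python
import numpy as np

def delay_embed(x, dim, tau):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def divergence_exponent(x, dim=2, tau=1, t_max=5, theiler=10):
    Y = delay_embed(np.asarray(x), dim, tau)
    M = len(Y) - t_max                      # last index from which we can track
    log_div = np.zeros(t_max + 1)
    counts = np.zeros(t_max + 1)
    for i in range(M):
        d = np.linalg.norm(Y[:M] - Y[i], axis=1)
        d[max(0, i - theiler): i + theiler + 1] = np.inf   # Theiler exclusion
        j = int(np.argmin(d))               # nearest neighbour in state space
        if not np.isfinite(d[j]) or d[j] == 0.0:
            continue
        for k in range(t_max + 1):          # follow both trajectories forward
            dk = np.linalg.norm(Y[i + k] - Y[j + k])
            if dk > 0.0:
                log_div[k] += np.log(dk)
                counts[k] += 1
    curve = log_div / counts
    # slope of mean log-divergence vs. step number ~ local divergence exponent
    return np.polyfit(np.arange(t_max + 1), curve, 1)[0]

x = [0.3]
for _ in range(2500):                       # logistic map at r = 4 (chaotic)
    x.append(4.0 * x[-1] * (1.0 - x[-1]))
lam = divergence_exponent(np.array(x[500:]))   # discard transient
print(lam)
```

Because the true exponent of the test system is known, this kind of sketch also shows how the estimate shifts when the embedding dimension or delay is changed, which is the sensitivity the study quantifies.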
Synaptic plasticity is an underlying mechanism of learning and memory in neural systems, but it is controversial whether synaptic efficacy is modulated in a graded or binary manner. It has been argued that binary synaptic weights would be less susceptible to noise than graded weights, which has impelled some theoretical neuroscientists to shift from the use of graded to binary weights in their models. We compare retrieval performance of models using both binary and graded weight representations through numerical simulations of stochastic attractor networks. We also investigate stochastic attractor models using multiple discrete levels of weight states, and then investigate the optimal threshold for dilution of binary weight representations. Our results show that a binary weight representation is not less susceptible to noise than a graded weight representation in stochastic attractor models, and we find that the load capacities with an increasing number of weight states rapidly reach the load capacity with graded weights. The optimal threshold for dilution of binary weight representations under stochastic conditions occurs when approximately 50% of the smallest weights are set to zero.
Synaptic plasticity; Binary versus graded; Associative memory; Point attractor networks
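The binary-versus-graded comparison can be made concrete with a small Hopfield-style point attractor network. In the sketch below the network size, pattern count, and cue noise are illustrative choices, and the deterministic-update model is simpler than the stochastic networks studied above: it stores a few random patterns with graded Hebbian weights and with their sign-binarized counterpart, then checks retrieval from a degraded cue.

```python
import numpy as np

rng = np.random.default_rng(3)
N, P = 200, 5                                  # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

W_graded = patterns.T @ patterns / N           # Hebbian (graded) weights
np.fill_diagonal(W_graded, 0.0)
W_binary = np.sign(W_graded)                   # clipped to binary weights
np.fill_diagonal(W_binary, 0.0)

def retrieve(W, cue, sweeps=10):
    s = cue.copy()
    for _ in range(sweeps):                    # asynchronous updates
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

cue = patterns[0].copy()
flip = rng.choice(N, size=20, replace=False)   # corrupt 10% of the cue
cue[flip] *= -1

overlap_graded = retrieve(W_graded, cue) @ patterns[0] / N
overlap_binary = retrieve(W_binary, cue) @ patterns[0] / N
print(overlap_graded, overlap_binary)          # both near 1 at this low load
```

At this low memory load both weight representations retrieve the stored pattern essentially perfectly; the differences the paper reports emerge as the load approaches capacity and as update noise is introduced.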
Tumor cells are considered to have an aberrant cell state, and some evidence indicates that different developmental states appear during tumorigenesis. Embryonic development and stem cell differentiation are ordered processes in which the sequence of events over time is highly conserved. The "cancer attractor" concept integrates normal developmental processes and tumorigenesis into a high-dimensional "cell state space", and provides a reasonable explanation of the relationship between these two biological processes from a theoretical viewpoint. However, it is hard to describe such a relationship using existing experimental data; moreover, measuring different developmental states is also difficult.
Here, by applying a novel time-ordered linear model based on a co-bisector which represents the joint direction of a series of vectors, we described the trajectory of the development process as a line and showed different developmental states of tumor cells from a developmental-timescale perspective in a cell state space. This model was used to transform time-course developmental expression profiles of human ESCs and normal mouse liver, ovary and lung tissue into "cell developmental state lines". These cell state lines were then used to examine the developmental states of different tumors and their corresponding normal samples. Mouse liver and ovarian tumors showed differing degrees of similarity to early developmental stages. Similarly, human glioma cells and ovarian tumors became developmentally "younger".
The time-ordered linear model captured linearly projected development trajectories in a cell state space. It also reflected the trend of gene expression change over time from a developmental-timescale perspective, and our findings indicate distinct developmental states during tumorigenesis in different tissues.
Motivation: The primary purpose of modeling gene regulatory networks for a developmental process is to reveal pathways governing cellular differentiation to specific phenotypes. Knowledge of the differentiation network will enable the generation of desired cell fates by careful alteration of the governing network through adequate manipulation of the cellular environment.
Results: We have developed a novel integer programming-based approach to reconstruct the underlying regulatory architecture of differentiating embryonic stem cells from discrete temporal gene expression data. The network reconstruction problem is formulated using inherent features of biological networks: (i) that of cascade architecture which enables treatment of the entire complex network as a set of interconnected modules and (ii) that of sparsity of interconnection between the transcription factors. The developed framework is applied to the system of embryonic stem cells differentiating towards the pancreatic lineage. Experimentally determined expression profile dynamics of relevant transcription factors serve as the input to the network identification algorithm. The developed formulation accurately captures many of the known regulatory modes involved in pancreatic differentiation. The predictive capacity of the model is tested by simulating an in silico potential pathway of subsequent differentiation. The predicted pathway is experimentally verified by concurrent differentiation experiments. Experimental results agree well with model predictions, thereby illustrating the predictive accuracy of the proposed algorithm.
Supplementary information: Supplementary data are available at Bioinformatics online.
A Boolean network is a model used to study interactions between genes in genetic regulatory networks. In this paper, we present several algorithms that use gene ordering and feedback vertex sets to identify singleton attractors and small attractors in Boolean networks. We analyze the average-case time complexities of some of the proposed algorithms. For instance, the outdegree-based ordering algorithm for finding singleton attractors runs in O(c^n) time on average, with a constant c < 2 depending on the maximum indegree K, which is much faster than the naive O(2^n)-time algorithm, where n is the number of genes. We performed extensive computational experiments on these algorithms, and the results agree well with the theoretical analysis. In contrast, we give a simple and complete proof that finding an attractor with the shortest period is NP-hard.
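A minimal sketch of the pruning idea shared by the ordering-based algorithms: extend partial gene assignments in a fixed order and backtrack as soon as a fully determined gene violates the fixed-point condition f(x) = x, so that large parts of the 2^n state space are never enumerated. The three-gene network is a toy example, not one from the paper.

```python
def singleton_attractors(n, functions, inputs):
    """functions[g]: dict from input tuples to 0/1; inputs[g]: regulator indices."""
    found = []

    def extend(assign):
        i = len(assign)
        if i == n:
            found.append(tuple(assign))
            return
        for v in (0, 1):
            assign.append(v)
            # Prune: every gene whose own value and inputs are all fixed
            # must already satisfy the fixed-point condition.
            if all(
                functions[g][tuple(assign[j] for j in inputs[g])] == assign[g]
                for g in range(i + 1)
                if max(inputs[g]) <= i
            ):
                extend(assign)
            assign.pop()

    extend([])
    return found

# Toy network: x0' = x1, x1' = x0, x2' = x0 AND x1.
inputs = [[1], [0], [0, 1]]
functions = [
    {(0,): 0, (1,): 1},
    {(0,): 0, (1,): 1},
    {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1},
]
fixed_points = singleton_attractors(3, functions, inputs)
print(fixed_points)  # [(0, 0, 0), (1, 1, 1)]
```

Ordering the genes so that regulators are assigned before their targets (the role of the outdegree-based ordering) makes the pruning condition fire as early as possible.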
Persistent activity observed in neurophysiological experiments in monkeys is thought to be the neuronal correlate of working memory. Over the last decade, network modellers have strived to reproduce the main features of these experiments. In particular, attractor network models have been proposed in which a non-selective attractor state with low background activity coexists with selective attractor states in which sub-groups of neurons fire at rates that are higher (but not much higher) than background rates. A recent detailed statistical analysis of the data, however, appears to challenge such attractor models: the data indicate that firing during persistent activity is highly irregular (with an average CV larger than 1), while models predict a more regular firing process (CV smaller than 1). We discuss here recent proposals that make it possible to reproduce this feature of the experiments.
network model; integrate-and-fire neuron; working memory; prefrontal cortex; short-term depression
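The irregularity statistic at issue in the abstract above can be computed directly. The sketch below (illustrative, not the published analysis) estimates the coefficient of variation (CV) of inter-spike intervals for a clock-like train and for a Poisson train of the same rate; a Poisson process gives CV near 1, regular firing gives CV near 0, and the reported persistent-activity data exceed 1.

```python
import random
import statistics

def cv_of_intervals(spike_times):
    """CV = standard deviation / mean of the inter-spike intervals."""
    isi = [b - a for a, b in zip(spike_times, spike_times[1:])]
    return statistics.pstdev(isi) / statistics.mean(isi)

random.seed(0)
regular = [0.02 * i for i in range(1, 501)]  # clock-like 50 Hz train
poisson, t = [], 0.0
for _ in range(500):                          # Poisson train, rate ~50 Hz
    t += random.expovariate(50.0)
    poisson.append(t)

cv_reg = cv_of_intervals(regular)   # essentially 0: perfectly regular
cv_poi = cv_of_intervals(poisson)   # close to 1, as expected for Poisson
print(cv_reg, cv_poi)
```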
Induction of a specific transcriptional program by external signaling inputs is a crucial aspect of intracellular network function. The theoretical concept of coexisting attractors representing particular genetic programs maps reasonably well onto experimental observations of genome-wide expression profiles and phenotypes. Attractors can be associated either with developmental outcomes, such as differentiation into specific cell types, or with the maintenance of cellular functions such as proliferation or apoptosis. Here we review a mechanism known as speed-dependent cellular decision making (SdCDM) in a small epigenetic switch and generalize the concept to high-dimensional space. We demonstrate that the clustering capacity of a high-dimensional network depends on the level of intrinsic noise and on the speed at which external signals act on the transcriptional landscape.
The classical pluripotency factors Oct4, Klf4, Sox2, and Nanog are required for the maintenance of pluripotency and self-renewal of embryonic stem (ES) cells and can reprogram terminally differentiated cells into a pluripotent state. Alterations in the levels of these factors in ES cells cause differentiation into different lineages, suggesting that they are critical determinants of cell fate. These factors show dynamic expression patterns during embryogenesis, in particular in the pluripotent or multipotent cells of the early-stage embryo, implying that they are involved in cell fate decisions during early embryonic development. The functions and underlying molecular mechanisms of these factors have been extensively studied in ES cells under culture conditions. However, this does not mean that the results also hold true for intact embryos. In this review, I summarize and discuss the findings on the functions and underlying mechanisms of the classical pluripotency factors during early embryogenesis, in particular during germ layer formation.
Pluripotency factors; Early embryogenesis; Germ layers; Cell fate decision; Xenopus
Regulatory interaction networks are often studied from the dynamical side (existence of attractors, study of their stability). Here we also focus on their robustness, that is, their ability to maintain the same spatiotemporal patterns and to resist external perturbations such as loss of nodes or edges in the network's interaction architecture, changes in environmental boundary conditions, and changes in the update schedule (or updating mode) of the states of their elements (e.g., if these elements are genes, synchronous co-expression versus sequential expression). We define the generic notions of boundary, core, and critical vertex or edge of the underlying interaction graph of the regulatory network, whose disappearance causes dramatic changes in the number and nature of attractors (e.g., passage from bistable behaviour to a unique periodic regime) or in the extent of their basins of stability. State transitions are presented within the framework of threshold Boolean automata rules. A panorama of applications at different levels is given: brain and plant morphogenesis, bulbar cardio-respiratory regulation, glycolytic/oxidative metabolic coupling, and, finally, the genetic control of the cell cycle and feather morphogenesis.
robustness in regulatory interaction networks; attractors; interaction graph boundary; interaction graph core; critical node; critical edge; updating mode; microRNAs
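The notion of a critical edge can be made concrete in a two-gene threshold Boolean automaton under synchronous update. In the toy sketch below (an assumed example, not one of the paper's applications), deleting a single inhibitory edge collapses a landscape of two fixed points plus a 2-cycle into a single fixed point, the kind of dramatic attractor change the definition above describes.

```python
from itertools import product

def step(state, weights, thetas):
    """Synchronous threshold update: x_i' = 1 iff sum_j w[j,i]*x_j >= theta_i."""
    n = len(state)
    return tuple(
        int(sum(weights.get((j, i), 0) * state[j] for j in range(n)) >= thetas[i])
        for i in range(n)
    )

def attractors(n, weights, thetas):
    found = set()
    for s in product((0, 1), repeat=n):
        trail = []
        while s not in trail:
            trail.append(s)
            s = step(s, weights, thetas)
        found.add(frozenset(trail[trail.index(s):]))  # the reached cycle
    return found

thetas = [0, 0]
toggle = {(1, 0): -1, (0, 1): -1}             # mutual inhibition, genes 0 and 1
full = attractors(2, toggle, thetas)
pruned = attractors(2, {(0, 1): -1}, thetas)  # edge 1 -> 0 deleted
print(len(full), len(pruned))  # 3 attractors collapse to 1: the edge is critical
```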
A Boolean network model of the cell-cycle regulatory network of fission yeast (Schizosaccharomyces pombe) is constructed solely on the basis of the known biochemical interaction topology. Simulating the model on a computer faithfully reproduces the known activity sequence of regulatory proteins along the cell cycle of the living cell. In contrast to existing differential-equation models, no parameters enter the model other than the structure of the regulatory circuitry. The dynamical properties of the model indicate that the biological dynamical sequence is robustly implemented in the regulatory network, with the biological stationary state G1 corresponding to the dominant attractor in state space, and with the biological regulatory sequence being a strongly attractive trajectory. Comparing the fission yeast cell-cycle model with a similar model of the corresponding network in S. cerevisiae reveals a remarkable difference in circuitry as well as dynamics. While the latter operates in a strongly damped mode driven by external excitation, the S. pombe network represents an auto-excited system with external damping.
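The idea of a dominant attractor can be illustrated by exhaustively computing basin sizes in a small synchronous Boolean network. The rules below are a toy, not the published yeast model; the attractor reached from the largest fraction of the 2^n initial states plays the role that G1 plays in the abstract above.

```python
from itertools import product

def basin_sizes(n, update):
    """Map each synchronous attractor (a frozenset of states) to its basin size."""
    reach = {}
    for s in product((0, 1), repeat=n):
        trail = []
        while s not in trail:
            trail.append(s)
            s = update(s)
        cycle = frozenset(trail[trail.index(s):])
        reach[cycle] = reach.get(cycle, 0) + 1
    return reach

# Toy rules: x0' = NOT x1, x1' = NOT x0, x2' = x0 AND x1.
update = lambda s: (1 - s[1], 1 - s[0], s[0] & s[1])
basins = basin_sizes(3, update)
dominant = max(basins.values())
print(len(basins), dominant)  # 3 attractors; the dominant basin holds 4 of 8 states
```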
Cell fate reprogramming, such as the generation of insulin-producing β cells from other pancreas cells, can be achieved by external modulation of key transcription factors. However, the known gene regulatory interactions form a complex network with multiple feedback loops, making it increasingly difficult to design a cell reprogramming scheme: linear regulatory pathways, as schemes of causal influence on cell lineages, are inadequate for predicting the effects of transcriptional perturbations. At the same time, sufficient information on regulatory networks is usually not available for detailed formal models. Here we demonstrate that by using the qualitatively described regulatory interactions as the basis for a coarse-grained dynamical ODE (ordinary differential equation) model, it is possible to recapitulate the observed attractors of the exocrine and β, δ, α endocrine cells and to predict which gene perturbations can produce the desired lineage reprogramming. Our model indicates that the constraints imposed by the incompletely elucidated regulatory network architecture suffice to build a predictive model for making informed decisions in choosing the set of transcription factors that need to be modulated for fate reprogramming.
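A coarse-grained ODE caricature of fate reprogramming: the sketch below uses a generic two-gene mutual-inhibition toggle (an assumed stand-in, not the paper's pancreatic model) integrated by forward Euler. It shows two attractors and a perturbation, transient knockdown of one factor plus overexpression of the other, that moves the system from one attractor into the other.

```python
def simulate(x, y, a=2.0, n=4, dt=0.01, steps=5000):
    """Forward-Euler integration of a two-gene mutual-inhibition toggle:
    dx/dt = a/(1 + y^n) - x,  dy/dt = a/(1 + x^n) - y."""
    for _ in range(steps):
        dx = a / (1 + y ** n) - x
        dy = a / (1 + x ** n) - y
        x, y = x + dt * dx, y + dt * dy
    return x, y

x1, y1 = simulate(2.0, 0.1)   # relaxes into the x-high attractor
# "Reprogramming" perturbation: knock x down and overexpress y, then relax.
x2, y2 = simulate(0.1, 3.0)   # relaxes into the y-high attractor instead
print(x1 > y1, x2 < y2)  # True True
```

In the same spirit, the paper's model asks which transcription-factor perturbations move the state across basin boundaries toward a desired lineage attractor.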