In this article we focus on how the hierarchical and single-path assumptions of epistasis analysis can bias the inference of gene regulatory networks. Here we emphasize the critical importance of dynamic analyses, and specifically illustrate the use of Boolean network models. Epistasis in a broad sense refers to gene interactions; however, as originally proposed by Bateson, epistasis is defined as the blocking of a particular allelic effect due to the effect of another allele at a different locus (herein, classical epistasis). Classical epistasis analysis has proven powerful and useful, allowing researchers to infer and assign directionality to gene interactions. As larger data sets become available, the analysis of classical epistasis is being complemented with computer science tools and systems biology approaches. We show that when the hierarchical and single-path assumptions are not met in classical epistasis analysis, access to relevant information and the correct inference of gene interaction topologies are hindered, and it becomes necessary to consider the temporal dynamics of gene interactions. The use of dynamical networks can overcome these limitations. We particularly focus on Boolean networks, which, like classical epistasis analysis, rely on logical formalisms and hence can complement classical epistasis analysis and relax its assumptions. We develop a couple of theoretical examples and analyze them from a dynamic Boolean network model perspective. Boolean networks could help to guide additional experiments and discern among alternative regulatory schemes that would be difficult or impossible to infer without eliminating these assumptions from classical epistasis analysis. We also use examples from the literature to show how a Boolean network-based approach has resolved ambiguities and guided epistasis analysis.
Our article complements previous accounts, not only by focusing on the implications of the hierarchical and single-path assumptions, but also by demonstrating the importance of considering temporal dynamics, specifically introducing the usefulness of Boolean network models and reviewing some key properties of network approaches.
epistasis; gene regulatory networks; Boolean networks; feedback loops; feed-forward loops; temporal dynamics; modeling; gene interactions
Random Boolean networks (RBNs) are models of genetic regulatory networks. It is useful to describe RBNs as self-organizing systems to study how changes in the nodes and connections affect the global network dynamics. This article reviews eight different methods for guiding the self-organization of RBNs. In particular, the article focuses on guiding RBNs toward the critical dynamical regime, which lies near the phase transition between the ordered and chaotic phases. The properties and advantages of the critical regime for life, computation, adaptability, evolvability, and robustness are reviewed. The guidance methods of RBNs can be used for engineering systems with the features of the critical regime, as well as for studying how natural selection evolved living systems, which are also critical.
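The RBN model discussed in this abstract can be sketched concretely: each node reads a fixed set of random inputs and updates synchronously through a random lookup table. The sketch below is a minimal illustration under common conventions (all function names are ours, not code from the reviewed work); with K = 2 inputs per node and unbiased tables, such networks sit near the critical regime.

```python
import random

def random_boolean_network(n, k, seed=0):
    """Build a random Boolean network: each node gets k random inputs
    and a random lookup table over the 2**k input combinations."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: every node reads its inputs simultaneously."""
    new = []
    for inp, table in zip(inputs, tables):
        idx = 0
        for j in inp:
            idx = (idx << 1) | state[j]
        new.append(table[idx])
    return tuple(new)

def find_attractor(state, inputs, tables):
    """Iterate until a state repeats; return the attractor cycle."""
    seen = {}
    trajectory = []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = step(state, inputs, tables)
    return trajectory[seen[state]:]
```

Because the state space is finite (2**n states) and the update is deterministic, every trajectory must eventually enter a cycle, which is what `find_attractor` extracts.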
Guided self-organization; Random Boolean networks; Phase transitions; Criticality; Adaptability; Evolvability; Robustness
Attractors represent the long-term behaviors of Random Boolean Networks. We study how the amount of information propagated between the nodes when on an attractor, as quantified by the average pairwise mutual information, relates to the robustness of the attractor to perturbations. We find that the dynamical regime of the network affects the relationship between these two quantities. In the ordered and chaotic regimes, the mutual information is anti-correlated with the robustness, implying that attractors that are highly robust to perturbations necessarily have limited information propagation. Between order and chaos (for so-called “critical” networks) these quantities are uncorrelated. Finite-size effects cause this behavior to be visible for a range of networks, from those having a sensitivity of 1 to the point where the mutual information is maximized. In this region, the two quantities are weakly correlated and attractors can be almost arbitrarily robust to perturbations without restricting the propagation of information in the network.
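The average pairwise mutual information on an attractor can be illustrated as follows. This is a simple same-time variant computed from each node's binary time series along the cycle (a sketch with our own naming; the abstract's exact definition may differ, e.g. by using time-lagged pairs):

```python
from math import log2
from collections import Counter

def pairwise_mi(series_i, series_j):
    """Mutual information (in bits) between two binary time series
    observed over one attractor cycle."""
    n = len(series_i)
    joint = Counter(zip(series_i, series_j))
    pi = Counter(series_i)
    pj = Counter(series_j)
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        mi += p_ab * log2(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

def average_pairwise_mi(attractor):
    """Average MI over all ordered node pairs i != j, reading each
    node's value along the attractor's states."""
    n_nodes = len(attractor[0])
    cols = [[s[i] for s in attractor] for i in range(n_nodes)]
    total, pairs = 0.0, 0
    for i in range(n_nodes):
        for j in range(n_nodes):
            if i != j:
                total += pairwise_mi(cols[i], cols[j])
                pairs += 1
    return total / pairs
```

Two perfectly correlated binary series share 1 bit of information, while independent series share 0 bits, which is the intuition behind using this quantity to measure information propagation.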
Accumulating experimental evidence suggests that the gene regulatory networks of living organisms operate in the critical phase, namely, at the transition between ordered and chaotic dynamics. Such critical dynamics of the network permits the coexistence of robustness and flexibility which are necessary to ensure homeostatic stability (of a given phenotype) while allowing for switching between multiple phenotypes (network states) as occurs in development and in response to environmental change. However, the mechanisms through which genetic networks evolve such critical behavior have remained elusive. Here we present an evolutionary model in which criticality naturally emerges from the need to balance between the two essential components of evolvability: phenotype conservation and phenotype innovation under mutations. We simulated the Darwinian evolution of random Boolean networks that mutate gene regulatory interactions and grow by gene duplication. The mutating networks were subjected to selection for networks that both (i) preserve all the already acquired phenotypes (dynamical attractor states) and (ii) generate new ones. Our results show that this interplay between extending the phenotypic landscape (innovation) while conserving the existing phenotypes (conservation) suffices to cause the evolution of all the networks in a population towards criticality. Furthermore, the networks produced by this evolutionary process exhibit structures with hubs (global regulators) similar to the observed topology of real gene regulatory networks. Thus, dynamical criticality and certain elementary topological properties of gene regulatory networks can emerge as a byproduct of the evolvability of the phenotypic landscape.
Dynamically critical systems are those which operate at the border of a phase transition between two behavioral regimes often present in complex systems: order and disorder. Critical systems exhibit remarkable properties such as fast information processing, collective response to perturbations or the ability to integrate a wide range of external stimuli without saturation. Recent evidence indicates that the genetic networks of living cells are dynamically critical. This has far-reaching consequences, for it is at criticality that living organisms can tolerate a wide range of external fluctuations without changing the functionality of their phenotypes. Therefore, it is necessary to know how genetic criticality emerged through evolution. Here we show that dynamical criticality naturally emerges from the delicate balance between two fundamental forces of natural selection that make organisms evolve: (i) the existing phenotypes must be resilient to random mutations, and (ii) new phenotypes must emerge for the organisms to adapt to new environmental challenges. The joint effect of these two forces, which are essential for evolvability, is sufficient in our computational models to generate populations of genetic networks operating at criticality. Thus, natural selection acting as a tinkerer of evolvable systems naturally generates critical dynamics.
Complex systems are often modeled as Boolean networks in attempts to capture their logical structure and reveal its dynamical consequences. Approximating the dynamics of continuous variables by discrete values and Boolean logic gates may, however, introduce dynamical possibilities that are not accessible to the original system. We show that large random networks of variables coupled through continuous transfer functions often fail to exhibit the complex dynamics of corresponding Boolean models in the disordered (chaotic) regime, even when each individual function appears to be a good candidate for Boolean idealization. A suitably modified Boolean theory explains the behavior of systems in which information does not propagate faithfully down certain chains of nodes. Model networks incorporating calculated or directly measured transfer functions reported in the literature on transcriptional regulation of genes are described by the modified theory.
Most current methods for gene regulatory network identification lead to the inference of steady-state networks, that is, networks prevalent over all times, a hypothesis which has been challenged. There has been a need to infer and represent networks in a dynamic, that is, time-varying fashion, in order to account for different cellular states affecting the interactions amongst genes. In this work, we present an approach, regime-SSM, to understand gene regulatory networks within such a dynamic setting. The approach uses a clustering method based on the underlying dynamics, followed by system identification using a state-space model for each learnt cluster, to infer a network adjacency matrix. We finally present our results on the mouse embryonic kidney dataset as well as the T-cell activation-based expression dataset and demonstrate conformity with reported experimental evidence.
The inference of reaction rate parameters in biochemical network models from time series concentration data is a central task in computational systems biology. Under the assumption of well-mixed conditions the network dynamics are typically described by the chemical master equation, the Fokker-Planck equation, the linear noise approximation or the macroscopic rate equation. The inverse problem of estimating the parameters of the underlying network model can be approached in deterministic and stochastic ways, and available methods often compare individual or mean concentration traces obtained from experiments with theoretical model predictions when maximizing likelihoods, minimizing regularized least squares functionals, approximating posterior distributions or sequentially processing the data. In this article we assume that the biological reaction network can be observed at least partially and repeatedly over time such that sample moments of species molecule numbers for various time points can be calculated from the data. Based on the chemical master equation we furthermore derive closed systems of parameter-dependent nonlinear ordinary differential equations that predict the time evolution of the statistical moments. For inferring the reaction rate parameters we suggest not only comparing the sample mean with the theoretical mean prediction but also taking the residuals of higher-order moments explicitly into account. Cost functions that involve residuals of higher-order moments may form landscapes in the parameter space that have more pronounced curvature at the minimizer and hence may weaken or even overcome parameter sloppiness and uncertainty. As a consequence, both deterministic and stochastic parameter inference algorithms may be improved with respect to accuracy and efficiency.
We demonstrate the potential of moment fitting for parameter inference by means of illustrative stochastic biological models from the literature and address topics for future research.
A common problem in molecular biology is to use experimental data, such as microarray data, to infer knowledge about the structure of interactions between important molecules in subsystems of the cell. By approximating the state of each molecule as “on” or “off”, it becomes possible to simplify the problem, and exploit the tools of Boolean analysis for such inference. Amongst Boolean techniques, the process-driven approach has shown promise in being able to identify putative network structures, as well as stability and modularity properties. This paper examines the process-driven approach more formally, and makes four contributions about the computational complexity of the inference problem, under the “dominant inhibition” assumption of molecular interactions. The first is a proof that the feasibility problem (does there exist a network that explains the data?) can be solved in polynomial-time. Second, the minimality problem (what is the smallest network that explains the data?) is shown to be NP-hard, and therefore unlikely to result in a polynomial-time algorithm. Third, a simple polynomial-time heuristic is shown to produce near-minimal solutions, as demonstrated by simulation. Fourth, the theoretical framework explains how multiplicity (the number of network solutions to realize a given biological process), which can take exponential-time to compute, can instead be accurately estimated by a fast, polynomial-time heuristic.
The Boolean network paradigm is a simple and effective way to interpret genomic systems, but discovering the structure of these networks remains a difficult task. The minimum description length (MDL) principle has already been used for inferring genetic regulatory networks from time-series expression data and has proven useful for recovering the directed connections in Boolean networks. However, the existing method uses an ad hoc measure of description length that necessitates a tuning parameter for artificially balancing the model and error costs and, as a result, directly conflicts with the MDL principle's implied universality. In order to surpass this difficulty, we propose a novel MDL-based method in which the description length is a theoretical measure derived from a universal normalized maximum likelihood model. The search space is reduced by applying an implementable analogue of Kolmogorov's structure function. The performance of the proposed method is demonstrated on random synthetic networks, for which it is shown to improve upon previously published network inference algorithms with respect to both speed and accuracy. Finally, it is applied to time-series Drosophila gene expression measurements.
Network inference deals with the reconstruction of biological networks from experimental data. A variety of different reverse engineering techniques are available; they differ in the underlying assumptions and mathematical models used. One common problem for all approaches stems from the complexity of the task, due to the combinatorial explosion of different network topologies for increasing network size. To handle this problem, constraints are frequently used, for example on the node degree, number of edges, or constraints on regulation functions between network components. We propose to exploit topological considerations in the inference of gene regulatory networks. Such systems are often controlled by a small number of hub genes, while most other genes have only limited influence on the network's dynamics. We model gene regulation using a Bayesian network with discrete, Boolean nodes. A hierarchical prior is employed to identify hub genes. The first layer of the prior is used to regularize weights on edges emanating from one specific node. A second prior on hyperparameters controls the magnitude of the former regularization for different nodes. The net effect is that central nodes tend to form in reconstructed networks. Network reconstruction is then performed by maximization of or sampling from the posterior distribution. We evaluate our approach on simulated and real experimental data, indicating that we can reconstruct main regulatory interactions from the data. We furthermore compare our approach to other state-of-the-art methods, showing superior performance in identifying hubs. Using a large publicly available dataset of over 800 cell-cycle-regulated genes, we are able to identify several main hub genes. Our method may thus provide a valuable tool to identify interesting candidate genes for further study. Furthermore, the approach presented may stimulate further developments in regularization methods for network reconstruction from data.
Regulatory networks play a central role in cellular behavior and decision making. Learning these regulatory networks is a major task in biology, and devising computational methods and mathematical models for this task is a major endeavor in bioinformatics. Boolean networks have been used extensively for modeling regulatory networks. In this model, the state of each gene can be either ‘on’ or ‘off’, and the next state of a gene is updated, synchronously or asynchronously, according to a Boolean rule applied to the current state of the entire system. Inferring a Boolean network from a set of experimental data entails two main steps: first, the experimental time-series data are discretized into Boolean trajectories, and then, a Boolean network is learned from these Boolean trajectories. In this paper, we consider three methods for data discretization, including a new one we propose, and three methods for learning Boolean networks, and study the performance of all possible nine combinations on four regulatory systems with dynamics of varying complexity. We find that employing the right combination of methods for data discretization and network learning results in Boolean networks that capture the dynamics well and provide predictive power. Our findings are in contrast to a recent survey that placed Boolean networks on the low end of the “faithfulness to biological reality” and “ability to model dynamics” spectra. Further, contrary to the common argument in favor of Boolean networks, we find that a relatively large number of time points in the time-series data is required to learn good Boolean networks for certain data sets. Last but not least, while methods have been proposed for inferring Boolean networks, as discussed above, publicly available implementations thereof are still missing. Here, we make our implementation of the methods publicly available in open source at http://bioinfo.cs.rice.edu/.
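The second step of the two-step procedure described here, learning Boolean rules from discretized trajectories, can be illustrated with a consistency check: for a candidate regulator set of one gene, build a partial truth table from the observed transitions and reject regulator sets that map the same inputs to different outputs. This is a hedged sketch under our own conventions, not code from the paper or the linked implementation:

```python
def learn_rule(trajectory, gene, regulators):
    """Learn a partial truth table for one gene from a Boolean trajectory:
    map each observed regulator configuration to the gene's next value.
    Returns None if the data are inconsistent with these regulators."""
    table = {}
    for t in range(len(trajectory) - 1):
        key = tuple(trajectory[t][r] for r in regulators)
        nxt = trajectory[t + 1][gene]
        if table.get(key, nxt) != nxt:
            return None  # same inputs, different outputs: bad regulator set
        table[key] = nxt
    return table
```

For example, a trajectory generated by the rule "gene 0 next = gene 1 AND gene 2" is consistent with regulators (1, 2) but not with gene 1 alone, so the smaller regulator set is correctly rejected.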
Computational modeling of genomic regulation has become an important focus of systems biology and genomic signal processing over the past several years. It holds the promise to uncover both the structure and dynamical properties of the complex gene, protein or metabolic networks responsible for cell functioning in various contexts and regimes. This, in turn, will lead to the development of optimal intervention strategies for prevention and control of disease. At the same time, constructing such computational models faces several challenges. High complexity is one of the major impediments for the practical applications of the models. Thus, reducing the size/complexity of a model becomes a critical issue in problems such as model selection, construction of tractable subnetwork models, and control of its dynamical behavior. We focus on the reduction problem in the context of two specific models of genomic regulation: Boolean networks with perturbation (BNP) and probabilistic Boolean networks (PBN). We also compare and draw a parallel between the reduction problem and two other important problems of computational modeling of genomic networks: the problem of network inference and the problem of designing external control policies for intervention/altering the dynamics of the model.
We study how the notions of importance of variables in Boolean functions as well as the sensitivities of the functions to changes in these variables impact the dynamical behavior of Boolean networks. The activity of a variable captures its influence on the output of the function and is a measure of that variable's importance. The average sensitivity of a Boolean function captures the smoothness of the function and is related to its internal homogeneity. In a random Boolean network, we show that the expected average sensitivity determines the well-known critical transition curve. We also discuss canalizing functions and the fact that the canalizing variables enjoy higher importance, as measured by their activities, than the noncanalizing variables. Finally, we demonstrate the important role of the average sensitivity in determining the dynamical behavior of a Boolean network.
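The two quantities this abstract relates, the activity of a variable and the average sensitivity of a Boolean function, are directly computable by exhaustive enumeration over the function's inputs. The sketch below uses our own function names and the standard definitions (activity = probability over uniform inputs that flipping that variable flips the output; average sensitivity = sum of activities, with an expected value of 1 marking the critical transition):

```python
from itertools import product

def activities(f, k):
    """Activity of each of the k variables of Boolean function f:
    fraction of the 2**k inputs on which flipping that variable
    flips the function's output."""
    acts = []
    for j in range(k):
        flips = 0
        for x in product((0, 1), repeat=k):
            y = list(x)
            y[j] ^= 1  # flip variable j only
            if f(x) != f(tuple(y)):
                flips += 1
        acts.append(flips / 2 ** k)
    return acts

def average_sensitivity(f, k):
    """Average sensitivity = sum of the activities of all variables."""
    return sum(activities(f, k))
```

For instance, XOR has activity 1 for both variables (average sensitivity 2, deep in the chaotic regime when used throughout a network), AND has activity 0.5 for each (average sensitivity 1, the critical value), and a function that ignores a variable assigns it activity 0, which is the extreme case of the low activity of noncanalizing-like inputs.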
The inference of biological networks from high-throughput data has received huge attention during the last decade and can be considered an important problem class in systems biology. However, it has been recognized that reliable network inference remains an unsolved problem. Most authors have identified lack of data and deficiencies in the inference algorithms as the main reasons for this situation.
We claim that another major difficulty for solving these inference problems is the frequent lack of uniqueness of many of these networks, especially when prior assumptions have not been taken properly into account. Our contributions aid the distinguishability analysis of chemical reaction network (CRN) models with mass action dynamics. The novel methods are based on linear programming (LP), therefore they allow the efficient analysis of CRNs containing several hundred complexes and reactions. Using these new tools and also previously published ones to obtain the network structure of biological systems from the literature, we find that, often, a unique topology cannot be determined, even if the structure of the corresponding mathematical model is assumed to be known and all dynamical variables are measurable. In other words, certain mechanisms may remain undetected (or they are falsely detected) while the inferred model is fully consistent with the measured data. It is also shown that sparsity enforcing approaches for determining 'true' reaction structures are generally not enough without additional prior information.
The inference of biological networks can be an extremely challenging problem even in the utopian case of perfect experimental information. Unfortunately, the practical situation is often more complex than that, since the measurements are typically incomplete, noisy and sometimes dynamically not rich enough, introducing further obstacles to the structure/parameter estimation process. In this paper, we show how the structural uniqueness and identifiability of the models can be guaranteed by carefully adding extra constraints, and that these important properties can be checked through appropriate computation methods.
Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data.
Regulatory interaction networks are often studied on their dynamical side (existence of attractors, study of their stability). We focus here also on their robustness, that is, their ability to offer the same spatiotemporal patterns and to resist external perturbations such as losses of nodes or edges in the network's interaction architecture, changes in their environmental boundary conditions, as well as changes in the update schedule (or updating mode) of the states of their elements (e.g., if these elements are genes, their synchronous coexpression mode versus their sequential expression). We define the generic notions of boundary, core, and critical vertex or edge of the underlying interaction graph of the regulatory network, whose disappearance causes dramatic changes in the number and nature of attractors (e.g., passage from a bistable behaviour to a unique periodic regime) or in the extent of their basins of stability. The dynamic transition of states will be presented in the framework of threshold Boolean automata rules. A panorama of applications at different levels will be given: brain and plant morphogenesis, bulbar cardio-respiratory regulation, glycolytic/oxidative metabolic coupling, and finally the genetic control of the cell cycle and of feather morphogenesis.
robustness in regulatory interaction networks; attractors; interaction graph boundary; interaction graph core; critical node; critical edge; updating mode; microRNAs
An increasing number of algorithms for biochemical network inference from experimental data require discrete data as input. For example, dynamic Bayesian network methods and methods that use the framework of finite dynamical systems, such as Boolean networks, all take discrete input. Experimental data, however, are typically continuous and represented by computer floating point numbers. The translation from continuous to discrete data is crucial in preserving the variable dependencies and thus has a significant impact on the performance of the network inference algorithms. We compare the performance of two such algorithms that use discrete data using several different discretization algorithms. One of the inference methods uses a dynamic Bayesian network framework; the other uses a time- and state-discrete dynamical system framework. The discretization algorithms are quantile discretization, interval discretization, and a new algorithm introduced in this article, SSD. SSD is especially designed for short time series data and is capable of determining the optimal number of discretization states. The experiments show that both inference methods perform better with SSD than with the other methods. In addition, SSD is demonstrated to preserve the dynamic features of the time series, as well as to be robust to noise in the experimental data. A C++ implementation of SSD is available from the authors at http://polymath.vbi.vt.edu/discretization.
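The two baseline discretization schemes the abstract compares against SSD can be sketched briefly (the SSD algorithm itself is not reproduced here, since the abstract gives no algorithmic detail; function names and details below are our own illustrative assumptions). Interval discretization splits the observed range into equal-width bins; quantile discretization assigns roughly equal numbers of observations to each state:

```python
def interval_discretize(values, n_states):
    """Split the range [min, max] into n_states equal-width bins
    and label each value by its bin index."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_states or 1.0  # guard against a constant series
    return [min(int((v - lo) / width), n_states - 1) for v in values]

def quantile_discretize(values, n_states):
    """Rank the values and cut the ranking into n_states groups of
    (roughly) equal size."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    labels = [0] * len(values)
    per_bin = len(values) / n_states
    for rank, i in enumerate(order):
        labels[i] = min(int(rank / per_bin), n_states - 1)
    return labels
```

The two schemes can disagree sharply on skewed data, which is one reason the choice of discretization affects downstream network inference as the abstract emphasizes.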
gene networks; genetic algorithms; linear algebra; reverse engineering; time discrete dynamical systems
Reconstructing gene regulatory networks (GRNs) from expression data is one of the most important challenges in systems biology research. Many computational models and methods have been proposed to automate the process of network reconstruction. Inferring robust networks with desired behaviours remains challenging, however. This problem is related to network dynamics but has yet to be investigated using network modeling.
We propose an incremental evolution approach for inferring GRNs that takes network robustness into consideration and can deal with a large number of network parameters. Our approach includes a sensitivity analysis procedure to iteratively select the most influential network parameters, and it uses a swarm intelligence procedure to perform parameter optimization. We have conducted a series of experiments to evaluate the external behaviors and internal robustness of the networks inferred by the proposed approach. The results and analyses have verified the effectiveness of our approach.
Sensitivity analysis is crucial to identifying the most sensitive parameters that govern the network dynamics. It can further be used to derive constraints for network parameters in the network reconstruction process. The experimental results show that the proposed approach can successfully infer robust GRNs with desired system behaviors.
Reverse engineering in systems biology entails inference of gene regulatory networks from observational data. These data typically include gene expression measurements of wild-type and mutant cells in response to a given stimulus. It has been shown that the accuracy is higher when more than one type of experiment is used in the network inference process. Therefore, the development of generally applicable and effective methodologies that embed multiple sources of information in a single computational framework is a worthwhile objective.
This paper presents a new method for network inference, which uses multi-objective optimisation (MOO) to integrate multiple inference methods and experiments. We illustrate the potential of the methodology by combining ODE and correlation-based network inference procedures as well as time course and gene inactivation experiments. Here we show that our methodology is effective for a wide spectrum of data sets and method integration strategies.
The approach we present in this paper is flexible and can be used in any scenario that benefits from integration of multiple sources of information and modelling procedures in the inference process. Moreover, the application of this method to two case studies representative of bacteria and vertebrate systems has shown potential in identifying key regulators of important biological processes.
Acyl chain remodeling in lipids is a critical biochemical process that plays a central role in disease. However, remodeling remains poorly understood, despite massive increases in lipidomic data. In this work, we determine the dynamic network of ethanolamine glycerophospholipid (PE) remodeling, using data from pulse-chase experiments and a novel bioinformatic network inference approach. The model uses a set of ordinary differential equations based on the assumptions that (1) sn1 and sn2 acyl positions are independently remodeled; (2) remodeling reaction rates are constant over time; and (3) acyl donor concentrations are constant. We use a novel fast and accurate two-step algorithm to automatically infer model parameters and their values. This is the first such method applicable to dynamic phospholipid lipidomic data. Our inference procedure closely fits experimental measurements and shows strong cross-validation across six independent experiments with distinct deuterium-labeled PE precursors, demonstrating the validity of our assumptions. In contrast, fits of randomized data or fits using random model parameters are worse. A key outcome is that we are able to robustly distinguish deacylation and reacylation kinetics of individual acyl chain types at the sn1 and sn2 positions, explaining the established prevalence of saturated and unsaturated chains in the respective positions. The present study thus demonstrates that dynamic acyl chain remodeling processes can be reliably determined from dynamic lipidomic data.
The representation of a biochemical system as a network is the precursor of any mathematical model of the processes driving the dynamics of that system. Pharmacokinetics uses mathematical models to describe the interactions between a drug, its metabolites, and their targets, and through the simulation of these models predicts drug levels and/or dynamic behaviors of drug entities in the body. Therefore, the development of computational techniques for inferring the interaction network of the drug entities and its kinetic parameters from observational data is attracting great interest in the scientific community of pharmacologists. Network inference is a set of mathematical procedures for deducing the structure of a model from the experimental data associated with the nodes of the network of interactions. In this paper, we deal with the inference of a pharmacokinetic network from the concentrations of the drug and its metabolites observed at discrete time points.
The method of network inference presented in this paper draws on the theory of time-lagged correlation inference for the deduction of the interaction network, and on a maximum likelihood approach for the estimation of the kinetic parameters of the network. Both network inference and parameter estimation have been designed specifically to identify systems of biotransformations, at the biochemical level, from noisy time-resolved experimental data. We use our inference method to deduce the metabolic pathway of gemcitabine. The inputs to our inference algorithm are the experimental time series of the concentrations of gemcitabine and its metabolites. The output is the set of reactions of the metabolic network of gemcitabine.
Time-lagged correlation based inference pairs up to a probabilistic model of parameter inference from metabolites time series allows the identification of the microscopic pharmacokinetics and pharmacodynamics of a drug with a minimal a priori knowledge. In fact, the inference model presented in this paper is completely unsupervised. It takes as input the time series of the concetrations of the parent drug and its metabolites. The method, applied to the case study of the gemcitabine pharmacokinetics, shows good accuracy and sensitivity.
Over the last decade, numerous computational methods have been developed in order to infer and model biological networks. Transcriptional networks in particular have attracted significant attention due to their critical role in cell survival. The majority of network inference methods use genome-wide experimental data to search for modules of genes with coherent expression profiles and common regulators, often ignoring the multi-layer structure of transcriptional cascades. Modeling methodologies, on the other hand, assume a given network structure and vary significantly in their algorithmic approach, ranging from over-simplified representations (e.g., Boolean networks) to detailed (but computationally expensive) network simulations (e.g., with differential equations). In this work we use Artificial Neural Networks (ANNs) to model transcriptional regulatory cascades that emerge during the stress response in Saccharomyces cerevisiae and extend over three layers. We confine the structure of the ANNs to match the structure of the biological networks as determined by gene expression, DNA-protein interaction and experimental evidence provided in publicly available databases. Trained ANNs are able to predict the expression profile of 11 target genes across multiple experimental conditions with a correlation coefficient >0.7. When time-dependent interactions between upstream transcription factors (TFs) and their indirect targets are also included in the ANNs, accurate predictions are achieved for 30/34 target genes. Moreover, heterodimer formation is taken into account. We show that ANNs can be used to (1) accurately predict the expression of downstream genes in a 3-layer transcriptional cascade based on the expression of their indirect regulators and (2) infer the condition- and time-dependent activity of various TFs, including during heterodimer formation.
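The core idea of confining an ANN to a known network structure can be sketched as follows: a binary mask derived from prior evidence zeroes out the weights of absent TF -> target edges so that gradients never flow through them. The mask, the two-TF/two-target topology, and the training data below are all hypothetical, a minimal sketch rather than the paper's model:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class MaskedNet:
    """One-layer net where mask[j][i] = 0 forbids the edge TF i -> target j."""

    def __init__(self, mask, seed=0):
        rng = random.Random(seed)
        self.mask = mask
        # weights of absent edges start at (and stay) exactly zero
        self.w = [[m * rng.uniform(-1, 1) for m in row] for row in mask]
        self.b = [0.0] * len(mask)

    def forward(self, x):
        return [sigmoid(sum(w * xi for w, xi in zip(wrow, x)) + bj)
                for wrow, bj in zip(self.w, self.b)]

    def train(self, data, lr=0.5, epochs=3000):
        for _ in range(epochs):
            for x, t in data:
                y = self.forward(x)
                for j in range(len(y)):
                    delta = (y[j] - t[j]) * y[j] * (1.0 - y[j])
                    self.b[j] -= lr * delta
                    for i in range(len(x)):
                        # the mask blocks gradient flow on absent edges
                        self.w[j][i] -= lr * delta * self.mask[j][i] * x[i]

# Hypothetical prior evidence: target 0 is regulated by TF 0 only,
# target 1 by both TFs; expression targets are likewise made up.
net = MaskedNet(mask=[[1, 0], [1, 1]])
data = [([0, 0], [0.1, 0.1]), ([1, 0], [0.9, 0.5]),
        ([0, 1], [0.1, 0.9]), ([1, 1], [0.9, 0.9])]
net.train(data)
```

After training, the masked weight stays exactly zero, so target 0 responds to TF 0 regardless of TF 1, which is how the structural constraint keeps the fitted model interpretable as the biological network.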
We show that a three-layer regulatory cascade whose structure is determined by co-expressed gene modules and their regulators can successfully be modeled using ANNs with a similar configuration.
Artificial Neural Networks; transcriptional regulatory networks; yeast stress response; three-layer regulatory cascades; asynchronous regulation; heterodimers
Phenomenological information about regulatory interactions is frequently available and can be readily converted to Boolean models. Fully quantitative models, on the other hand, provide detailed insights into the precise dynamics of the underlying system. In order to connect discrete and continuous modeling approaches, methods for the conversion of Boolean systems into systems of ordinary differential equations have been developed recently. As biological interaction networks have steadily grown in size and complexity, a fully automated framework for the conversion process is desirable.
We present Odefy, a MATLAB- and Octave-compatible toolbox for the automated transformation of Boolean models into systems of ordinary differential equations. Models can be created from sets of Boolean equations or graph representations of Boolean networks. Alternatively, the user can import Boolean models from the CellNetAnalyzer toolbox, GINSim and the PBN toolbox. The Boolean models are transformed to systems of ordinary differential equations by multivariate polynomial interpolation and optional application of sigmoidal Hill functions. Our toolbox contains basic simulation and visualization functionalities for both the Boolean and the continuous models. For further analyses, models can be exported to SQUAD, GNA, MATLAB script files, the SB toolbox, SBML and R script files. Odefy contains a user-friendly graphical user interface for convenient access to the simulation and exporting functionalities. We illustrate the validity of our transformation approach as well as the usage and benefit of the Odefy toolbox for two biological systems: a mutual inhibitory switch known from stem cell differentiation and a regulatory network giving rise to a specific spatial expression pattern at the mid-hindbrain boundary.
Odefy provides an easy-to-use toolbox for the automatic conversion of Boolean models to systems of ordinary differential equations. It can be efficiently connected to a variety of input and output formats for further analysis and investigations. The toolbox is open-source and can be downloaded at http://cmb.helmholtz-muenchen.de/odefy.
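The general Boolean-to-ODE idea behind such conversions can be sketched for the mutual inhibitory switch mentioned above. This is our own minimal reimplementation of the scheme (multilinear interpolation of the Boolean rules composed with Hill functions), not the Odefy code; the Hill parameters k, n and the time scale tau are illustrative choices:

```python
def hill(x, k=0.5, n=4):
    """Sigmoidal Hill function mapping [0, 1] activity to [0, 1)."""
    return x ** n / (x ** n + k ** n)

def derivs(a, b, tau=1.0):
    # The Boolean rules A* = NOT B, B* = NOT A interpolate multilinearly
    # to f(x) = 1 - x on [0, 1]; the ODE relaxes each node toward f(H(.)).
    da = ((1.0 - hill(b)) - a) / tau
    db = ((1.0 - hill(a)) - b) / tau
    return da, db

def integrate(a, b, t_end=50.0, dt=0.01):
    """Forward-Euler integration of the continuous switch."""
    t = 0.0
    while t < t_end:
        da, db = derivs(a, b)
        a, b = a + dt * da, b + dt * db
        t += dt
    return a, b

# A small asymmetry in the initial state resolves into one of the two
# stable states, mirroring the Boolean attractors (1,0) and (0,1).
a, b = integrate(0.6, 0.4)
```

The continuous system inherits the bistability of the Boolean switch: the slightly favored node saturates high while the other is repressed low, which is the qualitative correspondence the transformation approach is designed to preserve.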
Boolean networks have been used as a discrete model for several biological systems, including metabolic and genetic regulatory networks. Due to their simplicity, they offer a firm foundation for generic studies of physical systems. In this work we show, using a measure of context-dependent information, set complexity, that prior to reaching an attractor, random Boolean networks pass through a transient state characterized by high complexity. We corroborate this finding using another measure of complexity, namely, the statistical complexity. We show that the networks can be tuned to the regime of maximal complexity by adding a suitable amount of noise to the deterministic Boolean dynamics. In fact, we show that for networks with Poisson degree distributions, all networks ranging from subcritical to slightly supercritical can be tuned with noise to reach maximal set complexity in their dynamics. For networks with a fixed number of inputs this is true for near-to-critical networks. This increase in complexity is obtained at the expense of disruption in information flow. For a large ensemble of networks showing maximal complexity, there exists a balance between noise and contracting dynamics in the state space. In networks that are close to critical the intrinsic noise required for the tuning is smaller and thus also has the smallest effect in terms of the information processing in the system. Our results suggest that the maximization of complexity near to the state transition might be a more general phenomenon in physical systems, and that noise present in a system may in fact be useful in retaining the system in a state with high information content.
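The objects of study can be sketched concretely: a random Boolean network with K inputs per node, random update rules, and optional flip noise, plus attractor detection exploiting the finite state space. All parameters below (network size, K, seed) are illustrative choices, not those used in the paper:

```python
import random

def make_rbn(n, k, rng):
    """Random Boolean network: k random inputs and a random rule per node."""
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables, noise=0.0, rng=None):
    """Synchronous update; with probability `noise`, flip a node's output."""
    new = []
    for i in range(len(state)):
        idx = sum(state[j] << b for b, j in enumerate(inputs[i]))
        bit = tables[i][idx]
        if rng is not None and rng.random() < noise:
            bit ^= 1  # intrinsic noise perturbs the deterministic rule
        new.append(bit)
    return new

rng = random.Random(1)
inputs, tables = make_rbn(n=12, k=2, rng=rng)
state = [rng.randint(0, 1) for _ in range(12)]

# Deterministic dynamics on a finite state space must revisit a state,
# i.e. fall onto an attractor cycle; detect it by recording history.
seen = {}
t = 0
while tuple(state) not in seen:
    seen[tuple(state)] = t
    state = step(state, inputs, tables)
    t += 1
cycle_len = t - seen[tuple(state)]
```

The transient is everything before the first revisited state; it is over such transients, and over their noisy perturbations via the `noise` argument, that complexity measures like set complexity are evaluated.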
Critical dynamics are assumed to be an attractive mode for normal brain functioning, as information processing and computational capabilities are found to be optimal in the critical state. Recent experimental observations of neuronal activity patterns following power-law distributions, a hallmark of systems at a critical state, have led to the hypothesis that human brain dynamics could be poised at a phase transition between ordered and disordered activity. A so far unresolved question concerns the medical significance of critical brain activity and how it relates to pathological conditions. Using data from invasive electroencephalogram recordings from humans, we show that during epileptic seizures neuronal activity patterns deviate from the normally observed power-law distribution characterizing critical dynamics. The comparison of these observations to results from a computational model exhibiting self-organized criticality (SOC) based on adaptive networks allows further insights into the underlying dynamics. Together these results suggest that brain dynamics deviates from criticality during seizures, caused by the failure of adaptive SOC.
Over recent years it has become apparent that the concept of phase transitions is not only applicable to the systems classically considered in physics. It applies to a much wider class of complex systems exhibiting phases, characterized by qualitatively different types of long-term behavior. In the critical states, which are located directly at the transition, small changes can have a large effect on the system. This and other properties of critical states prove to be advantageous for computation and memory. It is therefore suspected that cerebral neural networks also operate close to criticality. This is supported by in vitro and in vivo measurements of power laws in certain scaling relationships, the hallmarks of phase transitions. While critical dynamics is arguably an attractive mode of normal brain functioning, its relation to pathological brain conditions is still unresolved. Here we show that brain dynamics deviates from a critical state during epileptic seizures in vivo. Furthermore, insights from a computational model suggest seizures to be caused by the failure of adaptive self-organized criticality, a mechanism of self-organization to criticality based on the interplay between network dynamics and topology.
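The statistical signature at issue, power-law avalanche sizes at criticality versus rapidly decaying sizes away from it, can be illustrated with a plain branching process. This is our toy illustration of the scaling behavior, not the adaptive-network SOC model of the paper; sample counts and the size cap are arbitrary:

```python
import random

def avalanche_size(sigma, rng, cap=10000):
    """Total activity of a branching process: each active unit produces
    two potential offspring, each with probability sigma / 2, so the
    branching ratio (mean offspring per unit) is sigma."""
    active, size = 1, 0
    while active and size < cap:
        size += active
        offspring = 0
        for _ in range(active):
            offspring += (rng.random() < sigma / 2) + (rng.random() < sigma / 2)
        active = offspring
    return size

rng = random.Random(0)
# sigma = 1: critical, heavy-tailed (power-law) avalanche sizes
critical = [avalanche_size(1.0, rng) for _ in range(2000)]
# sigma = 0.5: subcritical, avalanches die out quickly
subcritical = [avalanche_size(0.5, rng) for _ in range(2000)]
```

At the critical branching ratio the size distribution develops a heavy tail (theoretically P(S = s) ~ s^(-3/2)), whereas subcritical dynamics yield only small avalanches; a departure of empirical avalanche statistics from the heavy-tailed form is the kind of deviation from criticality reported here during seizures.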