The balance between maintenance of the stem cell state and terminal differentiation is influenced by the cellular environment. The switching between these states has long been understood as a transition between attractor states of a molecular network. In this view, stochastic fluctuations are either suppressed or serve to trigger the transition, but they do not themselves determine the attractor states.
We present a novel mathematical concept in which stem cell and progenitor population dynamics are described as a probabilistic process that arises from cell proliferation and small fluctuations in the state of differentiation. These state fluctuations reflect random transitions between different activation patterns of the underlying regulatory network. Importantly, the associated noise amplitudes are state-dependent and set by the environment. Their variability determines the attractor states, and thus actually governs population dynamics. This model quantitatively reproduces the observed dynamics of differentiation and dedifferentiation in promyelocytic precursor cells.
Consequently, state-specific noise modulation by external signals can be instrumental in controlling stem cell and progenitor population dynamics. We propose follow-up experiments for quantifying the imprinting influence of the environment on cellular noise regulation.
Representing and analyzing complex networks remains a roadblock to creating dynamic network models of biological processes and pathways. The study of cell fate transitions can reveal much about the transcriptional regulatory programs that underlie these phenotypic changes and give rise to the coordinated patterns in expression changes that we observe. The application of gene expression state space trajectories to capture cell fate transitions at the genome-wide level is one approach currently used in the literature. In this paper, we analyze the gene expression dataset of Huang et al. (2005), which follows the differentiation of promyelocytes into neutrophil-like cells in the presence of the inducers dimethyl sulfoxide and all-trans retinoic acid. Huang et al. (2005) build on the work of Kauffman (2004), who proposed the attractor hypothesis: cells exist in an expression landscape, and their expression trajectories converge towards attractive sites in this landscape. We propose an alternative interpretation that explains this convergent behavior by recognizing that two types of processes participate in these cell fate transitions: core processes that include the specific differentiation pathways of promyelocytes to neutrophils, and transient processes that capture those pathways and responses specific to the inducer. Using functional enrichment analyses, specific biological examples, and an analysis of the trajectories and their core and transient components, we validate our hypothesis on the Huang et al. (2005) dataset.
Understanding how cells differentiate from one state to another is a fundamental problem in biology, with implications for better understanding evolution, the development of complex organisms from a single fertilized egg, and the etiology of human disease. One way to view these processes is to examine cells as “complex adaptive systems” where the state of all genes in a cell (more than 20,000 genes) determines that cell's “state” at a given point in time. In this view, differentiating cells move along a path in “state space” from one stable “attractor” to another. In a 2005 paper, Sui Huang and colleagues presented an experimental model in which they claimed to have evidence for such attractors and for the transitions between them. The problem with this approach is that, although it is intuitively appealing, it lacks predictive power. Reanalyzing Huang's data, we demonstrate that there is an alternative interpretation that still allows for a state space description but has a greater ability to make testable predictions. Specifically, we show that these abstract state space trajectories can be mapped onto better-known pathways and represented as a “core” differentiation pathway and “transient” processes that capture the effects of the treatments that initiate differentiation.
Stem cell differentiation and the maintenance of self-renewal are intrinsically complex processes requiring the coordinated dynamic expression of hundreds of genes and proteins in precise response to external signalling cues. Numerous recent reports have used both experimental and computational techniques to dissect this complexity. These reports suggest that the control of cell fate has both deterministic and stochastic elements: complex underlying regulatory networks define stable molecular ‘attractor’ states towards which individual cells are drawn over time, whereas stochastic fluctuations in gene and protein expression levels drive transitions between coexisting attractors, ensuring robustness at the population level.
Biological networks, such as genetic regulatory networks, often contain positive and negative feedback loops that settle down to dynamically stable patterns. Identifying these patterns, the so-called attractors, can provide important insights for biologists into the molecular mechanisms underlying many coordinated cellular processes such as cellular division, differentiation, and homeostasis. Both synchronous and asynchronous Boolean networks have been used to simulate genetic regulatory networks and identify their attractors. Common methods for computing attractors either start from a randomly selected initial state or exhaustively search the state space of a network. However, the time complexity of these methods grows exponentially with the number and length of attractors. Here, we present two algorithms for computing attractors in synchronous and asynchronous Boolean networks. For the synchronous scenario, we combine iterative methods with reduced ordered binary decision diagrams (ROBDDs) to obtain an improved attractor-computation algorithm. In the second algorithm, the attractors of a synchronous Boolean network are used, via asynchronous Boolean translation functions, to derive the attractors of the asynchronous scenario. The proposed algorithms are implemented in a procedure called geneFAtt. Compared to existing tools such as genYsis, geneFAtt is significantly faster at computing attractors for empirical experimental systems.
The software package is available at https://sites.google.com/site/desheng619/download.
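The exhaustive state-space search that such tools improve upon can be sketched in a few lines. The three-gene network and its update rules below are illustrative assumptions, not taken from the paper or from any real regulatory system:

```python
from itertools import product

# Toy 3-gene synchronous Boolean network (illustrative rules): each
# rule computes one gene's next value from the current state tuple.
rules = [
    lambda s: s[1] and not s[2],   # gene 0
    lambda s: s[0] or s[2],        # gene 1
    lambda s: not s[0],            # gene 2
]

def step(state):
    return tuple(int(r(state)) for r in rules)

def attractors(rules):
    """Follow every initial state until a state repeats; the repeated
    cycle is an attractor (a fixed point if its length is 1)."""
    found = set()
    for init in product((0, 1), repeat=len(rules)):
        seen, s = {}, init
        while s not in seen:
            seen[s] = len(seen)   # visit time, via insertion order
            s = step(s)
        start = seen[s]           # first visit of the repeated state
        cycle = [st for st, t in seen.items() if t >= start]
        found.add(tuple(sorted(cycle)))  # canonical, rotation-independent
    return found

atts = attractors(rules)  # two fixed points and one 2-cycle
```

The exponential cost is visible directly: the outer loop enumerates all 2^n states, which is what the ROBDD-based representation avoids.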
Cell fate decisions remarkably generate specific differentiation paths among the multiple possibilities that can arise through the complex interplay of high-dimensional genome activities. The coordinated action of thousands of genes in switching cell fate suggests the existence of stable attractors guiding the process. However, the origin of the intracellular mechanisms that create a "cellular attractor" remains unknown. Here, we examined the collective behavior of genome-wide expression during neutrophil differentiation induced by two different stimuli, dimethyl sulfoxide (DMSO) and all-trans-retinoic acid (atRA). To overcome the difficulties of dealing with single-gene expression noise, we grouped genes into ensembles and analyzed their expression dynamics in a correlation space defined by Pearson correlation and mutual information. The standard deviation of the correlation distributions of gene ensembles decreases as the ensemble size increases, following an inverse square root law, both for ensembles chosen randomly from the whole genome and for ensembles ranked according to expression variance across time. Choosing an ensemble size of 200 genes, we show that the two probability distributions of correlations of randomly selected genes for the atRA and DMSO responses overlap after 48 hours, defining the neutrophil attractor. Next, tracking the ranked ensembles' trajectories, we noticed that only certain ensembles, not all, fall into the attractor in a fractal-like manner. Removing these genome elements from the whole genome, for both the atRA and DMSO responses, destroys the attractor, providing evidence for specific genome elements (named the "genome vehicle") responsible for the neutrophil attractor. Notably, within the genome vehicle, genes with low or moderate expression changes, which are often considered noisy and insignificant, are essential components in the creation of the neutrophil attractor.
Further investigations along with our findings might provide a comprehensive mechanistic view of cell fate decision.
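The inverse square root law has a simple statistical reading: a Pearson correlation estimated over an ensemble of n genes has a standard error of roughly 1/√n. A minimal sketch with synthetic data (assumed Gaussian profiles, not the actual microarray measurements):

```python
import random, math

random.seed(1)

G = 5000  # synthetic "genome": expression of G genes in two conditions
x = [random.gauss(0, 1) for _ in range(G)]
y = [random.gauss(0, 1) for _ in range(G)]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = math.sqrt(sum((u - ma) ** 2 for u in a))
    vb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (va * vb)

def corr_spread(n, draws=500):
    """Std of the condition-to-condition correlation, computed over
    many random gene ensembles of size n."""
    cs = []
    for _ in range(draws):
        idx = random.sample(range(G), n)
        cs.append(pearson([x[i] for i in idx], [y[i] for i in idx]))
    m = sum(cs) / len(cs)
    return math.sqrt(sum((c - m) ** 2 for c in cs) / (len(cs) - 1))

spreads = {n: corr_spread(n) for n in (50, 200, 800)}
# spreads[n] tracks 1/sqrt(n): quadrupling n roughly halves the spread
```

This is why grouping genes into ensembles suppresses single-gene noise and lets the attractor emerge in correlation space.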
Cell lineage commitment and differentiation are governed by a complex gene regulatory network. Disruption of these processes by inappropriate regulatory signals or by mutational rewiring of the network can lead to tumorigenesis. Cancer cells often exhibit immature or embryonic traits, and dysregulated developmental genes can act as oncogenes. However, the prevailing paradigm of somatic evolution and multi-step tumorigenesis, while useful in many instances, offers no logically coherent reason for why oncogenesis recapitulates ontogenesis. The formal concept of “cancer attractors”, derived from an integrative, complex systems approach to gene regulatory networks, may provide a natural explanation. Here we present the theory of attractors in gene network dynamics and review the concept of cell types as attractors. We argue that cancer cells are trapped in abnormal attractors and discuss this concept in the light of recent ideas in cancer biology, including cancer genomics and cancer stem cells, as well as the implications for differentiation therapy.
Hebbian cell assemblies provide a theoretical framework for the modeling of cognitive processes that grounds them in the underlying physiological neural circuits. Recently we presented an extension of cell assemblies with operational components that allows one to model aspects of language, rules, and complex behaviour. In the present work we study the generation of syntactic sequences using operational cell assemblies timed by unspecific trigger signals. Syntactic patterns are implemented in terms of hetero-associative transition graphs in attractor networks, which cause a directed flow of activity through the neural state space. We identify parameter regimes that enable an unspecific excitatory control signal to switch reliably between attractors in accordance with the implemented syntactic rules. If several target attractors are possible in a given state, noise in the system, in conjunction with a winner-takes-all mechanism, can randomly choose a target. Disambiguation can also be guided by context signals or specific additional external signals. Given a permanently elevated level of external excitation, the model can enter an autonomous mode in which it generates temporal grammatical patterns continuously.
Cell assemblies; Attractor networks; Grammar; Language; Behaviour
The gene regulatory circuit motif in which two opposing fate-determining transcription factors inhibit each other but activate themselves has been used in mathematical models of binary cell fate decisions in multipotent stem or progenitor cells. This simple circuit can generate multistability and explains the symmetric “poised” precursor state in which both factors are present in the cell at equal amounts as well as the resolution of this indeterminate state as the cell commits to either cell fate characterized by an asymmetric expression pattern of the two factors. This establishes the two alternative stable attractors that represent the two fate options. It has been debated whether cooperativity of molecular interactions is necessary to produce such multistability.
Here we take a general modeling approach and argue that this question is not relevant. We show that non-linearity can arise in two distinct models in which no explicit interaction between the two factors is assumed and that distinct chemical reaction kinetic formalisms can lead to the same (generic) dynamical system form. Moreover, we describe a novel type of bifurcation that produces a degenerate steady state that can explain the metastable state of indeterminacy prior to cell fate decision-making and is consistent with biological observations.
The general model presented here thus offers a novel principle for linking regulatory circuits with the state of indeterminacy characteristic of multipotent (stem) cells.
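The qualitative behavior of such a circuit can be sketched with assumed Hill-type kinetics and illustrative parameter values (self-activation plus mutual inhibition for two factors x and y); the two asymmetric attractors emerge directly, and a central symmetric state also exists for these parameters:

```python
def derivs(x, y, a=1.0, b=1.0, k=1.0, s=0.5, n=4):
    """Self-activation plus mutual inhibition with Hill kinetics
    (illustrative parameters, not fitted to any measured circuit)."""
    dx = a * x**n / (s**n + x**n) + b * s**n / (s**n + y**n) - k * x
    dy = a * y**n / (s**n + y**n) + b * s**n / (s**n + x**n) - k * y
    return dx, dy

def integrate(x, y, dt=0.05, steps=2000):
    # Forward-Euler integration to a steady state.
    for _ in range(steps):
        dx, dy = derivs(x, y)
        x, y = x + dt * dx, y + dt * dy
    return x, y

# Two mirror-image initial biases resolve to opposite attractors.
xa, ya = integrate(2.0, 0.1)   # converges to x-high / y-low
xb, yb = integrate(0.1, 2.0)   # converges to x-low / y-high
```

Note that with these parameters the symmetric point x = y = 1 is also a fixed point, matching the "poised" precursor state discussed above.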
In contrast to the classical view of development as a preprogrammed and deterministic process, recent studies have demonstrated that stochastic perturbations of highly non-linear systems may underlie the emergence and stability of biological patterns. Herein, we address the question of whether noise contributes to the generation of the stereotypical temporal pattern in gene expression during flower development. We modeled the regulatory network of organ identity genes in the Arabidopsis thaliana flower as a stochastic system. This network has previously been shown to converge to ten fixed-point attractors, each with gene expression arrays that characterize inflorescence cells and primordial cells of sepals, petals, stamens, and carpels. The network used is binary, and the logical rules that govern its dynamics are grounded in experimental evidence. We introduced different levels of uncertainty into the updating rules of the network. Interestingly, for noise levels of around 0.5–10%, the system exhibited a sequence of transitions among attractors that mimics the sequence of gene activation configurations observed in real flowers. We also implemented the gene regulatory network as a continuous system using the Glass model of differential equations, which can be considered a first approximation of kinetic reaction equations but is not necessarily equivalent to the Boolean model. Interestingly, the Glass dynamics recover a temporal sequence of attractors that is qualitatively similar, although not identical, to that obtained using the Boolean model. Thus, time ordering in the emergence of cell-fate patterns is not an artifact of synchronous updating in the Boolean model. Our model therefore provides a novel explanation for the emergence and robustness of the ubiquitous temporal pattern of floral organ specification.
It also constitutes a new approach to understanding morphogenesis, providing predictions on the population dynamics of cells with different genetic configurations during development.
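The effect of introducing uncertainty into Boolean updating rules can be sketched as follows. The three-gene network is an assumed toy, far smaller than the floral organ identity network, but it shows the same qualitative point: without noise a trajectory stays in its attractor, while a few percent of update errors drives transitions out of it:

```python
import random

# Toy synchronous Boolean network (illustrative rules, not the
# Arabidopsis organ-identity network).
rules = [
    lambda s: s[1] and not s[2],
    lambda s: s[0] or s[2],
    lambda s: not s[0],
]

def noisy_step(state, p, rng):
    """Synchronous update in which each gene's computed value is
    flipped with probability p (the level of uncertainty)."""
    nxt = [int(r(state)) for r in rules]
    return tuple(v ^ (rng.random() < p) for v in nxt)

def visited(p, steps=500, seed=3):
    rng = random.Random(seed)
    s = (0, 1, 1)            # start in a fixed-point attractor
    states = {s}
    for _ in range(steps):
        s = noisy_step(s, p, rng)
        states.add(s)
    return states

# Without noise the system never leaves the attractor; with a few
# percent noise it is kicked between basins and explores other states.
quiet = visited(p=0.0)
noisy = visited(p=0.05)
```

In the flower model the analogous transitions are not arbitrary: the basin structure biases which attractor is reached next, producing the observed temporal ordering.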
Persistent activity states (attractors), observed in several neocortical areas after the removal of a sensory stimulus, are believed to be the neuronal basis of working memory. One of the possible mechanisms that can underlie persistent activity is recurrent excitation mediated by intracortical synaptic connections. A recent experimental study revealed that connections between pyramidal cells in prefrontal cortex exhibit various degrees of synaptic depression and facilitation. Here we analyze the effect of synaptic dynamics on the emergence and persistence of attractor states in interconnected neural networks. We show that different combinations of synaptic depression and facilitation result in qualitatively different network dynamics with respect to the emergence of the attractor states. This analysis raises the possibility that the framework of attractor neural networks can be extended to represent time-dependent stimuli.
Imagine driving a car when you hear the sentence, “Take the next left.” Immediately, auditory “left” neurons begin to fire. But to use this information a few seconds later when you reach the junction, these neurons should persist in their firing after the auditory stimulus has been removed. This persistent activity, believed to be the basis for working memory, is maintained by a recurrent neural network in memory-related cortical areas. Previous studies showed that a network can maintain memories of “which” stimulus was observed. It has recently been shown that synapses between excitatory cells in the prefrontal cortex, where persistent activity is often observed, exhibit activity-dependent dynamics. Different forms of synaptic dynamics such as depression and facilitation were observed. In this work, we use a mathematical model of a recurrent neural network to analyze the effect of introducing dynamic synapses in the context of persistent activity. We find that the initiation of persistent firing can depend on the duration of the input. These results open the possibility that recurrent neural networks can encode not only “which” stimulus was observed, but also for “how long.”
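The interplay of depression and facilitation referred to above is often summarized by the steady-state efficacy of the Tsodyks-Markram synapse model as a function of presynaptic rate; the parameter values below are assumptions chosen for illustration, not fits to the prefrontal data:

```python
def steady_efficacy(rate, U, tau_f, tau_d):
    """Steady-state release probability u* and available resources x*
    of the Tsodyks-Markram model; transmission scales as u* times x*."""
    u = U * (1 + tau_f * rate) / (1 + U * tau_f * rate)  # facilitation
    x = 1.0 / (1 + u * tau_d * rate)                     # depression
    return u * x

# Depression-dominated synapse: efficacy falls with rate.
dep = {r: steady_efficacy(r, U=0.5, tau_f=0.05, tau_d=0.5) for r in (5, 40)}
# Facilitation-dominated synapse: efficacy grows over this range.
fac = {r: steady_efficacy(r, U=0.05, tau_f=1.0, tau_d=0.05) for r in (5, 40)}
```

Because efficacy depends on the recent rate history, the recurrent input a neuron receives depends on how long the stimulus has been present, which is the mechanism behind encoding "how long" as well as "which".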
Resting state networks (RSNs) show a surprisingly coherent and robust spatiotemporal organization. Previous theoretical studies demonstrated that these patterns can be understood as emergent on the basis of the underlying neuroanatomical connectivity skeleton. Integrating biologically realistic DTI/DSI (Diffusion Tensor Imaging/Diffusion Spectrum Imaging)-based neuroanatomical connectivity into a brain model of Ising spin dynamics, we found a system with multiple attractors, which can be studied analytically. The multistable attractor landscape thus defines a functionally meaningful dynamic repertoire of the brain network that is inherently present in the neuroanatomical connectivity. We demonstrate that the greater the entropy of the attractors, the richer the dynamical repertoire and, consequently, the greater the computational capability of the brain network. We therefore hypothesize that human brain connectivity developed a scale-free type of architecture in order to store a large number of different and flexibly accessible brain functions.
computational neuroscience; fMRI modeling; ongoing activity; resting state; connectivity matrix
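For a small Ising system the attractor repertoire and its entropy can be computed exactly. The eight-spin coupling matrix below is an assumed Hopfield-style toy storing two patterns, standing in for the anatomical connectivity matrix:

```python
import math
from itertools import product

# Couplings storing two orthogonal +/-1 patterns (toy stand-in for
# the DTI/DSI-based connectivity matrix).
p1 = (1, 1, 1, 1, -1, -1, -1, -1)
p2 = (1, 1, -1, -1, 1, 1, -1, -1)
N = len(p1)
J = [[0 if i == j else p1[i] * p1[j] + p2[i] * p2[j] for j in range(N)]
     for i in range(N)]

def energy(s):
    return -0.5 * sum(J[i][j] * s[i] * s[j] for i in range(N) for j in range(N))

def descend(s):
    """Zero-temperature dynamics: greedily flip single spins while the
    energy decreases; the end point is an attractor (local minimum)."""
    s = list(s)
    while True:
        best, best_e = None, energy(s)
        for i in range(N):
            s[i] = -s[i]
            if energy(s) < best_e:
                best, best_e = i, energy(s)
            s[i] = -s[i]
        if best is None:
            return tuple(s)
        s[best] = -s[best]

basin = {}
for state in product((-1, 1), repeat=N):
    a = descend(state)
    basin[a] = basin.get(a, 0) + 1

# Entropy of the attractor basin occupancies: a proxy for the
# richness of the dynamical repertoire.
total = 2 ** N
H = -sum(c / total * math.log(c / total) for c in basin.values())
```

The stored patterns and their negations are guaranteed attractors; the basin entropy H is bounded by the log of the number of attractors, which is the quantity the richness argument turns on.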
It is well established that the variability of neural activity across trials, as measured by the Fano factor, is high. This poses limits on information encoding by neural activity. However, a series of recent neurophysiological experiments has changed this traditional view. Single-cell recordings across a variety of species, brain areas, brain states, and stimulus conditions demonstrate a remarkable reduction of neural variability when an external stimulus is applied or when attention is allocated towards a stimulus within a neuron's receptive field, suggesting an enhancement of information encoding. Using a heterogeneously connected neural network model whose dynamics exhibit multiple attractors, we demonstrate here how this variability reduction can arise from a network effect. In the spontaneous state, we show that the high degree of neural variability is mainly due to fluctuation-driven excursions from attractor to attractor. This occurs when, in parameter space, the network working point lies near the bifurcation that allows multistable attractors. The application of an external excitatory drive by stimulation or attention stabilizes one specific attractor, eliminating the transitions between the different attractors and resulting in a net decrease in neural variability over trials. Importantly, non-responsive neurons also exhibit a reduction of variability. Finally, this reduced variability is found to arise from an increased regularity of the neural spike trains. In conclusion, these results suggest that the variability reduction under stimulation and attention is a property of neural circuits.
To understand how neurons encode information, neuroscientists record their firing activity while the animal executes a given task over many trials. Surprisingly, it has been found that the neural response is highly variable, which a priori limits the encoding of information by these neurons. However, recent experiments have shown that this variability is reduced when the animal receives a stimulus or attends to a particular one, suggesting an enhancement of information encoding. It is known that one cause of neural variability is that individual neurons receive an input that fluctuates around their firing threshold. We demonstrate here that all the experimental results can naturally arise from the dynamics of a neural network. Using a realistic model, we show that the neural variability during spontaneous activity is particularly high because input noise induces large fluctuations between multiple (but unstable) network states. With stimulation or attention, one particular network state is stabilized and fluctuations decrease, leading to a reduction in neural variability. In conclusion, our results suggest that the observed variability reduction is a property of the neural circuits of the brain.
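The network effect described above can be caricatured at the level of trial-by-trial spike counts: if the spontaneous condition samples two attractors with different firing rates while stimulation pins one of them, the Fano factor drops. A sketch with assumed rates (5 and 50 spikes per second), not fitted to any recording:

```python
import random

random.seed(0)

def poisson_count(rate, window=1.0):
    """Poisson spike count in a window, sampled via exponential ISIs."""
    t, n = 0.0, 0
    while True:
        t += random.expovariate(rate)
        if t > window:
            return n
        n += 1

def trial_counts(rates, trials=400):
    # On each trial the network sits in one attractor, drawn from `rates`.
    return [poisson_count(random.choice(rates)) for _ in range(trials)]

def fano(counts):
    m = sum(counts) / len(counts)
    v = sum((c - m) ** 2 for c in counts) / (len(counts) - 1)
    return v / m

spont = fano(trial_counts([5.0, 50.0]))  # hopping between two attractors
stim = fano(trial_counts([50.0]))        # stimulus stabilizes one attractor
# spont is far above 1; stim stays near the Poisson value of 1
```

The excess variability in the spontaneous condition is between-trial attractor hopping, not noisier spiking within a trial, which is exactly the distinction the full network model makes.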
The finding of a genome-wide oscillation in transcription that gates cells into S phase and coordinates mitochondrial and metabolic functions has altered our understanding of how the cell cycle is timed and how stable cellular phenotypes are maintained. Here we present the evidence and arguments in support of the idea that everything oscillates, and the rationale for viewing the cell as an attractor from which deterministic noise can be tuned by appropriate coupling among the many feedback loops, or regulons, that make up the transcriptional-respiratory attractor cycle (TRAC). The existence of this attractor also explains many of the dynamic macroscopic properties of the cell cycle, and it appears to be the timekeeping oscillator in both cell cycles and circadian rhythms. The path taken by this primordial oscillator in the course of differentiation or drug response may involve period-doubling behavior. Evidence for a relatively high-frequency timekeeping oscillator in yeast and mammalian cells comes from expression array analysis and GC-MS in the case of yeast, and primarily from macroscopic measures of phase response to perturbation in the case of mammalian cells. Low-amplitude, genome-wide oscillations, a ubiquitous but often unrecognized attribute of phenotype, could be a source of seemingly intractable biological noise in microarray and proteomic studies. These oscillations in transcript and protein levels, and the repeated cycles of synthesis and degradation they require, represent a high energy cost to the cell which must, from an evolutionary point of view, be recovered as essential information. We suggest that the information contained in this genome-wide oscillation is the dynamic code that organizes a stable phenotype from an otherwise passive genome.
The prefrontal cortex (PFC) plays a crucial role in flexible cognitive behavior by representing task-relevant information in its working memory. Working memory with sustained neural activity has been described as a neural dynamical system composed of multiple attractors, each of which corresponds to an active state of a cell assembly representing a fragment of information. Recent studies have revealed that the PFC not only represents multiple sets of information but also switches between multiple representations and transforms one set of information into another depending on the task context. This representational switching between different sets of information is possibly generated endogenously by flexible network dynamics, but the details of the underlying mechanisms are unclear. Here we propose a dynamically reorganizable attractor network model based on certain internal changes in synaptic connectivity, or short-term plasticity. We construct a network model based on a spiking neuron model with dynamical synapses, which can qualitatively reproduce the experimentally demonstrated representational switching in the PFC when a monkey was performing a goal-oriented action-planning task. The model holds multiple sets of information that are required for action planning before and after representational switching through reconfiguration of functional cell assemblies. Furthermore, we analyzed the population dynamics of this model with a mean-field model and show that the changes in cell assembly configuration correspond to changes in the attractor structure, which can be viewed as a bifurcation process of the dynamical system. This dynamical reorganization of a neural network could be a key to uncovering the mechanism of flexible information processing in the PFC.
The prefrontal cortex plays a highly flexible role in various cognitive tasks, e.g., decision making and action planning. Neurons in the prefrontal cortex exhibit flexible representation or selectivity for task-relevant information and are involved in working memory with sustained activity, which can be modeled as attractor dynamics. Moreover, recent experiments revealed that prefrontal neurons not only represent parametric or discrete sets of information but also switch the representation and transform one set of information into another to match the context of the required task. However, the underlying mechanisms of this flexible representational switching are unknown. Here we propose a dynamically reorganizable attractor network model in which short-term modulation of the synaptic connections reconfigures the structure of the neural attractors through assembly and disassembly of a network of cells, producing flexible attractor dynamics. On the basis of computer simulations as well as theoretical analysis, we show that this model reproduces the experimentally demonstrated representational switching, and that switching along certain characteristic axes defining the neural dynamics captures the essence of the representational switching. This model has the potential to provide unique insights into flexible information representation and processing in the cortical network.
Synaptic plasticity is an underlying mechanism of learning and memory in neural systems, but it is controversial whether synaptic efficacy is modulated in a graded or binary manner. It has been argued that binary synaptic weights would be less susceptible to noise than graded weights, which has impelled some theoretical neuroscientists to shift from graded to binary weights in their models. We compare the retrieval performance of models using binary and graded weight representations through numerical simulations of stochastic attractor networks. We also investigate stochastic attractor models using multiple discrete levels of weight states, and determine the optimal threshold for dilution of binary weight representations. Our results show that a binary weight representation is not less susceptible to noise than a graded weight representation in stochastic attractor models, and we find that the load capacities with an increasing number of weight states rapidly approach the load capacity with graded weights. The optimal threshold for dilution of binary weight representations under stochastic conditions occurs when approximately 50% of the smallest weights are set to zero.
Synaptic plasticity; Binary versus graded; Associative memory; Point attractor networks
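The binary-versus-graded comparison can be sketched with a small Hopfield-style network in which the graded Hebbian weight matrix is clipped to its sign. Network size, load, and corruption level below are assumptions, and the deterministic update here omits the stochastic elements of the full study:

```python
import random

random.seed(7)
N, P = 200, 3  # network size and number of stored patterns (low load)
patterns = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(P)]

# Graded (Hebbian) weights and their binary (sign-clipped) counterpart.
W = [[0.0 if i == j else sum(p[i] * p[j] for p in patterns) / N
      for j in range(N)] for i in range(N)]
Wb = [[0 if w == 0 else (1 if w > 0 else -1) for w in row] for row in W]

def retrieve(weights, cue, iters=20):
    s = list(cue)
    for _ in range(iters):  # synchronous sign updates
        s = [1 if sum(weights[i][j] * s[j] for j in range(N)) >= 0 else -1
             for i in range(N)]
    return s

# Cue: the first pattern with 20 of its 200 bits flipped.
cue = list(patterns[0])
for i in random.sample(range(N), 20):
    cue[i] = -cue[i]

overlap = lambda a, b: sum(u * v for u, v in zip(a, b))
rec_graded = retrieve(W, cue)
rec_binary = retrieve(Wb, cue)
```

At this low load both representations clean up the corrupted cue, which is the qualitative point: sign-clipping the weights does not, by itself, degrade retrieval.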
Motivation: The primary purpose of modeling gene regulatory networks for developmental processes is to reveal the pathways governing cellular differentiation into specific phenotypes. Knowledge of the differentiation network will enable the generation of desired cell fates through careful alteration of the governing network by adequate manipulation of the cellular environment.
Results: We have developed a novel integer programming-based approach to reconstruct the underlying regulatory architecture of differentiating embryonic stem cells from discrete temporal gene expression data. The network reconstruction problem is formulated using two inherent features of biological networks: (i) a cascade architecture, which enables treatment of the entire complex network as a set of interconnected modules, and (ii) sparsity of the interconnections between transcription factors. The developed framework is applied to a system of embryonic stem cells differentiating towards the pancreatic lineage. Experimentally determined expression profile dynamics of relevant transcription factors serve as the input to the network identification algorithm. The developed formulation accurately captures many of the known regulatory modes involved in pancreatic differentiation. The predictive capacity of the model is tested by simulating in silico a potential pathway of subsequent differentiation. The predicted pathway is experimentally verified by concurrent differentiation experiments. Experimental results agree well with model predictions, thereby illustrating the predictive accuracy of the proposed algorithm.
Supplementary information: Supplementary data are available at Bioinformatics online.
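The sparsity objective in the formulation can be illustrated with a brute-force stand-in for the integer program: for each target gene, find the smallest regulator set whose discretized states consistently determine the target's next state. The three-gene time series below is a constructed toy, not the stem cell data:

```python
from itertools import combinations

# Toy discretized time series (rows = time points, columns = genes),
# generated so that gene 2 obeys g2(t+1) = g0(t) AND NOT g1(t).
series = [
    [0, 0, 0],
    [0, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
    [1, 0, 0],
    [0, 0, 1],
    [1, 1, 0],
]

def minimal_regulators(series, target):
    """Smallest gene set whose states at time t consistently determine
    the target's state at t+1 (the sparsity objective, by brute force)."""
    n = len(series[0])
    for size in range(1, n + 1):
        for regs in combinations(range(n), size):
            table, consistent = {}, True
            for t in range(len(series) - 1):
                key = tuple(series[t][r] for r in regs)
                out = series[t + 1][target]
                if table.setdefault(key, out) != out:
                    consistent = False
                    break
            if consistent:
                return regs
    return tuple(range(n))

regs2 = minimal_regulators(series, 2)  # recovers the regulator pair (0, 1)
```

No single gene explains gene 2's dynamics here, so the search correctly returns the two-gene set; the integer program encodes the same preference for small regulator sets as constraints rather than enumeration.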
An endogenous molecular-cellular network underlying both normal and abnormal functions is assumed to exist. This endogenous network forms a nonlinear stochastic dynamical system, with many stable attractors in its functional landscape. Normal or abnormal robust states can be selected by this network in a manner similar to a neural network. In this context, cancer is hypothesized to be one of its robust intrinsic states.
This hypothesis implies that a nonlinear stochastic mathematical cancer model can be constructed from available experimental data and that its quantitative predictions are directly testable. Within such a model, the genesis and progression of cancer may be viewed as stochastic transitions between different attractors. It thus further suggests that progressions are not arbitrary. Other important issues in cancer, such as genetics vs. epigenetics, the double-edged effect, and dormancy, are discussed in light of the present hypothesis. A different set of strategies for cancer prevention, cure, and care is therefore suggested.
Persistent activity observed in neurophysiological experiments in monkeys is thought to be the neuronal correlate of working memory. Over the last decade, network modellers have strived to reproduce the main features of these experiments. In particular, attractor network models have been proposed in which a non-selective attractor state with low background activity coexists with selective attractor states in which sub-groups of neurons fire at rates that are higher (but not much higher) than background rates. A recent detailed statistical analysis of the data, however, seems to challenge such attractor models: the data indicate that firing during persistent activity is highly irregular (with an average CV larger than 1), while models predict a more regular firing process (CV smaller than 1). We discuss here recent proposals that make it possible to reproduce this feature of the experiments.
network model; integrate-and-fire neuron; working memory; prefrontal cortex; short-term depression
Induction of a specific transcriptional program by external signaling inputs is a crucial aspect of intracellular network functioning. The theoretical concept of coexisting attractors representing particular genetic programs maps reasonably well onto experimental observations of “genome-wide” expression profiles or phenotypes. Attractors can be associated either with developmental outcomes, such as differentiation into specific cell types, or with the maintenance of cell functioning, such as proliferation or apoptosis. Here we review a mechanism known as speed-dependent cellular decision making (SdCDM) in a small epigenetic switch and generalize the concept to high-dimensional space. We demonstrate that the clustering capacity of a high-dimensional network depends on the level of intrinsic noise and on the speed at which external signals operate on the transcriptional landscape.
Coupling local, slowly adapting variables to an attractor network makes it possible to destabilize all attractors, turning them into attractor ruins. The resulting attractor relict network may show ongoing autonomous latching dynamics. We propose to use two generating functionals for the construction of attractor relict networks: a Hopfield energy functional generating a neural attractor network, and a functional based on information-theoretical principles, encoding the information content of the neural firing statistics, which induces latching transitions from one transiently stable attractor ruin to the next. We investigate the influence of stress, in terms of conflicting optimization targets, on the resulting dynamics. Objective-function stress is absent when the target level for the mean of the neural activities is identical for the two generating functionals; the resulting latching dynamics is then found to be regular. Objective-function stress is present when the respective target activity levels differ, inducing intermittent bursting latching dynamics.
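The destabilization of attractors into attractor ruins can be sketched by adding a slow, locally adapting threshold to a plain Hopfield network. The pattern count, leak, and adaptation gain below are assumptions, and ordinary Hebbian weights stand in for the paper's functional-based construction:

```python
import random

random.seed(2)
N = 64
patterns = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(2)]
W = [[0.0 if i == j else sum(p[i] * p[j] for p in patterns) / N
      for j in range(N)] for i in range(N)]

def run(adapt_gain, steps=200, leak=0.9):
    """Hopfield dynamics with a slow adaptation current a_i that grows
    while neuron i stays in the same state, eventually overturning the
    currently occupied attractor."""
    s = list(patterns[0])
    a = [0.0] * N
    overlaps = []
    for _ in range(steps):
        h = [sum(W[i][j] * s[j] for j in range(N)) - a[i] for i in range(N)]
        s = [1 if x > 0 else -1 for x in h]
        a = [leak * ai + adapt_gain * si for ai, si in zip(a, s)]
        overlaps.append(sum(u * v for u, v in zip(s, patterns[0])))
    return overlaps

static = run(adapt_gain=0.0)  # intact attractor: the state never moves
ruins = run(adapt_gain=0.5)   # attractor ruin: the state escapes
```

Without adaptation the trajectory sits in the stored pattern forever; with it, the adaptation current outgrows the recurrent field and the state is expelled from the ruin, the elementary step underlying latching dynamics.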
Randomly connected recurrent networks of excitatory groups of neurons can possess a multitude of attractor states. When the internal excitatory synapses of these networks are depressing, the attractor states can be destabilized with increasing input. This leads to an itinerancy in which, with either repeated transient stimuli or increasing duration of a single stimulus, the network activity advances through sequences of attractor states. We find that the resulting network state, which persists beyond stimulus offset, can encode the number of stimuli presented via a distributed representation of neural activity with non-monotonic tuning curves for most neurons. Increased duration of a single stimulus is encoded via different distributed representations, so unlike an integrator, the network distinguishes separate successive presentations of a short stimulus from a single presentation of a longer stimulus with equal total duration. Moreover, different stimulus amplitudes give rise to new, distinct activity patterns, such that changes in stimulus number, duration and amplitude can be distinguished from each other. These properties of the network depend on dynamic depressing synapses, as they disappear if synapses are static. Thus, short-term synaptic depression allows a network to store separately the different dynamic properties of a spatially constant stimulus.
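The destabilizing effect of depressing recurrent synapses can be sketched with a single-population firing-rate model with Tsodyks-Markram-style depression. This toy model (parameters are illustrative, not the paper's randomly connected network) contrasts a static recurrent synapse, which sustains a persistent-activity attractor after stimulus offset, with a depressing one, which destabilizes it:

```python
import numpy as np

def population_trace(depressing, T=3.0, dt=1e-3):
    """Rate r of one excitatory population with recurrent weight w,
    firing threshold theta, and (optionally) a synaptic resource s
    that is depleted by activity and recovers with time constant tau_s."""
    n = int(T / dt)
    r, s = 0.0, 1.0
    tau_r, tau_s, U, w, theta = 0.02, 1.0, 0.8, 2.0, 0.5
    trace = []
    for i in range(n):
        stim = 1.0 if i * dt < 0.5 else 0.0           # transient stimulus
        h = w * (s if depressing else 1.0) * r + stim - theta
        r += dt / tau_r * (-r + np.tanh(max(h, 0.0))) # saturating gain
        if depressing:
            s += dt * ((1.0 - s) / tau_s - U * s * r) # resource depletion
        trace.append(float(r))
    return trace
```

With static synapses the activity settles into a self-sustained high state after the stimulus ends; with depression the recurrent drive collapses and the activity dies out, the single-population analogue of the attractor destabilization that produces itinerancy in the full network.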
short-term plasticity; dynamic synapses; attractor networks; short-term memory; distributed coding; high-dimensional representation
A quantitative computational theory of the operation of the hippocampal CA3 system as an autoassociation or attractor network used in episodic memory is described. In this theory, the CA3 system operates as a single attractor or autoassociation network to enable rapid, one-trial associations between any spatial location (place in rodents, or spatial view in primates) and an object or reward, and to provide for completion of the whole memory during recall from any part. The theory is extended to associations between time and object or reward to implement temporal order memory, which is also important in episodic memory. The dentate gyrus (DG) performs pattern separation by competitive learning to produce sparse representations suitable for setting up new representations in CA3 during learning, producing, for example, neurons with place-like fields from entorhinal cortex grid cells. Through the very small number of mossy fiber (MF) connections to CA3, the dentate granule cells produce a randomizing pattern separation effect, important during learning but not recall, that makes the patterns represented by CA3 firing very different from each other; this is optimal for an unstructured episodic memory system in which each memory must be kept distinct from the others. The direct perforant path (pp) input to CA3 is quantitatively appropriate for providing the recall cue in CA3, but not for learning. Tests of the theory, including hippocampal subregion analyses and hippocampal NMDA receptor knockouts, are described and support the theory.
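The completion property attributed to CA3 can be sketched with a minimal Hebbian autoassociator that recalls a whole stored pattern from a half cue. This is an illustrative toy at low memory load, not the quantitative model of the paper (which uses sparse representations and detailed anatomical numbers):

```python
import numpy as np

def ca3_completion_demo(seed=0):
    """Store random binary patterns with a Hebbian outer-product rule,
    then present half of pattern 0 as a cue and let the recurrent
    dynamics complete the memory. Returns the overlap with pattern 0."""
    rng = np.random.default_rng(seed)
    N, P = 200, 5
    xi = rng.choice([-1, 1], size=(P, N)).astype(float)
    W = xi.T @ xi / N                  # Hebbian autoassociative weights
    np.fill_diagonal(W, 0.0)
    x = xi[0].copy()
    x[N // 2:] = 0.0                   # degraded cue: half the pattern missing
    for _ in range(10):                # synchronous recurrent updates
        x = np.sign(W @ x)
        x[x == 0] = 1.0                # break ties deterministically
    return float(xi[0] @ x) / N        # 1.0 means perfect completion
```

At this low load the half cue lies well inside the basin of attraction, so recall converges to the complete stored memory, the "completion of the whole memory during recall from any part" described above.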
hippocampus; attractor network; competitive network; episodic memory; spatial view neurons; object-place memory; recall; pattern separation
The orderly formation of the nervous system requires a multitude of complex, integrated and simultaneously occurring processes. Neural progenitor cells expand through proliferation, commit to different cell fates, exit the cell cycle, generate different neuronal and glial cell types, and new neurons migrate to specified areas and establish synaptic connections. Gestational and perinatal exposure to environmental toxicants, pharmacological agents and drugs of abuse produces immediate, persistent or late-onset alterations in behavioral, cognitive, sensory and/or motor functions. These alterations reflect the disruption of the underlying processes of CNS formation and development. To determine the neurotoxic mechanisms that underlie these deficits, it is necessary to analyze and dissect the complex molecular processes that occur during the proliferation, neurogenesis and differentiation of cells. This symposium will provide a framework for understanding the orchestrated events of neurogenesis, the coordination of proliferation and cell fate specification by selected genes, and the effects of well-known neurotoxicants on neurogenesis in the retina, hippocampus and cerebellum. These three tissues share common developmental profiles, mediate diverse neuronal activities and functions, and thus provide important substrates for analysis. This paper summarizes four invited talks that were presented at the 12th International Neurotoxicology Association meeting held in Jerusalem, Israel during the summer of 2009. Donald A. Fox described the structural and functional alterations following low-level gestational lead exposure in children and rodents, which produced a supernormal electroretinogram and selective increases in neurogenesis and cell proliferation of late-born retinal neurons (rod photoreceptors and bipolar cells), but not Müller glia cells, in mice.
Lisa Opanashuk discussed how dioxin [TCDD] binding to the aryl hydrocarbon receptor [AhR], a transcription factor that regulates xenobiotic-metabolizing enzymes and growth factors, increased granule cell formation and apoptosis in the developing mouse cerebellum. Alex Zharkovsky described how early postnatal lead exposure decreased cell proliferation, neurogenesis and gene expression in the dentate gyrus of the adult hippocampus, and outlined the resultant behavioral effects. Bernard Weiss illustrated how environmental endocrine disruptors produced age- and gender-dependent alterations in synaptogenesis and cognitive behavior.
retina; cerebellum; hippocampus; lead; dioxin; endocrine disruptors
Cell fate reprogramming, such as the generation of insulin-producing β cells from other pancreas cells, can be achieved by external modulation of key transcription factors. However, the known gene regulatory interactions form a complex network with multiple feedback loops, which makes it difficult to design a reprogramming scheme: linear regulatory pathways, as schemes of causal influences on cell lineages, are inadequate for predicting the effect of transcriptional perturbations. At the same time, sufficient information on regulatory networks is usually not available for detailed formal models. Here we demonstrate that, by using the qualitatively described regulatory interactions as the basis for a coarse-grained dynamical model based on ordinary differential equations (ODEs), it is possible to recapitulate the observed attractors of the exocrine and β, δ, and α endocrine cells and to predict which gene perturbations can result in the desired lineage reprogramming. Our model indicates that the constraints imposed by the incompletely elucidated regulatory network architecture suffice to build a predictive model for making informed decisions about which transcription factors to modulate for fate reprogramming.
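The logic of perturbation-based reprogramming between attractors can be sketched with a hypothetical two-gene toggle switch, the simplest ODE motif with two stable fates. This is an illustrative assumption, not the actual pancreas regulatory network or the paper's model:

```python
def integrate(x0, y0, force_x=0.0, T=50.0, dt=0.01):
    """Two mutually repressing genes x and y (Hill repression with
    coefficient 4, maximal rate 2, linear decay). A transient
    overexpression of x (`force_x`, applied for the first 10 time
    units) can flip the system from the y-high to the x-high attractor,
    mimicking transcription-factor-driven fate reprogramming."""
    x, y = x0, y0
    n = int(T / dt)
    for i in range(n):
        fx = force_x if i * dt < 10.0 else 0.0    # transient perturbation
        dx = 2.0 / (1.0 + y**4) - x + fx          # y represses x
        dy = 2.0 / (1.0 + x**4) - y               # x represses y
        x += dt * dx
        y += dt * dy
    return x, y
```

Starting in the y-high state, the unperturbed system stays there; a sufficiently strong transient induction of x carries the state across the separatrix into the x-high attractor, where it remains after the perturbation is withdrawn, which is the reprogramming principle the model exploits.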
An increase of the extracellular K+ concentration mediates seizure-like synchronized activities in vitro and has been proposed as one of the main factors underlying epileptogenesis in some types of seizures in vivo. While the underlying biophysical mechanisms clearly involve cell depolarization and an overall increase in excitability, it remains unknown what qualitative changes in the spatio-temporal network dynamics occur after the extracellular K+ increase. In this study, we used multi-electrode recordings from mouse hippocampal slices to explore changes in the network activity during a progressive increase of the extracellular K+ concentration. Our analysis revealed a complex spatio-temporal evolution of epileptiform activity and demonstrated a sequence of state transitions from relatively simple network bursts into complex bursting, with multiple synchronized events within each burst. We describe these transitions as qualitative changes of the state attractors, constructed from the experimental data, mediated by the elevation of the extracellular K+ concentration.
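Reconstructing a state attractor from a scalar recording is commonly done by time-delay embedding. The abstract does not specify the authors' construction, so the following is a generic sketch of the standard technique, demonstrated on a synthetic oscillatory trace:

```python
import numpy as np

def delay_embed(x, dim=3, lag=5):
    """Build delay-coordinate vectors [x(t), x(t+lag), ..., x(t+(dim-1)*lag)]
    from a scalar time series; the embedded trajectory traces out a
    reconstruction of the underlying attractor."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

# Example on a synthetic oscillation standing in for one electrode channel:
t = np.linspace(0, 10 * np.pi, 2000)
emb = delay_embed(np.sin(t), dim=3, lag=25)   # shape (1950, 3)
```

Applied to recordings at successive K+ levels, such embeddings let one compare the geometry of the reconstructed attractors (e.g., simple loops for regular bursts versus more convoluted orbits for complex bursting).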
Potassium concentration; Burst; Epileptiform; Oscillations; Spatio-temporal; Network dynamics; Seizure