The last half of the 20th century and the first decade of the current one were characterized by the dominance of reductionist approaches to biology, driven mainly by molecular biology. This type of reductionism was inspired by the influential 1944 book “What is Life?” by Erwin Schrödinger [S44], who postulated that the chromosome forms an “aperiodic crystal” that is durable, an important prerequisite for hereditary matter. Schrödinger called it the “material carrier of life”. Parts of the chromosomes are formed by genes, which themselves are large, durable and responsible for the observed inheritance mechanism, thus making animate matter unique. Schrödinger’s ideas were driven by quantum mechanical reasoning applied to biology; they were seminal in triggering the molecular biology revolution and led to an increasingly gene-centric view of nature, a view further extended by another influential book, “The Selfish Gene” by Richard Dawkins [D76
]. However, now that the human genome has been decoded (see e.g. [HGS04
]), one may ask two questions: (a) knowing all the parts of the system, can we fix or repair it if something goes wrong? And (b) can we put the parts back together?
The first question has been addressed by Yuri Lazebnik in several entertaining public lectures at systems biology conferences (e.g. ICSB Conference in Heidelberg 2004) and is summarized in his paper “Can a biologist fix a radio?” [L04
]. Lazebnik, a cell biologist, concludes that a more systematic and quantitative approach has to be adopted in modern biology, referring to the general systems theory (GST) developed by Ludwig von Bertalanffy and other contemporaries of Erwin Schrödinger [B68]. Schrödinger had already pointed out that living organisms must have developed ways to defy the second law of thermodynamics, constructing ‘order from disorder’ and decreasing their own entropy by exporting entropy to the environment, i.e. by feeding on ‘negative entropy’ [S44, p. 79ff]. von Bertalanffy took this idea further, argued that living systems are “open systems having a steady state” [B68, pp. 39–40] and thereby opened the door to an organicist view of biology. Only in recent years, however, has accumulating evidence told us that understanding the parts of a system does not necessarily mean understanding the overall system behavior (see diverse examples on complexity in [G08]).
This realization brings us back to the second question about reassembling the system even when knowing all its parts. Staying with Lazebnik’s radio metaphor, one would conclude that if we identified and carefully disassembled all parts of the system and recorded all their connections and positions, we should obtain a blueprint of the radio and should then be able to reassemble it. This, however, does not imply that from the knowledge gained we would be able to repair, or even modify, the radio so as to improve, say, its reception. To accomplish this, the engineer would first have to find functional units that could subsequently be analyzed in isolation and in concert with the other components to which they are connected. The engineer might then find that enlarging the antenna substantially improves reception. Biological systems, however, are much more intricate than a man-made, designed apparatus like a radio, all of whose components can be studied in isolation under equilibrium conditions. Biological systems operate under non-equilibrium conditions and “comprise many interacting parts with the ability to generate a new quality of macroscopic collective behavior the manifestations of which are the spontaneous formation of distinctive temporal, spatial or functional structures…”, thus matching the most common definitions of complex systems (as defined by the editors of the Springer series “Understanding Complex Systems”, see also [G08
] for similar definitions). This has remarkable consequences and implications for the question “Can we put the parts back together?” which we will attempt to elucidate next.
Interpretations of ideas about complex systems have been discussed since the 1940s. Several new fields and theories carrying different names emerged from these discussions (e.g. Synergetics, Dynamic Systems Theory, Chaos Theory, Cybernetics, Tensegrity). A common denominator in all these areas is that even a system that consists of very simple parts that interact with each other in a non-linear fashion can exhibit complex systems-level or emergent properties, such as structure and organization. These properties are quite surprising and unexpected when one examines the properties of the individual parts alone. In other words, the system itself is more than just the sum of its parts [G08
]. Denis Noble, who followed up on the second question “Can we put the parts back together?” in his book “The Music of Life”, relates a telling anecdote about his attempts to mathematically model the oscillatory behavior of the heart. He was asked: “Mr Noble, where is the oscillator in your equations? What is it that you expect to drive the rhythm?” Only decades later did he find the answer to this question: “Indeed, it is an eminently necessary question, if we are talking about some man-made, mechanical systems. But we are not. Instead, we can have a system that operates rhythmically and yet contains no specific ‘oscillator’ component. There is no need for one. The reason is that the rhythm is an integrative activity that emerges as a result of the interactions of a number of protein (channel) mechanisms.” (see p. 60, [N06]).
This explanation implies that the key to emergent phenomena and system-level properties of complex systems must lie in the interactions between the elements comprising the system. It is therefore intrinsically difficult to predict the future behavior of such systems, since the system-level properties arise from the interactions between the parts rather than from the specific features of any individual part. In the absence of derivable laws, computational and mathematical tools are indispensable for the complex systems scientist in general and the systems biologist in particular.
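As a generic illustration of how system-level structure can arise purely from local interactions, consider an elementary cellular automaton: each cell updates from only its own state and those of its two immediate neighbors, yet the row as a whole develops intricate patterns that are hard to predict from the update rule alone. This sketch is added for illustration; Rule 30 is a textbook choice, not an example from the sources cited here.

```python
# Elementary cellular automaton: each cell is updated from a 3-cell
# neighbourhood only, yet complex global patterns emerge (here Rule 30).

def step(cells, rule=30):
    """Apply one update of an elementary cellular automaton (wrap-around)."""
    n = len(cells)
    out = []
    for i in range(n):
        # Each cell sees only its immediate neighbourhood ...
        pattern = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        # ... and looks up its next state in the 8-bit rule number.
        out.append((rule >> pattern) & 1)
    return out

def run(width=31, steps=15, rule=30):
    cells = [0] * width
    cells[width // 2] = 1          # a single 'on' cell as initial condition
    rows = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        rows.append(cells)
    return rows

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Printed as text, the rows show an intricate triangular pattern growing from a single cell, even though no part of the rule mentions triangles or growth.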
Networks and graphs
The above definition of complex systems as consisting of interacting parts leads naturally to the use of mathematical tools based on networks or graphs, where the individual parts translate to nodes and the interactions translate to edges or links. In his book “Linked”, Albert-László Barabási summarizes the most common properties found among numerous naturally formed networks, ranging from the Internet to social and gene regulatory networks [B02
]. When analyzing the network topology of these diverse complex systems, some important overarching rules emerge. It is not completely surprising that these networks deviate substantially from randomly built networks as studied by Paul Erdős and Alfréd Rényi [ER60
]. We therefore do not observe the bell-shaped frequency distribution of the number of links per node expected from randomly formed networks; instead, we observe a power-law distribution, which is characteristic of scale-free networks [AS00
]. This implies that a large majority of nodes have only a few links, whereas very few nodes have a large number of links. Those nodes are called hubs or connectors [B02
] and play a vital role in our understanding of, for instance, how diseases spread and how epidemics can be stopped by targeting the hubs identified in the network (e.g. [LE01]).
A scale-free network topology can be reproduced when dynamically constructing a network by adding nodes iteratively and linking them preferentially to already well connected (or fit) nodes in the existing network. This concept was termed “Rich get richer” by Barabási [B02
] and works analogously to increasing returns in economics, an idea hatched in the early 1980s by the Belfast-born economist W. Brian Arthur to describe high technology. In such networks, two randomly picked nodes are usually connected by a rather short path (a sequence of links between neighbors), which is another characteristic of small-world networks (“six degrees of separation”, [B02
]). Although the underlying topology makes the network vulnerable to targeted attacks on hubs, the resulting network is very robust against random perturbations [CN00]. Many naturally growing networks also exhibit various levels of modularity, in which nodes within a sub-cluster are more densely connected to each other than to the rest of the network (e.g. cortical networks, see e.g. [KG07
]), describing a hierarchy of scale-free networks in which hubs connect the different modular layers, thus conserving the overall scale-free network topology [RS02].
The observations referred to above contributed to the fast rise of systems biology. However, we are skeptical of the view voiced by Barabási in [B02] that, by establishing the “map of life” describing the complete metabolic (biochemical), regulatory (gene or protein interaction) and cellular networks of an organism, we will hold the keys to understanding how an organism works, even though scale-free networks are an emerging feature of various complex systems. We share the concern of Yaneer Bar-Yam, who says that “[t]he biggest current danger to the field [of complexity] is that it will be hijacked by people who don’t understand the essence of the field. Many are adopting the terminology without understanding what complex systems are really about. Systems biology, systems engineering and other systems related fields are often (but not always) just using the words but continuing a reductionist approach.” [B02
, p. 15ff] It is reductionist to believe that by understanding the interactions between the molecules contained in a cell we will be able to understand how the cell works, how a tissue is formed or how cancer arises; these assumptions rely on upward causation alone in the “map of life”. The upward-causation assumption completely neglects the contribution of the environment and of the emergent structure itself (by downward causation).
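The “rich get richer” growth rule described above can be made concrete with a small simulation: each new node attaches to existing nodes with probability proportional to their current degree, and a few highly connected hubs emerge while most nodes keep only a couple of links. The node count, attachment parameter m and random seed below are illustrative assumptions, not values from the text.

```python
# Preferential attachment ("rich get richer"): new nodes link to existing
# nodes with probability proportional to their current degree.
import random

def preferential_attachment(n_nodes=2000, m=2, seed=42):
    random.seed(seed)
    # Start from a small fully connected core of m + 1 nodes.
    # 'targets' lists each node once per incident edge, so picking uniformly
    # from it samples nodes in proportion to their degree.
    targets = []
    edges = []
    for i in range(m + 1):
        for j in range(i + 1, m + 1):
            edges.append((i, j))
            targets += [i, j]
    for new in range(m + 1, n_nodes):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(targets))  # degree-proportional pick
        for t in chosen:
            edges.append((new, t))
            targets += [new, t]
    degree = {}
    for a, b in edges:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    return degree

if __name__ == "__main__":
    deg = preferential_attachment()
    top = sorted(deg.values(), reverse=True)
    print("max degree:", top[0], " median degree:", top[len(top) // 2])
```

Running this shows the hallmark of a scale-free topology: the median node has only two or three links, while the best-connected hub accumulates dozens.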
Although scale-freeness emerges in complex networks like the Internet, the World Wide Web, social and biological networks as well as larger parts of the modern, globalized economy, it is not a universal feature of complex systems [K05
]. Other network topologies also arise naturally that do not show small-world properties. One prominent example is a road network connecting cities. In this case, each node is not merely a point but has a certain size and cannot move freely, and the roads themselves (the edges) are constrained by the topographic and geological setting. Spatial constraints on the dynamic construction process can thus yield different network topologies. It therefore seems important to include spatial localization information when building gene regulatory networks or protein-protein interaction maps, given that a substance can only react with another substance when both reside in the same spatial compartment [D03].
So how do domain boundaries emerge in complex systems? Could these boundaries relate to the individual modules found in hierarchical networks? Is there a correlation between functional units and compartmentalization? Is there a form-function relationship to be found in living systems? Although we are as yet unable to answer these questions, it is worth making a simple thought-experiment by revisiting Lazebnik’s radio example described above. We can ask ourselves whether it makes sense to decompose the whole radio down to its molecular constituents so as to understand its workings. Going back to Schrödinger’s ‘order from disorder’ principle, we would certainly suspect that there might be a correlation between spatial domains and functional units. One might next consider looking at the apparent spatial patterns visible in the radio and hope that these units can be studied in isolation. In the case of an individual cell, this would mean that the cell’s anatomy should be taken into account, looking at sub-cellular compartments [H05
] and the protein interactions therein giving rise to protein clusters potentially describing functional regions (the toponome, see [SB06
]). In the case of tissues or organs, we could first try to focus, for instance, on understanding how the patterns typically found in glandular tissues are formed (e.g. acini and ducts, see [KM08]).
So, how are complex spatial patterns formed? Suppose we can find an explanation for at least one complex system that is composed exclusively of simple, inanimate components such as atoms or molecules, for which, obviously, no overall blueprint exists or can be executed. In that case, one would have to accept that the ‘order from disorder’ principle is also applicable to systems consisting of much more complex parts, like those found in biological systems ranging from bacteria to multicellular organisms, where interactions are governed not only by physical laws but also by more complex physiological and behavioral responses [CD01
]. The simple answer to the above question is… through self-organization.
Although rather unknown and little studied when Schrödinger wrote his book, several very simple self-organizing systems have since been discovered in physics and chemistry that show stunning emergent spatial patterns (see e.g. the rock formation of the Giant’s Causeway in Northern Ireland (Figure 1), the soap bubbles that form when a flask of dish-washing detergent is shaken, the well-known Bénard convection [CD01] or more recent findings on the physics of Type-I superconductors [P07]), supporting the idea that self-organization might also be present in more complex systems (as is shown in [CD01
]. Since physical laws alone govern the interactions between the parts of such physical systems, we can exclude alternative explanations of pattern formation that require intervention from outside the system, such as (i) the presence of a leader, (ii) the existence of a blueprint, (iii) the execution of a recipe, or (iv) the use of a template [CD01
]. Although (i)-(iv) are relevant to biological systems, self-organization is certainly an option when it comes to explaining biological pattern formation where “the rules in self-organizing systems can be quite economical in the physiological and behavioral machinery needed to implement them” [CD01
, p. 63]. This simplicity might give self-organization an evolutionary advantage over the alternative solutions (i)-(iv), making it more prevalent in biological systems. Having said this, it is certainly possible that a mixture of these mechanisms is present in the same system.
Figure 1 Example of self-organization of inanimate matter: the left panel shows an overview of the rock formation found at the Giant’s Causeway in County Antrim, Northern Ireland. The right panel shows a close-up photographed downwards onto the rock formation.
Let us first consider the alternative explanations (i)–(iv) and see whether they are applicable to tissue (or organ) morphogenesis. The presence of a leader (i) can almost certainly be excluded, as we are not aware of any molecular mechanism that would enable a single entity to receive the information signaled from all other cells and instruct them to perform certain actions as a result of processing the incoming information. The first mathematical model proving that the aggregation of single-celled units into larger cooperative entities can be explained without requiring a leader, such as a founder or pacemaker cell, was published by Evelyn Keller and Lee Segel in 1970 [KS70] for the slime mould (Dictyostelium discoideum). Furthermore, it seems unlikely that a template (iv) is used when cells aggregate to form tissues, since tissues can be grown in vitro
without the presence of any template structure. This brings us to the alternative explanation requiring the existence of a blueprint (ii) that describes the parts and the spatial layout of the tissue to be built. Such a blueprint does not, however, describe how the tissue is to be built, and it would consequently require each cell to have a global picture of the tissue being formed at any point in time. This seems very unlikely, as there is no known molecular mechanism conveying such information to each individual cell. This leaves the remaining option, that each cell follows a strict recipe (iii) describing a set of instructions to be carried out. Although such a set of instructions might explain how an individual such as a spider builds a cocoon for its eggs [E70], it is unlikely that each cell can follow and execute each of the encoded rules independently in the crowded environment of a tissue or organ. Since cells can only sense their local environment, the emergence of tissues can only be driven by rules governing coordinated interactions of each cell with its local environment. This leads to the conclusion that the dynamic process of tissue formation must mainly be governed by self-organization.
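The conclusion that purely local sensing suffices can be illustrated with a deliberately minimal simulation: identical “cells” on a ring each move one site toward the side where they sense more neighbors within a short radius, and initially scattered pairs merge into clusters with no leader, blueprint, recipe or template involved. This is a density-seeking toy model in the spirit of, not a faithful implementation of, the Keller-Segel model; the ring size, sensing radius and starting positions are arbitrary assumptions.

```python
# Leaderless aggregation on a ring: each walker drifts one site toward the
# side where it senses more neighbours within a short radius.

def aggregate(positions, ring=60, radius=5, steps=10):
    pos = list(positions)
    for _ in range(steps):
        counts = [0] * ring
        for p in pos:
            counts[p] += 1
        new = []
        for p in pos:
            # Each walker senses only a local window on either side of itself.
            left = sum(counts[(p - d) % ring] for d in range(1, radius + 1))
            right = sum(counts[(p + d) % ring] for d in range(1, radius + 1))
            if right > left:
                new.append((p + 1) % ring)   # drift toward the denser side
            elif left > right:
                new.append((p - 1) % ring)
            else:
                new.append(p)                # no gradient sensed: stay put
        pos = new
    return pos

if __name__ == "__main__":
    start = [0, 4, 20, 24, 40, 44]           # three well-separated pairs
    end = aggregate(start)
    print(sorted(set(start)), "->", sorted(set(end)))
```

Starting from six scattered walkers, the six occupied sites collapse to three clusters, even though no walker ever sees more than its immediate surroundings.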
How does self-organization work? First, the components need to be able to interact with, or get feedback from, neighboring components, as well as from the local environment or from the emerging structure itself (stigmergy, [CD01
, p. 56]). In the case of tissues, this would correspond to interactions with other cells, with nutrients and with the extracellular matrix. This feedback can be either negative or positive. It turns out that positive feedback is prevalent in self-organizing systems, as it leads to aggregation, but it bears the risk of overamplification. To control and stabilize positive feedback mechanisms, negative feedback is needed. This negative feedback can either be built into the system (e.g. cells become quiescent) or be imposed by physical constraints (e.g. cell migration depends on forces exerted by the extracellular matrix). Components of such a system can interact with each other using either cues that specifically convey information (e.g. ants leaving a trail of pheromones leading to their food source) or cues that convey information incidentally (e.g. a deer leaving a trail when walking through the woods; see also [CD01
]). In cellular systems, we observe biochemical (e.g. morphogen gradients generate diverse cell types in distinct spatial regions) as well as biomechanical cues (e.g. fibroblasts degrading collagen fibers giving way to epithelial cell-migration).
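The interplay just described, positive feedback driving amplification and negative feedback reining it in, can be sketched with the simplest possible model: a quantity x that catalyzes its own production (a·x) while a crowding term (b·x²) caps the growth, so the system amplifies at first and then settles at the steady state a/b. The parameter values are arbitrary illustrative assumptions.

```python
# Positive feedback capped by negative feedback: an autocatalytic quantity x
# grows at rate a*x and is limited by the crowding term b*x**2, so it settles
# at the steady state a/b (simple explicit Euler integration).

def simulate(a=1.0, b=0.01, x0=1.0, dt=0.01, steps=2000):
    x = x0
    history = [x]
    for _ in range(steps):
        growth = a * x          # positive feedback: more x makes more x
        limit = b * x * x       # negative feedback: crowding slows growth
        x += dt * (growth - limit)
        history.append(x)
    return history

if __name__ == "__main__":
    h = simulate()
    print(f"start={h[0]:.1f}, early={h[200]:.1f}, final={h[-1]:.1f}")
```

With positive feedback alone, x would explode exponentially; the negative term stabilizes the system near a/b = 100, mirroring the argument that self-organizing systems need both feedback types.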
Self-organizing systems are usually very stable over a large range of parameters, but minimal changes in one or more parameters can produce sudden and abrupt changes in the emergent pattern, moving the system from one stable state to another or showing criticality at the edge of chaos (see [NP08
] for a biological example). If we now classify the phenotypes of one species according to the emergent patterns observed, we can induce a change in phenotype close to the bifurcation by altering only the parameters governed by the environment (e.g. the raid patterns of army ants [DG89
]). This implies that the same genotype can exhibit different phenotypes depending on the environment.
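This genotype-as-rules, environment-as-parameter picture has a classic mathematical analogue: in the logistic map, the same update rule settles to a single steady value for one parameter and to a two-valued oscillation for a slightly larger one, crossing a bifurcation in between. The map and the parameter values (r = 2.8 and r = 3.2) are standard textbook choices used here purely for illustration.

```python
# Logistic map x -> r*x*(1-x): the same rule yields qualitatively different
# long-term patterns on either side of the period-doubling bifurcation at r=3.

def attractor(r, x0=0.5, transient=1000, sample=8):
    x = x0
    for _ in range(transient):      # discard transient behaviour
        x = r * x * (1 - x)
    seen = []
    for _ in range(sample):         # record the long-term pattern
        x = r * x * (1 - x)
        seen.append(round(x, 6))
    return sorted(set(seen))

if __name__ == "__main__":
    print("r=2.8:", attractor(2.8))   # one value: a steady state
    print("r=3.2:", attractor(3.2))   # two values: an oscillation
```

The rule (the “genotype”) is identical in both runs; only the parameter (the “environment”) differs, yet the emergent pattern changes qualitatively from a fixed point to a two-cycle.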
Environmental determination of the phenotype was first documented at the end of the 19th century in Lepidoptera. The European map butterfly exhibits strikingly different wing phenotypes depending on the season in which the butterflies eclose: while the spring morph shows orange wings with black spots, the summer morph is black with a white band. This dimorphism misled Carl Linnaeus, the father of taxonomy, into classifying the morphs as distinct species. In 1875, by incubating the caterpillars under different conditions, August Weismann found that the seasonal wing pattern of certain butterflies is temperature-induced. The discipline of Ecological Developmental Biology deals with this phenomenon, called polyphenism, and with other aspects of environmental determination of the phenotype [GE09
]. These phenomena were mostly ignored by mainstream biologists under the spell of genetic determinism. However, the discovery of hormonally active man-made chemicals, and the realization that adult human diseases often have their origins in fetal life, have greatly contributed to the revival of the eco-devo tradition [SS10].