We begin our survey of relevant facts with a comparatively simple, and abundantly well-studied, case: the systems known as neural networks (NNs); for background on biological origins, terminology, and a historical perspective, see.^{17,18} NN functionality originates from and closely mimics the neuronal networks constituting the nervous systems of higher organisms populating the Earth. In mathematical terms, an NN is a multidimensional nonlinear function, **F(x)**, constructed as follows. Let **x** be an N-dimensional input vector serving as an argument, and **α** + **Wx** be a linear transformation of **x**, with **α**^{(M)} being a vector of biases and **W**^{(N×M)} being a matrix of input weights. Let *σ*(*z*) be a univariate function with *dσ*/*dz* > 0, which in NN terminology is usually referred to as an activation function. We introduce a set of nonlinear transformations **z** = *σ*(**α** + **Wx**), which are traditionally seen as belonging to the hidden layer of the NN; M and **z**^{(M)} are said to be the number of neurons and the vector of outputs of the hidden layer, respectively. Finally, we perform a linear transformation **y** = **Uz**, with **U**^{(L×M)} being the matrix of output weights and the vector **y**^{(L)} being the output of the NN. Thus, in component-wise notation, **y** = **F(x)** means

$$y_k = \sum_{m=1}^{M} U_{km}\,\sigma\Big(\alpha_m + \sum_{n=1}^{N} W_{mn} x_n\Big), \qquad k = 1,\dots,L. \tag{1}$$

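The construction above can be sketched in a few lines of code. The following minimal Python example (all names, dimensions, and weight values are illustrative choices of ours, not taken from the paper) evaluates **y** = **Uσ(α + Wx)** for a toy network:

```python
import math
import random

random.seed(0)

def nn_forward(x, alpha, W, U, sigma=math.tanh):
    """Single-hidden-layer NN: y = U * sigma(alpha + W x).

    x     : input vector of length N
    alpha : bias vector of length M (hidden-layer size)
    W     : M x N weight matrix (list of rows)
    U     : L x M output-weight matrix
    """
    # hidden-layer outputs z = sigma(alpha + W x)
    z = [sigma(alpha[m] + sum(W[m][n] * x[n] for n in range(len(x))))
         for m in range(len(alpha))]
    # output layer y = U z
    return [sum(U[l][m] * z[m] for m in range(len(z))) for l in range(len(U))]

# toy dimensions: N = 3 inputs, M = 4 hidden neurons, L = 2 outputs
N, M, L = 3, 4, 2
alpha = [random.uniform(-1, 1) for _ in range(M)]
W = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(M)]
U = [[random.uniform(-1, 1) for _ in range(M)] for _ in range(L)]

y = nn_forward([0.5, -0.2, 0.1], alpha, W, U)
print(len(y))  # L = 2 output components
```

Note that the hidden units apply the same fixed rule *σ* to whatever stimulus they receive; all network-specific behavior lives in the parameters.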
As is well known, a fundamental property of NNs is that they are *universal approximators*; that is, given a sufficiently large dimension of the hidden layer, an NN may approximate, upon appropriate adjustment of its parameters, any smooth multidimensional nonlinear function with any prescribed accuracy.^{19} The core NN structure (1) may be extended in many directions. In particular, the function **F** may be applied iteratively, thus producing NNs with two or more hidden layers. It is also possible to connect the input and output layers directly, thus bypassing the nonlinear transformation produced by the hidden layer:

$$\mathbf{y} = \mathbf{U}\,\sigma(\boldsymbol{\alpha} + \mathbf{W}\mathbf{x}) + \mathbf{V}\mathbf{x}, \tag{2}$$

where **V**^{(L×N)} is the shortcut matrix.
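The shortcut extension (2) amounts to adding the linear term **Vx** to the nonlinear output. A minimal sketch (names and values are illustrative, not from the paper):

```python
import math

def nn_shortcut(x, alpha, W, U, V, sigma=math.tanh):
    """NN with a direct input-output shortcut: y = U sigma(alpha + W x) + V x."""
    z = [sigma(alpha[m] + sum(W[m][n] * x[n] for n in range(len(x))))
         for m in range(len(alpha))]
    y_hidden = [sum(U[l][m] * z[m] for m in range(len(z))) for l in range(len(U))]
    y_short = [sum(V[l][n] * x[n] for n in range(len(x))) for l in range(len(V))]
    return [h + s for h, s in zip(y_hidden, y_short)]

# with U = 0 the network reduces to the purely linear map y = V x
x = [1.0, 2.0]
alpha = [0.0]
W = [[0.3, -0.4]]
U0 = [[0.0], [0.0]]
V = [[1.0, 0.0], [0.0, 1.0]]  # identity shortcut
print(nn_shortcut(x, alpha, W, U0, V))  # -> [1.0, 2.0]
```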

Among the analytical tools collectively known as artificial intelligence, NNs retain leading positions in a variety of computational tasks, among them pattern recognition and classification, short- and long-term storage of information, prediction and decision-making, optimization, and others. It is beyond the scope of this paper to elucidate all the numerous aspects of the artificial intelligence of NNs; comprehensive reviews may be found in.^{18,20} Our goal here is different: we would like to highlight a tight analogy between the properties of an NN and those of a large community of identical simpleminded (dumb) individuals devoid of any personal intelligence. All of the aforementioned intelligent tasks solved by the NN as a whole are performed through the learning/training process, which consists in specializing the parameters **α**, **W**, **U**, **V**. These parameters quantify the signal transduction within the NN. Notably, the units in the hidden layer (neurons) remain unchanged by learning or training; they are capable of only one operation, governed by a simple rule of transformation: given an input signal (stimulus), *z*, they produce an output signal (response), *σ*(*z*). The internal mechanism of the neurons does not allow them to have any impression of the environment or of the general structure of the NN. In this sense, they are *dumb* individuals. Each element of the input layer, *x*_{i}, represents a certain aspect of the environment, and the neurons process integral visions of them, that is, (**α** + **Wx**). The output signals, *y*_{k}, represent the *solution* of the problem, that is, the integral reaction of the NN to the totality of environmental stimuli.
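The idea that learning specializes the parameters while the neurons themselves stay fixed can be made concrete with a toy sketch. Here only the output weights are trained by stochastic gradient descent on a small synthetic data set; the target function, learning rate, and network size are arbitrary choices of ours for illustration:

```python
import math
import random

random.seed(1)

def forward(x, alpha, W, U, sigma=math.tanh):
    """Returns the scalar output y = U.z and the hidden responses z."""
    z = [sigma(alpha[m] + sum(W[m][n] * x[n] for n in range(len(x))))
         for m in range(len(alpha))]
    return sum(U[m] * z[m] for m in range(len(z))), z

# synthetic target: y = x0 * x1 on a handful of sample points
samples = [([a, b], a * b) for a in (-1, -0.5, 0, 0.5, 1) for b in (-1, 0, 1)]

M = 8  # hidden neurons with fixed random biases and input weights
alpha = [random.uniform(-1, 1) for _ in range(M)]
W = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(M)]
U = [0.0] * M  # only these output weights are "specialized" by training

lr = 0.1
for epoch in range(500):
    for x, t in samples:
        y, z = forward(x, alpha, W, U)
        err = y - t
        # gradient of 0.5*err**2 with respect to U[m] is err*z[m]
        for m in range(M):
            U[m] -= lr * err * z[m]

mse = sum((forward(x, alpha, W, U)[0] - t) ** 2 for x, t in samples) / len(samples)
print(round(mse, 4))  # training drives the error well below its initial value
```

The neurons' rule *σ* never changes during this loop; only the signal-routing parameters do.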

Obviously, the above-outlined organization is not something that may be implemented only as a computational procedure. In principle, any ensemble of individual units acting in accordance with stimulus-response rules (eg, electronic circuits, robots, insects, cells) may be organized into a structure similar to an NN. By the very logic of the NN paradigm, such communities of individual units may possess similar intelligent capabilities *as a whole* without being intelligent by themselves. The best examples of this kind are the biological neural (ie, neuronal) networks.

Cognitive/analytical capabilities of NNs may be grossly amplified if, instead of the univariate activation *z* = *σ*(*x*), a more sophisticated algorithm is employed. In such an enhanced NN, the internal mechanics of the *i*th neuron may be described by a system of nonlinear differential equations (3), where {*σ*_{i}} is the set of neuron-specific activation functions, {*x*_{n}}_{i} is a batch of signals transmitted by the *i*th element of the input layer, and {*z*_{i}}_{j} is a batch of output signals. In engineering, this type of NN is known as a cellular neural network (CNN). In CNNs, the neuron's output is no longer a static signal; depending on the parameters in (3), a neuron may generate delays, periodic oscillations, and chaotic dynamics. It is worth noting, however, that the more complex functioning of neurons in the CNN still belongs to the realm of stimulus-response rules; the difference from ordinary NNs is that neurons in CNNs are capable of receiving multichannel stimuli and producing multichannel delayed responses. Therefore, similarly to those in NNs, these more sophisticated neurons should still be regarded as *dumb individuals*, although ones with a much more elaborate internal organization. Since their discovery in 1988,^{21} it has been expressly demonstrated that CNNs are highly efficient in solving a variety of complex problems in artificial intelligence, such as pattern recognition, image analysis, parallel computation, and others. In particular, it has been proven that a CNN consisting of *n* cells may store up to 2^{n} stable memory patterns,^{22} which is an enormously large number even for a CNN of modest size. In the biological world, a close analogy to the CNN is a network of somatic cells (as the very name of CNN would suggest). In fact, biological cells are even more powerful signal-processing machines; each somatic cell is embedded in the extracellular matrix and communicates with its neighbors through numerous signal transduction pathways. A cell does not process each signal individually; instead, it reacts to the totality of signals as a whole by modifying its internal states (eg, metabolic and gene expression profiles). Therefore, it would not be a big leap of imagination to hypothesize that a community of somatic cells as a whole can possess the problem-solving skills, collective memory, and other faculties of swarm intelligence, at least at a level of sophistication comparable to that of CNNs in engineering. Neuronal cells of nervous systems are not unique in these capabilities.
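As an illustration of cell dynamics of this type, the classical Chua–Yang CNN equations can serve as a stand-in for (3); the specific coupling template below is an arbitrary textbook-style choice of ours, not the authors' formulation. Even two coupled cells show how a neuron's state becomes a genuinely dynamic rather than static signal:

```python
# piecewise-linear CNN output function f(z) = 0.5*(|z+1| - |z-1|)
def f(z):
    return 0.5 * (abs(z + 1) - abs(z - 1))

def simulate(A, I, z0, dt=0.01, steps=2000):
    """Euler integration of dz_i/dt = -z_i + sum_j A[i][j]*f(z_j) + I[i]."""
    z = list(z0)
    traj = [list(z)]
    n = len(z)
    for _ in range(steps):
        dz = [-z[i] + sum(A[i][j] * f(z[j]) for j in range(n)) + I[i]
              for i in range(n)]
        z = [z[i] + dt * dz[i] for i in range(n)]
        traj.append(list(z))
    return traj

# two coupled cells; this antisymmetric cross-coupling destabilizes the
# origin while saturation keeps the states bounded, producing oscillations
A = [[2.0, -1.2], [1.2, 2.0]]
I = [0.0, 0.0]
traj = simulate(A, I, [0.1, 0.1])
z1 = [p[0] for p in traj]
print(min(z1), max(z1))  # the state keeps moving within a bounded range
```

The cell's response here is both delayed (first-order lag) and oscillatory, in line with the behaviors listed above.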

Due to the fundamental property of being universal approximators, NNs are capable, in principle, of representing any nonlinear dynamical system. Let **G** be a dynamical system governed by the equation *d***x**/*dt* = **F**(**x**|**θ**), where **x**^{(N)}(*t*) is the time-dependent vector of states, **F**^{(N)}(•) is a multidimensional nonlinear vector-function, and **θ**^{(L)} is the set of the system's parameters. In particular, the function **F** may be structured in accordance with (2):

$$\frac{d\mathbf{x}}{dt} = \mathbf{V}\mathbf{x} + \mathbf{U}\,\sigma(\boldsymbol{\alpha} + \mathbf{W}\mathbf{x}). \tag{4}$$

These systems are usually referred to as dynamic NNs. An important subclass of these systems is the one in which the matrix **V** is diagonal with all the diagonal elements negative, *V*_{i,j} = −*λ*_{i}*δ*_{ij} with *λ*_{i} > 0. In this case, the dynamic NNs are called *competitive*. Natural extinction, represented by the first term in (4), is dynamically balanced by reproduction, represented by the second term. The dynamic equilibrium between these two tendencies, if it does exist, may produce very complex behaviors. In particular, the system may possess a number of asymptotically stable attractors. This means that, starting from a large variety of initial conditions belonging to a certain basin of attraction, the system may evolve toward one of several well-defined stable manifolds. This process is in fact nothing other than *classification* of initial states, which occurs in the system without any organizing force or supervisory authority. To this end, the next key question to ponder is whether or not a system of class (4) has asymptotically stable attractors and what the structures of these attractors can be. This question has been extensively studied within a wide class of dynamical systems, which are discussed in the following sections.