Brain cortex activity, as variously recorded by scalp or cortical electrodes in the electroencephalography (EEG) frequency range, probably reflects the basic strategy of brain information processing. Various hypotheses have been advanced to interpret this phenomenon, the most popular of which is that suitable combinations of excitatory and inhibitory neurons behave as assemblies of oscillators susceptible to synchronization and desynchronization. Implicit in this view is the assumption that EEG potentials are epiphenomena of action potentials, which is consistent with the argument that voltage variations in dendritic membranes reproduce the postsynaptic effects of targeting neurons. However, this classic argument does not really fit the discovery that firing synchronization over extended brain areas often appears to be established in about 1 ms, which is a small fraction of any EEG frequency component period. This is in contrast with the fact that all computational models of dynamic systems formed by more or less weakly interacting oscillators of near frequencies take more than one period to reach synchronization. The discovery that the somatodendritic membranes of specialized populations of neurons exhibit intrinsic subthreshold oscillations (ISOs) in the EEG frequency range, together with experimental evidence that short inhibitory stimuli are capable of resetting ISO phases, radically changes the scheme described above and paves the way to a novel view. This paper aims to elucidate the nature of ISO generation mechanisms, to explain the reasons for their reliability in starting and stopping synchronized firing, and to indicate their potential in brain information processing. The need for a repertoire of extraneuronal regulation mechanisms, putatively mediated by astrocytes, is also inferred. Lastly, the importance of ISOs for the brain as a parallel recursive machine is briefly discussed.
One of the most fascinating hypotheses regarding the strategy used by the mammalian brain to perform parallel information processing goes under the name of brainweb, coined by Francisco Varela et al. in . The basic concept is that large-scale phase synchronization episodes, as seen in electroencephalography (EEG) signals recorded over multiple frequency bands, are not mere epiphenomena of neuron firing but play a primary role in mediating transient interactions among neuronal assemblies. In this view, neurons connected through monosynaptic or polysynaptic pathways are imagined to be driven by EEG synchrony to a state of reciprocal communication. This is consistent with the views of other authors (e.g., [2, 3]) who have also interpreted EEG activity as the manifestation of intrinsic subthreshold voltage oscillations (ISOs) generated by the somatodendritic membranes of specialized populations of neurons, for instance the mitral cells of the mammalian olfactory bulb.
To understand how and why EEG signals can be generated by a population of ISO-supporting neurons, even in the absence of neuronal spiking, one should recall that the electrical activity detected on the scalp or cortex primarily reflects mean electrical currents induced in the extraneuronal medium by the voltage gradients of somatodendritic structures. This occurs because the membrane area of these structures is very large compared with that of axons and also because axons are almost completely screened by myelin sheaths and electrically active only at the nodes of Ranvier. Thus, what is essential in the generation of EEG signals is that the voltage gradients of somatodendritic membranes, whether caused by synaptic bombardment or intrinsic mechanisms, oscillate in phase over extended neuronal populations.
This vision is not only extraordinarily rich in implications regarding the nature of EEG and consequences for future research but is probably also capable of providing the key to a wide repertoire of oscillatory behaviors, as detected in various experiments on the neurodynamics of cognition, particularly of olfaction [6–9]. What seems to be missing from this scenario is a clear understanding of the mechanisms that switch ISO synchronization on and off, as well as the extraordinary efficiency of neuronal connectivity in providing selective interactions among synchronized populations of neurons. This paper aims to fill this gap by illustrating some results from a computational model of ISO kinetics and ISO phase control and by briefly describing the principles of a novel method of parallel processing programming.
During the last 15–20 years, ISOs have been observed in excitatory neurons of specialized neuronal populations of mammalian brains [5, 10–12], and models of ISO generation mechanisms closely matching experimental data have been proposed over the last 15 years [12–15]. All of them describe ISO kinetics as governed by simplified variants of the Hodgkin–Huxley equations, in which a suitable combination of fast persistent sodium channels and slow persistent potassium channels, working over suitable voltage ranges, is enough to generate sustained voltage oscillations.
The theoretical possibility of this phenomenon had been predicted since the late 1960s [16, 17] but neglected for many years, as only the nonpersistent voltage-dependent sodium channels of axons were known. The discovery that fast persistent voltage-dependent sodium channels (NaP) exist in nonaxonic neuron membranes is relatively recent [18, 19].
Almost all models differ mainly in their parameters, which are tuned to account for the characteristics of known ion channels and the dependence of ISO amplitude and frequency on membrane voltage.
The model presented here differs from others in that its parameters are tuned to reproduce approximately the properties and behaviors of the ISOs observed by Desmaisons et al. in the mitral cells of the rat olfactory bulb. The novelty of the present paper lies not in the details of the model but in the analysis of the mechanisms that are presumed to be effective in driving ISO synchronization and desynchronization. More realistic models, accounting for certain peculiarities of channel kinetics, can probably interpret ISO phenomena much more faithfully, but at the cost of lesser clarity.
Any good model of ISO generation should also, of course, be able to represent the phenomenon of action potential (AP) triggering. Some authors have obtained this by combining the equations for ISOs and APs in a single system (e.g., ). In doing this, they implicitly assumed that the ion channels involved in ISO and AP generation mechanisms are located in the same cell membrane.
In this paper, in agreement with most findings on persistent sodium channel location, the triggering process is explained much more simply by assuming that ISO kinetics is confined within the somatodendritic membrane and thus separate from AP kinetics. In these conditions, the membrane voltage at the neuronal soma may be described as a linear combination of the voltage variations respectively induced by ISO and AP kinetics. In practice, ISOs are presumed to be effective in priming APs because they raise the voltage of the soma membrane at the neuronal hillock up to the AP firing threshold. In this way, the ISO kinetics of the dendritic membrane is assumed to be little affected by the AP rebound. However, the rebound may be effective at the soma membrane, particularly at the hillock, to the extent that it can significantly alter the triggering capability of the ISO immediately after axon discharge. Most importantly, the negative after-spike transient rebounding into the neuronal soma prevents the ISO voltage oscillation from eliciting other APs immediately after overcoming the firing threshold (see Section 5).
In this paper, NaP channels are represented as two-state devices with opening probability depending instantaneously on membrane potential VM, according to the standard Boltzmann (sigmoid) formula:

PNaP(VM) = 1/{1 + exp[−αNa(VM − V0Na)]}   (1)

with αNa = 0.125 mV−1 at physiological temperature and V0Na = −51.25 mV (Fig. 1a).
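This voltage dependence is easy to sketch in code. The following few lines assume a standard Boltzmann sigmoid built from the two constants quoted above; the functional form and the function name are illustrative assumptions, not the model's source code (which was written in MATLAB):

```python
import math

# Assumed Boltzmann (sigmoid) form for the NaP opening probability,
# built from the two constants quoted in the text.
ALPHA_NA = 0.125   # mV^-1, steepness at physiological temperature
V0_NA = -51.25     # mV, half-activation voltage

def p_nap(v_m):
    """Instantaneous opening probability of an NaP channel at membrane voltage v_m (mV)."""
    return 1.0 / (1.0 + math.exp(-ALPHA_NA * (v_m - V0_NA)))
```

At VM = V0Na the curve is at its midpoint (probability 0.5), and it saturates toward 0 and 1 at strongly hyperpolarized and depolarized voltages, reproducing the sigmoidal profile of Fig. 1a.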
As regards the potassium current, although a few types of potassium channels with more or less sophisticated kinetics are presumed to be involved in ISO generation, those of muscarinic type seem to be the right choice [20, 21]. For the needs of our model, however, a simple time-delayed voltage-dependent two-state channel (K channel) was used, with opening probability PK(VM, t) governed by a kinetic equation of the form:

τ(VM)·dPK/dt = P0K(VM) − PK   (2)
In this formula, t is time, P0K(VM) is the opening probability at rest, and τ(VM) the voltage-dependent time constant. The theory of two-state channel kinetics yields expressions of the form:

P0K(VM) = 1/{1 + exp[−αK(VM − V0K)]},   τ(VM) = τ0·exp[βK(VM − V0K)]
The model was implemented with αK = 1/6.5 mV−1, V0K = −34 mV, βK = 0.33·αK, and τ0 = 25.35 ms, yielding the profiles shown in Fig. 1b. The values of these constants are not as critical as they may seem, since they were modified and balanced to produce oscillations of ~40 Hz and 3–5 mV amplitude when membrane voltage was held at VM = −55 mV.
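The two profiles of Fig. 1b can be sketched as follows. The Boltzmann steady state and the exponential voltage dependence of τ are illustrative assumptions (the exact expressions used in the model may differ), chosen so that τ at −65 mV is roughly half its value at −55 mV, as required later in the text:

```python
import math

# Assumed K-channel profiles: Boltzmann opening probability at rest and an
# exponential voltage-dependent time constant (illustrative forms, constants from the text).
ALPHA_K = 1.0 / 6.5        # mV^-1
BETA_K = 0.33 * ALPHA_K    # mV^-1
V0_K = -34.0               # mV
TAU0 = 25.35               # ms

def p0_k(v_m):
    """Opening probability at rest for membrane voltage v_m (mV)."""
    return 1.0 / (1.0 + math.exp(-ALPHA_K * (v_m - V0_K)))

def tau_k(v_m):
    """Voltage-dependent time constant (ms); shrinks under hyperpolarization."""
    return TAU0 * math.exp(BETA_K * (v_m - V0_K))
```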
Once the opening probabilities of the channels are known, we can calculate the electrical sodium current INaP(VM) and the electrical potassium current IK(VM, PK) per unit cell membrane area (both signed as outward currents), respectively, as:

INaP(VM) = GNaP·PNaP(VM)·(VM − ENa)   (3)

IK(VM, PK) = GK·PK·(VM − EK)   (4)

where GNaP is the maximum conductance per unit membrane area of the sodium channels, ENa the Nernst potential of sodium, GK the maximum conductance per unit membrane area of the K channels, and EK the Nernst potential of potassium. The values assumed in the model were GNaP = 0.0541 μS/cm2, ENa = 55 mV, GK = 3.51 μS/cm2, and EK = −90 mV.
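In code, the two current densities translate directly; the sketch below signs both currents as outward, with the opening probabilities supplied by the caller:

```python
# Current densities per unit membrane area (both signed as outward currents);
# conductances and Nernst potentials as quoted in the text.
G_NAP = 0.0541   # maximum NaP conductance per unit area
E_NA = 55.0      # mV, sodium Nernst potential
G_K = 3.51       # maximum K conductance per unit area
E_K = -90.0      # mV, potassium Nernst potential

def i_nap(v_m, p_nap):
    """Persistent sodium current; negative (inward) whenever v_m < E_NA."""
    return G_NAP * p_nap * (v_m - E_NA)

def i_k(v_m, p_k):
    """Potassium current; positive (outward) whenever v_m > E_K."""
    return G_K * p_k * (v_m - E_K)
```

As noted in the text, in the whole interval EK < VM < ENa the sodium term is negative and the potassium term positive, whatever the opening probabilities.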
The IK/INaP ratio is not as large as it may first appear, since, as will be seen, K channels operate over a range of small opening probability. In addition, as may easily be checked, INaP is negative and IK is positive in the interval EK < VM < ENa, and the profile of INaP has a negative slope below a certain VM value. The membrane voltage equation can then be written as:

CM·dVM/dt = IC(t) − INaP(VM) − IK(VM, PK)   (5)
where CM is the membrane capacitance per unit membrane area and IC(t), called the control current, is an inward electrical current per unit membrane area. The latter was introduced in order to control the working potential of the system. In the model, the values assumed for these quantities were CM = 1 μF/cm2 and IC(t) = 3.25 ± 1 nA/cm2.
Equations (2) and (5) form a system of differential equations for a nonlinear dynamical system with states represented by variables PK and VM. The system can either be solved separately for PK(t) and VM(t) in the time domain or represented as a point moving on the plane VM × PK. All solutions presented in this paper were implemented in both ways by MATLAB routines (The MathWorks, Inc., 2007) freely available from the author upon request.
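The structure of such a time-domain solution can be sketched with a simple forward-Euler loop. The channel functional forms and the unit scaling below are illustrative assumptions, so this sketch reproduces the structure of the system of (2) and (5) rather than the quantitative behavior of the original MATLAB routines:

```python
import math

# Forward-Euler sketch of the two-variable system: membrane voltage v (mV) and
# K-channel opening probability pk. Channel forms and unit scaling are
# illustrative assumptions, not the paper's exact implementation.
ALPHA_NA, V0_NA = 0.125, -51.25
ALPHA_K, V0_K = 1.0 / 6.5, -34.0
BETA_K, TAU0 = 0.33 / 6.5, 25.35   # mV^-1, ms
G_NAP, E_NA = 0.0541, 55.0
G_K, E_K = 3.51, -90.0
C_M, I_C = 1.0, 3.25               # capacitance and control current (rescaled units)

def p_nap(v): return 1.0 / (1.0 + math.exp(-ALPHA_NA * (v - V0_NA)))
def p0_k(v): return 1.0 / (1.0 + math.exp(-ALPHA_K * (v - V0_K)))
def tau_k(v): return TAU0 * math.exp(BETA_K * (v - V0_K))

def simulate(v0=-60.0, pk0=0.02, dt=0.01, t_end=500.0):
    """Integrate states (v, pk); dt and t_end in ms; returns the two trajectories."""
    v, pk, vs, pks = v0, pk0, [], []
    for _ in range(int(t_end / dt)):
        i_na = G_NAP * p_nap(v) * (v - E_NA)   # inward (negative) sodium current
        i_kk = G_K * pk * (v - E_K)            # outward (positive) potassium current
        v += dt * (I_C - i_na - i_kk) / C_M    # voltage equation (5)
        pk += dt * (p0_k(v) - pk) / tau_k(v)   # kinetic equation (2)
        vs.append(v)
        pks.append(pk)
    return vs, pks

vs, pks = simulate()
```

Equivalently, plotting pks against vs gives the phase-plane representation discussed below.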
An approximate explanation of why a combination of channels like the one described above is capable of generating sustained oscillations is the following. Because of the delayed responsiveness of electrical potassium currents to voltage steps, the population of K channels behaves as a phenomenological inductance LK in series with a positive resistance RK. In contrast, within a limited voltage range in which INaP(VM) has a negative slope, the sodium channel population contributes to the effective impedance of the membrane with a "negative" resistance RNaP(VM) = dVM/dINaP. Thus, the two channel populations, in series with the membrane capacitance, form a sort of RLC circuit with R ranging from negative to positive values.
The voltage V0M at which the total electrical current vanishes is the equilibrium voltage of the system. If the absolute value of RNaP is too small, the equilibrium is stable; but if, in the neighborhood of V0M where RNaP is negative, it is large enough to overcompensate for RK, the circuit generates sustained oscillations of well-defined frequency and limited amplitude.
Since the system formed by (2) and (5) is nonlinear, oscillatory behaviors are expected to occur only within particular parameter domains. The kinetics of the system can be well represented on the plane VM × PK by the state velocity field with local components dVM/dt and dPK/dt. This is called the phase portrait representation. Figure 2 shows the directions of the velocity vectors as small bicolored segments, oriented from light to dark gray, winding around a central region. The whole set of these oriented segments is partitioned into four quadrants by a pair of curved lines, called nullclines, which meet at a point S = (V0M, P0K). One of these lines is the set of points where the velocity direction segment is vertical (dVM/dt = 0), and the other is the set where the velocity direction segment is horizontal (dPK/dt = 0). Hence, S is the point at which the state velocity is zero.
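The two nullclines can be written down explicitly: dPK/dt = 0 gives PK = P0K(VM), while dVM/dt = 0 gives PK = (IC − INaP(VM))/(GK·(VM − EK)). The sketch below (channel functional forms and unit scaling are illustrative assumptions) locates their intersection S by bisection on the difference of the two curves:

```python
import math

# Nullclines of the (V_M, P_K) system and their intersection S.
# Channel forms and unit scaling are illustrative assumptions.
ALPHA_NA, V0_NA = 0.125, -51.25
ALPHA_K, V0_K = 1.0 / 6.5, -34.0
G_NAP, E_NA = 0.0541, 55.0
G_K, E_K = 3.51, -90.0
I_C = 3.25

def p_nap(v): return 1.0 / (1.0 + math.exp(-ALPHA_NA * (v - V0_NA)))
def p0_k(v): return 1.0 / (1.0 + math.exp(-ALPHA_K * (v - V0_K)))

def pk_nullcline(v):
    """dP_K/dt = 0: the resting-probability profile itself."""
    return p0_k(v)

def v_nullcline(v):
    """dV_M/dt = 0: the P_K value balancing the sodium and control currents."""
    i_na = G_NAP * p_nap(v) * (v - E_NA)
    return (I_C - i_na) / (G_K * (v - E_K))

def stagnation_point(lo=-65.0, hi=-45.0):
    """Bisection on the difference of the two nullclines (sign change assumed in [lo, hi])."""
    f = lambda v: pk_nullcline(v) - v_nullcline(v)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

v_s = stagnation_point()
```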
The trajectory of a state starting from any point of the phase portrait (different from S) can be obtained by connecting a sequence of contiguous velocity direction segments (exemplified by two arrowed spiraling lines in Fig. 2).
If the system does not admit oscillatory solutions, all trajectories converge directly toward S along spiraling lines. In this case, S is a state of stable equilibrium. Otherwise, the trajectories approach an ellipsoidal curve centered on S, called the limit cycle (Fig. 2). In this case, S is a point of unstable equilibrium. This is called the stagnation point of the phase portrait, as a trajectory starting from a point infinitely close to S takes an infinite amount of time to reach any point at a finite distance from S inside the limit cycle.
In fact, since in practice the system is exposed to membrane noise (both thermal and synaptic), sooner or later a state found at S will be unpredictably displaced to a point close to S, from which it will start to approach the limit cycle in a finite, but quite indeterminate, time interval. As a consequence, the final phase of the voltage oscillation is also indeterminate.
By varying the parameters of the system slightly, the oscillatory regime can be more or less significantly altered. In particular, transitions from stable to unstable equilibrium, or vice versa, can be obtained by varying the control current IC(t). Remarkably, we deduce from (2) that nullcline dPK/dt = 0 is exactly the profile of the function P0K(VM). Since this does not depend on the control current IC(t), the stagnation points of all limit cycles are found on this nullcline and always move along it under any adiabatic change in IC. By contrast, nullcline dVM/dt = 0 moves up or down as IC respectively increases or decreases, with considerable changes in limit cycle size and period. More precisely, for hyperpolarizing currents (IC small or negative), the mean radius of the limit cycle shrinks progressively to zero while S moves in the direction of decreasing values of PK; thus, below a certain level, voltage oscillations become impossible. Vice versa, for depolarizing currents (more positive values of IC), the mean radius of the limit cycle first increases to a maximum and then decreases and shrinks to zero, while S moves in the direction of increasing values of PK. These effects qualitatively reproduce the behavior of real ISOs quite well.
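The displacement of S under changes in IC can be checked numerically: sweeping the control current and solving for the nullcline intersection shows the equilibrium sliding along the P0K profile toward higher VM, and hence higher PK, as IC grows (channel forms and unit scaling are illustrative assumptions, as this is a sketch rather than the paper's model):

```python
import math

# Sweep of the control current I_C: the stagnation point S stays on the
# P0_K nullcline and moves toward higher V_M and P_K as I_C increases.
# Channel forms and unit scaling are illustrative assumptions.
ALPHA_NA, V0_NA = 0.125, -51.25
ALPHA_K, V0_K = 1.0 / 6.5, -34.0
G_NAP, E_NA = 0.0541, 55.0
G_K, E_K = 3.51, -90.0

def p_nap(v): return 1.0 / (1.0 + math.exp(-ALPHA_NA * (v - V0_NA)))
def p0_k(v): return 1.0 / (1.0 + math.exp(-ALPHA_K * (v - V0_K)))

def equilibrium_v(i_c, lo=-70.0, hi=-45.0):
    """Bisection for the voltage where the two nullclines intersect at control current i_c."""
    def f(v):
        i_na = G_NAP * p_nap(v) * (v - E_NA)
        return p0_k(v) - (i_c - i_na) / (G_K * (v - E_K))
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hyperpolarizing, intermediate, and depolarizing control currents.
v_low, v_mid, v_high = (equilibrium_v(i) for i in (2.0, 3.25, 4.5))
```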
One important property of the nonlinearity of ISO kinetics is the possibility of controlling the ISO phase by applying inhibitory stimuli. In experiments on the mitral cells of the rat olfactory bulb, Desmaisons et al. discovered that short-lasting (phasic) hyperpolarizations, as may be caused by inhibitory interneurons or artificial stimuli, reset the ISO phase to a characteristic value with a precision of 1 ms, while transiently and modestly enhancing the ISO voltage amplitude (rebound).
This effect was perfectly reproduced in the model by injecting a time-dependent negative current, simulating the sudden opening of a set of chloride channels. Extensive simulations of this action unveiled the reason for such remarkable precision. The mechanism is explained in Fig. 3. Let a short-lasting flow of chloride ions enter the cell membrane while the ISO state is running along the limit cycle. Then, due to the delayed responsiveness of K channels, which determines a sort of inertial behavior, neither the shape nor the position of the limit cycle is particularly affected. If the flow is sufficiently intense, the membrane potential VM is rapidly driven close to the chloride Nernst potential, say, ECl ≈ −65 mV (vertical line in Fig. 3). For the sake of graphical simplicity, without altering the validity of the example, we assume that VM is instantly driven exactly to ECl. Clearly, the maximum difference among the times taken by paths reaching this line from any point of the limit cycle is that between the paths starting from the points of maximum and minimum PK.
As shown in the figure, this is precisely the time interval taken by the state to transit from a to b along the small arc intersecting line VM = ECl. Since the K channel time constant τ(VM) at VM = −65 mV is about half that at VM = −55 mV (see Fig. 1b), and since, in the case considered here, the voltage amplitude of the limit cycle is about 4 mV, the transit time from a to b may be estimated at about 1/16 of the limit cycle period, i.e., on the order of 1 ms for an ISO of 40 Hz. We can easily imagine how effective this behavior can be in synchronizing the firings of a population of ISO-supporting neurons with similar characteristics.
Let us assume that, initially, the ISO phases of different neurons of near frequencies are randomly different. A set of phasic inhibitory stimuli simultaneously delivered to all neurons will have the effect of synchronizing all ISO phases to the precision of less than 1 ms. Now, let us assume that, immediately after the inhibitory action, the ISOs are driven by a common excitatory input to the firing threshold of the neurons. All neurons will then start firing synchronously for a certain time interval, depending on phase scattering. The state of synchronized firing may even persist indefinitely, if the neuron population is allowed to feed back weakly on itself via fast inhibitory interneurons, perhaps even through polysynaptic pathways. This happens because of the rebounding effect of phasic inhibitory stimuli.
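The synchronizing power of a common reset can be illustrated with a toy phase model, deliberately much simpler than the conductance model above: N oscillators near 40 Hz start at random phases, a simultaneous reset snaps them to a common phase, and the Kuramoto-style order parameter R = |mean(exp(iθ))| measures the resulting coherence (all numbers below are illustrative assumptions):

```python
import cmath
import random

# Toy phase model: N ISOs with scattered frequencies near 40 Hz.
# A simultaneous phasic "reset" collapses the phase dispersion; the residual
# frequency scatter then slowly de-coheres the population again.
random.seed(42)
TWO_PI = 2.0 * 3.141592653589793
N = 200
freqs = [40.0 + random.gauss(0.0, 0.5) for _ in range(N)]    # Hz
phases = [random.uniform(0.0, TWO_PI) for _ in range(N)]     # random initial phases

def order_parameter(ph):
    """Coherence R = |mean(exp(i*theta))|: ~0 for scattered phases, 1 for a locked population."""
    return abs(sum(cmath.exp(1j * p) for p in ph) / len(ph))

r_before = order_parameter(phases)       # low: phases are random

phases = [0.0] * N                       # simultaneous phasic reset to a common phase

t = 0.050                                # 50 ms of free drift after the reset
phases = [p + TWO_PI * f * t for p, f in zip(phases, freqs)]
r_after = order_parameter(phases)        # still high: dispersion grows only slowly
```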
Synchronized spiking is useful not only in providing the temporal ordering of nervous information flows but also in ensuring the effectiveness of the neuronal population spiking on target neurons (even through polysynaptic pathways). There is indeed general agreement that unsynchronized spiking is ineffective or poorly effective in brain communication. It is also quite evident that, if a target neuron supports an ISO synchronous with those of the source neurons, synchronized spiking will be even more effective on it. A synchronized volley of synaptic inputs is indeed most effective when the membrane voltage of the target neuron is maximal.
In carrying out simulations, a second important phenomenon was discovered, shown in Fig. 4. Inhibitory stimuli lasting slightly longer than the limit cycle period, superimposed on membrane noise of moderate level, produced the opposite of the effect of a phasic inhibitory stimulus, namely, temporary quenching of the ISO amplitude and resetting of its phase to a random value.
The mechanism of this curious effect is as follows: during a long-lasting (tonic) inhibitory stimulus, the membrane potential remains close to the Nernst potential of chloride for long enough for the stagnation point of the limit cycle to evolve towards the configuration corresponding to the probability at rest, P0K(ECl). Since the mean radius of the limit cycle vanishes in this region, the state initially evolves without oscillating along a line depending on the delay of the K channel response and, when PK(t) approaches the resting value P0K(VM) (see (2)), the state reaches a stagnation point Q on the nullcline dPK/dt = 0 (Fig. 4). Afterwards, the state continues to travel along this line up to stagnation point S of the limit cycle in its original configuration. It would then remain there indefinitely, if membrane noise did not displace it from S, thus allowing it to evolve toward the limit cycle. Since the phase of the restored oscillation is randomly affected by membrane noise, the poststimulus phase is totally unrelated to the prestimulus phase.
The model was also used to simulate the effects of ISO amplitudes reaching the AP threshold at the neuron hillock. Since, as explained in Section 2, the mechanisms of ISO kinetics are assumed to be almost completely decoupled from those involved in AP kinetics, the ISO voltage profile, as detected at the somatic membrane, is assumed to be influenced by the axon discharges according to a simple additive law. Accordingly, combined voltage profile diagrams were obtained by adding spike-like profiles to the ISO profile at firing threshold intersections.
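The additive law can be sketched directly: generate a subthreshold ISO profile, detect its upward crossings of the firing threshold, and add a stereotyped spike-like template at each crossing. The sinusoidal waveform, threshold value, and template below are illustrative assumptions, not the model's actual profiles:

```python
import math

# Additive combination of an ISO voltage profile and spike-like events.
DT = 0.1            # ms per sample
T_END = 100.0       # ms
V_REST = -55.0      # mV, ISO midline
AMP = 4.0           # mV, ISO amplitude
FREQ = 40.0         # Hz
THRESHOLD = -52.0   # mV, firing threshold at the hillock

n = int(T_END / DT)
iso = [V_REST + AMP * math.sin(2.0 * math.pi * FREQ * i * DT / 1000.0) for i in range(n)]

# A crude 2-ms spike template: a sharp depolarizing peak followed by an after-spike dip.
spike = [60.0 * math.exp(-((k * DT - 0.5) / 0.3) ** 2)
         - 10.0 * math.exp(-((k * DT - 1.2) / 0.5) ** 2)
         for k in range(int(2.0 / DT))]

# Upward threshold crossings of the ISO profile prime the APs.
crossings = [i for i in range(1, n) if iso[i - 1] < THRESHOLD <= iso[i]]

combined = iso[:]
for i in crossings:
    for k, dv in enumerate(spike):
        if i + k < n:
            combined[i + k] += dv    # additive law: the spike rides on the ISO profile
```

Over 100 ms a 40-Hz ISO crossing the threshold once per cycle produces four spikes; the combined trace also shows the negative after-spike transient dipping below the subthreshold profile, as discussed earlier in the text.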
Figure 5 shows the time domain membrane voltage profile resulting from a model of an ISO-supporting neuron, as perturbed by low thermal and synaptic membrane noise under the action of a few inhibitory and/or excitatory stimuli. The left part of Fig. 5 describes the behavior of the voltage oscillation before and after the application of a phasic inhibitory stimulus (a), putatively simulating the effects of GABA-A receptors, and after a tonic inhibitory stimulus (b), putatively simulating the effects of GABA-B receptors.
In Fig. 5, the inhibitory and excitatory currents generated by these stimuli are drawn at arbitrary scales and levels, respectively, above and below the ISO and threshold voltage profiles. Clearly, phasic stimulus a immediately results in an ISO phase-resetting event, as found experimentally, whereas tonic stimulus b results in the quenching of the ISO oscillation, followed by restoration of the oscillatory regime with a random phase (speed depending on the level of thermal and synaptic noise). In the model, this noise was simulated by adding a white-noise-like term of moderate amplitude to the control current, so as to displace the stagnation point erratically by 0.5–1 mV. The latter phenomenon awaits experimental confirmation.
Note that the current profiles in Fig. 5 show modulations which presumably do not correspond to those expected in the absence of ISOs. This is because the inhibitory stimuli are presumed to activate chloride channel opening, so that the chloride currents themselves are influenced by membrane voltage variations.
The right-hand part of Fig. 5 shows, at c, the effect of a phasic inhibitory stimulus accompanied by a tonic excitatory stimulus, putatively simulating the postsynaptic effects of a combination of AMPA (α-amino-3-hydroxy-5-methyl-4-isoxazolepropionate) and NMDA (N-methyl-D-aspartate) receptors. This association has the effect of resetting the ISO phase and increasing its amplitude up to the firing level, thus starting the spiking regime. In the subsequent time interval, the ISO voltage profile appears to be modified by APs, primed by the enhanced ISOs in a purely additive way for the reasons explained above.
To ensure that synchronized firing lasts indefinitely, a small self-inhibitory action, putatively exerted via a feedback interneuron, was introduced in the model. Its effect is visible as a sequence of small notches in the inhibitory current profile in the region c–d. At d, a tonic inhibitory stimulus suddenly quenches the firing regime, permanently bringing the maxima of the ISO profiles below the firing threshold.
The effects exemplified in Fig. 5 show how inhibitory and excitatory stimuli can be used to control the firing regimes of a population of neurons supporting ISOs of near frequencies. Let us imagine that all the neurons of the population are simultaneously targeted by a fan of phasic inhibitory stimuli. Distributed events of this sort may be delivered, for instance, as corollary discharges promoted from the limbic regions of the brain in preafferent processes  or driven by the inhibitory neurons of the reticular thalamic nucleus during sensory input. This general action immediately results in synchronization of all ISO-supporting neurons. Even if the neurons do not fire, the voltage oscillations of all their dendritic membranes may generate EEG transients shaped as damped oscillations, which are extinguished more or less rapidly, depending on ISO phase dispersion. This is reminiscent of the short evoked potential episodes often observed in EEG patterns.
Now, let us imagine that, immediately after the ISO synchronization event, a subpopulation of neurons simultaneously receives a set of excitatory stimuli—for instance, from sensory receptors. All the neurons of this subpopulation then start firing synchronously. This regime persists indefinitely if the neurons feed back weakly on themselves via local inhibitory interneurons, as is the case for the single ISO-supporting neuron shown in Fig. 5, time interval c–d. The firing regime can be stopped at any moment by targeting all ISO-supporting neurons with a fan of tonic inhibitory stimuli.
Clearly, therefore, a homogeneous population of ISO-supporting neurons, no matter where they are located in the brain, can be switched on and off, thus making them encode a set of endogenous or exogenous simultaneous stimuli into a corresponding set of synchronized firings. This process is impressively reminiscent of the phenomenon described by Freeman et al. [6, 7, 9, 22]. Curiously enough, ISO phenomena under various stimulation conditions, externally detected as mean fields (i.e., as EEG voltage variations), are likely to exhibit the same features as signals generated by dynamical chaos models.
That the switching mechanisms described above may play an important role in brain communication is suggested by the fact that only a population of synchronously firing neurons is effective in activating other neurons via monosynaptic or polysynaptic pathways. By contrast, the postsynaptic effects of unsynchronized APs are ineffective or poorly effective. It has traditionally been presumed that sufficiently dense statistics of firings could perform as well, but it is perfectly evident that firing synchronization with the precision of spike duration is incomparably more efficient. It is also quite clear that the activation is mostly effective on target neurons supporting ISOs of the same frequency and phase.
The mechanisms described in the previous sections outline the basic principles of a sort of parallel information processing. Going back to the brainweb hypothesis mentioned in the introduction, the formation of a communication net among different cortical areas is easily explained. In fact, let us assume that some populations of neurons of different brain regions, supporting ISOs of near frequencies and interacting via reciprocating pathways, are preset at a certain moment to a state of ISO synchronization and that, immediately after, one or more subpopulations of these populations are driven by parallel input to a state of synchronized firing. They then immediately start to communicate with each other by synchronized volleys of axon discharges, even through polysynaptic pathways, and, if the populations have been previously trained by similar synchronization episodes to a state of Hebbian synaptic connectivity, they will be recruited to a sort of collective state of resonance, reminiscent of associative memory. This state of mutual communication can be dissolved and the ISOs desynchronized at any moment by fanning out a set of tonic inhibitory stimuli to all ISO-supporting neurons of the net.
Several types of architecture can be imagined to implement various effects—for instance, the generation of more or less complicated sorts of unidirectional or bidirectional communication. In a more general view, we can imagine that an orchestra of synchronization and desynchronization processes, suitably directed by endogenous and exogenous inputs, works as a kind of parallel processing programming.
One of the main problems in the control of ISO synchronization, desynchronization, and firing is the maintenance of ISO generation mechanisms within the limits of optimal functionality, mainly to stabilize ISO frequencies and amplitudes and maintain firing thresholds close enough to the mean level of ISO profile maxima. This problem did not arise in the model, as the maintenance of good working conditions was guaranteed by the mathematical precision of numerical simulations. However, in real systems, it is very difficult to imagine how all these mechanisms could work with sufficient precision in the absence of a suitable system of homeostatic controls.
Previous experience in biophysical modeling [23, 24] made the author aware that the basic functions of any biological system are always governed by a principle of optimal functionality, which is satisfied only at the expense of dozens of homeostatic controls for each optimized function. Some of these controls can be imagined to be performed by intracellular mechanisms—for instance, inward rectifier potassium channels, sophisticated kinetics and regulations of the sodium and potassium channels involved in ISO generation, feedback mechanisms mediated by cytoplasmic agents, etc.—but others cannot, as certain controlling devices cannot be exposed to the variations of precisely those parameters that need control. The suspicion therefore arises that the network of astrocytes, which locally sense the glutamatergic activity of neurons and feed back locally on the same neurons in complicated and still unclear ways, plays a role in controlling the working conditions of neurons, particularly those necessary for optimal ISO functionality.
The importance of the synchronization and desynchronization mechanisms described in this paper also emerges from general considerations regarding the processing capabilities of the brain. The argument stems naturally from the fact that there is enough in theoretical informatics to support the concept of the mammalian brain as a formidable parallel universal machine. Neither the connectionist paradigm nor the dynamical chaos paradigm, both of which have dominated the scene of theoretical neurodynamics in the last few decades, can compete with the explanatory power of the parallel universal computing paradigm.
As a universal machine, the brain should be able to simulate, perhaps over a longer period of time, the computations that can be performed by any other machine, no matter how powerful. The conditions for universality, which were discovered by Alan Turing in , may be summarized as follows: to be universal, a machine must be provided with a reading–writing memory of capacity greater than that of any other computing device; it must be able to work recursively so as to perform, at every recursion, a finite number of logical and arithmetical operations; at every recursion, it must be able to read any desired set of data, to perform computations on these data following the instructions of a finite program, and to write the results back into memory. The relevant property of a universal machine is that, when fed by suitable programs of finite complexity, it can generate processes and data structures of indescribable complexity—which is certainly the case of the brain.
As a massively parallel machine, the brain is expected to be able to perform far more simultaneous computations per second than any existing sequential machine. To do this to the best of theoretical possibility, it requires very efficient procedures for process timing and data flow synchronization. Maintaining the temporal coherence of parallel information flows is in fact absolutely necessary for the efficiency of any parallel recursive procedure.
Clearly, therefore, the requirement of universality, together with that of massive parallelism, imposes severe constraints on the mechanism of brain information processing. In particular, the capability of parallel recursion, which is necessary to express computational universality fully, would be impossible if communications among the various components of the brain were carried out by statistical flows of synaptic stimuli. Indeed, the dispersion of time relations among synaptic inputs would make it impossible to preserve the temporal ordering of information flows during recursive procedures. Were it so, the reliable events of the brain, at the end of each neurodynamic process, would only be the formation of static patterns of neuronal excitations or stable limit cycles in reverberating circuits.
Although the possibilities afforded by the mechanisms described here are far from being totally explored, they clearly appear to satisfy the minimal requirements for ensuring computational universality to a machine like the brain.