Cogn Neurodyn. 2009 December; 3(4): 429–441.
Published online 2009 September 17. doi:  10.1007/s11571-009-9095-z
PMCID: PMC2777196

Syntactic sequencing in Hebbian cell assemblies


Hebbian cell assemblies provide a theoretical framework for the modeling of cognitive processes that grounds them in the underlying physiological neural circuits. Recently we have presented an extension of cell assemblies by operational components, which allows aspects of language, rules, and complex behaviour to be modelled. In the present work we study the generation of syntactic sequences using operational cell assemblies timed by unspecific trigger signals. Syntactic patterns are implemented in terms of hetero-associative transition graphs in attractor networks which cause a directed flow of activity through the neural state space. We identify parameter regimes in which an unspecific excitatory control signal can switch reliably between attractors in accordance with the implemented syntactic rules. If several target attractors are possible in a given state, noise in the system in conjunction with a winner-takes-all mechanism can randomly choose a target. Disambiguation can also be guided by context signals or specific additional external signals. Given a permanently elevated level of external excitation, the model can enter an autonomous mode, where it generates temporal grammatical patterns continuously.

Keywords: Cell assemblies, Attractor networks, Grammar, Language, Behaviour


Hebbian Cell Assemblies have been proposed as a model for cognitive processes that on one hand aims at functional explanations of psychological phenomena, but on the other hand tries to link them to physiological processes going on in neural circuits in the brain (Hebb 1949). Related studies have mostly modeled aspects of object memory, technically realised by storing patterns in attractor neural networks (Amit 1988; Amit 1995; Palm 1982). Recent experimental findings support the basic framework of assemblies, see, e.g. Harris et al. (2003), Tsien (2001).

More complex spatio-temporal patterns and processes as compared to simple fixed point attractors have also received attention. Lashley (1951) pointed out early on the need for neural mechanisms that allow for complex temporal sequencing in neural activity in order to explain behaviour and language. These ideas have been further developed conceptually by Wickelgren and others (Amari 1972; Hebb 1949; von Neumann 1958; Palm 1982; Pulvermüller 1992; Wickelgren 1979, 1992). Central to computational models of sequences are hetero-associative connections between pools of neurons that can evoke a wave-like directed propagation of activity along specific synaptic pathways. In one such class of models, so-called “synfire-chain” models, hetero-associative connections alone suffice to implement sequences. Such studies date back to Abeles (1991) and aim at explaining highly precise temporal patterns in neural firing times (Abeles et al. 1993; Bienenstock 1995; Diesmann et al. 1999; Herrmann et al. 1995; Wennekers 2000).

A second class of sequence models starts from attractor networks where a number of patterns are stored auto-associatively, but additional hetero-associative connections are used to allow for transitions between patterns, see, e.g. Horn and Usher (1989), Palm (1982), Sommer and Wennekers (2005), Willwacher (1982). These models often employ additional mechanisms to destabilise attractor patterns and to evoke autonomous transitions along the hetero-associatively stored sequences of attractors. For example, Horn and Usher (1989) used adapting neurons to cause the transitions. They also provide a mean-field type analysis of such networks with adapting neurons. Rehn and Lansner (2004) present a network where synaptic depression causes pattern transitions. Their model explains participants’ performance in free-recall tasks of ordered and random lists of items.

Although the physiological reality of precise hetero-associative firing patterns and synfire chains is still being discussed, some experiments lend support to the idea of a more or less deterministic directed activation flow through neural circuits as compared to merely stochastic firing sequences (Cossart et al. 2003; Hahnloser et al. 2002; Ikegaya et al. 2004). Specific sequential spatio-temporal firing patterns have especially been observed in the song production system of birds (Hahnloser et al. 2002) where they apparently correspond to representations of syllables and their sequencing into songs (Gentner and Margoliash 2003; Gentner et al. 2006; Jarvis 2004).

It is still an open problem how cell assembly networks can support language and behaviour, both domains requiring the rule-based syntactic concatenation of temporal elements into more complex compounds. The respective mechanisms have to take interactions with the environment into account (or rather with the environment’s representation in the brain). Some relevant physiology-inspired models for language and behaviour have been built, in part based on the concept of “mirror neurons” (Arbib et al. 2000; Arbib 2005; Feldman and Narayanan 2004; Pulvermüller 1992, 2003; Rizzolatti et al. 2002). This includes advanced models in robotics (Knoblauch et al. 2005a; Markert and Palm 2006; Yamashita and Tani 2008).

We have proposed a generic approach addressing mechanisms for complex behaviour and language in cell assembly networks by extending them by rule-based “operational” principles (Wennekers 2006), see also Knoblauch et al. (2005a), Wennekers and Palm (2007). Operational cell assemblies combine attractor networks with rules and finite state automata in a simple and plausible manner by allowing for input-dependent transitions between attractors. This results in an intuitive picture of brain processes as occurring in a state space with a graph-like structure of transitions, where states are attractors of a local neural network dynamics and edges are hetero-associative transitions between attractors that are triggered if certain input events arrive. Such systems would describe processes going on locally in an area or column. The full complexity of cognition would then arise from many such modules interacting. Operational cell assemblies of this kind have been used to implement language and behavioural components on robots (Fay et al. 2005; Knoblauch et al. 2005a, b; Markert and Palm 2006; Markert et al. 2005, 2007). They can also well be implemented in modern neural hardware (Indiveri 2007; Schemmel et al. 2004; Wijekoon and Dudek 2008) and may therefore form a programming paradigm for future computing hardware (Palm 1982; Wennekers 2006).

Our previous work considered only systems where transitions between assemblies were purely determined by specific input sequences. Either different syntactic input patterns (“words”) were recognised based on the order of arriving “syllables”, or different objects in a visual input uniquely triggered the generation of syntactically structured spatiotemporal outputs (“object naming”) (Wennekers 2006).

In contrast, in the present work we consider the generation of syntactic spatio-temporal output patterns in cell assemblies without specific inputs. This may apply to the generation of free speech, free thinking, or other autonomous behaviours.

One practical difficulty in this context relates to the fact that the speed of speech and behaviour can be modulated over wide ranges. It is not obvious at all how neural circuits can support this. Earlier models for sequencing and timing exist, but the issue is far from being resolved, see Wörgötter and Porr (2005) for a review. Sommer and Wennekers (2005) proposed a switching mechanism to explicitly trigger transitions in a sequence network comprising conductance based neurons. The triggering signal was entirely unspecific with respect to any information stored in the networks but rather aimed at setting the pace of retrieval. We here propose a similar solution in the broader framework of operational cell assembly networks, where syntactic rules are implemented in the form of hetero-associative transitions between assemblies as indicated earlier (Wennekers 2006). These can, for instance, form word networks or sentence networks. We then study how unspecific excitatory input can trigger transitions between attractors in such networks. Such a trigger would again not contain any information about the sequences to be generated; rather, it would serve as a threshold control that at some pace evokes transitions along the embedded syntactic pathways. Valentino Braitenberg has suggested such a “pump of thought” (Braitenberg 1978); we here show a possible realisation. The idea of triggered transitions separates the problem of regulating the speed of transitions from the problem of creating the structural properties of possible transitions in the network.

State switching by unspecific excitatory input

In the present section we study the simplest setup: state switching between latched attractor states by unspecific excitatory input. We assume that a single sequence of attractor patterns is embedded in a network and linked by hetero-associative connections into a cyclic chain. Transitions between patterns can be caused at (almost) arbitrary times by entirely unspecific inputs that use no extra information beyond that stored in the hetero-associative connections. Analytical conditions for stable switching are derived.

Basic model

We consider a network of n excitatory and n inhibitory graded response neurons. The excitatory neurons are described by low-pass membrane potentials xi with time-constant τ and an intrinsic cell-adaptation ai per cell with time-constant τa and maximal strength b, which decreases a cell’s excitability during periods of elevated firing. The neurons receive an input Ii and their output is computed by a sigmoid firing rate function as zi = f(xi − ai). A possible choice for f is the unit-step or Heaviside function, f(x) = 1 if x ≥ 0 and f(x) = 0 else, but other sufficiently steep functions work as well. The inhibitory cells (yi) are assumed to be linear for simplicity; they have a membrane time constant τ2 but do not adapt. The dynamic equations of the model read

\tau\,\dot{x}_i = -x_i + \sum_j (A_{ij} + H_{ij} + R_{ij})\, z_j - \sum_j D_{ij}\, y_j + I_i \qquad (1)
\tau_a\,\dot{a}_i = -a_i + b\, z_i \qquad (2)
\tau_2\,\dot{y}_i = -y_i + \sum_j C_{ij}\, z_j \qquad (3)
z_i = f(x_i - a_i) \qquad (4)

In (1) A = (Aij) and H = (Hij) are auto- and hetero-associative coupling matrices, respectively. D = (Dij) and R = (Rij) in (1) and C = (Cij) in (3) are matrices of random synapses. Their entries are assumed to be independent and identically distributed random variables with means E[R11] = r, E[C11] = c, E[D11] = d and variances D[R11] = σ²r, D[C11] = σ²c and D[D11] = σ²d.

The auto- and hetero-associative connections are set up between neurons in a set of binary memory patterns {ξα ∈ {0, 1}n, α = 1,…,P} where the ξα have ones at random positions and zeroes elsewhere. We assume that each pattern has exactly m ones. A and H are given by

A_{ij} = \frac{a}{m} \sum_{\alpha=1}^{P} \xi_i^{\alpha}\, \xi_j^{\alpha} \qquad (5)
H_{ij} = \frac{h}{m} \sum_{\alpha=1}^{P} \xi_i^{\alpha+1}\, \xi_j^{\alpha} \qquad (6)

In (6) and elsewhere we implicitly assume that the hetero-associative chain of patterns is cyclically closed, i.e. ξP+1 = ξ1. Parameters a and h set the relative strength of the auto- and hetero-associative connections. Note that here and throughout the paper upper Greek indices denote patterns (1,…,P) and lower Roman indices denote units (1,…,n). Summation ranges are omitted where they are obvious.
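As a concrete illustration, the pattern and coupling-matrix setup can be sketched in a few lines; the normalisation a/m and h/m in (5)–(6) and the specific sizes and strengths below are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, P, m = 36, 6, 6        # units, number of patterns, ones per pattern
a, h = 1.0, 0.35          # auto- and hetero-associative strengths (example values)

# Binary memory patterns: each row of xi has exactly m ones at random positions.
xi = np.zeros((P, n))
for alpha in range(P):
    xi[alpha, rng.choice(n, size=m, replace=False)] = 1.0

# Auto-associative couplings: A_ij = (a/m) * sum_alpha xi_i^alpha * xi_j^alpha.
A = (a / m) * xi.T @ xi
# Hetero-associative couplings map each pattern onto its successor; the chain
# is closed cyclically (np.roll), i.e. pattern P feeds back into pattern 1.
H = (h / m) * np.roll(xi, -1, axis=0).T @ xi
```

With this normalisation a fully active pattern delivers an auto-associative input of at least a to each of its own units and a hetero-associative input of at least h to the units of its successor.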

We define the overlap matrix between patterns as Oαβ = (1/m) ∑i ξiα ξiβ. Because the patterns are random and each has m ones we have E[Oαβ] = δαβ + (1 − δαβ) m/n. The second term reflects overlaps between patterns and can be made small by using sparse patterns with m ≪ n.
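This overlap estimate is easy to check empirically; the sizes below are arbitrary illustration values.

```python
import numpy as np

# For random binary patterns with m ones out of n units, the mean overlap
# between two distinct patterns is about m/n, so sparse patterns (m << n)
# interfere only weakly.
rng = np.random.default_rng(1)
n, m, P = 1000, 20, 50
xi = np.zeros((P, n))
for alpha in range(P):
    xi[alpha, rng.choice(n, size=m, replace=False)] = 1.0

O = xi @ xi.T / m                       # overlap matrix O^{ab}
off_diag = O[~np.eye(P, dtype=bool)]    # overlaps between distinct patterns
```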

Derivation of meanfield equations

We further define the following pattern specific averages of the model variables

m^{\alpha} = \frac{1}{m}\sum_i \xi_i^{\alpha}\, z_i \qquad (7)
h^{\alpha} = \frac{1}{m}\sum_i \xi_i^{\alpha}\, x_i \qquad (8)
w^{\alpha} = \frac{1}{m}\sum_i \xi_i^{\alpha}\, a_i \qquad (9)
u^{\alpha} = \frac{1}{m}\sum_i \xi_i^{\alpha}\, y_i \qquad (10)

The mα are often called “overlaps” because they measure how close the firing pattern z is to a stored pattern ξα; mα will take its maximum value of 1 only if all ones in ξα are also fully active in z.

We now derive dynamic equations for these averaged variables from the original (1)–(4). For this we use the approximation

z_i \approx \sum_{\beta} m^{\beta}\, \xi_i^{\beta} \qquad (11)

Equation 11 states that the individual neuron activations can be written as a superposition of the memory patterns weighted by the current activation strength mβ of each pattern. This approximation reflects that the dynamics of the system will either stay in fully excited memory states (auto-associative mode) or move between such patterns (switched mode).

For the pattern specific adaptation we then get

\tau_a\,\dot{w}^{\alpha} = -w^{\alpha} + b\, m^{\alpha} \qquad (12)

If we define the total activity as s := ∑β mβ, the inhibitory dynamics results in

\tau_2\,\dot{u}^{\alpha} = -u^{\alpha} + \frac{1}{m}\sum_i \xi_i^{\alpha} \sum_j C_{ij}\, z_j \qquad (13)
\approx -u^{\alpha} + c\, s \qquad (14)

Note that the right hand side in (14) defines the same low-pass dynamics for all α. Therefore, the pattern-specific inhibition variables all become similar asymptotically in time after initial transients have died out. (They may even become identical if we further assumed Cij = c = constant, or in the asymptotic limit of infinite pattern size, m→∞.) We could therefore just have started with a single globally acting linear inhibitory neuron. This reflects that the main purpose of the inhibitory neurons is to measure the total activity of the excitatory cells and to provide a proportional dynamic threshold control.

The averaged excitatory potentials become

\tau\,\dot{h}^{\alpha} = \frac{1}{m}\sum_i \xi_i^{\alpha}\, \tau\,\dot{x}_i \qquad (15)
= -h^{\alpha} + \frac{1}{m}\sum_i \xi_i^{\alpha} \Big[ \sum_j (A_{ij} + H_{ij} + R_{ij})\, z_j - \sum_j D_{ij}\, y_j + I_i \Big] \qquad (16)
\frac{1}{m}\sum_i \xi_i^{\alpha} \sum_j A_{ij}\, z_j = a \sum_{\beta} O^{\alpha\beta}\, m^{\beta} \qquad (17)
\frac{1}{m}\sum_i \xi_i^{\alpha} \sum_j H_{ij}\, z_j = h \sum_{\beta} O^{\alpha,\beta+1}\, m^{\beta} \qquad (18)
\tau\,\dot{h}^{\alpha} = -h^{\alpha} + a \sum_{\beta} O^{\alpha\beta}\, m^{\beta} + h \sum_{\beta} O^{\alpha,\beta+1}\, m^{\beta} + r\,s - d\,u^{\alpha} + I^{\alpha} \qquad (19)

Now, remember that Oαβ = δαβ + Qαβ, where Qαβ denotes the random part of the overlaps with E[Qαβ] ≈ m/n for α ≠ β. Inserting this into (19) leads to

a \sum_{\beta} O^{\alpha\beta}\, m^{\beta} = a\, m^{\alpha} + a \sum_{\beta\neq\alpha} Q^{\alpha\beta}\, m^{\beta} \qquad (20)
h \sum_{\beta} O^{\alpha,\beta+1}\, m^{\beta} = h\, m^{\alpha-1} + h \sum_{\beta\neq\alpha-1} Q^{\alpha,\beta+1}\, m^{\beta} \qquad (21)
\eta^{\alpha} := r\,s + a \sum_{\beta\neq\alpha} Q^{\alpha\beta}\, m^{\beta} + h \sum_{\beta\neq\alpha-1} Q^{\alpha,\beta+1}\, m^{\beta} \qquad (22)
\tau\,\dot{h}^{\alpha} = -h^{\alpha} + a\, m^{\alpha} + h\, m^{\alpha-1} - d\,u^{\alpha} + I^{\alpha} + \eta^{\alpha} \qquad (23)

Equation 23 shows that the pattern averaged potentials will follow a low-pass dynamics. Iα is the averaged input and duα the effect of the inhibitory pool of neurons. Clearly, pattern α is only influenced by effective auto-associative connections amα from neurons within the pattern and by hetero-associative connections hmα−1 from the previous pattern. The remaining contributions, rs and the cross-talk sums, are noise due to random synapses between excitatory cells and due to overlaps between patterns, respectively. They are both proportional to s and should be small. Because a, h, and r are all positive we may integrate their effect into a single effective constant of size at most r + 2(a + h)m/n that accounts for the maximum impact of the noise sources; with slight abuse of notation we keep writing r for this constant.

\tau\,\dot{h}^{\alpha} = -h^{\alpha} + a\, m^{\alpha} + h\, m^{\alpha-1} + r\,s - d\,u^{\alpha} + I^{\alpha} \qquad (24)

Note that uα by means of (14) is a low-passed version of the total activity s as well. It should therefore be possible to balance the mean impact of the noise (rs) by a proper choice of the inhibitory coupling strength d.

Analysis of triggered transitions

We assume that a sequence of patterns is stored in the synaptic connections. For proper parameters the model supports stable auto-associative retrieval of individual patterns, meaning that after its initial excitation and without further input the network activity stays in an attractor corresponding with the retrieved pattern. Such a state is characterised by one of the overlaps mα being close to one and all others being small, i.e. of the order of the mean overlap between patterns.

The activated pattern, say pattern number α, feeds into the next pattern, α + 1, by means of the hetero-associative synaptic connections in matrix H. If the whole network is in an attractor state this input should not raise the neurons in α + 1 above threshold.

We study if it is possible to reliably switch to the next pattern by means of an unspecific input pulse into all excitatory neurons. The hetero-associative input into α + 1 together with that extra input may then switch α + 1 on. This in turn would increase the level of inhibition which introduces competition between activated patterns. Because pattern α does not see hetero-associative input as α + 1 does and its cells may furthermore be in an adapted state, the activation of α + 1 can subsequently lead to an inactivation of α. This way unspecific excitatory pulses into the network may be used to trigger specific transitions imprinted in the hetero-associative connections. We study unspecific triggering as a worst case scenario—it would of course be possible to specifically trigger transitions to the next pattern by activating it directly and inhibiting all others. This, however, would require the triggering control network to already possess the information in the stored pattern sequence that we want to retrieve.

We observe the population activity in three subsequent populations ξα →ξα+1 → ξα+2 during the transition from ξα to ξα+1 and distinguish three phases:

  • Phase 1: ξα stays active; ξα+1 remains inactive without external input (attractor state)
  • Phase 2: ξα stays active; ξα+1 grows with external input I into all cells
  • Phase 3: ξα deactivates; ξα+1 is completed and stays activated without external input I

In all three phases ξα+2 and other populations beside ξα and ξα+1 should not get activated.

Collecting the simplified equations for the pattern specific activities derived in “Derivation of meanfield equations” we have for any pattern α:

\tau\,\dot{h}^{\alpha} = -h^{\alpha} + a\, m^{\alpha} + h\, m^{\alpha-1} + r\,s - d\,u^{\alpha} + I_0 \qquad (25)
\tau_a\,\dot{w}^{\alpha} = -w^{\alpha} + b\, m^{\alpha} \qquad (26)
\tau_2\,\dot{u}^{\alpha} = -u^{\alpha} + c\, s \qquad (27)
m^{\alpha} = f(h^{\alpha} - w^{\alpha}) \qquad (28)

The neurons are modelled with a sigmoid rate function f which increases steeply from 0 to a constant value fmax beyond a threshold θ. Without loss of generality we assume fmax = 1. One choice of rate function is the threshold- or step-function with threshold θ. We also set d = 1.
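The pattern-averaged system collected above fits into a few lines of code. The following sketch uses a step rate function, an Euler scheme, and parameter values from the admissible regime derived below (a = 1, h = I0 = 0.35, c = 0.6, b = θ = 0.05, r = 0); initialising the inhibition at its equilibrium avoids the start-up transient discussed later. All of these choices are assumptions of the sketch, not prescriptions from the text.

```python
import numpy as np

def simulate(T, trigger, P=6, a=1.0, h=0.35, c=0.6, I0=0.35,
             b=0.05, r=0.0, theta=0.05, tau=1.0, tau_a=1.0, tau2=1.0, dt=0.1):
    """Euler integration of the pattern-averaged dynamics (d = 1):
       tau   dh/dt = -h^a + a*m^a + h*m^{a-1} + r*s - u^a + I0(t)
       tau_a dw/dt = -w^a + b*m^a
       tau2  du/dt = -u^a + c*s,  with m^a = step(h^a - w^a - theta)."""
    hv = np.zeros(P); w = np.zeros(P); u = np.full(P, c)  # u starts at equilibrium
    hv[0] = a                                             # first pattern initially active
    states = []
    for step in range(int(T / dt)):
        t = step * dt
        m = (hv - w >= theta).astype(float)    # step rate function
        s = m.sum()
        I = I0 if trigger(t) else 0.0          # unspecific input into all pools
        dhv = (-hv + a * m + h * np.roll(m, 1) + r * s - u + I) / tau
        dw = (-w + b * m) / tau_a
        du = (-u + c * s) / tau2
        hv += dt * dhv; w += dt * dw; u += dt * du
        states.append(m.copy())
    return np.array(states)

# Without a trigger the initialised pattern is a stable attractor;
# one sufficiently long pulse advances the activity by exactly one pattern.
quiet = simulate(30, lambda t: False)
pulsed = simulate(30, lambda t: 10 <= t < 20)
```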

Phase 1 (attractor state) During this phase activation remains stationary such that hα is above threshold but hα+1 and all other potentials stay below. Accordingly, mα is close to one and all other overlaps are close to zero. Only patterns α and α + 1 receive specific synaptic input via auto- or hetero-associative connections from neurons in the active pattern α. Neurons in other patterns only receive random input from α via random or cross-talk synapses. There is no unspecific external input, I0 = 0.

Stationarity implies for pattern α

0 = -h^{\alpha} + a + r - u^{\alpha} \qquad (29)
0 = -w^{\alpha} + b \qquad (30)
0 = -u^{\alpha} + c \qquad (31)

In order to stay active, hα > θ + wα = θ + b is required, which implies

a + r - c > \theta + b \qquad (32)

According to the discussion after (23) the parameter r accounts for random synapses between cells as well as cross-talk between patterns. These are random effects that can vary between zero and some positive value characterised by r. To stay on the safe side and ensure stable attractors even for those patterns where r is vanishing, we assume instead of (32) the stronger condition

a - c > \theta + b \qquad (33)

The case r = 0 is indeed relevant if orthogonal patterns and no additional random synapses are assumed.

For pattern α + 1 we get 0 = −hα+1 + r − c + h and 0 = −wα+1. Pattern α + 1 should not get activated, such that hα+1 < θ is required. This implies

h + r - c < \theta \qquad (34)

Pattern α + 1 is the only one that receives hetero-associative input from α. Therefore, if it stays stable, no other pattern can get activated either (beside the already active one, α).

Phase 2 During this phase the untuned input into all neurons is activated such that pattern α stays on but pattern α + 1 increases its potential hα+1 close to a new equilibrium above θ. Simultaneously mα+1 increases from 0 to 1. The average adaptation wα+1 of neurons in α + 1 will also slowly increase towards its maximal value b. During this phase we further require that α + 2 and any other pattern remain below threshold. The stimulus is switched on for a time of the order of the larger of the membrane time-constants (τ, τ2) and the adaptation constant (τa) to allow for these changes of variables to take place.

For simplicity we assume that τ2 ≪ τ, meaning that the inhibition follows the excitation quasi-instantaneously, uα = cs for all α. Because mα is required to stay (near) one, we have for patterns α to α + 2

\tau\,\dot{h}^{\alpha} = -h^{\alpha} + a + (r - c)(1 + m^{\alpha+1}) + I_0 \qquad (35)
\tau\,\dot{h}^{\alpha+1} = -h^{\alpha+1} + a\, m^{\alpha+1} + h + (r - c)(1 + m^{\alpha+1}) + I_0 \qquad (36)
\tau\,\dot{h}^{\alpha+2} = -h^{\alpha+2} + h\, m^{\alpha+1} + (r - c)(1 + m^{\alpha+1}) + I_0 \qquad (37)

For a constant mα+1 the asymptotic potentials approached would be

h^{\alpha} = a + (r - c)(1 + m^{\alpha+1}) + I_0 \qquad (38)
h^{\alpha+1} = a\, m^{\alpha+1} + h + (r - c)(1 + m^{\alpha+1}) + I_0 \qquad (39)
h^{\alpha+2} = h\, m^{\alpha+1} + (r - c)(1 + m^{\alpha+1}) + I_0 \qquad (40)

We require patterns α and α + 1 to be super-threshold, but α + 2 to stay below. This is requested during the whole phase 2 where mα+1 grows from 0 to 1. More precisely, consider pattern α + 1: At the beginning of phase 2, hα+1 is small and certainly smaller than the right hand side in (36). It will therefore grow towards the asymptotic value in (39) as long as the right hand side in (36) stays positive due to the unspecific input. We need to guarantee that the right hand side is positive for all 0 ≤ mα+1 ≤ 1. By means of linearity it is sufficient to check this only at mα+1 = 0 and 1, namely at the beginning and the end of phase 2, respectively. Analogous arguments for pools α and α + 2 lead to the following inequalities:

a + r - c + I_0 > \theta + b \qquad (41)
a + 2(r - c) + I_0 > \theta + b \qquad (42)
h + r - c + I_0 > \theta + \delta \qquad (43)
a + h + 2(r - c) + I_0 > \theta + b \qquad (44)
r - c + I_0 < \theta \qquad (45)
h + 2(r - c) + I_0 < \theta \qquad (46)

In (43) δ is some slack variable that reflects the possibility that the adaptation of α + 1 is not entirely relaxed towards zero at the beginning of phase 2. This would mean a higher input is required to overcome the firing thresholds. It is possible to just consider the worst case δ = b and also to again assume r = 0 in the first four inequalities. This makes (41) and (44) redundant relative to (42) and we keep

a - 2c + I_0 > \theta + b \qquad (47)
h - c + I_0 > \theta + b \qquad (48)
r - c + I_0 < \theta \qquad (49)
h + 2(r - c) + I_0 < \theta \qquad (50)

Phase 3 During this phase the external input is switched off again. At the beginning of phase 3 pattern α + 1 sees auto-associative input from itself and hetero-associative input from pattern α; pattern α sees auto-associative input from itself but no hetero-associative input; pattern α + 2 sees hetero-associative input from pattern α + 1 only; all other patterns see at best random or cross-talk input. Therefore, pattern α + 1 has the best chance to survive the competition induced by inhibition. All patterns receive inhibition, but in contrast to phase 1 this now stems from two activated patterns, α and α + 1, and can therefore deactivate pattern α if it is strong enough. High adaptation in pattern α may additionally help this pattern to die out. The average adaptation wα of neurons in α will then decay to zero during phase 3 and the adaptation wα+1 will approach and stay around the equilibrium value b. Thus at the end of phase 3 we are again in the situation of phase 1, with pattern α + 1 activated instead of α.

We thus assume that essentially only pattern α will change its activation during phase 3. hα will move to a new equilibrium below θ and mα will decrease from one to close to zero. Therefore:

\tau\,\dot{h}^{\alpha} = -h^{\alpha} + a\, m^{\alpha} + (r - c)(1 + m^{\alpha}) \qquad (51)
\tau\,\dot{h}^{\alpha+1} = -h^{\alpha+1} + a + h\, m^{\alpha} + (r - c)(1 + m^{\alpha}) \qquad (52)
\tau\,\dot{h}^{\alpha+2} = -h^{\alpha+2} + h + (r - c)(1 + m^{\alpha}) \qquad (53)

For mα = 0 at the end of phase 3 we recover the same inequalities as in phase 1 because the situation is the same with α + 1 activated stably instead of α. At the beginning of phase 3 we obtain (using additional worst case assumptions as earlier)

a + 2(r - c) < \theta \qquad (54)
a + h - 2c > \theta + b \qquad (55)
h + 2(r - c) < \theta \qquad (56)

Inequality (56) is redundant because (50) is more restrictive. This leaves us with eight inequalities for the model parameters: (33), (34), (47)–(50), (54), and (55). These are conditions on the (non-negative) parameters a, h, b, c, r, I0, θ that allow for stable switching between attractors.

\theta + b + c < a < \theta + 2(c - r) \qquad (57)
\theta + b + 2c - a < h < \theta + c - r \qquad (58)
\theta + b + 2c - a < I_0 < \theta + c - r \qquad (59)
\theta + b + c < h + I_0 < \theta + 2(c - r) \qquad (60)

Observe that if we find a set of parameters with b > 0 which satisfies these conditions, the same set also satisfies them for all smaller b. This includes b = 0 such that adaptation is apparently not necessary for triggering transitions. However, the above inequalities are not the most general—in some parameter regimes not covered by them adaptation of neurons in the currently active pattern may ease the transition towards the next un-adapted and therefore more excitable pattern. Suppression of the current pattern via inhibitory competition will be easier in this case.

Moreover, for any valid parameter set with r ≠ 0 the same inequalities stay valid for all smaller r. We may therefore continue the analysis by studying the case r = b = 0. If we find parameters that satisfy the above inequalities under this assumption, any margin can be used to choose maximal b and r afterwards independently. For the same reason we can assume θ = 0.

We therefore make the assumptions r = b = θ = 0 and a = 1, and introduce new parameters μ, κ, ζ by means of h = μa, c = κa, I0 = ζa. The inequalities (57)–(60) then become

\kappa < 1 < 2\kappa \qquad (61)
2\kappa - 1 < \mu < \kappa \qquad (62)
2\kappa - 1 < \zeta < \kappa \qquad (63)
\kappa < \mu + \zeta < 2\kappa \qquad (64)

Equation 61 immediately implies 1/2 < κ < 1 and (62) and (63) imply μ < κ and ζ < κ, respectively. The right inequality in (64) is then redundant but the left requires μ + ζ > κ or μ > κ − ζ and ζ > κ − μ. If we use the latter two conditions in (62) and (63) these inequalities reduce to 1 − ζ − κ > 0 and 1 − μ − κ > 0 which is the same as ζ < 1 − κ and μ < 1 − κ. Because κ is already constrained in the range 1/2 < κ < 1 we get ζ < 1 − κ < κ and μ < 1 − κ < κ such that the earlier derived conditions μ < κ and ζ < κ are obsolete. This finally leaves us with

\tfrac{1}{2} < \kappa < 1, \qquad \mu < 1 - \kappa, \qquad \zeta < 1 - \kappa, \qquad \mu + \zeta > \kappa \qquad (65)

In words: For a vanishing threshold and an arbitrary but fixed auto-associative coupling constant, inhibition has to be relatively strong between 50 and 100% of the auto-associative coupling. The input and hetero-association strengths individually may only be as strong as up to 50% of the auto-associative coupling, but input and hetero-association together have to be stronger than the inhibition.
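This reduction can be checked numerically. The sketch below encodes the eight inequalities (with r = b = θ = 0 and a = 1) and the summarised region, so that a grid scan can confirm that every point in the region satisfies all eight conditions; the explicit algebraic forms are reconstructions from the surrounding derivation.

```python
# Eight stability inequalities (33), (34), (47)-(50), (54), (55) with
# r = b = theta = 0 and a = 1 (algebraic forms reconstructed from the text).
def eight_inequalities(kappa, mu, zeta):
    a, c, h, I0 = 1.0, kappa, mu, zeta
    return [a - c > 0,               # (33) active pattern is stable
            h - c < 0,               # (34) successor stays off in phase 1
            a - 2 * c + I0 > 0,      # (47) current pattern survives phase 2
            h - c + I0 > 0,          # (48) successor switches on in phase 2
            -c + I0 < 0,             # (49) unspecific input alone stays subthreshold
            h - 2 * c + I0 < 0,      # (50) pattern alpha+2 stays off in phase 2
            a - 2 * c < 0,           # (54) previous pattern dies in phase 3
            a + h - 2 * c > 0]       # (55) new pattern survives phase 3

# Summarised admissible region (65).
def region_65(kappa, mu, zeta):
    return (0.5 < kappa < 1 and 0 < mu < 1 - kappa
            and 0 < zeta < 1 - kappa and mu + zeta > kappa)
```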

Examples for triggered and autonomous transitions

To satisfy (65) we may choose μ = ζ which results in κ/2 < μ = ζ < 1 − κ in addition to the earlier 1/2 < κ < 1. This allows for solutions in the range 1/2 < κ < 2/3, for instance, κ = 0.6, μ = ζ = 0.35. Given these values we can check the margins in inequalities (57)–(60), which we can then use to calculate maximal b and r values (as fractions of a = 1) assuming vanishing thresholds, θ = 0. We find as maximal values bmax = rmax = 0.1. Alternatively we can split the margin for b into a fixed positive threshold 0 < θ < bmax and a new upper bound for the maximal adaptation strength, e.g. θ = 0.05 and b = 0.05. Positive firing thresholds have the advantage that cells do not activate spontaneously when there is no input and no already active pattern; otherwise inhibitory competition would be needed to prevent unwanted neurons from firing.
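The margin calculation for the example point can be made explicit. Each bound below is the largest b (from the left-hand sides) or r (from the right-hand sides) compatible with one of the grouped inequalities; the algebraic forms are reconstructed from the derivation above.

```python
# Margins of inequalities (57)-(60) at the example point kappa = 0.6,
# mu = zeta = 0.35, with theta = 0 and a = 1 (a reconstruction sketch).
a, c, h, I0, theta = 1.0, 0.6, 0.35, 0.35, 0.0

b_bounds = [a - c - theta,              # (57) left:  theta + b + c < a
            h - (2 * c - a) - theta,    # (58) left:  theta + b + 2c - a < h
            I0 - (2 * c - a) - theta,   # (59) left:  theta + b + 2c - a < I0
            h + I0 - c - theta]         # (60) left:  theta + b + c < h + I0
r_bounds = [(2 * c - a - theta) / 2,    # (57) right: a < theta + 2(c - r)
            c - h - theta,              # (58) right: h < theta + c - r
            c - I0 - theta,             # (59) right: I0 < theta + c - r
            (2 * c - h - I0 - theta) / 2]  # (60) right: h + I0 < theta + 2(c - r)

b_max, r_max = min(b_bounds), min(r_bounds)   # both equal 0.1 here
```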

Figures 1 and 2 display example simulations. The simulations implement (1)–(6) for n = 36 excitatory neurons and P = 6 non-overlapping patterns with m = 6 active units each. The rate function of the excitatory neurons is the Heaviside function (unit step). R is the zero matrix. There is only a single inhibitory neuron; synapses from excitatory cells to the inhibitory cell have identical values C1j = c; synapses from the inhibitory cell to the excitatory cells are all 1. The latter is possible without loss of generality because the inhibition is purely linear; it is therefore enough if c can be varied. The differential equations are solved by the Euler method (Press et al. 1993) with stepsize 0.1. Parameter values were τ = τ2 = τa = 1, r = 0, b = 0.05, θ = 0.05, a = 1.0, μ = 0.35, ζ = 0.35, κ = 0.6.
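A compact re-implementation of this setup might look as follows. The weight from each excitatory cell to the inhibitory unit is taken as c/m here (a scaling assumption of this sketch) so that the inhibition per active pattern matches the pattern-averaged analysis, and two trigger pulses are used in place of the longer schedule of Fig. 1.

```python
import numpy as np

n, P, m = 36, 6, 6
a, h, c, I0 = 1.0, 0.35, 0.6, 0.35
b, theta = 0.05, 0.05
tau = tau_a = tau2 = 1.0
dt = 0.1

# Non-overlapping patterns: pattern alpha occupies units 6*alpha .. 6*alpha+5.
xi = np.zeros((P, n))
for alpha in range(P):
    xi[alpha, alpha * m:(alpha + 1) * m] = 1.0

A = (a / m) * xi.T @ xi                       # auto-associative couplings
H = (h / m) * np.roll(xi, -1, axis=0).T @ xi  # cyclic hetero-associative chain

x = np.zeros(n); ad = np.zeros(n)             # potentials and adaptation
y = c                                         # single inhibitory unit, at equilibrium
x[xi[0] == 1] = 1.0                           # initialise the first pattern

for step in range(int(60 / dt)):
    t = step * dt
    I = I0 if (10 <= t < 15) or (30 <= t < 35) else 0.0   # two trigger pulses
    z = (x - ad >= theta).astype(float)       # Heaviside rate function
    x += dt * (-x + (A + H) @ z - y + I) / tau
    ad += dt * (-ad + b * z) / tau_a
    y += dt * (-y + (c / m) * z.sum()) / tau2

overlaps = xi @ (x - ad >= theta).astype(float) / m
```

Each trigger pulse advances the activity by exactly one pattern, so after two pulses the third pattern (index 2) is the active attractor.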

Fig. 1
Transitions between patterns for various durations of an unspecific trigger. Trace 7 shows the ‘trigger’ signal; small numbers 1,2,3 on top indicate the three phases of a transition defined in “Analysis of triggered transitions” ...
Fig. 2
Autonomous propagation of activity if the unspecific input is persistently high (trace 7, ‘trigger’). The level of adaptation (model parameter b) increases during the simulation in several steps from 0.05 to 0.55 (trace 0). Trace-1 displays ...

Figure 1 shows switching between patterns given unspecific triggers of various durations. Note that we have chosen all time constants identical, τ = τ2 = τa = 1, although in “Analysis of triggered transitions” we assumed fast inhibition in order to prove the stability conditions provided there. However, simulations show that this assumption is not critical: even with identical time-constants, as used here, triggering works well.

In Fig. 1 the first pattern is excited by an excitatory specific input into all neurons of pattern 1 during time 3.0–4.0 (see trace 0). Note that this activates pattern 1 (trace 1) but also already pattern 2 (trace 2) via the hetero-associative couplings from pattern 1 to 2. Pattern 2 can fire because the inhibition (trace-1) starts from 0 and has to build up first in order to suppress patterns that only get hetero-associative input. This initialisation step is not covered by the stability considerations in “Analysis of triggered transitions”. Firing of pattern 2 could, for instance, be avoided by initialising the inhibition with an appropriate positive value or by providing excitatory input not only into the first pattern but also the inhibitory unit during the primary excitation of the starting pattern.

Observe that each triggering phase causes synchronous fluctuations in the potentials of all excitatory cells regardless of which pattern they belong to (smooth curves in traces 1–6). This reflects the unspecific nature of the trigger. A positive (excitatory) response is usually followed slightly later by an inhibitory component caused by the next pattern getting excited and thus suppressing its competitors. Neurons in successive patterns are super-threshold such that the respective overlaps are large. Because we use step-functions as rate functions of the excitatory neurons the overlaps switch in an almost rectangular manner between 0 and 1 (traces 1–6).

Small numbers on top of Fig. 1 reflect the three different phases of a transition as defined in “Analysis of triggered transitions”. During phase 1 only a single pattern is activated and stays roughly in equilibrium. During phase 2 the unspecific input raises the subsequent pattern above firing threshold; other patterns do not fire because they do not get hetero-associative input. While two patterns are activated during phase 2, the inhibition rises proportionally. This increased inhibition switches the previously active pattern off in phase 3, because that pattern only receives auto-associative input, whereas its successor receives hetero- and auto-associative input and therefore stays above threshold. Afterwards signals relax back to equilibrium values until the next transition is initiated.

The duration of the triggering phase 2 can be arbitrarily long, as long as it exceeds a certain minimum length. This was essentially assumed in the stability considerations in “Analysis of triggered transitions”, where the only restriction on the duration of phase 2 was that it is long enough for the transition to take place. In Fig. 1 the durations start much longer than the neural time-constants (τ = τ2 = τa = 1) but get successively shorter. As long as the unspecific input is present long enough to raise the subsequent pattern above threshold, the retrieval works well; if it gets too short, retrieval fails. This happens around time 50, where the trigger is present for only 2 units of time and the network stays in attractor 5. Finally note that the sequence of stored patterns in the simulation is cyclic. At time 65 attractor 1 is reached again.

Figure 2 displays a simulation with adaptation too strong to guarantee stable switching. In this simulation the unspecific input is permanently on (trace 7). The figure shows that above a certain value of the adaptation strength (model parameter b) autonomous propagation of activity through the sequence of attractors results. As the adaptation gets stronger, the time spent in any single attractor gets shorter, and the retrieval of the pattern sequence therefore speeds up. A similar effect can also be reached by increasing the hetero-associative coupling strength, but not, for example, by changes in the level of unspecific input (not shown). At high adaptation, retrieval fails, and firing patterns more complex than linear sequences of states emerge (t > 85 in the figure).

Activity starts propagating through the combined action of adaptation and inhibition. During phase 2 of a retrieval cycle, characterised by the presence of the unspecific triggering stimulus, ideally two patterns are active, the current and the previous one. If the adaptation strength is high enough it switches the previous pattern off, because its auto-associative self-excitation together with the unspecific input is no longer strong enough to exceed the threshold plus adaptation in the face of the inhibition. As a consequence the inhibition is released such that the next pattern can become super-threshold; it fires, and after a brief relaxation time the system reaches the initial state again, just one step forward in the sequence.

For increasing adaptation strength the adaptation times get shorter because the threshold conditions are reached more quickly. The inhibition can then no longer fully follow the excitatory activity changes, so its amplitude variations decrease (trace 1 in Fig. 2). This diminishing inhibition causes an instability towards complex firing patterns at high adaptation strength, where the inhibition can no longer restrict firing to one or two patterns (t > 85 in the figure).

In summary, the present section has presented simulations of proper retrieval of pattern sequences by unspecific external triggers, consistent with the analysis in “Analysis of triggered transitions”. Such simulations reveal that the stability conditions derived there are actually quite robust. Nonetheless, if the conditions are violated, especially for too strong adaptation or hetero-association, more complex activation patterns can arise, including an autonomous mode of retrieval of a stored sequence. These modes are discussed in the next section in relation to generators and recognisers of syntactic time patterns more complex than linear sequences.

Context and graph-like transition patterns

In traditional attractor networks static patterns are stored in auto-associative connections; they can be retrieved from partial or noisy cues (Hopfield 1982; Willshaw et al. 1969) by the recurrent network dynamics. Hetero-associative connections, as shown in the previous section, can further be used to extend the possible dynamic modes towards sequences (Abeles 1991; Horn and Usher 1989), thereby introducing (temporal) order. In an autonomously evolving setup they also provide a notion of “time”. Such network properties may underlie episodic memory and time perception (Wennekers 2006). Still, linear sequences are not sufficiently powerful models of the spatio-temporal pattern processing that appears in real-world situations, which usually involves syntactic structure, describable for instance by some kind of grammar.

We have earlier pointed out how attractor networks can be extended in a natural way towards the processing of syntactic structures (Wennekers 2006; Wennekers and Palm 2007; Wennekers et al. 2006). Keys to this approach are, first, the introduction of several possible target states for each attractor, coded in hetero-associative couplings, and second, specific external inputs that disambiguate which transition is selected if there are several for a given attractor state. These extensions were partly motivated by von Neumann’s and Wickelgren’s earlier works, see, e.g. (von Neumann 1958; Wickelgren 1979; Wickelgren 1992).

This extension of attractor networks leads from pure attractors via linear sequences [also called “synfire chains” (Abeles 1991)] to labelled graphs (“synfire graphs”) representing complex, input- and context-driven patterns of activation flow in a generalised attractor system. Nodes in these graphs are the stored patterns (attractors) and the edge-labels represent the specific external inputs that cause a particular transition between patterns. This generic image indeed describes nothing but the state transition graph of some finite state automaton (FSA) (Hopcroft and Ullman 1969). A simple and plausible extension thus allows for a significant boost in computational capabilities of attractor networks. We have called the general framework “operational cell assemblies” because it concretises how classical Hebbian cell assemblies can systematically support procedural and rule-based processing, see Wennekers (2006). Clearly, the different dynamic modes—fixed point attractors, linear flow of activity (sequences), and input dependent gating of activation flow in a graph-like structure (syntax)—can all be integrated in the same neuronal network; they just represent three different principles in an isolated and abstract way that can be studied independently.
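Read abstractly, such a labelled synfire graph is exactly the transition table of an FSA: attractors are states, and the specific input that gates a transition is the edge label. A hypothetical toy illustration (the state and label names are invented for this sketch, not taken from the paper):

```python
# Toy FSA view of an operational cell-assembly network:
# states = stored attractors, edge labels = specific disambiguating inputs.
# Names are purely illustrative.
transitions = {
    ("NOUN_PHRASE", "verb"):   "VERB_PHRASE",
    ("VERB_PHRASE", "adverb"): "VERB_PHRASE",
    ("VERB_PHRASE", "noun"):   "OBJECT",
    ("OBJECT", "end"):         "DONE",
}

def run(state, inputs):
    """Follow labelled edges; an undefined (state, input) pair rejects."""
    for symbol in inputs:
        key = (state, symbol)
        if key not in transitions:
            return None      # no stored hetero-association for this input
        state = transitions[key]
    return state

print(run("NOUN_PHRASE", ["verb", "adverb", "noun", "end"]))  # -> DONE
```

In the neural implementation each dictionary entry corresponds to a hetero-associative coupling whose transition is enabled by the matching specific input; an input with no stored association leaves the network behaviour undefined, as discussed below.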

In Wennekers (2006) the recognition of spatio-temporal syntactic patterns was studied as well as their autonomous generation. These simulations made use of “full disambiguation” of transitions by external inputs: If in any state several transitions were possible a specific input external to the network provided a bias that determined the next state uniquely (together with the state information). If there was no such input or if it did not fully determine the next state, the network behaviour was undefined. This is equivalent to the behaviour of a finite state automaton and thereby also a modern computer—ambiguous states do not occur.

Processes in the brain may well be different. However, an unspecific trigger causing the network to compute as in “Analysis of triggered transitions” assumes that the next target is unambiguous. This is true for the linear sequences used in “Analysis of triggered transitions” and it was also true in Wennekers (2006) where specific inputs disambiguated target states in a word recogniser. In a more general setup there may be uncertainties about the next network state, especially where syntactic patterns are generated and not recognised. Whereas external input can disambiguate activity flow in the second case, other mechanisms are necessary in the case of internal generation.

In neural structures disambiguation can have various causes, not always clearly separable from each other. One possibility is specific input into an operational associative module, as already mentioned. A second one is general context that remains stable for a certain time and constrains or biases some of the possible transitions in a network (e.g. Wickelgren 1979; Wickelgren 1992). Instead of a single fixed transition graph this results in context-dependent mappings implemented in a recurrent operational cell assembly network. Such contextual selection may underlie rule- and task-set maps in frontal cortical areas and the switching between them (Dosenbach et al. 2006; Koechlin and Jubault 2006; Mushiake et al. 2006; Rushworth et al. 2004; Stoet and Snyder 2003).

It is also useful to consider that a local network will usually be embedded in a larger-scale super-network of operational modules. This situation reflects that cortical columns and areas are quite extensively connected with each other. Processes going on in the different modules mutually provide input to each other, which again may disambiguate local processes. Finally, there is certainly also the impact of noise as a way to choose between different possible targets. This requires in addition some kind of winner-take-all mechanism because a target, once chosen, needs to stabilise itself against the firing of other targets. The next section demonstrates disambiguation by noise in a simple example setup.

Example for random transitions in a network

Figure 3 shows an example of a network with random transitions. The model is the same as in “Examples for triggered and autonomous transitions”, including parameter values, but the neurons now receive independent Gaussian white noise inputs that allow for random transitions in case of ambiguities. In addition to the cyclic sequence of patterns 1–6, an extra hetero-associative transition has been added from pattern 5 to 3. All hetero-associative weights are the same; the probabilities for transitions from pattern 5 to either 6 or 3 should therefore be about equal (neglecting effects of adaptation that may favour one or the other transition). Pattern 1 is externally activated at time zero. The unspecific input is permanently on, which causes autonomous network behaviour. As can be seen, the network first cycles twice through the “regular” cyclic sequence and then twice takes the transition along the second pathway (arrows).

Fig. 3
Example for random transitions in autonomous retrieval mode. The same network and parameters as in Fig. 2 (for b = 0.05) are used but membranes receive noise input and beside the cyclic stored sequence an extra transition from ...

This simple example is shown here for demonstration purposes. It should be clear that quite arbitrary transition structures can be implemented and that, by choosing different synaptic weights, hetero-associative transitions can be given different probabilities of being taken. Thereby a wide variety of Markov chains can be implemented (although not all possible ones, unless additional assumptions are made: the finite system noise gives every transition a finite chance to occur, even those not imprinted in the hetero-associative connections, and dynamic network properties may pose additional constraints; long-lasting adaptation, for instance, induces temporal correlations beyond those described by Markov chains).
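A sketch of the selection principle, reduced to its bare essentials: each hetero-associative target receives its weight plus independent Gaussian noise, and a winner-take-all step picks the largest. The weights and noise level below are illustrative assumptions. With the extra 5→3 edge of Fig. 3 and equal weights, this realises a Markov chain that branches at state 5 with roughly equal probabilities:

```python
import random

# Noise plus winner-take-all over hetero-associative targets, realising a
# Markov chain over attractor states. Edge weights and the noise level
# are illustrative, not taken from the network model.
random.seed(1)
targets = {1: {2: 1.0}, 2: {3: 1.0}, 3: {4: 1.0},
           4: {5: 1.0}, 5: {6: 1.0, 3: 1.0}, 6: {1: 1.0}}

def next_state(s, sigma=0.2):
    # each candidate target gets its hetero-associative drive plus
    # independent Gaussian noise; the largest total input wins (WTA)
    noisy = {t: w + random.gauss(0.0, sigma) for t, w in targets[s].items()}
    return max(noisy, key=noisy.get)

state, path = 1, [1]
for _ in range(12):
    state = next_state(state)
    path.append(state)
print(path)  # e.g. 1 2 3 4 5, then either on to 6 -> 1 or back to 3, at random
```

Unequal weights bias the choice at state 5, giving the two transitions different probabilities, as described in the text.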

The setup can apparently also be combined with the mechanisms described earlier: Specific external input can provide context that disambiguates situations by making some transitions more or less likely. An unspecific external input can still act as a trigger: The simulation in Fig. 3 shows the autonomous generation of a syntactic time-pattern, but the same network works also in a triggered mode as in Fig. 1 (not shown).


In summary, we have shown that syntactic patterns can be represented in operational cell assembly networks comprising attractor states, input-dependent transitions between states, and an unspecific activation control that triggers actual transitions unless they evolve in an autonomous mode. Noise in the system can be used to disambiguate possible target states and to give them different transition probabilities, reflected by different strengths of the hetero-associative connections. Disambiguation can further be realised by contextual signals, either spatial or temporal [as already suggested by Wickelgren (1979)]. In an abstract sense, such networks may be seen as just implementing some kind of Markov chain in neural hardware. However, by flexibly combining the various features mentioned with the well-known classical memory properties of attractor networks, complex computational structures can result that far exceed the usual world of Markov chains.

We here have provided parameter regimes for a proper retrieval of syntactic sequences using unspecific excitatory triggers and have shown simulations of autonomous and non-autonomous modes of retrieval. These parameter constraints should enable an implementation of temporal pattern generators and recognisers in software simulations and potentially also in neural hardware (Indiveri 2007; Schemmel et al. 2004; Wennekers 2006).

In the present work we have used simplistic graded-response neurons. The dynamics of spiking neuron networks can often be reduced to “mean-field” equations of similar structure (Coolen 2001a, b; Eggert and van Hemmen 2000; Horn and Usher 1989). Single units in our model may be interpreted as representing pools of spiking neurons in an asynchronous regime. In Sommer and Wennekers (2005) we have shown that triggered transitions can also be caused in networks of conductance-based neurons, although the results presented here do not directly carry over to that work because of a different network structure.

We only considered linear and unspecific inhibition, i.e. the inhibitory neurons did not store information about the pattern sequences. The latter reflects the view that mainly synapses between excitatory neurons are engaged in synaptic learning. The linearity assumption is made to simplify the analysis in the main text. It is not entirely crucial, but if it is relaxed, many of the conditions constraining the model parameters become nonlinear equations that cannot be analysed further by hand. Linear inhibition informally means that in stationary states the total amount of inhibition is proportional to that of excitation, which is not unreasonable, as some authors even make the stronger claim that excitation and inhibition are balanced in physiological states (Haider et al. 2006; van Vreeswijk 1996). The inhibition is again assumed to reflect the average activity level of inhibitory neurons. Although individual neurons may have nonlinear rate functions, these may average out to a closer-to-linear relationship in the mean.

The general framework of operational cell assemblies can be applied to model rule-based temporal behaviour. A primary target would be a model for syntactic aspects of language on its different hierarchical levels of phonemes, syllables, words, and sentences, because the use of attractor states allows for quite arbitrary timing of triggers. Trigger signals can come at any time, provided the stability of the attractors that hold information about the current state of parsing is guaranteed. The inflow of acoustic information can thus happen at flexible speed. Experimental results by Koechlin and Jubault (2006) may be interpreted as evidence for triggered transitions. This work suggests the existence of a hierarchy of behavioural/language sequences in Broca’s area and its right homologue, as well as trigger-like signals occurring at segment boundaries at the various hierarchical levels. Strictly speaking this work does not show that the triggering signals are unspecific. However, our assumption of unspecific triggers is an extreme case; specificity would ease the sequencing problem by potentially providing information about possible target states.

On the shortest time-scale of phonemes and syllables, mechanisms other than triggered transitions may be important, see e.g. Buonomano (2003), Hahnloser et al. (2002). On that level the flow of incoming information may cause a more continuous flow of activity through the neural recogniser circuits without discrete triggers. Such circuits may be more akin to “synfire chains” (Abeles 1991; Hahnloser et al. 2002; Wennekers 2000) or the untriggered autonomous mode of operation displayed in Figs. 2 and 3. Clearly, any mixture, with some transitions triggered and others evolving autonomously, is possible.

Previous sequencing models have often focused on autonomous retrieval. They usually require some mechanism for automatically destabilising attractors; several such mechanisms have been proposed, such as adaptation, depressing synapses, delayed hetero-associative connections, and others (Abeles 1991; Hertz et al. 1991; Horn and Usher 1989; Rehn and Lansner 2004; Russo et al. 2008). Our main interest here, in contrast, was in non-autonomous triggered transitions. Few works have considered those before, but see Sommer and Wennekers (2005), Wennekers and Palm (2007). Our analyses show that adaptation is not necessary to stably trigger transitions; however, as the simulations in Sect. “Examples for triggered and autonomous transitions” show, it can cause autonomous transitions compatible with the mechanisms in Horn and Usher (1989). It is possible to include effects of synaptic depression and facilitation in our analysis by assuming that the synaptic strengths approach stationary values during the three phases defined in Sect. “Analysis of triggered transitions”. In that case the parameters “a” and/or “h” for the auto- and hetero-association strengths in the stability inequalities have to be replaced by their adapted or maximum values, depending on whether a target pattern is required to fire or to stay below threshold, respectively. The form of the conditions remains similar to before, being slightly more restrictive. Stable triggering can be obtained at least for facilitating synapses or not too strongly depressing ones.

Our approach of triggered transitions aims precisely at implementing rule-like operations in cell assembly networks. In a linguistic context these rules may refer to entities at different levels, from low-level phonetic features to higher-level syntactic categories. Different levels can be combined into hierarchies such that linguistic knowledge can be represented efficiently [perhaps reflected by the experiments in Koechlin and Jubault (2006)]. We have worked out the main ideas in Wennekers (2006), Wennekers and Palm (2007). The framework extends attractor networks from fixed-point retrieval and linear sequences towards context-dependent rules. In these previous works, however, spatio-temporal “grammatical” sequences were recognised or generated such that the necessary state transitions either occurred fully autonomously or were driven by specific input events. The triggering proposed in the present work adds an additional principle of unspecific control.

An important question concerns the memory capacity of associative networks. A common result is that under optimal conditions associative networks are quite efficient memories, able to store a significant fraction of a bit per (binary) synapse (Coolen 2001a, b; Palm and Sommer 1992). Simulations using spiking neuron networks get close to the theoretical limits, as shown in Rehn and Lansner (2004), Sommer and Wennekers (2001). The network studied in the present work assumes a stability margin expressed by positive threshold distances in the stability conditions. It will therefore not reach the absolute theoretical capacity limits, but as long as the margins are not too large, one would still expect an extensive memory capacity. Given the huge number of synapses in the cortex, a large number of associations can certainly be stored.
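As a rough illustration of these capacity properties, the following toy Willshaw-style memory (sizes and sparseness are arbitrary choices for this sketch, not parameters of the present model) stores sparse patterns in clipped binary synapses and retrieves one of them from half a cue:

```python
import numpy as np

# Toy Willshaw-style binary associative memory, auto-associative variant.
# Sizes are illustrative: n neurons, k active units per pattern, M patterns.
rng = np.random.default_rng(0)
n, k, M = 256, 8, 40
patterns = np.zeros((M, n), dtype=bool)
for m in range(M):
    patterns[m, rng.choice(n, size=k, replace=False)] = True

W = np.zeros((n, n), dtype=bool)      # clipped Hebbian (binary) weights
for p in patterns:
    W |= np.outer(p, p)               # set every synapse between co-active units

def retrieve(cue):
    # Willshaw threshold rule: a unit fires if it is connected
    # to every active unit in the cue
    sums = W.astype(np.int32) @ cue.astype(np.int32)
    return sums >= cue.sum()

cue = patterns[0].copy()
cue[np.flatnonzero(cue)[:4]] = False  # delete half of the stored pattern
out = retrieve(cue)
# all k original units are recovered; spurious units are rare at this load
print(out[patterns[0]].all(), int(out.sum()))
```

At this low load the density of set synapses is small, so spurious activations are very unlikely; pushing M towards the capacity limit raises that density and retrieval errors appear, which is where the fraction-of-a-bit-per-synapse bound becomes relevant.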

We should finally note that we have not touched on the difficult problem of how the type of model described here could be learned in a self-organising way, that is, how such structures can emerge as a consequence of stimulus-driven synaptic plasticity. Machine learning techniques can be used to learn some grammars in artificial neural networks, but they are often not very plausible biologically (Sun and Giles 2000; Wörgötter and Porr 2005). The problem has been approached recently in more realistic settings, but with no answer yet: some of the proposed systems appear quite engineered (Knoblauch et al. 2005a; Markert et al. 2008), while others with more generic architectures are not yet able to learn more than the simplest grammatical temporal patterns (Garagnani et al. 2008). We have earlier pointed out that grammar and rule-like behaviour may in principle result from a timing-dependent synaptic learning rule that maximises a temporal generalisation of mutual information (Wennekers and Ay 2005). However, this result was derived in the quite abstract framework of Markov chains and needs further exploration in realistic contexts.


Acknowledgments

The authors thank the anonymous reviewers for their extended and constructive comments, which helped to improve the manuscript significantly. The main content of this paper was prepared while G.P. was a visiting researcher at the Centre for Theoretical and Computational Neuroscience in Plymouth, U.K. This visit was generously funded by an Incoming Visitors Grant to T.W. from the Royal Society. Support by the Engineering and Physical Sciences Research Council to T.W. (grant EP/C010841/1, COLAMN: A Novel Computing Architecture for Cognitive Systems based on the Laminar Microcircuitry of the Neocortex) is also gratefully acknowledged.

Contributor Information

Thomas Wennekers, Phone: +44-1752-233593, Fax: +44-1752-233349,

Günther Palm, guenther.palm@uni-ulm.de.


References

  • Abeles M (1991) Corticonics: neural circuits of the cerebral cortex. Cambridge University Press, Cambridge

  • Abeles M, Bergman H, Margalit E, Vaadia E (1993) Spatio-temporal firing patterns in frontal cortex of behaving monkeys. J Neurophysiol 70:1629–1643 [PubMed]

  • Amari SI (1972) Learning patterns and pattern sequences by self-organizing nets of threshold elements. IEEE Trans Comput 21:1197–1206

  • Amit D (1988) Modeling brain function. Cambridge University Press, Cambridge

  • Amit D (1995) The Hebbian paradigm reintegrated: local reverberations as internal representations. Behav Brain Sci 18:617–657

  • Arbib MA (2005) From monkey-like action recognition to human language: an evolutionary framework for neurolinguistics. Behav Brain Sci 28:105–167 [PubMed]

  • Arbib M, Billard A, Iacoboni M, Oztop E (2000) Synthetic brain imaging: grasping, mirror neurons and imitation. Neural Netw 13:975–997 [PubMed]

  • Bienenstock E (1995) A model of the neocortex. Network 6:179–224

  • Braitenberg V (1978) Cell assemblies in the cerebral cortex. In: Heim R, Palm G (eds) Theoretical approaches to complex systems. Springer, Berlin, pp 171–188

  • Buonomano DV (2003) Timing of neural responses in cortical organotypical slices. Proc Natl Acad Sci (USA) 100:4897–4902 [PubMed]

  • Coolen A (2001a) Statistical mechanics of recurrent neural networks i-statics. In: Moss F, Gielen S (eds) Handbook of biological physics, chap 14. 4th edn, Elsevier, Amsterdam, pp 531–596

  • Coolen A (2001b) Statistical mechanics of recurrent neural networks ii—dynamics. In: Moss F, Gielen S (eds) Handbook of biological physics, chap 15. 4th edn, Elsevier, Amsterdam, pp 597–662

  • Cossart R, Anonov D, Yuste R (2003) Attractor dynamics of network UP states in the neocortex. Nature 423:283–288 [PubMed]

  • Diesmann M, Gewaltig MO, Aertsen A (1999) Stable propagation of synchronous spiking in cortical neural networks. Nature 402:529–533 [PubMed]

  • Dosenbach NU, Visscher KM, Palmer ED, Miezin FM, Wenger KK, Kang HC, Burgund ED, Grimes AL, Schlaggar VL, Petersen SE (2006) A core system for the implementation of task sets. Neuron 50:799–812 [PMC free article] [PubMed]

  • Eggert J, van Hemmen J (2000) Unifying framework for neuronal assembly dynamics. Phys Rev E 61:1855–1874 [PubMed]

  • Fay R, Kaufmann U, Knoblauch A, Markert H, Palm G (2005) Combining visual attention, object recognition and associative information processing in a neurobotic system. In: Wermter S, Palm G, Elshaw M (eds) Biomimetic neural learning for intelligent robots, LNAI, vol 3575. Springer, Berlin, pp 118–143

  • Feldman J, Narayanan S (2004) Embodied meaning in a neural theory of language. Brain Lang 89:385–392 [PubMed]

  • Garagnani M, Wennekers T, Pulvermüller F (2008) A neuroanatomically grounded Hebbian learning model of attention-language interactions in the brain. Eur J Neurosci 27:492–513 [PMC free article] [PubMed]

  • Gentner TQ, Margoliash D (2003) Neuronal populations and single cells representing learned auditory objects. Nature 424:669–674 [PMC free article] [PubMed]

  • Gentner TQ, Fenn KM, Margoliash D, Nusbaum HC (2006) Recursive syntactic pattern learning by songbirds. Nature 440:1204–1207 [PMC free article] [PubMed]

  • Hahnloser R, Kozhevnikov AA, Fee MS (2002) An ultra-sparse code underlies the generation of neural sequences in a songbird. Nature 419:65–70 [PubMed]

  • Haider B, Duque A, Hasenstaub AR, McCormick DA (2006) Neocortical network activity in vivo is generated through a dynamic balance of excitation and inhibition. J Neurosci 26:4535–4545 [PubMed]

  • Harris KD, Csicsvari J, Hirase H, Dragoi G, Buzsaki G (2003) Organization of cell assemblies in the hippocampus. Nature 424:552–555 [PubMed]

  • Hebb D (1949) The organization of behavior. Wiley, New York [PubMed]

  • Herrmann M, Hertz J, Prugel-Bennet A (1995) Analysis of synfire chains. Network 6:403–414

  • Hertz J, Krogh A, Palmer R (1991) Introduction to the theory of neural computation. Addison-Wesley, Reading

  • Hopcroft J, Ullman J (1969) Formal languages and their relation to automata. Addison-Wesley, Reading

  • Hopfield JJ (1982) Neural networks and physical systems with emergent collective computational abilities. PNAS 79:2554–2558 [PubMed]

  • Horn D, Usher M (1989) Neural networks with dynamical thresholds. Phys Rev A 40:1036–1044 [PubMed]

  • Ikegaya Y, Aaron G, Cossart R, Aronov D, Lampl I, Ferster D, Yuste R (2004) Synfire chains and cortical songs: temporal modules of cortical activity. Science 304:559–564 [PubMed]

  • Indiveri G (2007) Synaptic plasticity and spike-based computation in VLSI networks of integrate-and-fire neurons. Neural Inf Process Lett Rev 11:135–146

  • Jarvis ED (2004) Learned birdsong and the neurobiology of language. Ann NY Acad Sci 1016:749–777 [PMC free article] [PubMed]
  • Knoblauch A, Markert H, Palm G (2005a) An associative cortical model of language understanding and action planning. In: Mira J, Alvarez JR (eds) Proceedings of IWINAC 2005, 1st international work-conference on the interplay between natural and artificial computation, Las Palmas de Gran Canaria, Spain, Lecture Notes in Computer Science, vol 3562, Springer, Berlin, pp 405–414
  • Knoblauch A, Markert H, Palm G (2005b) An associative model of cortical language and action understanding. In: Cangelosi A, Bugmann G, Borisyuk R (eds) Modeling language, cognition and action. Proceedings of the 9th neural computation and psychology workshop, Plymouth 2004. World Scientific, pp 79–83

  • Koechlin E, Jubault T (2006) Broca’s area and the hierarchical organization of human behaviour. Neuron 50:963–974 [PubMed]

  • Lashley K (1951) The problem of serial order in behavior. In: Jeffress L (ed) Cerebral mechanisms in behavior. New York, Wiley, pp 112–136
  • Markert H, Palm G (2006) An approach to language understanding and contextual disambiguation in human-robot interaction. In: International workshop on neural-symbolic learning and reasoning (NeSy 2006), pp 23–35

  • Markert H, Knoblauch A, Palm G (2005) Detecting sequences and understanding language with neural associative memories and cell assemblies. In: Wermter S, Palm G, Elshaw M (eds) Biomimetic neural learning for intelligent robots, Lecture Notes in Artificial Intelligence, vol 3575. Springer, Berlin, pp 107–117

  • Markert H, Knoblauch A, Palm G (2007) Modelling of syntactical processing in the cortex. BioSystems 89:300–315 [PubMed]
  • Markert H, Kayikci Z, Palm G (2008) Sentence understanding and learning of new words with large-scale neural networks. In: Prevost L, Marinai S, Schwenker F (eds) Proceedings of artificial neural networks in pattern recognition (ANNPR 2008), LNAI, vol 5064. Springer, Berlin, pp 217–227

  • Mushiake H, Saito N, Sakamoto K, Itoyama Y, Tanji J (2006) Activity in lateral prefrontal cortex reflects multiple steps of future events in action plans. Neuron 50:631–641 [PubMed]

  • Palm G (1982) Neural assemblies: an alternative approach to artificial intelligence. Springer, Berlin

  • Palm G, Sommer FT (1992) Information capacity in recurrent McCulloch-Pitts networks with sparsely coded memory states. Network 2:177–186

  • Press W, Teukolsky S, Vetterling W, Flannery B (1993) Numerical recipes in C: the art of scientific computing. Cambridge University Press, Cambridge

  • Pulvermüller F (1992) Constituents of a neurological theory of language. Concepts Neurosci 3:157–200

  • Pulvermüller F (2003) The neuroscience of language: on brain circuits of words and serial order. Cambridge University Press, Cambridge

  • Rehn M, Lansner A (2004) Sequence memory with dynamical synapses. Neurocomputing 58–60:271–278

  • Rizzolatti G, Fogassi L, Gallese V (2002) Motor and cognitive functions of the ventral premotor cortex. Curr Opin Neurobiol 12:149–154 [PubMed]

  • Rushworth M, Walton M, Kennerley S, Bannerman D (2004) Action sets and decisions in the medial frontal cortex. Trends Cogn Sci 8:410–417 [PubMed]

  • Russo E, Namboodiri VM, Treves A, Kropff E (2008) Free association transitions in models of cortical latching dynamics. New J Phys 10:015008
  • Schemmel J, Meier K, Mueller E (2004) A new VLSI model of neural microcircuits including spike time dependent plasticity. In: Proceedings of IJCNN. IEEE Press, pp 1711–1716

  • Sommer F, Wennekers T (2001) Associative memory in networks of spiking neurons. Neural Netw 14:825–834 [PubMed]

  • Sommer FT, Wennekers T (2005) Synfire chains with conductance-based neurons: internal timing and coordination with timed input. Neurocomputing 65–66:449–454

  • Stoet G, Snyder LH (2003) Executive control and task-switching in monkeys. Neuropsychologia 41:1357–1364 [PubMed]

  • Sun R, Giles L (2000) Sequence learning: paradigms, algorithms, and applications. Springer, Berlin

  • Tsien JZ (2001) Linking Hebb’s coincidence-detection to memory formation. Curr Opin Neurobiol 10:266–273 [PubMed]

  • van Vreeswijk C, Sompolinsky H (1996) Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 274:1724–1726 [PubMed]

  • von Neumann J (1958) The computer and the brain. Yale University Press, New Haven

  • Wennekers T (2000) Dynamics of spatio-temporal patterns in associative networks of spiking neurons. Neurocomputing 32:597–602

  • Wennekers T (2006) Operational cell assemblies as a paradigm for brain-inspired future computing architectures. Neural Inf Process Lett Rev 10:135–145

  • Wennekers T, Ay N (2005) Finite state automata resulting from temporal information maximization and a temporal learning rule. Neural Comput 17:2258–2290 [PubMed]

  • Wennekers T, Palm G (2007) Modelling generic cognitive functions with operational Hebbian cell assemblies. In: Weiss M (ed) Neural network research horizons. Nova Science Publishers, New York, pp 225–294

  • Wennekers T, Garagnani M, Pulvermüller F (2006) Language models based on Hebbian cell assemblies. J Neurosci (Paris) 100:16–30 [PubMed]

  • Wickelgren WA (1979) I liked the postcard you sent Abe and i: context-sensitive coding of syntax and other procedural knowledge. Bull Psychon Soc 13:61–63

  • Wickelgren WA (1992) Webs, cell assemblies, and chunking in neural nets. Concepts Neurosci 3:1–53 [PubMed]

  • Wijekoon J, Dudek P (2008) Compact silicon neuron circuit with spiking and bursting behaviour. Neural Netw 21:524–534 [PubMed]

  • Willshaw D, Buneman O, Longuet-Higgins H (1969) Non-holographic associative memory. Nature 222:960–962 [PubMed]

  • Willwacher G (1982) Storage of a temporal pattern sequence in a network. Biol Cybern 43:115–126 [PubMed]

  • Wörgötter F, Porr B (2005) Temporal sequence learning, prediction, and control: a review of different models and their relation to biological mechanisms. Neural Comput 17:245–319 [PubMed]

  • Yamashita Y, Tani J (2008) Emergence of functional hierarchy in a multiple timescale neural network model: a humanoid robot experiment. PLoS Comput Biol 4:e1000220 [PMC free article] [PubMed]
